| repo_name | path | license | content |
|---|---|---|---|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/official/model_monitoring/model_monitoring.ipynb
|
apache-2.0
|
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import pprint as pp
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow tensorflow_data_validation[visualization]
! rm -f /opt/conda/lib/python3.7/site-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade explainable_ai_sdk
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
# Import required packages.
import os
import random
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Vertex AI Model Monitoring with Explainable AI Feature Attributions
<table align="left">
<td>
<a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmaster%2Fnotebooks%2Fcommunity%2Fmodel_monitoring%2Fmodel_monitoring_feature_attribs.ipynb">
<img src="https://www.gstatic.com/cloud/images/navigation/vertex-ai.svg" alt="Google Cloud Notebooks">Open in Cloud Notebook
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/model_monitoring/model_monitoring_feature_attribs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Open in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/model_monitoring/model_monitoring_feature_attribs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
What is Vertex AI Model Monitoring?
Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include:
software versioning
rigorous deployment processes
event logging
alerting/notification of situations requiring intervention
on-demand and automated diagnostic tracing
automated performance and functional testing
You should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services.
Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:
How well do recent service requests match the training data used to build your model? This is called training-serving skew.
How significantly are service requests evolving over time? This is called drift detection.
Vertex Explainable AI adds another facet to model monitoring, which we call feature attribution monitoring. Explainable AI enables you to understand the relative contribution of each feature to a resulting prediction. In essence, it assesses the magnitude of each feature's influence.
If production traffic differs from training data, or varies substantially over time, either in terms of model predictions or feature attributions, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that you can anticipate problems before they affect your customer experiences or your revenue streams.
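To make the skew idea concrete, here is a minimal illustrative sketch of the kind of comparison involved: one categorical feature's training distribution versus its serving distribution, flagged when a simple distance metric crosses a threshold. The data, threshold value, and distance metric below are assumptions chosen for illustration; they are not the internals of Vertex AI Model Monitoring, which performs this analysis for you on the server side.
```python
# Illustrative only: a toy training-serving skew check for one categorical feature.
from collections import Counter

def to_distribution(values):
    # Convert raw observations into a probability distribution over categories.
    counts = Counter(values)
    total = float(sum(counts.values()))
    return {k: v / total for k, v in counts.items()}

def skew_score(train_dist, serve_dist):
    # Largest per-category probability gap (L-infinity distance).
    categories = set(train_dist) | set(serve_dist)
    return max(abs(train_dist.get(c, 0.0) - serve_dist.get(c, 0.0)) for c in categories)

# Hypothetical data and threshold.
training_countries = ["United States"] * 4395 + ["India"] * 486 + ["Japan"] * 450
serving_countries = ["Japan"] * 700 + ["United States"] * 300  # skewed traffic window
threshold = 0.3

score = skew_score(to_distribution(training_countries), to_distribution(serving_countries))
print("skew score = %.3f -> %s" % (score, "ALERT" if score > threshold else "OK"))
```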
Objective
In this notebook, you will learn how to...
deploy a pre-trained model
configure model monitoring
generate some artificial traffic
understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
BigQuery
Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
The example model
The model you'll use in this notebook is based on this blog post. The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:
identity - unique player identity numbers
demographic features - information about the player, such as the geographic region in which a player is located
behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level
churn propensity - this is the label, or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.
The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project.
Before you begin
Setup your dependencies
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
os.environ["GOOGLE_CLOUD_PROJECT"] = PROJECT_ID
"""
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
If you are running this notebook locally, you will need to install the Cloud SDK.
You'll use the gcloud command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your gcloud configuration settings.
For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.
End of explanation
"""
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
"""
Explanation: Log in to your Google Cloud account and enable AI services
End of explanation
"""
# @title Utility functions
import copy
import os
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
from google.cloud.aiplatform_v1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1.types.io import BigQuerySource
from google.cloud.aiplatform_v1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1.types.prediction_service import (
ExplainRequest, PredictRequest)
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input, type="predict"):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
if type == "predict":
obj = PredictRequest
method = client.predict
elif type == "explain":
obj = ExplainRequest
method = client.explain
else:
raise Exception("unsupported request type:" + type)
params = {}
params = json_format.ParseDict(params, Value())
request = obj(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = None
try:
response = method(request)
except Exception as ex:
print(ex)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
"""
Explanation: Define some helper functions and data structures
Run the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understanding the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
End of explanation
"""
builder = SavedModelMetadataBuilder(
"gs://mco-mm/churn", outputs_to_explain=["churned_probs"]
)
builder.save_metadata(".")
md = builder.get_metadata()
del md["tags"]
del md["framework"]
"""
Explanation: Generate model metadata for explainable AI
Run the following cell to extract metadata from the exported model, which is needed for generating the prediction explanations.
End of explanation
"""
import json
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-5:latest"
ENDPOINT = "us-central1-aiplatform.googleapis.com"
churn_model_path = "gs://mco-mm/churn"
request_data = {
"model": {
"displayName": "churn",
"artifactUri": churn_model_path,
"containerSpec": {"imageUri": IMAGE},
"explanationSpec": {
"parameters": {"sampledShapleyAttribution": {"pathCount": 5}},
"metadata": md,
},
}
}
with open("request_data.json", "w") as outfile:
json.dump(request_data, outfile)
output = !curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://{ENDPOINT}/v1/projects/{PROJECT_ID}/locations/{REGION}/models:upload \
-d @request_data.json 2>/dev/null
# print(output)
MODEL_ID = output[1].split()[1].split("/")[5]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
# If auto-testing this notebook, wait for model registration
if os.getenv("IS_TESTING"):
time.sleep(300)
"""
Explanation: Import your model
The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another.
Run the next cell to import this model into your project. If you've already imported your model, you can skip this step.
End of explanation
"""
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
# print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
print(f"Model deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}.")
"""
Explanation: This request will return immediately, but it spawns an asynchronous task that takes several minutes. Periodically check the Vertex Models page on the Cloud Console and don't continue with this lab until you see your newly created model there. It should look something like this:
<br>
<br>
<img src="https://storage.googleapis.com/mco-general/img/mm0.png" />
<br>
Deploy your endpoint
Now that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.
Run the next cell to deploy your model to an endpoint. This will take about ten minutes to complete.
End of explanation
"""
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
for i in resp.predictions:
vals = i["churned_values"]
probs = i["churned_probs"]
for i in range(len(vals)):
print(vals[i], probs[i])
plt.pie(probs, labels=vals)
plt.show()
pp.pprint(resp)
except Exception as ex:
print("prediction request failed", ex)
"""
Explanation: Run a prediction test
Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON, along with a pie chart summarizing the results.
Try this now by running the next cell and examining the results.
End of explanation
"""
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
features = []
scores = []
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT, type="explain")
for i in resp.explanations:
for j in i.attributions:
for k in j.feature_attributions:
features.append(k)
scores.append(j.feature_attributions[k])
features = [x for _, x in sorted(zip(scores, features))]
scores = sorted(scores)
fig, ax = plt.subplots()
fig.set_size_inches(9, 9)
ax.barh(features, scores)
fig.show()
# pp.pprint(resp)
except Exception as ex:
print("explanation request failed", ex)
"""
Explanation: Taking a closer look at the results, we see the following elements:
churned_values - a set of possible values (0 and 1) for the target field
churned_probs - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)
predicted_churn - based on the probabilities, the predicted value of the target field (1)
This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application.
Run an explanation test
We can also run a test of explainable AI on this endpoint. Run the next cell to send a test explanation request. If everything works as expected, you should receive a response encoding the feature importance of this prediction in a text representation called JSON, along with a bar chart summarizing the results.
Try this now by running the next cell and examining the results.
End of explanation
"""
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
ATTRIB_SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
ATTRIB_SKEW_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
ATTRIB_DRIFT_DEFAULT_THRESHOLDS = (
"country,cnt_user_engagement" # @param {type:"string"}
)
ATTRIB_DRIFT_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
"""
Explanation: Start your monitoring job
Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.
In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML.
Configure the following fields:
Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.
Monitor interval - time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds)
Target field - prediction target column name in training dataset
Skew detection threshold - skew threshold for each feature you want to monitor
Prediction drift threshold - drift threshold for each feature you want to monitor
Attribution Skew detection threshold - feature importance skew threshold
Attribution Prediction drift threshold - feature importance drift threshold
End of explanation
"""
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
attrib_skew_thresholds = get_thresholds(
ATTRIB_SKEW_DEFAULT_THRESHOLDS, ATTRIB_SKEW_CUSTOM_THRESHOLDS
)
attrib_drift_thresholds = get_thresholds(
ATTRIB_DRIFT_DEFAULT_THRESHOLDS, ATTRIB_DRIFT_CUSTOM_THRESHOLDS
)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds,
attribution_score_skew_thresholds=attrib_skew_thresholds,
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds,
attribution_score_drift_thresholds=attrib_drift_thresholds,
)
explanation_config = ModelMonitoringObjectiveConfig.ExplanationConfig(
enable_feature_attributes=True
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
explanation_config=explanation_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
"""
Explanation: Create your monitoring job
The following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
End of explanation
"""
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
"""
Explanation: After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:
<br>
<br>
<img src="https://storage.googleapis.com/mco-general/img/mm6.png" />
<br>
As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to see an example of the layout of these measurements in Cloud Storage. If you substitute the Cloud Storage URL from your job creation email, you can view the structure and content of the data files for your own monitoring job.
End of explanation
"""
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
start = 2
end = 3
for multiplier in range(start, end + 1):
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {
"cnt_level_start_quickplay": (
lambda x: x * multiplier,
lambda x: x / multiplier,
)
}
perturb_cat = {"Japan": max(COUNTRY.values()) * multiplier}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
if multiplier < end:
print("sleeping...")
time.sleep(60)
"""
Explanation: You will notice the following components in these Cloud Storage paths:
cloud-ai-platform-.. - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.
[model_monitoring|instance_schemas]/job-.. - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification.
instance_schemas/job-../analysis - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).
instance_schemas/job-../predict - This is the first prediction made to your model after the current monitoring job was enabled.
model_monitoring/job-../serving - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.
model_monitoring/job-../training - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data.
model_monitoring/job-../feature_attribution_score - This folder is used to record data relevant to feature attribution calculations. It contains an ongoing summary of feature attribution scores relative to training data.
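If you prefer to explore these folders programmatically rather than with gsutil, the following sketch uses the google-cloud-storage client installed earlier in this notebook. The bucket name and job prefix below are placeholders; substitute the values from your own job creation email.
```python
# Sketch: list monitoring artifacts with the Cloud Storage client instead of gsutil.
from google.cloud import storage

BUCKET_NAME = "cloud-ai-platform-your-bucket-suffix"  # placeholder
PREFIX = "model_monitoring/job-0000000000000000000/"  # placeholder

storage_client = storage.Client(project=PROJECT_ID)
for blob in storage_client.list_blobs(BUCKET_NAME, prefix=PREFIX):
    print(blob.name)
```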
You can create monitoring jobs with other user interfaces
In the previous cells, you created a monitoring job using the Python client library. You can also use the gcloud command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this as well.
Generate test data to trigger alerting
Now you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. This cell runs two five-minute tests, one minute apart, so it should take roughly eleven minutes to complete.
The first test sends 300 fabricated requests (one per second for five minutes) while perturbing two features of interest (cnt_level_start_quickplay and country) by a factor of two. The second test does the same thing but perturbs the selected feature distributions by a factor of three. By perturbing data in two experiments, we're able to trigger both skew and drift alerts.
After running this test, it takes at least an hour to assess and report skew and drift alerts, so feel free to proceed with the notebook now; you'll see how to examine the resulting alerts later.
End of explanation
"""
# Delete endpoint resource
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete model resource
!gcloud ai models delete $MODEL_ID --quiet
"""
Explanation: Interpret your results
While waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience.
Here's what a sample email alert looks like...
<img src="https://storage.googleapis.com/mco-general/img/mm7.png" />
This email is warning you that the cnt_level_start_quickplay, cnt_user_engagement, and country feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the cnt_user_engagement and country feature attribution values are skewed relative to your training data, again, as per your threshold specification.
Monitoring results in the Cloud Console
You can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities.
Monitoring Status
You can verify that a given endpoint has an active model monitoring job via the Endpoint summary page:
<img src="https://storage.googleapis.com/mco-general/img/mm1.png" />
Monitoring Alerts
You can examine the alert details by clicking into the endpoint of interest, and selecting the alerts panel:
<img src="https://storage.googleapis.com/mco-general/img/mm2.png" />
Feature Value Distributions
You can also examine the recorded training and production feature distributions by drilling down into a given feature, like this:
<img src="https://storage.googleapis.com/mco-general/img/mm9.png" />
which yields graphical representations of the feature distribution during both training and production, like this:
<img src="https://storage.googleapis.com/mco-general/img/mm8.png" />
Clean up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
"""
|
daniestevez/jupyter_notebooks
|
GPS_timing/GPS timing.ipynb
|
gpl-3.0
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
plt.rcParams['font.size'] = 14
plt.rcParams['figure.facecolor'] = 'w'
plt.rcParams['figure.figsize'] = (10, 5)
"""
Explanation: GPS timing
This notebook shows how to process the output of GNSS-DSP-tools to measure the time of transmission of a GPS signal.
End of explanation
"""
chip_rate = 1.023e6 # GPS L1 C/A chip rate
samp_rate = 4e6 # This should match the IQ recording sample rate
"""
Explanation: Some parameters:
End of explanation
"""
code_offset = 151.6 # This should match the code offset used in track-gps-l1.py
samples_skipped = int(samp_rate*0.001*((1023-code_offset)/1023))
samples_skipped
"""
Explanation: The track-gps-l1.py script skips some samples at the beginning of the IQ file in order to start at the beginning of a PRN repetition. Here we calculate how many samples were skipped.
End of explanation
"""
with open('track.txt') as f:
track = f.readlines()
iq = np.array([float(a.split()[1]) + 1j*float(a.split()[2]) for a in track])
samp = np.array([int(a.split()[-1]) for a in track]) + samples_skipped
carrier_f = np.array([float(a.split()[3]) for a in track])
code_cyc = np.array([int(a.split()[-5]) for a in track])
code_p = np.array([float(a.split()[-4]) for a in track])
samp_seconds = samp / samp_rate
"""
Explanation: Read some of the columns of the output of track-gps-l1.py into numpy arrays.
End of explanation
"""
plt.plot(samp_seconds, carrier_f)
plt.title('Carrier frequency')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)');
"""
Explanation: The carrier frequency can be used to check if the tracking loops are locked. After a second or so, the carrier frequency should follow the Doppler of the satellite (plus receiver clock drift).
End of explanation
"""
plt.plot(samp_seconds, iq.real, '.', label='I')
plt.plot(samp_seconds, iq.imag, '.', label='Q')
plt.title('Integrate & dump output')
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Amplitude');
"""
Explanation: The IQ dumps from the integrate & dump output also indicate the quality of the tracking, and we will use them to find the start of the subframes and read the TOW (time of week).
End of explanation
"""
t0 = 8000 # Use data midway through the recording
length = 4000 # Use only 4 seconds of data
dumps_per_symbol = 20
symbol_sums = np.empty(dumps_per_symbol)
for j in range(dumps_per_symbol):
symbols = np.average(iq.real[t0+j:][:length].reshape(-1, dumps_per_symbol),
axis=1)
symbol_sums[j] = np.average(np.abs(symbols))
plt.plot(symbol_sums, 'o-')
plt.xticks(np.arange(dumps_per_symbol))
plt.grid()
plt.xlabel('Offset (dumps)')
plt.ylabel('Amplitude')
plt.title('Symbol amplitude');
"""
Explanation: Symbols are 20 dumps long. We find where symbols start by integrating the symbols for all possible offsets and choosing the one that maximizes the amplitude.
End of explanation
"""
symbol_offset = np.argmax(symbol_sums)
symbol_offset
"""
Explanation: The offset at which symbols start is given by the peak in the plot above.
End of explanation
"""
symbols = iq[symbol_offset:]
symbols = symbols[:symbols.size//dumps_per_symbol*dumps_per_symbol]
symbols = np.average(symbols.reshape(-1, dumps_per_symbol), axis=1)
symbols_seconds = samp_seconds[symbol_offset:][::dumps_per_symbol][:symbols.size]
plt.plot(symbols_seconds, symbols.real, '.', label='I')
plt.plot(symbols_seconds, symbols.imag, '.', label='Q')
plt.title('Symbols')
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Amplitude');
"""
Explanation: We now integrate and plot the symbols.
End of explanation
"""
preamble = 2*np.array([1, 0, 0, 0, 1, 0, 1, 1]) - 1
bits = np.sign(symbols.real)
corr_preamble = np.correlate(bits, preamble) / preamble.size
plt.plot(corr_preamble)
plt.title('Correlation of bits and preamble')
plt.ylabel('Correlation (normalized)')
plt.xlabel('Symbol number');
"""
Explanation: We compute and plot the correlation between the bits (the sign of the real part of the symbols) and the 8-bit preamble that is transmitted at the beginning of each 6-second subframe. Finding the beginning of the subframes is tricky because this 8-bit pattern can also occur elsewhere, and the bit stream may be inverted (due to 180º phase ambiguity in the Costas loop).
End of explanation
"""
np.where(corr_preamble == 1)[0]
np.where(corr_preamble == -1)[0]
preamble_start = 92
"""
Explanation: We search for correlation peaks that reach the maximum amplitude and that are separated by 300 symbols (6 seconds), in order to guess which peaks really correspond to the preamble.
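As a cross-check on the manually chosen value, the following sketch (an addition for illustration, not part of track-gps-l1.py) keeps only the correlation peaks, of either polarity, that recur at multiples of 300 symbols:
```python
# Keep only correlation peaks (either polarity) that repeat at multiples of
# 300 symbols (one 6-second subframe); these are the likely preamble starts.
peaks = np.sort(np.concatenate((np.where(corr_preamble == 1)[0],
                                np.where(corr_preamble == -1)[0])))
candidates = [int(p) for p in peaks if np.sum((peaks - p) % 300 == 0) > 1]
print(candidates)
```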
End of explanation
"""
def get_tow(bits, preamble_start):
# handle inversion according to the last bit of the previous word
tow = bits[preamble_start + 30:][:17] * (-1) * bits[preamble_start + 29]
tow = ((tow + 1) / 2).astype('uint8')
tow = np.packbits(tow)
tow = ((np.int32(tow[0]) << 16) | (np.int32(tow[1]) << 8) | np.int32(tow[2])) >> 7
tow = tow * 6
return tow
"""
Explanation: To get the TOW, we need to read a 17-bit number somewhere in the subframe (in the 2nd word of the subframe), and XOR it with the last bit of the previous word. The TOW is given in units of 6 seconds, so we multiply it by 6 to obtain seconds.
End of explanation
"""
tow0 = get_tow(bits, preamble_start)
tow0
tow1 = get_tow(bits, preamble_start + 300)
tow1
"""
Explanation: We get the TOWs of two subframes to check that we're reading it correctly. The two TOWs should differ by 6 seconds.
End of explanation
"""
# 6 seconds accounts for the fact that the TOW field
# gives the TOW at the start of the next subframe
tow_first_dump = tow0 - 6 - preamble_start / 50 - symbol_offset / 1000
tow_first_dump
"""
Explanation: We now compute the TOW at the first dump in the track-gps-l1.py output, taking into account that the TOW given in a subframe corresponds to the TOW at the start of the next subframe, and counting in which dump this subframe starts.
End of explanation
"""
(tow_first_dump // (24 * 3600), (tow_first_dump % (24 * 3600)) // 3600,
(tow_first_dump % 3600) // 60, tow_first_dump % 60)
"""
Explanation: We now decompose the TOW as (day of week, hour, minute, seconds). This also serves as a cross-check to see if we have calculated it correctly. Remember that GPS time is ahead of UTC by 18 seconds.
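For reference, here is a quick sketch of the corresponding UTC time of week, obtained by subtracting the 18 leap seconds mentioned above (valid at the time of this recording):
```python
# UTC time of week: GPS time of week minus the 18-second leap-second offset.
utc_tow = tow_first_dump - 18
(utc_tow // (24 * 3600), (utc_tow % (24 * 3600)) // 3600,
 (utc_tow % 3600) // 60, utc_tow % 60)
```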
End of explanation
"""
code_chips = code_cyc + code_p
code_secs = code_chips / chip_rate
plt.plot(samp_seconds, code_secs)
plt.ylabel('Code phase (s)')
plt.xlabel('Time (s)')
plt.title('Unwrapped code phase');
"""
Explanation: The unwrapped code phase counts how the satellite transmission time (with an initial ambiguity of 1ms) evolves over time. Since the DLL takes some seconds to lock, the measurement is not accurate over the first seconds.
End of explanation
"""
sel_fit = samp_seconds >= 5
poly_code = np.polyfit(samp_seconds[sel_fit], code_secs[sel_fit], 2)
fig, axs = plt.subplots(2, 1, sharex=True)
axs[0].plot(samp_seconds, code_secs - np.polyval(poly_code, samp_seconds))
axs[1].plot(samp_seconds, code_secs - np.polyval(poly_code, samp_seconds))
axs[1].set_ylim((-1e-9, 1e-9))
axs[0].set_ylabel('Difference (s)')
axs[1].set_ylabel('Difference (s)')
plt.xlabel('Time (s)')
plt.suptitle('Difference between code phase and degree 2 polynomial fit');
"""
Explanation: We fit a polynomial of degree 2 ignoring the first 5 seconds in order to extrapolate back a more accurate value for the code phase at the start of the recording.
End of explanation
"""
(np.polyval(poly_code, 0) % 1e-3) * chip_rate - code_offset
"""
Explanation: A sanity check for the extrapolated code phase at the start of the recording. In units of chips it should be close to the code offset that we have indicated to track-gps-l1.py.
End of explanation
"""
# Here -1e-3 accounts for the fact that code_secs starts counting
# at the PRN repetition that starts just before the recording start,
# while tow_first_dump refers to the first PRN repetition that starts
# after the recording start.
tx_time_start = np.polyval(poly_code, 0) + tow_first_dump - 1e-3
tx_time_start
"""
Explanation: We use this extrapolation and the TOW at the first dump to compute the satellite transmission time at the beginning of the recording.
End of explanation
"""
time_of_flight = 0.074602692582
gps_time_start = tx_time_start + time_of_flight
gps_time_start
"""
Explanation: Now we use the compute binary, which uses RTKLIB to compute the time of flight of the signal of the satellite at the start of the recording, using the known position of the antenna and the satellite ephemerides. The calculated time of flight is inserted here and used to derive the GPS time at the start of the recording.
End of explanation
"""
usrp_timestamp = 41542.2147593125
gps_time_start - usrp_timestamp
"""
Explanation: We now compare the GPS time with the timestamp that UHD has included in the recording metadata using the PPS synchronization (we have converted it to TOW taking into account the 18 second difference between GPS and UTC).
End of explanation
"""
# G02
tf0_2 = 0.079009097743
code_offset2 = 758.8
(usrp_timestamp - tf0_2) % 1e-3 * 1.023e6 - code_offset2
# G11
tf0_11 = 0.078571217012
code_offset11 = 185.8
(usrp_timestamp - tf0_11) % 1e-3 * 1.023e6 - code_offset11
# G12
tf0_12 = 0.074602813604
code_offset12 = 151.6
(usrp_timestamp - tf0_12) % 1e-3 * 1.023e6 - code_offset12
# G22
tf0_22 = 0.073296200402
code_offset22 = 465.0
(usrp_timestamp - tf0_22) % 1e-3 * 1.023e6 - code_offset22
# G25
tf0_25 = 0.067335350355
code_offset25 = 425.1
(usrp_timestamp - tf0_25) % 1e-3 * 1.023e6 - code_offset25
# G31
tf0_31 = 0.074203745832
code_offset31 = 559.7
(usrp_timestamp - tf0_31) % 1e-3 * 1.023e6 - code_offset31
# G32
tf0_32 = 0.073085740178
code_offset32 = 680.3
(usrp_timestamp - tf0_32) % 1e-3 * 1.023e6 - code_offset32
"""
Explanation: Another cross-check. Now that we know that the UHD timestamp is reasonably accurate, we use it to compute the time of flight of each satellite at the beginning of the recording. We check the difference between the code offset reported by acquire-gps-l1.py and the code offset computed by subtracting the time of flight from the UHD timestamp. We show the result in units of chips. All the satellites should show a very similar result (within a fraction of a chip). Moreover, the results should be small (a few chips), since the UHD timestamp error is a few microseconds.
Note that G11, which was marked as unhealthy, is off by ~2 chips.
End of explanation
"""
|
ClaudioVZ/Metodos_numericos_I
|
01_Raices_de_ecuaciones_de_una_variable/06_Secante.ipynb
|
gpl-2.0
|
def diferencia_atras(f, x_0, x_1):
pendiente = (f(x_0) - f(x_1))/(x_0 - x_1)
return pendiente
def raiz(f, a, b):
c = b - f(b)/diferencia_atras(f, a, b)
return b, c
"""
Explanation: Secant method
The secant method is an extension of the Newton-Raphson method: the derivative of the function is computed using a backward finite difference
\begin{equation}
f'(x_{i}) = \frac{f(x_{i-1}) - f(x_{i})}{x_{i-1} - x_{i}}
\end{equation}
which is then substituted into the Newton-Raphson formula
\begin{equation}
x_{i+1} = x_{i} - \frac{1}{f'(x_{i})} f(x_{i}) = x_{i} - \frac{x_{i-1} - x_{i}}{f(x_{i-1}) - f(x_{i})} f(x_{i})
\end{equation}
Algorithm
x_-1 is the previous approximate root
x_0 is the current approximate root
x_1 = x_0 - f(x_0)*(x_-1 - x_0)/(f(x_-1) - f(x_0))
x_2 = x_1 - f(x_1)*(x_0 - x_1)/(f(x_0) - f(x_1))
x_3 = x_2 - f(x_2)*(x_1 - x_2)/(f(x_1) - f(x_2))
...
Example 1
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x = 0$ and $x = -1$ as initial values
Iteration 0
Previous approximate root
\begin{equation}
x_{-1} = 0
\end{equation}
Current approximate root
\begin{equation}
x_{0} = -1
\end{equation}
Relative error
\begin{equation}
e_{r} = ?
\end{equation}
Iteration 1
Computing the ordinates at the previous points
\begin{align}
f(x_{-1}) &= f(0) = 3 \\
f(x_{0}) &= f(-1) = 1
\end{align}
Previous approximate root
\begin{equation}
x_{0} = -1
\end{equation}
Current approximate root
\begin{equation}
x_{1} = x_{0} - \frac{x_{-1} - x_{0}}{f(x_{-1}) - f(x_{0})} f(x_{0}) = -1 - \frac{0 - (-1)}{3 - 1} 1 = -1.5
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{1} - x_{0}}{x_{1}}\bigg| \times 100\% = \bigg|\frac{-1.5 - (-1)}{-1.5}\bigg| \times 100\% = 33.33\%
\end{equation}
Iteration 2
Computing the ordinates at the previous points
\begin{align}
f(x_{0}) &= f(-1) = 1 \\
f(x_{1}) &= f(-1.5) = -7.96875
\end{align}
Previous approximate root
\begin{equation}
x_{1} = -1.5
\end{equation}
Current approximate root
\begin{equation}
x_{2} = x_{1} - \frac{x_{0} - x_{1}}{f(x_{0}) - f(x_{1})} f(x_{1}) = -1.5 - \frac{-1 - (-1.5)}{1 - (-7.96875)} (-7.96875) = -1.055749
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{2} - x_{1}}{x_{2}}\bigg| \times 100\% = \bigg|\frac{-1.055749 - (-1.5)}{-1.055749}\bigg| \times 100\% = 42.08\%
\end{equation}
Iteration 3
Computing the ordinates at the previous points
\begin{align}
f(x_{1}) &= f(-1.5) = -7.96875 \\
f(x_{2}) &= f(-1.055749) = 0.511650
\end{align}
Previous approximate root
\begin{equation}
x_{2} = -1.055749
\end{equation}
Current approximate root
\begin{equation}
x_{3} = x_{2} - \frac{x_{1} - x_{2}}{f(x_{1}) - f(x_{2})} f(x_{2}) = -1.055749 - \frac{-1.5 - (-1.055749)}{-7.96875 - 0.511650} 0.511650 = -1.082552
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{3} - x_{2}}{x_{3}}\bigg| \times 100\% = \bigg|\frac{-1.082552 - (-1.055749)}{-1.082552}\bigg| \times 100\% = 2.48\%
\end{equation}
Implementation of auxiliary functions
Pseudocode for the derivative
pascal
function diferencia_atras(f(x), x_0, x_1)
    f'(x) = (f(x_0) - f(x_1))/(x_0 - x_1)
    return f'(x)
end function
Pseudocode to obtain the last two roots
pascal
function raiz(f(x), a, b):
    c = b - f(b)/diferencia_atras(f(x), a, b)
    return b, c
end function
End of explanation
"""
def secante(f, x_0, x_1):
print("{0:s} \t {1:15s} \t {2:15s} \t {3:15s}".format('i', 'x anterior', 'x actual', 'error relativo %'))
x_anterior = x_0
x_actual = x_1
i = 0
print("{0:d} \t {1:.15f} \t {2:.15f} \t {3:15s}".format(i, x_anterior, x_actual, '???????????????'))
error_permitido = 0.000001
while True:
x_anterior, x_actual = raiz(f, x_anterior, x_actual)
if x_actual != 0:
error_relativo = abs((x_actual - x_anterior)/x_actual)*100
i = i + 1
print("{0:d} \t {1:.15f} \t {2:.15f} \t {3:15.11f}".format(i, x_anterior, x_actual, error_relativo))
if (error_relativo < error_permitido) or (i>=20):
break
print('\nx =', x_actual)
"""
Explanation: Non-vectorized implementation
Pseudocode
pascal
function secante(f(x), x_0, x_1)
    x_anterior = x_0
    x_actual = x_1
    error_permitido = 0.000001
    while(True)
        x_anterior, x_actual = raiz(f(x), x_anterior, x_actual)
        if x_actual != 0
            error_relativo = abs((x_actual - x_anterior)/x_actual)*100
        end if
        if error_relativo < error_permitido
            exit
        end if
    end while
    display x_actual
end function
or alternatively
pascal
function secante(f(x), x_0, x_1)
    x_anterior = x_0
    x_actual = x_1
    for 1 to maxima_iteracion do
        x_anterior, x_actual = raiz(f(x), x_anterior, x_actual)
    end for
    display x_actual
end function
End of explanation
"""
def f(x):
# f(x) = x^5 + x^3 + 3
y = x**5 + x**3 + 3
return y
diferencia_atras(f, 0, -1)
raiz(f, 0, -1)
secante(f, 0, -1)
"""
Explanation: Example 2
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x_{-1} = 0$ and $x_{0} = -1$
End of explanation
"""
secante(f, 0, -0.5)
"""
Explanation: Example 3
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x_{-1} = 0$ and $x_{0} = -0.5$
End of explanation
"""
|
rice-solar-physics/hot_plasma_single_nanoflares
|
notebooks/plot_state_space.ipynb
|
bsd-2-clause
|
import os
import sys
import pickle
import numpy as np
import astropy.constants as const
import seaborn.apionly as sns
import matplotlib.pyplot as plt
from matplotlib import ticker
%matplotlib inline
plt.rcParams.update({'figure.figsize' : [8,8]})
"""
Explanation: Plot Temperature, Density, and Pressure State Space
Here, we show the state space plot for an EBTEL run where only the electrons are heated and the pulse duration is $\tau=200$ s.
End of explanation
"""
with open(__depends__[0],'rb') as f:
ebtel_results = pickle.load(f)
"""
Explanation: Load in the EBTEL results.
End of explanation
"""
fig = plt.figure()
ax = fig.gca()
axn = ax.twinx()
#total pressure--single fluid
linep = ax.plot(ebtel_results[2]['T'],2.*const.k_B.cgs.value*ebtel_results[2]['n']*ebtel_results[2]['T'],
color=sns.color_palette('deep')[0],linestyle='solid',label=r'$p$')
#total pressure--two fluid
linep_tot = ax.plot(ebtel_results[2]['Tee'],
const.k_B.cgs.value*ebtel_results[2]['ne']*ebtel_results[2]['Tee']+const.k_B.cgs.value*ebtel_results[2]['ne']*ebtel_results[2]['Tei'],
color=sns.color_palette('deep')[0],linestyle='dotted',label=r'$p_e+p_i$')
#electron pressure
linepe = ax.plot(ebtel_results[2]['Tee'],
const.k_B.cgs.value*ebtel_results[2]['ne']*ebtel_results[2]['Tee'],
color=sns.color_palette('deep')[0],linestyle='dashed',label=r'$p_e$')
#ion pressure
linepi = ax.plot(ebtel_results[2]['Tee'],
const.k_B.cgs.value*ebtel_results[2]['ne']*ebtel_results[2]['Tei'],
color=sns.color_palette('deep')[0],linestyle='-.',label=r'$p_i$')
#density--single-fluid
linensf = axn.plot(ebtel_results[2]['T'],ebtel_results[2]['n'],
color=sns.color_palette('deep')[2],linestyle='solid',label=r'$n_{sf}$')
#density--two-fluid
linentf = axn.plot(ebtel_results[2]['Tee'],ebtel_results[2]['ne'],
color=sns.color_palette('deep')[2],linestyle='dashed',label=r'$n_{tf}$')
#axes properties
#limits
ax.set_xlim([10**5.5,10**7.2])
axn.set_xlim([10**5.5,10**7.2])
#scale
ax.set_yscale('log')
axn.set_yscale('log')
ax.set_xscale('log')
axn.set_xscale('log')
#labels
ax.set_xlabel(r'$T$ $\mathrm{(K)}$')
ax.set_ylabel(r'$p$ $(\mathrm{dyne}$ $\mathrm{cm}^{-2})$')
axn.set_ylabel(r'$n$ $(\mathrm{cm}^{-3})$')
#legend
lines = linep + linep_tot + linepe + linepi + linensf + linentf
labels = []
[labels.append(l.get_label()) for l in lines]
ax.legend(lines,labels,loc=2,ncol=2)
#show
plt.savefig(__dest__)
plt.show()
"""
Explanation: Build the plot.
End of explanation
"""
|
neoscreenager/JupyterNotebookWhirlwindTourOfPython
|
indic_nlp_examples.ipynb
|
gpl-3.0
|
# The path to the local git repo for Indic NLP library
INDIC_NLP_LIB_HOME=r"e:\indic_nlp_library"
# The path to the local git repo for Indic NLP Resources
INDIC_NLP_RESOURCES=r"e:\indic_nlp_resources"
"""
Explanation: Indic NLP Library
The goal of the Indic NLP Library is to build Python based libraries for common text processing and Natural Language Processing in Indian languages. Indian languages share a lot of similarity in terms of script, phonology, language syntax, etc. and this library is an attempt to provide a general solution to very commonly required toolsets for Indian language text.
The library provides the following functionalities:
Text Normalization
Script Conversion
Romanization
Indicization
Script Information
Phonetic Similarity
Syllabification
Tokenization
Word Segmentation
Transliteration
Translation
The data resources required by the Indic NLP Library are hosted in a different repository. These resources are required for some modules. You can download from the Indic NLP Resources project.
Pre-requisites
Python 2.7+
Morfessor 2.0 Python Library
Getting Started
----- Set these variables -----
End of explanation
"""
import sys
sys.path.append('{}/src'.format(INDIC_NLP_LIB_HOME))
"""
Explanation: Add Library to Python path
End of explanation
"""
from indicnlp import common
common.set_resources_path(INDIC_NLP_RESOURCES)
"""
Explanation: Export environment variable
export INDIC_RESOURCES_PATH=<path>
OR
set it programmatically
We will use that method for this demo
End of explanation
"""
from indicnlp import loader
loader.load()
"""
Explanation: Initialize the Indic NLP library
End of explanation
"""
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
input_text=u"\u0958 \u0915\u093c"
remove_nuktas=False
factory=IndicNormalizerFactory()
normalizer=factory.get_normalizer("hi",remove_nuktas)
output_text=normalizer.normalize(input_text)
print output_text
print 'Length before normalization: {}'.format(len(input_text))
print 'Length after normalization: {}'.format(len(output_text))
"""
Explanation: Let's actually try out some of the API methods in the Indic NLP library
Many of the API functions require a language code. We use 2-letter ISO 639-1 codes. Some languages do not have assigned 2-letter codes. We use the following two-letter codes for such languages:
Konkani: kK
Manipuri: mP
Bodo: bD
Text Normalization
Text written in Indic scripts displays a lot of quirky behaviour on account of varying input methods, multiple representations for the same character, etc.
There is a need to canonicalize the representation of text so that NLP applications can handle the data in a consistent manner. The canonicalization primarily handles the following issues:
- Non-spacing characters like ZWJ/ZWNJ
- Multiple representations of Nukta based characters
- Multiple representations of two part dependent vowel signs
- Typing inconsistencies: e.g. use of pipe (|) for poorna virama
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator
input_text=u'राजस्थान'
print UnicodeIndicTransliterator.transliterate(input_text,"hi","ta")
"""
Explanation: Script Conversion
Convert from one Indic script to another. This is a simple script which exploits the fact that Unicode points of various Indic scripts are at corresponding offsets from the base codepoint for that script. The following scripts are supported:
Devanagari (Hindi,Marathi,Sanskrit,Konkani,Sindhi,Nepali), Assamese, Bengali, Oriya, Gujarati, Gurumukhi (Punjabi), Sindhi, Tamil, Telugu, Kannada, Malayalam
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
input_text=u'राजस्थान'
lang='hi'
print ItransTransliterator.to_itrans(input_text,lang)
"""
Explanation: Romanization
Convert script text to Roman text in the ITRANS notation
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
# input_text=u'rajasthAna'
input_text=u'pitL^In'
lang='hi'
x=ItransTransliterator.from_itrans(input_text,lang)
print x
for y in x:
print '{:x}'.format(ord(y))
"""
Explanation: Indicization (ITRANS to Indic Script)
Let's call conversion of ITRANS-transliteration to an Indic script as Indicization!
End of explanation
"""
from indicnlp.script import indic_scripts as isc
c=u'क'
lang='hi'
isc.get_phonetic_feature_vector(c,lang)
"""
Explanation: Script Information
Indic scripts have been designed with phonetic principles in mind, and the design and organization of the scripts make it easy to obtain phonetic information about the characters.
Get Phonetic Feature Vector
Each script character is associated with a phonetic feature vector that encodes the phonetic properties of the character. This is a bit vector, which can be obtained as shown below:
End of explanation
"""
sorted(isc.PV_PROP_RANGES.iteritems(),key=lambda x:x[1][0])
"""
Explanation: The fields in this bit vector are (from left to right):
End of explanation
"""
from indicnlp.langinfo import *
c=u'क'
lang='hi'
print 'Is vowel?: {}'.format(is_vowel(c,lang))
print 'Is consonant?: {}'.format(is_consonant(c,lang))
print 'Is velar?: {}'.format(is_velar(c,lang))
print 'Is palatal?: {}'.format(is_palatal(c,lang))
print 'Is aspirated?: {}'.format(is_aspirated(c,lang))
print 'Is unvoiced?: {}'.format(is_unvoiced(c,lang))
print 'Is nasal?: {}'.format(is_nasal(c,lang))
"""
Explanation: You can check the phonetic information database files in Indic NLP resources to know the definition of each of the bits.
For Tamil Script: database
For other Indic Scripts: database
Query Phonetic Properties
Note: The interface below will be deprecated soon and replaced by a new one.
End of explanation
"""
from indicnlp.script import indic_scripts as isc
from indicnlp.script import phonetic_sim as psim
c1=u'क'
c2=u'ख'
c3=u'भ'
lang='hi'
print u'Similarity between {} and {}'.format(c1,c2)
print psim.cosine(
isc.get_phonetic_feature_vector(c1,lang),
isc.get_phonetic_feature_vector(c2,lang)
)
print
print u'Similarity between {} and {}'.format(c1,c3)
print psim.cosine(
isc.get_phonetic_feature_vector(c1,lang),
isc.get_phonetic_feature_vector(c3,lang)
)
"""
Explanation: Get Phonetic Similarity
Using the phonetic feature vectors, we can define phonetic similarity between characters (and the underlying phonemes). The library implements several such similarity measures; since they are all defined in terms of the feature vectors discussed earlier, users can also implement additional measures of their own.
The implemented similarity measures are:
cosine
dice
jaccard
dot_product
sim1 (Kunchukuttan et al., 2016)
softmax
References
Anoop Kunchukuttan, Pushpak Bhattacharyya, Mitesh Khapra. Substring-based unsupervised transliteration with phonetic and contextual knowledge. SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016) . 2016.
End of explanation
"""
from indicnlp.script import indic_scripts as isc
from indicnlp.script import phonetic_sim as psim
slang='hi'
tlang='ml'
sim_mat=psim.create_similarity_matrix(psim.cosine,slang,tlang,normalize=False)
c1=u'क'
c2=u'ഖ'
print u'Similarity between {} and {}'.format(c1,c2)
print sim_mat[isc.get_offset(c1,slang),isc.get_offset(c2,tlang)]
"""
Explanation: You may have figured out that you can also compute similarities of characters belonging to different scripts.
You can also get a similarity matrix which contains the similarities between all pairs of characters (within the same script or across scripts).
Let's see how we can compare the characters across Devanagari and Malayalam scripts
End of explanation
"""
slang='hi'
tlang='ml'
sim_mat=psim.create_similarity_matrix(psim.sim1,slang,tlang,normalize=True)
c1=u'क'
c2=u'ഖ'
print u'Similarity between {} and {}'.format(c1,c2)
print sim_mat[isc.get_offset(c1,slang),isc.get_offset(c2,tlang)]
"""
Explanation: Some similarity functions, like sim1, do not generate values in the range [0,1], and it may be more convenient to have the similarity values in that range. This can be achieved by setting the normalize parameter to True
End of explanation
"""
from indicnlp.syllable import syllabifier
w=u'जगदीशचंद्र'
lang='hi'
print u' '.join(syllabifier.orthographic_syllabify(w,lang))
"""
Explanation: Orthographic Syllabification
Orthographic Syllabification is an approximate syllabification process for Indic scripts, where CV+ units are defined to be orthographic syllables.
See the following paper for details:
Anoop Kunchukuttan, Pushpak Bhattacharyya. Orthographic Syllable as basic unit for SMT between Related Languages. Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). 2016.
End of explanation
"""
from indicnlp.tokenize import indic_tokenize
indic_string=u'अनूप,अनूप?।फोन'
print u'Input String: {}'.format(indic_string)
print u'Tokens: '
for t in indic_tokenize.trivial_tokenize(indic_string):
print t
"""
Explanation: Tokenization
A trivial tokenizer which just tokenizes on punctuation boundaries. This also includes punctuation marks specific to the Indian language scripts (the purna virama and the deergha virama). It returns a list of tokens.
End of explanation
"""
from indicnlp.morph import unsupervised_morph
from indicnlp import common
analyzer=unsupervised_morph.UnsupervisedMorphAnalyzer('mr')
indic_string=u'आपल्या हिरड्यांच्या आणि दातांच्यामध्ये जीवाणू असतात .'
analyzes_tokens=analyzer.morph_analyze_document(indic_string.split(' '))
for w in analyzes_tokens:
print w
"""
Explanation: Word Segmentation
Unsupervised morphological analysers for various Indian languages. Given a word, the analyzer returns its component morphemes.
The analyzer can recognize inflectional and derivational morphemes.
The following languages are supported:
Hindi, Punjabi, Marathi, Konkani, Gujarati, Bengali, Kannada, Tamil, Telugu, Malayalam
Support for more languages will be added soon.
End of explanation
"""
import urllib2
from django.utils.encoding import *
from django.utils.http import *
text=iri_to_uri(urlquote('anoop, ratish kal fone par baat karenge'))
url=u'http://www.cfilt.iitb.ac.in/indicnlpweb/indicnlpws/transliterate_bulk/en/hi/{}/statistical'.format(text)
response=urllib2.urlopen(url).read()
print response
"""
Explanation: Transliteration
We use the BrahmiNet REST API for transliteration.
End of explanation
"""
|
usantamaria/ipynb_para_docencia
|
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
|
mit
|
"""
IPython Notebook v4.0 for Python 3.0
Additional libraries:
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
"""
Explanation: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
Algorithms and Functions
Objectives
Understand the concepts of algorithm, code and pseudo-code.
Connect the above concepts with the construction of algorithms and functions in Python.
Motivation
Imagine you work at an insurance company that constantly needs to assess a client's level of risk, based on their record, before negotiating a product. Would it be possible to automate the process in order to work less, improve evaluation times and make the process more efficient?
The answer to this and many other questions lies in the creation of programming algorithms.
0.1 Instructions
Instructions for installing and using an ipython notebook.
Remember to:
* Work through the problems sequentially.
* Save frequently with Ctr-S to avoid surprises.
* Replace FIX_ME in the code cells with the corresponding code.
* Run each code cell with Ctr-Enter
0.2 Licensing and Configuration
Run the following cell with Ctr-Enter.
End of explanation
"""
N = int(input("Enter the number you want to study: "))
if N<=1:
    print("N must be greater than or equal to 2")
elif 2<=N<=3:
    print("{0} is prime".format(N))
else:
    es_primo = True
    for i in range(2, N):
        if N%i==0:
            es_primo = False
    if es_primo:
        print("{0} is prime".format(N))
    else:
        print("{0} is composite".format(N))
"""
Explanation: 1. Definitions and basic concepts.
We will understand an algorithm to be a series of steps that pursue a specific objective. Intuitively, we can relate it to a cooking recipe: a series of well-defined steps (leaving no room for confusion on the user's part) that must be carried out in a specific order to obtain a given result.
In general, a good algorithm should have the following characteristics:
Its implementation must not be ambiguous for any user.
It must properly define its input data (inputs).
It must produce specific output data (outputs).
It must be executable in a finite number of steps and, therefore, in finite time. (See The Halting Problem.)
On the other hand, we will call code the materialization of a given algorithm: its implementation in the appropriate syntax of a particular programming language. So, to write good, efficient code you should try to respect the ideas above: it must run in a finite number of steps, use the structures of the language appropriately, be able to read and manipulate the input data adequately, and finally deliver the desired result.
A somewhat less structured idea is the concept of pseudo-code. By this we mean an informal description of a given algorithm. It must not, however, lose the essential characteristics of an algorithm, such as clear steps and well-defined inputs and outputs, so that it can be implemented directly on the computer.
Once an algorithm has been implemented comes the process of reviewing it. To do this properly, it is recommended to answer the following questions:
1. Does my algorithm work for all possible input data?
2. How long will my algorithm take to run? How much memory does it use on my computer?
3. Now that I know my algorithm works: can it be improved? Can I make it solve my problem faster?
2. A simple example: a program for prime numbers.
Next we study the problem of determining whether an integer $N\geq 2$ is prime or not.
Consider the following numbers: 8191 (prime), 8192 (composite), 49979687 (prime), 49979689 (composite).
2.1 First program
Our first algorithm to determine whether a number is prime is: verify that no number between $2$ and $N-1$ divides $N$.
The pseudo-code is:
1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, the remainders of the division by every number between $2$ and $N-1$ are examined. If no remainder is zero, then $N$ is prime. Otherwise, the number is not prime.
The code is as follows:
End of explanation
"""
N = int(input("Enter the number you want to study: "))
if N<=1:
    print("N must be greater than or equal to 2")
elif 2<=N<=3:
    print("{0} is prime".format(N))
else:
    es_primo = True
    for i in range(2, N):
        if N%i==0:
            es_primo = False
            break
    if es_primo:
        print("{0} is prime".format(N))
    else:
        print("{0} is composite".format(N))
"""
Explanation: 2.2 Second program
When using large numbers ($N=10^7$, for example) we notice that the previous algorithm takes a long time to run, and that it scans every number. However, once a divisor is found we already know the number is not prime, and the algorithm can stop immediately. This is achieved with just one extra line, a break statement.
The algorithm to verify that a number is not prime is: check whether some number between $2$ and $N-1$ divides $N$.
The pseudo-code is:
1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, the remainders of the division by every number between $2$ and $N-1$ are examined. If any remainder is zero, then $N$ is divisible and is not prime.
The code is as follows:
End of explanation
"""
N = int(input("Enter the number you want to study: "))
if N<=1:
    print("N must be greater than or equal to 2")
elif 2<=N<=3:
    print("{0} is prime".format(N))
else:
    es_primo = True
    for i in range(2, int(N**0.5) + 1):  # checking up to sqrt(N) (inclusive) is enough
        if N%i==0:
            es_primo = False
            break
    if es_primo:
        print("{0} is prime".format(N))
    else:
        print("{0} is not prime".format(N))
"""
Explanation: For large composite numbers the run now stops at the first divisor found. However, for large prime numbers it still takes quite a while.
2.3 Third program
One last trick we can use to verify more quickly whether a number is prime is to check only part of the range of possible divisors. This is best explained with an example. Consider the number 16: its divisors are 2, 4 and 8. Since the number is composite, our previous algorithm quickly detects that 2 is a divisor, stops, and reports that 16 is not prime. Now consider the number 17: it is prime and has no divisors, so the algorithm checks the numbers 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16. However, it is only necessary to check 2, 3, 4, 5 and 6, because for a divisor greater than 6 to exist there would simultaneously have to be a divisor smaller than 6 whose product with it equals 17. That is, it suffices to check the smaller divisors, where the bound is given by the integer closest to $\sqrt{17}$ or, in general, $\sqrt{N}$.
The algorithm to verify that a number is not prime is: check whether some integer between $2$ and $\sqrt{N}$ divides $N$.
The pseudo-code is:
1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, the remainders of the division by every integer between $2$ and $\sqrt{N}$ are examined. If any remainder is zero, then $N$ is divisible and is not prime.
The code is as follows:
End of explanation
"""
def sin_inputs_ni_outputs():
    print("Hello world")
def sin_inputs():
    return "42"
def sin_outputs(a,b):
    print(a)
    print(b)
def con_input_y_output(a,b):
    return a+b
def con_tuti(a,b,c=2):
    return a+b*c
"""
Explanation: 3. Measuring complexity
As we said before, after getting an algorithm to work, one of the most important questions when reviewing it concerns measuring the time it needs to solve the problem. So the first question is: how can we measure the time an algorithm takes relative to the size of the problem it solves? This is usually called time complexity or scalability.
It is important to note, however, that measuring the time complexity of an algorithm can be a bit tricky, because: (a) the time the computer takes to perform different operations is in general heterogeneous, i.e. performing an addition is much faster than performing a division, and (b) different computers may run a given experiment in different amounts of time.
The standard notation for the complexity of an algorithm uses the capital letter O, so the complexity of some function can be expressed as O("function"), which we can interpret as the number of operations being proportional to that function times some constant. The most important complexities are:
O(1) is an algorithm of constant time complexity, i.e. the number of operations does not really change much as the problem size grows.
O(log(n)) is logarithmic complexity.
O(n) means the complexity of the problem is linear, i.e. doubling the size of the problem doubles the time required for its solution.
O($n^2$) means quadratic complexity, i.e. doubling the size of the problem quadruples the time required for its solution.
O($2^n$), and in general O($a^n$) with $a>1$, is exponential complexity.
For the algorithms developed above (a small timing comparison is sketched in a code cell further below):
1. The first algorithm has complexity O($N$): it always takes the same number of steps.
2. The second algorithm has variable complexity: if the number is composite it takes O($1$) in the best case and O($\sqrt{N}$) in the worst case (such as 25, or any prime squared), but if the number is prime it takes O($N$), since it checks every possible divisor.
3. The third algorithm also has variable complexity, but it takes at most O($\sqrt{N}$) in either case, since it only checks the divisors up to $\sqrt{N}$.
Challenge
A
B
C
Functions
When an algorithm is used very often, it is convenient to encapsulate its use in a function. It is worth noting that in computing a function does not mean the same thing as in mathematics. A function (in Python) is simply a sequence of actions executed on a set of input variables to produce a set of output variables.
Functions are defined as follows:
def nombre_de_funcion(variable_1, variable_2, variable_opcional_1=valor_por_defecto_1, ...):
    accion_1
    accion_2
    return valor_1, valor_2
Some examples follow.
End of explanation
"""
sin_inputs_ni_outputs()
"""
Explanation: The function sin_inputs_ni_outputs runs without receiving any input data and without producing any output data (and it is not very useful).
End of explanation
"""
x = sin_inputs()
print("El sentido de la vida, el universo y todo lo demás es: "+x)
"""
Explanation: The function sin_inputs runs without receiving any input data but does produce output data.
End of explanation
"""
print con_input_y_output("uno","dos")
print con_input_y_output(1,2)
print con_input_y_output(1.0, 2)
print con_input_y_output(1.0, 2.0)
"""
Explanation: The function con_input_y_output runs with input data and produces output data. Note that since Python does not require explicit type declarations, the same function can be applied to different data types as long as the logic inside the function makes sense (and does not raise errors).
End of explanation
"""
print(con_tuti(1,2))
print(con_tuti("uno","dos"))
print(con_tuti(1,2,c=3))
print(con_tuti(1,2,3))
print(con_tuti("uno","dos",3))
"""
Explanation: The function con_tuti runs with input data and default values, and produces output data.
End of explanation
"""
|
catalyst-cooperative/pudl
|
test/validate/notebooks/validate_plants_steam_ferc1.ipynb
|
mit
|
%load_ext autoreload
%autoreload 2
import sys
import pandas as pd
import sqlalchemy as sa
import pudl
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_settings
"""
Explanation: Validation of FERC Form 1 Large Steam Plants
This notebook runs sanity checks on the FERC Form 1 large steam plants table (plants_steam_ferc1). These are the same checks that the plants_steam_ferc1 validation tests run under PyTest. The notebook and its visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest-based data validations fail.
End of explanation
"""
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine)
plants_steam_ferc1 = (
pudl_out.plants_steam_ferc1().
assign(
water_limited_ratio=lambda x: x.water_limited_capacity_mw / x.capacity_mw,
not_water_limited_ratio=lambda x: x.not_water_limited_capacity_mw / x.capacity_mw,
peak_demand_ratio=lambda x: x.peak_demand_mw / x.capacity_mw,
capability_ratio=lambda x: x.plant_capability_mw / x.capacity_mw,
)
)
"""
Explanation: Pull plants_steam_ferc1 and calculate some useful values
First we pull the original (post-ETL) FERC 1 large plants data out of the PUDL database using an output object. The FERC Form 1 data only exists at annual resolution, so there's no inter-frequency aggregation to think about.
End of explanation
"""
pudl.validate.plot_vs_self(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_self)
"""
Explanation: Validating Historical Distributions
As a sanity check of the testing process itself, we can check to see whether the entire historical distribution has attributes that place it within the extremes of a historical subsampling of the distribution. In this case, we sample each historical year, and look at the range of values taken on by some quantile, and see whether the same quantile for the whole of the dataset fits within that range
End of explanation
"""
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_capacity)
"""
Explanation: Validation Against Fixed Bounds
Some of the variables reported in this table have a fixed range of reasonable values, like the heat content per unit of a given fuel type. These variables can be tested for validity against external standards directly. In general we have two kinds of tests in this section:
* Tails: are the extreme values too extreme? Typically, this is at the 5% and 95% level, but depending on the distribution, sometimes other thresholds are used.
* Middle: Is the central value of the distribution where it should be?
Plant Capacities
End of explanation
"""
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_expenses)
"""
Explanation: CapEx & OpEx
End of explanation
"""
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_capacity_ratios)
"""
Explanation: Plant Capacity Ratios
End of explanation
"""
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_connected_hours)
"""
Explanation: Plant Connected Hours
Currently expected to fail: ~10% of all plants report more than 8760 connected hours, i.e. more hours than there are in a year.
End of explanation
"""
testcol = "plant_hours_connected_while_generating"
self_tests = [x for x in pudl.validate.plants_steam_ferc1_self if x["data_col"] == testcol]
pudl.validate.plot_vs_self(plants_steam_ferc1, self_tests)
"""
Explanation: Validate an Individual Column
If there's a particular column that is failing the validation, you can check several different validation cases with something like this cell:
End of explanation
"""
|
seg/2016-ml-contest
|
DiscerningHaggis/Discerning_Haggis_Facies_Classification_sub1.ipynb
|
apache-2.0
|
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
sns.set(style='whitegrid',
rc={'lines.linewidth': 2.5,
'figure.figsize': (10, 8),
'text.usetex': False,
# 'font.family': 'sans-serif',
# 'font.sans-serif': 'Optima LT Std',
})
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix
from scipy.stats import truncnorm
"""
Explanation: Discerning Haggis 2016-ml-contest submission
Author: Carlos Alberto da Costa Filho, University of Edinburgh
Load libraries
End of explanation
"""
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
"""
Explanation: Convenience functions
End of explanation
"""
# Loading Data
validationFull = pd.read_csv('../validation_data_nofacies.csv')
training_data = pd.read_csv('../facies_vectors.csv')
# Treat Data
training_data.fillna(training_data.mean(),inplace=True)
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
# Color Data
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
"""
Explanation: Load, treat and color data
End of explanation
"""
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
"""
Explanation: Condition dataset
End of explanation
"""
X_train, X_cv_test, y_train, y_cv_test = train_test_split(scaled_features,
correct_facies_labels, test_size=0.4, random_state=42)
X_cv, X_test, y_cv, y_test = train_test_split(X_cv_test, y_cv_test,
test_size=0.5, random_state=42)
"""
Explanation: Test, train and cross-validate
Up to here, there have been no secrets, just reusing the standard code to load the data. Now, instead of doing the usual test/train split, I create another dataset, the cross-validate set. The split will be 60% train, 20% cross-validate and 20% test.
The cross-validate set will be used as the "test set" to tune the neural network parameters. My actual test set will only be used to estimate the performance of my neural network at the end.
End of explanation
"""
lower, upper = 1, 500
mu, sigma = (upper-lower)/2, (upper-lower)/2
sizes_rv = truncnorm((lower - mu) / sigma, (upper - mu) / sigma,
loc=mu, scale=sigma)
samples = 30
sizes_L1 = [ int(d) for d in sizes_rv.rvs(samples) ]
sizes_L2 = []
sizes_L3 = []
for sL1 in sizes_L1:
lower, upper = 1, sL1+1
mu, sigma = (upper-lower)/2+1, (upper-lower)/2+1
sizes_rv = truncnorm((lower - mu) / sigma, (upper - mu) / sigma,
loc=mu, scale=sigma)
sL2 = int(sizes_rv.rvs(1)[0])
sizes_L2.append(sL2)
lower, upper = 1, sL2+1
mu, sigma = (upper-lower)/2+1, (upper-lower)/2+1
sizes_rv = truncnorm((lower - mu) / sigma, (upper - mu) / sigma,
loc=mu, scale=sigma)
sL3 = int(sizes_rv.rvs(1)[0])
sizes_L3.append(sL3)
sizes = sorted(set(zip(sizes_L1, sizes_L2, sizes_L3)),
key=lambda s: sum(s))
"""
Explanation: Tuning
Selecting model size
I create a number of model sizes, all with 3 hidden layers. The size of the first and largest hidden layer is drawn from a truncated normal distribution between 1 and 500 nodes. The second layer ranges from 1 to the size of the first, and the third from 1 to the size of the second.
These different sizes will be used to train several unregularized networks.
End of explanation
"""
train_error = np.array([])
cv_error = np.array([])
train_adj_error = np.array([])
cv_adj_error = np.array([])
minerr = 1
for i, s in enumerate(sizes):
clf = MLPClassifier(solver='lbfgs', alpha=0,
hidden_layer_sizes=s)
clf.fit(X_train,y_train)
# Compute errors
conf_cv = confusion_matrix(y_cv, clf.predict(X_cv))
conf_tr = confusion_matrix(y_train, clf.predict(X_train))
train_error = np.append(train_error, 1-accuracy(conf_tr))
cv_error = np.append(cv_error, 1-accuracy(conf_cv))
train_adj_error = np.append(train_adj_error,
1-accuracy_adjacent(conf_tr, adjacent_facies))
cv_adj_error = np.append(cv_adj_error,
1-accuracy_adjacent(conf_cv, adjacent_facies))
print('[ %3d%% done ] ' % (100*(i+1)/len(sizes),), end="")
if cv_error[-1] < minerr:
minerr = cv_error[-1]
print('CV error = %d%% with' % (100*minerr,), s)
else:
print()
"""
Explanation: Training with several model sizes
This takes a few minutes.
End of explanation
"""
sizes_sum = [ np.sum(s) for s in sizes ]
p = np.poly1d(np.polyfit(sizes_sum, cv_error, 2))
f, ax = plt.subplots(figsize=(5,5))
ax.scatter(sizes_sum, cv_error, c='k', label='Cross-validate')
ax.plot(range(1, max(sizes_sum)+1), p(range(1, max(sizes_sum)+1)))
ax.set_ylim([min(cv_error)-.1, max(cv_error)+.1])
ax.set_xlabel('Sum of nodes')
ax.set_ylabel('Error')
plt.legend()
"""
Explanation: Plot performance of neural networks vs sum of nodes
End of explanation
"""
minsum = range(1, max(sizes_sum)+1)[np.argmin(p(range(1, max(sizes_sum)+1)))]
minsize = (int(minsum*4/7),int(minsum*2/7),int(minsum*1/7))
print(minsize)
"""
Explanation: Choose best size from parabolic fit
When I create neural network sizes, the first parameter $n_1$ is normally distributed between 1 and 500, so its mean is ~250. The number of nodes in the second layer, $n_2$, depends on the first: it is between 1 and $n_1+1$, with mean $n_1/2$. The third layer is analogous: between 1 and $n_2+1$, with mean $n_2/2$. This is an empirical relationship I use to loosely "parametrize" the number of nodes in each hidden layer.
Knowing the optimal sum, I simply choose the number of nodes whose means would result in this sum, according to my empirical relationships. This gives the following optimal size:
End of explanation
"""
alphas = np.append([0], np.sqrt(10)**np.arange(-10, 4.0, 1))
train_error = np.array([])
cv_error = np.array([])
train_adj_error = np.array([])
cv_adj_error = np.array([])
minerr = 1
for i, a in enumerate(alphas):
clf = MLPClassifier(solver='lbfgs', alpha=a,
hidden_layer_sizes=minsize)
clf.fit(X_train,y_train)
# Compute errors
conf_cv = confusion_matrix(y_cv, clf.predict(X_cv))
conf_tr = confusion_matrix(y_train, clf.predict(X_train))
train_error = np.append(train_error, 1-accuracy(conf_tr))
cv_error = np.append(cv_error, 1-accuracy(conf_cv))
train_adj_error = np.append(train_adj_error,
1-accuracy_adjacent(conf_tr, adjacent_facies))
cv_adj_error = np.append(cv_adj_error,
1-accuracy_adjacent(conf_cv, adjacent_facies))
print('[ %3d%% done ] ' % (100*(i+1)/len(alphas),), end="")
if cv_error[-1] < minerr:
minerr = cv_error[-1]
print('CV error = %d%% with %g' % (100*minerr, a))
else:
print()
"""
Explanation: Choose regularization values
Here we will choose the regularization value using the same approach as before. This takes a few minutes.
End of explanation
"""
p = np.poly1d(np.polyfit(np.log(alphas[1:]), cv_error[1:], 2))
f, ax = plt.subplots(figsize=(5,5))
ax.scatter(np.log(alphas[1:]), cv_error[1:], c='k', label='Cross-validate')
ax.plot(np.arange(-12, 4.0, .1), p(np.arange(-12, 4.0, .1)))
ax.set_xlabel(r'$\log(\alpha)$')
ax.set_ylabel('Error')
plt.legend()
"""
Explanation: Plot performance of neural networks vs regularization
End of explanation
"""
minalpha = np.arange(-12, 4.0, .1)[np.argmin(p(np.arange(-12, 4.0, .1)))]
# minalpha = np.log(alphas)[np.argmin(cv_error)] # This chooses the minimum
minalpha = np.sqrt(10)**minalpha
print(minalpha)
"""
Explanation: Choose best regularization parameter from parabolic fit
End of explanation
"""
clf = MLPClassifier(solver='lbfgs', alpha=minalpha,
hidden_layer_sizes=minsize)
clf.fit(X_train,y_train)
conf_te = confusion_matrix(y_test, clf.predict(X_test))
print('Predicted accuracy %.d%%' % (100*accuracy(conf_te),))
"""
Explanation: Predict accuracy
Now I train a neural network with the obtained values and predict its accuracy using the test set.
End of explanation
"""
pd.DataFrame({'alpha':minalpha, 'layer1': minsize[0], 'layer2': minsize[1],
'layer3': minsize[2]}, index=[0]).to_csv('DHparams.csv')
"""
Explanation: Save neural network parameters
End of explanation
"""
clf_final = MLPClassifier(solver='lbfgs', alpha=minalpha,
hidden_layer_sizes=minsize)
clf_final.fit(scaled_features,correct_facies_labels)
validation_features = validationFull.drop(['Formation', 'Well Name', 'Depth'], axis=1)
scaled_validation = scaler.transform(validation_features)
validation_output = clf_final.predict(scaled_validation)
validationFull['Facies']=validation_output
validationFull.to_csv('well_data_with_facies_DH.csv')
"""
Explanation: Retrain and predict
Finally we train a neural network using all data available, and apply it to our blind test.
End of explanation
"""
|
sympy/scipy-2017-codegen-tutorial
|
notebooks/02-code-printers.ipynb
|
bsd-3-clause
|
from sympy import *
init_printing()
"""
Explanation: Code printers
The most basic form of code generation is the code printers. They convert SymPy expressions into the target language.
The most common languages are C, C++, Fortran, and Python, but over a dozen languages are supported. Here, we will quickly go over each supported language.
End of explanation
"""
x = symbols('x')
expr = abs(sin(x**2))
expr
ccode(expr)
fcode(expr)
julia_code(expr)
jscode(expr)
mathematica_code(expr)
octave_code(expr)
from sympy.printing.rust import rust_code
rust_code(expr)
rcode(expr)
from sympy.printing.cxxcode import cxxcode
cxxcode(expr)
"""
Explanation: Let us use the function $$|\sin(x^2)|.$$
End of explanation
"""
# Write your answer here
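# One possible answer (a sketch): an expression mixing Max and exp, which different
# printers translate differently (built-in max/fmax vs. explicit conditionals).
y = symbols('y')
my_expr = Max(x, y)*exp(-x**2)
print(ccode(my_expr))
print(fcode(my_expr))
print(julia_code(my_expr))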
"""
Explanation: Exercise: Codegen your own function
Come up with a symbolic expression and try generating code for it in each language. Note, some languages don't support everything. What works and what doesn't? What things are the same across languages and what things are different?
Reminder: If you click a cell and press b it will add a new cell below it.
End of explanation
"""
%%javascript
require.config({
paths: {
'chartjs': '//cdnjs.cloudflare.com/ajax/libs/Chart.js/2.6.0/Chart'
}
});
"""
Explanation: Exercise: Plotting SymPy Functions with JavaScript
One use case that works nicely with the Jupyter notebook is plotting mathematical functions using JavaScript plotting libraries. There are a variety of plotting libraries available, and the notebook makes them relatively easy to use. Here we will use Chart.js to plot functions of a single variable. We can use the %%javascript magic to type JavaScript directly into a notebook cell. In this cell we load in the Chart.js library:
End of explanation
"""
from scipy2017codegen.plotting import js_template
print(js_template.format(top_function='***fill me in!***',
bottom_function='***fill me in!***',
chart_id='***fill me in!***'))
"""
Explanation: We've also prepared some Javascript to do the plotting. This code will take two mathematical expressions written in Javascript and plot the functions.
End of explanation
"""
from IPython.display import Javascript
x = symbols('x')
f1 = sin(x)
f2 = cos(x)
Javascript(js_template.format(top_function=jscode(f1),
bottom_function=jscode(f2),
chart_id='sincos'))
"""
Explanation: Now SymPy functions can be plotted by filling in the two missing expressions in the above code and then calling the Javascript display function on that code.
End of explanation
"""
from scipy2017codegen.plotting import batman_equations
top, bottom = batman_equations()
top
bottom
# Write your answer here
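# One possible answer (a sketch): reuse js_template exactly as in the sin/cos example
# above, assuming the provided piecewise expressions print cleanly with jscode.
Javascript(js_template.format(top_function=jscode(top),
                              bottom_function=jscode(bottom),
                              chart_id='batman'))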
"""
Explanation: Exercise: Batman!
Plot the equations below for top and bottom.
There are all kinds of functions that can be plotted, but one particularly interesting set is called the Batman Equations. We've provided the piecewise versions of these functions written in SymPy below. Try plotting these with the JS plotter we've created.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/thu/cmip6/models/ciesm/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'ciesm', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: THU
Source ID: CIESM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
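# --- Illustrative example (added; not part of the official template) ---
# Free-text (STRING) properties take an ordinary string; the text below is
# only a placeholder:
# DOC.set_value("Brief overview of the shortwave radiation scheme goes here.")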
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
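# --- Illustrative example (added; not part of the official template) ---
# INTEGER properties are set without quotes; the number below is arbitrary:
# DOC.set_value(6)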
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
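# --- Illustrative example (added; not part of the official template) ---
# BOOLEAN properties take a bare True or False:
# DOC.set_value(True)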
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
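# --- Illustrative example (added; not part of the official template) ---
# FLOAT properties are set without quotes; e.g. a 94 GHz (W-band) cloud
# radar, expressed in Hz, would be (value shown only as an illustration):
# DOC.set_value(94.0e9)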
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
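# --- Illustrative example (added; not part of the official template) ---
# The fixed solar constant is a FLOAT in W m-2; a commonly quoted value is
# roughly 1361 W m-2 (shown only as an illustration):
# DOC.set_value(1361.0)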
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
kit-cel/wt
|
nt1/vorlesung/3_mod_demod/pulse_shaping.ipynb
|
gpl-2.0
|
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 16}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 8) )
"""
Explanation: Content and Objectives
Show pulse shaping (rect and raised-cosine) for random data
Spectra are determined both analytically from the theoretical pulse shapes and by spectral estimation applied to the random signals
Import
End of explanation
"""
########################
# find impulse response of an RC filter
########################
def get_rc_ir(K, n_up, t_symbol, beta):
'''
Determines coefficients of an RC filter
Formula out of: K.-D. Kammeyer, Nachrichtenübertragung
At poles, l'Hospital was used
NOTE: Length of the IR has to be an odd number
IN: length of IR, upsampling factor, symbol time, roll-off factor
OUT: filter coefficients
'''
# check that IR length is odd
assert K % 2 == 1, 'Length of the impulse response should be an odd number'
# map a roll-off factor (beta) of exactly zero to a value close to zero, to avoid division by zero
if beta == 0:
beta = 1e-32
# initialize output length and sample time
rc = np.zeros( K )
t_sample = t_symbol / n_up
# time indices and sampled time
k_steps = np.arange( -(K-1) / 2.0, (K-1) / 2.0 + 1 )
t_steps = k_steps * t_sample
for k in k_steps.astype(int):
if t_steps[k] == 0:
rc[ k ] = 1. / t_symbol
elif np.abs( t_steps[k] ) == t_symbol / ( 2.0 * beta ):
rc[ k ] = beta / ( 2.0 * t_symbol ) * np.sin( np.pi / ( 2.0 * beta ) )
else:
rc[ k ] = np.sin( np.pi * t_steps[k] / t_symbol ) / np.pi / t_steps[k] \
* np.cos( beta * np.pi * t_steps[k] / t_symbol ) \
/ ( 1.0 - ( 2.0 * beta * t_steps[k] / t_symbol )**2 )
return rc
"""
Explanation: Function for determining the impulse response of an RC filter
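In the notation used in the code (symbol time $T$ and roll-off factor $\beta$), the coefficients returned by get_rc_ir sample the standard raised-cosine impulse response
$$ g_\mathrm{RC}(t) = \frac{1}{T}\,\mathrm{si}\!\left(\frac{\pi t}{T}\right)\,\frac{\cos\!\left(\pi \beta t / T\right)}{1-\left(2\beta t/T\right)^{2}}, \qquad \mathrm{si}(x)=\frac{\sin x}{x}, $$
with the special values $g_\mathrm{RC}(0)=1/T$ and $g_\mathrm{RC}\!\left(\pm\frac{T}{2\beta}\right)=\frac{\beta}{2T}\sin\!\left(\frac{\pi}{2\beta}\right)$ obtained via l'Hospital, exactly as handled in the code above.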
End of explanation
"""
# modulation scheme and constellation points
M = 2
constellation_points = [ 0, 1 ]
# symbol time and number of symbols
t_symb = 1.0
n_symb = 100
# parameters of the RRC filter
beta = .33
n_up = 8 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus minus in both directions)
K_filt = 2 * syms_per_filt * n_up + 1 # length of the fir filter
# parameters for frequency regime
N_fft = 512
Omega = np.linspace( -np.pi, np.pi, N_fft)
f_vec = Omega / ( 2 * np.pi * t_symb / n_up )
"""
Explanation: Parameters
End of explanation
"""
# get RC pulse and rectangular pulse,
# both being normalized to energy 1
rc = get_rc_ir( K_filt, n_up, t_symb, beta )
rc /= np.linalg.norm( rc )
rect = np.append( np.ones( n_up ), np.zeros( len( rc ) - n_up ) )
rect /= np.linalg.norm( rect )
# get pulse spectra
RC_PSD = np.abs( np.fft.fftshift( np.fft.fft( rc, N_fft ) ) )**2
RC_PSD /= n_up
RECT_PSD = np.abs( np.fft.fftshift( np.fft.fft( rect, N_fft ) ) )**2
RECT_PSD /= n_up
"""
Explanation: Signals and their spectra
End of explanation
"""
# number of realizations along which to average the psd estimate
n_real = 10
# initialize two-dimensional field for collecting several realizations along which to average
S_rc = np.zeros( (n_real, N_fft ), dtype=complex )
S_rect = np.zeros( (n_real, N_fft ), dtype=complex )
# loop for multiple realizations in order to improve spectral estimation
for k in range( n_real ):
# generate random binary vector and
# modulate the specified modulation scheme
data = np.random.randint( M, size = n_symb )
s = [ constellation_points[ d ] for d in data ]
# apply RC filtering/pulse-shaping
s_up_rc = np.zeros( n_symb * n_up )
s_up_rc[ : : n_up ] = s
s_rc = np.convolve( rc, s_up_rc)
# apply RECTANGULAR filtering/pulse-shaping
s_up_rect = np.zeros( n_symb * n_up )
s_up_rect[ : : n_up ] = s
s_rect = np.convolve( rect, s_up_rect)
# get spectrum using Bartlett method
S_rc[k, :] = np.fft.fftshift( np.fft.fft( s_rc, N_fft ) )
S_rect[k, :] = np.fft.fftshift( np.fft.fft( s_rect, N_fft ) )
# average along realizations
RC_PSD_sim = np.average( np.abs( S_rc )**2, axis=0 )
RC_PSD_sim /= np.max( RC_PSD_sim )
RECT_PSD_sim = np.average( np.abs( S_rect )**2, axis=0 )
RECT_PSD_sim /= np.max( RECT_PSD_sim )
"""
Explanation: Real data-modulated Tx-signal
End of explanation
"""
plt.subplot(221)
plt.plot( np.arange( np.size( rc ) ) * t_symb / n_up, rc, linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( rect ) ) * t_symb / n_up, rect, linewidth=2.0, label='Rect' )
plt.ylim( (-.1, 1.1 ) )
plt.grid( True )
plt.legend( loc='upper right' )
#plt.title( '$g(t), s(t)$' )
plt.ylabel('$g(t)$')
plt.subplot(222)
np.seterr(divide='ignore') # ignore warning for logarithm of 0
plt.plot( f_vec, 10*np.log10( RC_PSD ), linewidth=2.0, label='RC theory' )
plt.plot( f_vec, 10*np.log10( RECT_PSD ), linewidth=2.0, label='Rect theory' )
np.seterr(divide='warn') # enable warning for logarithm of 0
plt.grid( True )
plt.legend( loc='upper right' )
plt.ylabel( '$|G(f)|^2$' )
plt.ylim( (-60, 10 ) )
plt.subplot(223)
plt.plot( np.arange( np.size( s_rc[:20*n_up])) * t_symb / n_up, s_rc[:20*n_up], linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( s_rect[:20*n_up])) * t_symb / n_up, s_rect[:20*n_up], linewidth=2.0, label='Rect' )
#plt.plot( np.arange( np.size( s_up_rc[:20*n_up])) * t_symb / n_up, s_up_rc[:20*n_up], 'o', linewidth=2.0, label='Syms' )
plt.ylim( (-0.1, 1.1 ) )
plt.grid(True)
plt.legend(loc='upper right')
plt.xlabel('$t/T$')
plt.ylabel('$s(t)$')
plt.subplot(224)
np.seterr(divide='ignore') # ignore warning for logarithm of 0
plt.plot( f_vec, 10*np.log10( RC_PSD_sim ), linewidth=2.0, label='RC' )
plt.plot( f_vec, 10*np.log10( RECT_PSD_sim ), linewidth=2.0, label='Rect' )
np.seterr(divide='warn') # enable warning for logarithm of 0
plt.grid(True);
plt.xlabel('$fT$');
plt.ylabel( '$|S(f)|^2$' )
plt.legend(loc='upper right')
plt.ylim( (-60, 10 ) )
plt.savefig('rect_pulse_shape.pdf',bbox_inches='tight')
"""
Explanation: Plotting
End of explanation
"""
|
flowmatters/veneer-py
|
doc/examples/functions/CreatingFunctionsAndVariables.ipynb
|
isc
|
import veneer
v = veneer.Veneer()
%matplotlib inline
"""
Explanation: Example for bulk function management
Shows:
Creating multiple modelled variables
Creating multiple functions of the same form, each using one of the newly created modelled variables
Applying multiple functions
End of explanation
"""
v.network().plot()
set(v.model.catchment.runoff.get_models())
v.model.find_states('TIME.Models.RainfallRunoff.AWBM.AWBM')
v.model.catchment.runoff.create_modelled_variable?
"""
Explanation: Demonstration model
End of explanation
"""
# Save the result!
variables = v.model.catchment.runoff.create_modelled_variable('Baseflow store')
"""
Explanation: NOTE: When creating modelled variables we need to use the names that appear in the Project Explorer.
Also note that not everything will be available. If it's not in the Project Explorer, you probably can't use it for a modelled variable
End of explanation
"""
variables
# variables['created'] are the variable names that we want to insert into the functions
variables['created']
name_params = list(v.model.catchment.runoff.enumerate_names())
name_params
v.model.functions.create_functions?
# Again, save the result...
functions = v.model.functions.create_functions('$funky_%s_%s','1.1 * %s',variables['created'],name_params)
"""
Explanation: The result of the function call is very important. It tells us what was created and the names.
The names will be based on the target variable (Baseflow store) and the names (plural) of the target object, in this case, catchment and FU.
End of explanation
"""
functions
functions['created']
"""
Explanation: Result of create_functions includes a list of created functions
End of explanation
"""
# Applying functions in some nonsensical manner...
v.model.catchment.runoff.apply_function('A2',functions['created'])
"""
Explanation: Note You can see all these in Edit | Functions
But the dockable 'Function Manager' doesn't tend to update (at least as of 4.3)
We apply the function against a particular target (eg v.model.catchment.runoff).
Because we've done all this against one target (v.model.catchment.runoff) we can assume that everything is in the same order, so the following bulk application can work.
End of explanation
"""
|
dunkhong/grr
|
colab/examples/demo.ipynb
|
apache-2.0
|
%load_ext grr_colab.ipython_extension
import grr_colab
"""
Explanation: GRR Colab
End of explanation
"""
grr_colab.flags.FLAGS.set_default('grr_http_api_endpoint', 'http://localhost:8000/')
grr_colab.flags.FLAGS.set_default('grr_admin_ui_url', 'http://localhost:8000/')
grr_colab.flags.FLAGS.set_default('grr_auth_api_user', 'admin')
grr_colab.flags.FLAGS.set_default('grr_auth_password', 'admin')
"""
Explanation: Specifying GRR Colab flags:
End of explanation
"""
df = %grr_search_clients -u admin
df[['online', 'online.pretty', 'client_id', 'last_seen_ago', 'last_seen_at.pretty']]
"""
Explanation: Magics API
GRR magics allow you to search for clients and then choose a single client to work with. The results of magics are represented as pandas dataframes unless they are primitives.
Searching clients
You can search for clients by specifying username, hostname, client labels etc. The results are sorted by the last seen column.
End of explanation
"""
df = %grr_search_online_clients -u admin
df[['online', 'online.pretty', 'client_id', 'last_seen_ago', 'last_seen_at.pretty']]
"""
Explanation: There is a shortcut for searching for online only clients directly so that you don't need to filter the dataframe.
End of explanation
"""
df[['last_seen_at', 'last_seen_at.pretty']]
"""
Explanation: Every datetime field has two representations: the original one that is microseconds and the pretty one that is pandas timestamp.
End of explanation
"""
client_id = df['client_id'][0]
%grr_set_client -c {client_id}
%grr_id
"""
Explanation: Setting current clients
To work with a client you need to select a client first. This means that with magic commands you can work with only a single client at a time (there is no such restriction for the Python API). To set a client you need either a hostname (works in case of one client set up for that hostname) or a client ID, which you can get from the search clients dataframe.
End of explanation
"""
%grr_request_approval -r "For testing" -a admin
"""
Explanation: An attempt to set a client with a hostname that has multiple clients will lead to an exception.
Requesting approvals
If you don't have valid approvals for the selected client, you will get an error while attempting to run a flow on it. You can request an approval with magic commands specifying the reason and list of approvers.
End of explanation
"""
%grr_pwd
%grr_cd tmp/foo/bar
%grr_pwd
%grr_cd ../baz
%grr_pwd
"""
Explanation: This function will not wait until the approval is granted. If you need your code to wait until it's granted, use grr_request_approval_and_wait instead.
Exploring filesystem
In addition to the selected client, the working directory is also saved. This means that you can use relative paths instead of absolute ones. Note that the existence of directories is not checked, so you will not get an error if you try to cd into a directory that does not exist.
Initially you are in the root directory.
End of explanation
"""
df = %grr_ls
df
"""
Explanation: You can ls the current directory and any other directories specified by relative and absolute paths.
Note. Most file-related magics start flows and fetch live data from the client. This means that the client has to be online in order for them to work.
End of explanation
"""
df[['st_mode', 'st_mode.pretty']]
%grr_ls ../baz/dir2
%grr_ls /tmp/foo
"""
Explanation: Stat mode has two representations: number and UNIX-style:
End of explanation
"""
%grr_stat file1
"""
Explanation: To see some metadata of a file you can just call grr_stat function.
End of explanation
"""
%grr_stat "file*"
"""
Explanation: You can use globbing for stat:
End of explanation
"""
%grr_head file1 -c 30
"""
Explanation: You can print the first bytes of a file:
End of explanation
"""
%grr_head file1 -c 30 -o 20
"""
Explanation: Although there is no offset option in the original bash head command, you can specify an offset in grr_head:
End of explanation
"""
%grr_ls /tmp/foo/baz -C
%grr_head file1 -C
"""
Explanation: Some of the functions like grr_head and grr_ls have --cached (-C for short) option which indicates that no calls to the client should be performed. In this case the data will be fetched from the cached data on the server. Server cached data is updated only during calls to the client so it is not always up-to-date but accessing it is way faster.
End of explanation
"""
%grr_grep "line" file1
%grr_grep -F "line" file1
%grr_grep -X "6c696e65" file1
"""
Explanation: Grepping files is also possible. The --fixed-strings (-F for short) option indicates that the pattern to search for is not a regular expression. The --hex-string (-X for short) option allows you to pass hex strings as a pattern.
End of explanation
"""
%grr_fgrep "line" "file*"
%grr_fgrep -X "6c696e65" file1
"""
Explanation: There is a shortcut for --fixed-strings option. Globbing is also available here.
End of explanation
"""
%grr_wget file1
"""
Explanation: If the file is too large and you'd like to download it then use wget:
End of explanation
"""
%grr_wget file1 -C
"""
Explanation: You can also download a cached version:
End of explanation
"""
%grr_ls -P os -C
"""
Explanation: You can specify path type with --path-type flag (-P for short) for all filesystem related magics. The available values are os (default), tsk, registry.
End of explanation
"""
%grr_hostname
"""
Explanation: System information
Names of the functions are the same as in bash for simplicity.
Printing hostname of the client:
End of explanation
"""
ifaces = %grr_ifconfig
"""
Explanation: Getting network interfaces info:
End of explanation
"""
ifaces[['mac_address', 'mac_address.pretty']][1:]
"""
Explanation: For MAC address fields there are also two columns: one with the original bytes type, which is not human-readable, and a pretty one with a string representation of the MAC address.
End of explanation
"""
ifaces['addresses'][1]
"""
Explanation: If a field contains a collection then the cell in the dataframe is represented as another dataframe. IP address fields also have two representations.
End of explanation
"""
%grr_uname -m
%grr_uname -r
"""
Explanation: For the uname command only two options are available: --machine, which prints the machine architecture, and --kernel-release.
End of explanation
"""
df = %grr_interrogate
df[['client_id', 'system_info.system', 'system_info.machine']]
"""
Explanation: To get the client summary you can simply run the interrogate flow.
End of explanation
"""
ps = %grr_ps
ps[:5]
"""
Explanation: It is also possible to get info about processes that are running on the client machine:
End of explanation
"""
%grr_osqueryi "SELECT pid, name, cmdline, state, nice, threads FROM processes WHERE pid >= 440 and pid < 600;"
"""
Explanation: To fetch some system information you can also use osquery. Osquery tables are also converted to dataframes.
End of explanation
"""
import os
pid = os.getpid()
data = "dadasdasdasdjaskdakdaskdakjdkjadkjakjjdsgkngksfkjadsjnfandankjd"
rule = 'rule TextExample {{ strings: $text_string = "{data}" condition: $text_string }}'.format(data=data)
df = %grr_yara '{rule}' -p {pid}
df[['process.pid', 'process.name', 'process.exe']]
"""
Explanation: Running YARA for scanning processes is also available.
End of explanation
"""
%grr_set_flow_timeout 60
"""
Explanation: Configuring flow timeout
The default flow timeout is 30 seconds. This is the time a function waits for a flow to complete. You can configure this timeout with grr_set_flow_timeout, specifying the number of seconds to wait. For example, this will set the timeout to a minute:
End of explanation
"""
%grr_set_no_flow_timeout
"""
Explanation: To tell functions to wait for the flows forever until they are completed:
End of explanation
"""
%grr_set_default_flow_timeout
"""
Explanation: To set timeout to default value of 30 seconds:
End of explanation
"""
%grr_set_flow_timeout 0
"""
Explanation: Setting timeout to 0 tells functions not to wait at all and exit immediately after the flow starts.
End of explanation
"""
df = %grr_list_artifacts
df[:2]
"""
Explanation: In case the timeout is exceeded (or you set a 0 timeout) you will see an error with a link to the Admin UI.
Collecting artifacts
You can first list all the artifacts that you can collect:
End of explanation
"""
%grr_collect "DebianVersion"
"""
Explanation: To collect an artifact you just need to provide its name:
End of explanation
"""
clients = grr_colab.Client.search(user='admin')
clients
clients[0].id
"""
Explanation: Python API
Getting a client
Using Python API you can work with multiple clients simultaneously. You don't need to select a client to work with, instead you simply get a client object.
Use search method to search for clients. You can specify ip, mac, host, version, user, and labels search criteria. As a result you will get a list of client objects so that you can pick one of them to work with.
End of explanation
"""
client = grr_colab.Client.with_id('C.dc3782aeab2c5b4c')
"""
Explanation: If you know a client ID or a hostname (in case there is one client installed for this hostname) you can get a client object using one of these values:
End of explanation
"""
client.id
"""
Explanation: Client properties
There is a bunch of simple client properties to get some info about the client. Unlike magic API this API returns objects but not dataframes for non-primitive values.
Getting the client ID:
End of explanation
"""
client.hostname
"""
Explanation: Getting the client hostname:
End of explanation
"""
client.ifaces[1:]
client.ifaces[1].ifname
"""
Explanation: Getting network interfaces info:
End of explanation
"""
for iface in client.ifaces:
print(iface.ifname)
"""
Explanation: This is a collection of interface objects so you can iterate over it and access interface object fields:
End of explanation
"""
client.knowledgebase
client.knowledgebase.os_release
"""
Explanation: Getting the knowledge base for the client:
You can also access its fields:
End of explanation
"""
client.arch
"""
Explanation: Getting an architecture of a machine that client runs on:
End of explanation
"""
client.kernel
"""
Explanation: Getting kernel version string:
End of explanation
"""
client.labels
"""
Explanation: Getting a list of labels that are associated with this client:
End of explanation
"""
client.first_seen
client.last_seen
"""
Explanation: First seen and last seen times are saved as datetime objects:
End of explanation
"""
client.request_approval(approvers=['admin'], reason='Test reason')
"""
Explanation: Requesting approvals
As in magics API here you also need to request an approval before running flows on a client. To do this simply call request_approval method providing a reason for the approval and list of approvers.
End of explanation
"""
# Wait forever
grr_colab.set_no_flow_timeout()
# Exit immediately
grr_colab.set_flow_timeout(0)
# Wait for one minute
grr_colab.set_flow_timeout(60)
#Wait for 30 seconds
grr_colab.set_default_flow_timeout()
"""
Explanation: This method does not wait until the approval is granted. If you need to wait, use request_approval_and_wait method that has the same signature.
Running flows
To set the flow timeout use set_flow_timeout function. 30 seconds is the default value. 0 means exit immediately after the flow started. You can also reset timeout and set it to a default value of 30 seconds.
End of explanation
"""
summary = client.interrogate()
summary.system_info.system
"""
Explanation: Below are examples of flows that you can run.
Interrogating a client:
End of explanation
"""
ps = client.ps()
ps[:1]
ps[0]
ps[0].exe
"""
Explanation: Listing processes on a client:
End of explanation
"""
files = client.ls('/tmp/foo/baz')
files
for f in files:
print(f.pathspec.path)
"""
Explanation: Listing files in a directory. Here you need to provide the absolute path to the directory because there is no state.
End of explanation
"""
files = client.ls('/tmp/foo', max_depth=3)
files
for f in files:
print(f.pathspec.path)
"""
Explanation: Recursive listing of a directory is also possible. To do this specify the max depth of the recursion.
End of explanation
"""
files = client.glob('/tmp/foo/baz/file*')
files
"""
Explanation: Globbing files:
End of explanation
"""
matches = client.grep(path='/tmp/foo/baz/file*', pattern=b'line')
matches
for match in matches:
print(match.pathspec.path, match.offset, match.data)
matches = client.grep(path='/tmp/foo/baz/file*', pattern=b'\x6c\x69\x6e\x65')
matches
"""
Explanation: Grepping files with regular expressions:
End of explanation
"""
matches = client.fgrep(path='/tmp/foo/baz/file*', literal=b'line')
matches
"""
Explanation: Grepping files by exact match:
End of explanation
"""
client.wget('/tmp/foo/baz/file1')
"""
Explanation: Downloading files:
End of explanation
"""
table = client.osquery('SELECT pid, name, nice FROM processes WHERE pid < 5')
table
header = ' '.join(str(col.name).rjust(10) for col in table.header.columns)
print(header)
print('-' * len(header))
for row in table.rows:
print(' '.join(map(lambda _: _.rjust(10), row.values)))
"""
Explanation: Osquerying a client:
End of explanation
"""
artifacts = grr_colab.list_artifacts()
artifacts[0]
"""
Explanation: Listing artifacts:
End of explanation
"""
client.collect('DebianVersion')
"""
Explanation: To collect an artifact you just need to provide its name:
End of explanation
"""
import os
pid = os.getpid()
data = "dadasdasdasdjaskdakdaskdakjdkjadkjakjjdsgkngksfkjadsjnfandankjd"
rule = 'rule TextExample {{ strings: $text_string = "{data}" condition: $text_string }}'.format(data=data)
matches = client.yara(rule, pids=[pid])
print(matches[0].process.pid, matches[0].process.name)
"""
Explanation: Running YARA:
End of explanation
"""
with client.open('/tmp/foo/baz/file1') as f:
print(f.readline())
with client.open('/tmp/foo/baz/file1') as f:
for line in f:
print(line)
with client.open('/tmp/foo/baz/file1') as f:
print(f.read(22))
f.seek(0)
print(f.read(22))
print(f.read())
"""
Explanation: Working with files
You can read and seek files, interacting with them like with usual Python files.
End of explanation
"""
files = client.cached.ls('/tmp/foo/baz')
files
files = client.cached.ls('/tmp/foo/baz', max_depth=2)
files
with client.cached.open('/tmp/foo/baz/file1') as f:
for line in f:
print(line)
client.cached.wget('/tmp/foo/baz/file1')
"""
Explanation: Cached data
To fetch server cached data use cached property of a client object.
You can list files in a directory (also recursively) and read and download files as above:
End of explanation
"""
client.cached.refresh('/tmp/foo/baz')
"""
Explanation: You can also refresh filesystem metadata that is cached on the server by calling refresh method (that will refresh the contents of the directory and not its subdirectories):
End of explanation
"""
client.cached.refresh('/tmp/foo/baz', max_depth=2)
"""
Explanation: To refresh a directory recursively specify max_depth parameter:
End of explanation
"""
client.os.ls('/tmp/foo')
client.os.cached.ls('/tmp/foo')
"""
Explanation: Path types
To specify path type, just use one of the client properties: client.os (the same as just using client), client.tsk, client.registry.
End of explanation
"""
|
besser82/shogun
|
doc/ipython-notebooks/intro/Introduction.ipynb
|
bsd-3-clause
|
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
#To import all Shogun classes
from shogun import *
import shogun as sg
"""
Explanation: Machine Learning with Shogun
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
In this notebook we will see how machine learning problems are generally represented and solved in Shogun. As a primer to Shogun's many capabilities, we will see how various types of data and its attributes are handled and also how prediction is done.
Introduction
Using datasets
Feature representations
Labels
Preprocessing data
Supervised Learning with Shogun's Machine interface
Evaluating performance and Model selection
Example: Regression
Introduction
Machine learning concerns the construction and study of systems that can learn from data by exploiting certain types of structure within it. The uncovered patterns are then used to predict future data, or to perform other kinds of decision making. Two main classes (among others) of machine learning algorithms are: predictive or supervised learning and descriptive or unsupervised learning. Shogun provides functionality to address those (and more) problem classes.
End of explanation
"""
#Load the file
data_file=LibSVMFile(os.path.join(SHOGUN_DATA_DIR, 'uci/diabetes/diabetes_scale.svm'))
"""
Explanation: In a general problem setting for the supervised learning approach, the goal is to learn a mapping from inputs $x_i\in\mathcal{X}$ to outputs $y_i \in \mathcal{Y}$, given a labeled set of input-output pairs $\mathcal{D} = \{(x_i,y_i)\}_{i=1}^{N} \subseteq \mathcal{X} \times \mathcal{Y}$. Here $\mathcal{D}$ is called the training set, and $N$ is the number of training examples. In the simplest setting, each training input $x_i$ is a $D$-dimensional vector of numbers, representing, say, the height and weight of a person. These are called $\textbf{features}$, attributes or covariates. In general, however, $x_i$ could be a complex structured object, such as an image.<ul><li>When the response variable $y_i$ is categorical and discrete, $y_i \in \{1,\dots,C\}$ (say male or female), it is a classification problem.</li><li>When it is continuous (say the prices of houses) it is a regression problem.</li></ul>
For the unsupervised learning approach we are only given inputs, $\mathcal{D} = \{(x_i)\}_{i=1}^{N}$, and the goal is to find “interesting patterns” in the data.
Using datasets
Let us consider an example: we have a dataset about various attributes of individuals and we know whether or not they are diabetic. The data reveals certain configurations of attributes that correspond to diabetic patients and others that correspond to non-diabetic patients. When given a set of attributes for a new patient, the goal is to predict whether the patient is diabetic or not. This type of learning problem falls under supervised learning, in particular, classification.
Shogun provides the capability to load datasets of different formats using File.<br/> A real-world dataset, the Pima Indians Diabetes data set, is used now. We load the LibSVM format file using Shogun's LibSVMFile class. The LibSVM format is: $$\text{label}\quad\text{attribute1:value1}\quad\text{attribute2:value2}\;\dots$$ with one such line per data example. LibSVM uses the so called "sparse" format where zero values do not need to be stored.
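For illustration, a single example with label +1 whose 2nd and 6th attributes are the only non-zero ones could be stored as a line like (values made up here):
+1 2:0.35 6:-0.2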
End of explanation
"""
f=SparseRealFeatures()
trainlab=f.load_with_labels(data_file)
mat=f.get_full_feature_matrix()
# extract 2 attributes
glucose_conc=mat[1]
BMI=mat[5]
#generate a numpy array
feats=array(glucose_conc)
feats=vstack((feats, array(BMI)))
print(feats, feats.shape)
"""
Explanation: This results in a LibSVMFile object which we will later use to access the data.
Feature representations
To get off the mark, let us see how Shogun handles the attributes of the data using Features class. Shogun supports wide range of feature representations. We believe it is a good idea to have different forms of data, rather than converting them all into matrices. Among these are: $\hspace {20mm}$<ul><li>String features: Implements a list of strings. Not limited to character strings, but could also be sequences of floating point numbers etc. Have varying dimensions. </li> <li>Dense features: Implements dense feature matrices</li> <li>Sparse features: Implements sparse matrices.</li><li>Streaming features: For algorithms working on data streams (which are too large to fit into memory) </li></ul>
SparseRealFeatures (sparse features handling 64-bit float type data) are used to get the data from the file. Since LibSVM format files have labels included in the file, the load_with_labels method of SparseRealFeatures is used. In this case it is interesting to play with two attributes, Plasma glucose concentration and Body Mass Index (BMI), and try to learn something about their relationship with the disease. We get hold of the feature matrix using get_full_feature_matrix and row vectors 1 and 5 are extracted. These are the attributes we are interested in.
End of explanation
"""
#convert to shogun format
feats_train = features(feats)
"""
Explanation: In numpy, this is a matrix of 2 row-vectors of dimension 768. However, in Shogun, this will be a matrix of 768 column vectors of dimension 2. This is because each data sample is stored in a column-major fashion, meaning each column here corresponds to an individual sample and each row in it to an attribute like BMI, glucose concentration etc. To convert the extracted matrix into Shogun format, RealFeatures are used, which are nothing but the above-mentioned Dense features of 64-bit Float type. To do this, call the factory method features with the matrix (this should be a 64-bit 2D numpy array) as the argument.
End of explanation
"""
#Get number of features(attributes of data) and num of vectors(samples)
feat_matrix=feats_train.get_feature_matrix()
num_f=feats_train.get_num_features()
num_s=feats_train.get_num_vectors()
print('Number of attributes: %s and number of samples: %s' %(num_f, num_s))
print('Number of rows of feature matrix: %s and number of columns: %s' %(feat_matrix.shape[0], feat_matrix.shape[1]))
print('First column of feature matrix (Data for first individual):')
print(feats_train.get_feature_vector(0))
"""
Explanation: Some of the general methods you might find useful are:
get_feature_matrix(): The feature matrix can be accessed using this.
get_num_features(): The total number of attributes can be accesed using this.
get_num_vectors(): To get total number of samples in data.
get_feature_vector(): To get all the attribute values (a.k.a. the feature vector) for a particular sample by passing the index of the sample as argument.
End of explanation
"""
#convert to shogun format labels
labels=BinaryLabels(trainlab)
"""
Explanation: Assigning labels
In supervised learning problems, training data is labelled. Shogun provides various types of labels to do this through Clabels. Some of these are:<ul><li>Binary labels: Binary Labels for binary classification which can have values +1 or -1.</li><li>Multiclass labels: Multiclass Labels for multi-class classification which can have values from 0 to (num. of classes-1).</li><li>Regression labels: Real-valued labels used for regression problems and are returned as output of classifiers.</li><li>Structured labels: Class of the labels used in Structured Output (SO) problems</li></ul></br> In this particular problem, our data can be of two types: diabetic or non-diabetic, so we need binary labels. This makes it a Binary Classification problem, where the data has to be classified in two groups.
End of explanation
"""
n=labels.get_num_labels()
print('Number of labels:', n)
"""
Explanation: The labels can be accessed using get_labels and the confidence vector using get_values. The total number of labels is available using get_num_labels.
End of explanation
"""
preproc=PruneVarSubMean(True)
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
# Store preprocessed feature matrix.
preproc_data=feats_train.get_feature_matrix()
# Plot the raw training data.
figure(figsize=(13,6))
pl1=subplot(121)
gray()
_=scatter(feats[0, :], feats[1,:], c=trainlab, s=50)  # color by the numeric label array
vlines(0, -1, 1, linestyle='solid', linewidths=2)
hlines(0, -1, 1, linestyle='solid', linewidths=2)
title("Raw Training Data")
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
pl1.legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
#Plot preprocessed data.
pl2=subplot(122)
_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, s=50)  # color by the numeric label array
vlines(0, -5, 5, linestyle='solid', linewidths=2)
hlines(0, -5, 5, linestyle='solid', linewidths=2)
title("Training data after preprocessing")
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
pl2.legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
gray()
"""
Explanation: Preprocessing data
It is usually better to preprocess data to a standard form rather than handling it in raw form. The reasons are to obtain well-behaved scaling, that many algorithms assume centered data, and that sometimes one wants to de-noise data (with, say, PCA). Preprocessors do not change the domain of the input features. It is possible to do various types of preprocessing using methods provided by the Preprocessor class. Some of these are:<ul><li>Norm one: Normalize vector to have norm 1.</li><li>PruneVarSubMean: Subtract the mean and remove features that have zero variance. </li><li>Dimension Reduction: Lower the dimensionality of given simple features.<ul><li>PCA: Principal component analysis.</li><li>Kernel PCA: PCA using kernel methods.</li></ul></li></ul> The training data will now be preprocessed using CPruneVarSubMean. This will basically remove data with zero variance and subtract the mean. Passing True to the constructor makes the class normalise the variance of the variables: it basically divides every dimension by its standard deviation. This is the reason behind removing dimensions with constant values. It is required to initialize the preprocessor by passing the feature object to init before doing anything else. The raw and processed data is now plotted.
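The same normalisation can be sketched directly in numpy (for illustration only; this is not Shogun's implementation):
python
import numpy as np

def prune_var_sub_mean(X):
    # X: one attribute per row, one sample per column (Shogun's layout)
    std = X.std(axis=1)
    Xk = X[std > 0]  # drop zero-variance attributes
    return (Xk - Xk.mean(axis=1, keepdims=True)) / Xk.std(axis=1, keepdims=True)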
End of explanation
"""
#prameters to svm
C=0.9
svm=LibLinear(C, feats_train, labels)
svm.set_liblinear_solver_type(L2R_L2LOSS_SVC)
#train
svm.train()
size=100
"""
Explanation: Horizontal and vertical lines passing through zero are included to make the processing of data clear. Note that the now processed data has zero mean.
<a id='supervised'>Supervised Learning with Shogun's <a href='http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Machine.html'>Machine</a> interface</a>
Machine is Shogun's interface for general learning machines. Basically one has to train() the machine on some training data to be able to learn from it. Then we apply() it to test data to get predictions. Some of these are: <ul><li>Kernel machine: Kernel based learning tools.</li><li>Linear machine: Interface for all kinds of linear machines like classifiers.</li><li>Distance machine: A distance machine is based on an a-priori chosen distance.</li><li>Gaussian process machine: A base class for Gaussian Processes. </li><li>And many more</li></ul>
Moving on to the prediction part, LibLinear, a linear SVM, is used to do the classification (more on SVMs in this notebook). A linear SVM will find a linear separation with the largest possible margin. Here C is a penalty parameter on the loss function.
End of explanation
"""
x1=linspace(-5.0, 5.0, size)
x2=linspace(-5.0, 5.0, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid=features(array((ravel(x), ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#get output labels
z=predictions.get_values().reshape((size, size))
#plot
jet()
figure(figsize=(9,6))
title("Classification")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black', hold=True)
_=colorbar(c)
_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50)
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
gray()
"""
Explanation: We will now apply the trained machine to test features to get predictions. For visualising the classification boundary, the whole X-Y grid is used as test data, i.e. we predict the class at every point in the grid.
End of explanation
"""
w=svm.get_w()
b=svm.get_bias()
x1=linspace(-2.0, 3.0, 100)
#solve for w.x+b=0
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=list(map(solve, x1))
#plot
figure(figsize=(7,6))
plot(x1,x2, linewidth=2)
title("Decision boundary using w and bias")
_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50)
_=xlabel('Plasma glucose concentration')
_=ylabel('Body mass index')
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Non-diabetic", "Diabetic"], loc=2)
print('w :', w)
print('b :', b)
"""
Explanation: Let us have a look at the weight vector of the separating hyperplane. It should tell us about the linear relationship between the features. The decision boundary is now plotted by solving for $\bf{w}\cdot\bf{x}$ + $\text{b}=0$. Here $\text b$ is a bias term which allows the linear function to be offset from the origin of the used coordinate system. Methods get_w() and get_bias() are used to get the necessary values.
End of explanation
"""
#split features for training and evaluation
num_train=700
feats=array(glucose_conc)
feats_t=feats[:num_train]
feats_e=feats[num_train:]
feats=array(BMI)
feats_t1=feats[:num_train]
feats_e1=feats[num_train:]
feats_t=vstack((feats_t, feats_t1))
feats_e=vstack((feats_e, feats_e1))
feats_train = features(feats_t)
feats_evaluate = features(feats_e)
"""
Explanation: For this problem, a linear classifier does a reasonable job in distinguishing labelled data. An interpretation could be that individuals below a certain level of BMI and glucose are likely to have no Diabetes.
For problems where the data cannot be separated linearly, there are more advanced classification methods, as for example all of Shogun's kernel machines, but more on this later. To play with this interactively have a look at this: web demo
Evaluating performance and Model selection
How do you assess the quality of a prediction? Shogun provides various ways to do this using CEvaluation. The performance is evaluated by comparing the predicted output and the expected output. Some of the base classes for performance measures are:
Binary class evaluation: used to evaluate binary classification labels.
Clustering evaluation: used to evaluate clustering.
Mean absolute error: used to compute an error of regression model.
Multiclass accuracy: used to compute accuracy of multiclass classification.
Evaluating on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. The dataset will now be split into two: we train on one part and evaluate performance on the other using CAccuracyMeasure.
End of explanation
"""
label_t=trainlab[:num_train]
labels=BinaryLabels(label_t)
label_e=trainlab[num_train:]
labels_true=BinaryLabels(label_e)
svm=LibLinear(C, feats_train, labels)
svm.set_liblinear_solver_type(L2R_L2LOSS_SVC)
#train and evaluate
svm.train()
output=svm.apply(feats_evaluate)
#use AccuracyMeasure to get accuracy
acc=AccuracyMeasure()
acc.evaluate(output,labels_true)
accuracy=acc.get_accuracy()*100
print('Accuracy(%):', accuracy)
"""
Explanation: Let's see the accuracy by applying on test features.
End of explanation
"""
temp_feats = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
labels=RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
#rescale to 0...1
preproc=RescaleFeatures()
preproc.init(temp_feats)
temp_feats.add_preprocessor(preproc)
temp_feats.apply_preprocessor(True)
mat = temp_feats.get_feature_matrix()
dist_centres=mat[7]
lower_pop=mat[12]
feats=array(dist_centres)
feats=vstack((feats, array(lower_pop)))
print(feats, feats.shape)
#convert to shogun format features
feats_train = features(feats)
"""
Explanation: To evaluate more reliably, cross-validation is used. You might have wondered how the parameters of the classifier are selected: Shogun has a model selection framework to select the best parameters. These topics are described in more detail in this notebook.
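A schematic k-fold split, just to illustrate the idea (plain numpy, not Shogun's model-selection classes):
python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    # Shuffle the sample indices and yield (train, test) index arrays for each fold.
    idx = np.random.RandomState(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.hstack([folds[j] for j in range(k) if j != i])
        yield train, test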
More predictions: Regression
This section will demonstrate another type of machine learning problem on real-world data.<br/> The task is to estimate prices of houses in Boston using the Boston Housing Dataset provided by the StatLib library. The attributes are: weighted distances to employment centres and percentage lower status of the population. Let us see if we can find a good relationship between the pricing of houses and the attributes. This type of problem is solved using regression analysis.
The data set is now loaded (this time using CSVFile) and the required attributes (row vectors 7 and 12) are converted to Shogun format features.
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
size=100
x1=linspace(0, 1.0, size)
x2=linspace(0, 1.0, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid = features(array((ravel(x), ravel(y))))
#Train on data(both attributes) and predict
width=1.0
tau=0.5
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train)
kernel.init(feats_train, grid)
out = krr.apply().get_labels()
"""
Explanation: The tool we will use here to perform regression is Kernel ridge regression. Kernel Ridge Regression is a non-parametric version of ridge regression where the kernel trick is used to solve a related linear ridge regression problem in a higher-dimensional space, whose results correspond to non-linear regression in the data-space. Again we train on the data and apply on the X-Y grid to get predictions.
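In the standard formulation, with kernel matrix $K_{ij} = k(x_i, x_j)$ and regularisation parameter $\tau$ (tau in the code above), training amounts to solving
$$\alpha = \left(K + \tau I\right)^{-1} y,$$
and a new point $x_*$ is then predicted as $f(x_*) = \sum_i \alpha_i\, k(x_i, x_*)$.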
End of explanation
"""
#create feature objects for individual attributes.
feats_test = features(x1.reshape(1,len(x1)))
feats_t0=array(dist_centres)
feats_train0 = features(feats_t0.reshape(1,len(feats_t0)))
feats_t1=array(lower_pop)
feats_train1 = features(feats_t1.reshape(1,len(feats_t1)))
#Regression with first attribute
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train0)
kernel.init(feats_train0, feats_test)
out0 = krr.apply().get_labels()
#Regression with second attribute
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
krr=KernelRidgeRegression(tau, kernel, labels)
krr.train(feats_train1)
kernel.init(feats_train1, feats_test)
out1 = krr.apply().get_labels()
#Visualization of regression
fig=figure(figsize(20,6))
#first plot with only one attribute
fig.add_subplot(131)
title("Regression with 1st attribute")
_=scatter(feats[0, :], labels.get_labels(), cmap=gray(), s=20)
_=xlabel('Weighted distances to employment centres ')
_=ylabel('Median value of homes')
_=plot(x1,out0, linewidth=3)
#second plot with only one attribute
fig.add_subplot(132)
title("Regression with 2nd attribute")
_=scatter(feats[1, :], labels.get_labels(), cmap=gray(), s=20)
_=xlabel('% lower status of the population')
_=ylabel('Median value of homes')
_=plot(x1,out1, linewidth=3)
#Both attributes and regression output
ax=fig.add_subplot(133, projection='3d')
z=out.reshape((size, size))
gray()
title("Regression")
ax.plot_wireframe(y, x, z, linewidths=2, alpha=0.4)
ax.set_xlabel('% lower status of the population')
ax.set_ylabel('Distances to employment centres ')
ax.set_zlabel('Median value of homes')
ax.view_init(25, 40)
"""
Explanation: The out variable now contains the predicted relationship between both attributes and the home values. Below is an attempt to establish such a relationship for each attribute individually. Separate feature instances are created for each attribute. You could skip the code and have a look at the plots directly if you just want the essence.
End of explanation
"""
|
aranzgeo/omf
|
notebooks/omf_cbi.ipynb
|
mit
|
import cbi
import cbi_plot
import z_order_utils
import numpy as np
%matplotlib inline
"""
Explanation: OMF.v2 Block Model Storage
Authors: Rowan Cockett, Franklin Koch <br>
Company: Seequent <br>
Date: March 3, 2019
Overview
The proposal below defines a storage algorithm for all sub block model formats in OMF.v2.
The storage & access algorithm is based on sparse matrix/array storage in linear algebra.
The algorithm for the Compressed Block Index format is largely similar between the various block model formats supported by OMF.v2:
Regular Block Model: No additional storage information necessary.
Tensor Block Model: No additional storage information necessary.
Regular Sub Block Model: Single storage array required to record sub-blocking and provide indexing into attribute arrays.
Octree Sub Block Model: Storage array required as well as storage for each Z-Order Curve per octree (discussed in detail below).
Arbitrary Sub Block Model: Storage array required as well as storage of sub-block centroids and sizes.
Summary
The compression format for a Regular Sub Block Model scales with parent block count rather than sub block count.
Storing an Octree Sub Block Model is 12 times more efficient than an Arbitrary Sub Block Model for the same structure. For example, an Octree Sub Block Model with 10M sub-blocks would save 3.52 GB of space.
Attributes for all sub-block types are stored on-disk in contiguous chunks per parent block, allowing for easy memory mapping of attributes, if necessary.
End of explanation
"""
rbm = cbi.RegularBlockModel()
rbm.block_size = [1.5, 2.5, 10.]
rbm.block_count = [3, 2, 1]
rbm.validate()
cbi_plot.plot_rbm(rbm)
"""
Explanation: Compressed Block Index
The Compressed Block Index format (or cbi in code) is a monotonically increasing integer array, which starts at 0 (cbi[0] := 0) and ends at the total number of blocks cbi[i * j * k] := num_blocks. For the n-th parent block, cbi[n+1] - cbi[n] will always be the number of sub-blocks per parent (prod(sub_block_count) for a Regular Sub Block Model). This can be used to determine if the n-th parent block is sub-blocked (i.e. is_sub_blocked[n]), as well as the index into any attribute array to retrieve all of the attribute data for that parent block. That is, attribute[cbi[n] : cbi[n+1]] will always return the attributes for the n-th parent block, regardless of whether the parent block is sub-blocked or not. The cbi indexing is also useful for the Octree and Arbitrary Sub Block Models, allowing additional topology information about the tree structure or arbitrary sub-blocks, respectively, to be stored in a single array.
The Compressed Block Index format means the total size for storage is a fixed-length UInt32 array plus a small amount of metadata (i.e. nine extra numbers, name, description, etc.). That is, this compression format scales with the parent block count rather than the sub-block count. All other information can be derived from the cbi array (e.g. is_sub_blocked as a boolean and all indexing into the attribute arrays). Note: cbi could instead use Int64 for the index, depending on the upper limit required for the number of blocks.
The technique is to be used as underlying storage for Regular Sub Block Model, Octree Sub Block Model, and Arbitrary Sub Block Model. This index is not required for Tensor Block Model or Regular Block Model; however, it could be used as an optional property to have null-blocks (e.g. above the topography) that would decrease the storage of all array attributes. In this case, cbi[n] == cbi[n+1].
Note - For the final implementation, we may store a compressed block count, e.g. [1, 1, 32, 1] instead of [0, 1, 2, 34, 35], with the compressed block index then being a computed cumulative sum. This has slight performance advantages, i.e. refining a parent block into sub-blocks is only O(1) rather than O(n), and storage size advantages, since we can likely use UInt16, constrained by the number of sub-blocks per parent, not the total number of sub-blocks.
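As a small illustration of the access pattern described above, using made-up arrays:
python
import numpy as np

cbi = np.array([0, 1, 9, 10])              # 3 parent blocks; the 2nd is sub-blocked 2x2x2
attribute = np.arange(10, dtype=float)     # one value per (sub-)block

n = 1                                      # second parent block
values = attribute[cbi[n]:cbi[n + 1]]      # its 8 sub-block attribute values
is_sub_blocked = (cbi[1:] - cbi[:-1]) > 1  # [False, True, False]
num_blocks = cbi[-1]                       # 10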
All Block Models
It has been decided that all block models are defined inside a rotated coordinate frame. The implementation of this orientation uses three axis vectors (named axis_u, axis_v and axis_w) and a corner that defines a bounding box in the project coordinate reference system. These axis vectors must be orthogonal but are not opinionated about "handed-ness". The implementation is explicitly not (a) rotation matrices, which may have skew, or (b) defined as three rotations, which may be applied in other orders (e.g. ZYX vs YXZ) and not be consistent. The unwrapping of attributes and the ijk block index is relative to these axes, respectively, in the rotated coordinate frame. By convention, the axis vectors are normalized since their length does not have meaning. Total size of the block model is determined by summing parent block sizes on each dimension. However, it is not absolutely necessary for normalized lengths to be enforced by OMF.
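For example, under this convention the project-CRS centre of the (i, j, k) parent block of a regular model could be computed as follows (a sketch only; block sizes are defined per model type in the sections below):
python
import numpy as np

def block_centre(corner, axis_u, axis_v, axis_w, block_size, ijk):
    # corner: minimum of the bounding box; axis_u/v/w: normalized frame axes
    i, j, k = ijk
    return (np.asarray(corner)
            + (i + 0.5) * block_size[0] * np.asarray(axis_u)
            + (j + 0.5) * block_size[1] * np.asarray(axis_v)
            + (k + 0.5) * block_size[2] * np.asarray(axis_w))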
Stored Properties
name - Name of the block model
description - Description of the block model
attributes - list of standard OMF.v1 attributes
axis_u (Vector3) Orientation of the i-axis in the project CRS
axis_v (Vector3) Orientation of the j-axis in the project CRS
axis_w (Vector3) Orientation of the k-axis in the project CRS
corner (Vector3) Minimum x/y/z in the project CRS
location - String representation of where attributes are defined on the block model. Either "parent_blocks" or "sub_blocks" (if sub blocks are present in the block model class). This could be extended to "faces", "edges", and "nodes" for Regular and Tensor Block Models
Attributes
All block models are stored with flat attribute arrays, allowing for efficient storage and access, as well as adhering to existing standards set out for all other Elements in the OMF.v1 format. The standard counting is column major ordering, following "Fortran" style indexing -- in numpy (Python) this uses array.flatten(order='F') where array is the 3D attribute array. To be explicit, inside a for-loop the i index always moves the fastest:
python
count = regular_block_model.block_count
index = 0
for k in range(count[2]):
for j in range(count[1]):
for i in range(count[0]):
print(index, (i, j, k))
index += 1
Regular Block Model
Stored Properties
block_size: a Vector3 (Float) that describes how large each block is
block_count: a Vector3 (Int) that describes how many blocks in each dimension
Note
For the final implementation we will use property names size_blocks/size_parent_blocks/size_sub_blocks and equivalently num_* - this enables slightly easier discoverability of properties across different element types.
End of explanation
"""
tbm = cbi.TensorBlockModel()
tbm.tensor_u = [2.5, 1.0, 1.0]
tbm.tensor_v = [3.5, 1.5]
tbm.tensor_w = [10.0]
tbm.validate()
print("block_count:", tbm.block_count)
print("num_blocks:", tbm.num_blocks)
cbi_plot.plot_tbm(tbm)
"""
Explanation: Tensor Block Model
Stored Properties
tensor_u: a Float64 array of spacings along axis_u
tensor_v: a Float64 array of spacings along axis_v
tensor_w: a Float64 array of spacings along axis_w
Note: block_size[0] for the i-th block is tensor_u[i] and block_count[0] is len(tensor_u). Counting for attributes is the same as Regular Block Model.
End of explanation
"""
rsbm = cbi.RegularSubBlockModel()
rsbm.parent_block_size = [1.5, 2.5, 10.]
rsbm.parent_block_count = [3, 2, 1]
rsbm.sub_block_count = [2, 2, 2]
rsbm.validate()
print("cbi:", rsbm.compressed_block_index)
print("num_blocks:", rsbm.num_blocks)
print("is_sub_blocked:", rsbm.is_sub_blocked)
print("sub_block_size:", rsbm.sub_block_size)
cbi_plot.plot_rsbm(rsbm)
rsbm.refine((0, 1, 0))
print("cbi:", rsbm.compressed_block_index)
print("num_blocks:", rsbm.num_blocks)
print("is_sub_blocked:", rsbm.is_sub_blocked)
print("sub_block_size:", rsbm.sub_block_size)
cbi_plot.plot_rsbm(rsbm)
"""
Explanation: Regular Sub Block Model
The RegularSubBlockModel requires storage of information to store the parent and sub block counts as well as the parent block sizes. Attribute ordering for sub-blocks within each parent block is also column-major ordering.
Stored Properties
parent_block_size: a Vector3 (Float) that describes how large each parent block is
parent_block_count: a Vector3 (Int) that describes how many parent blocks in each dimension
sub_block_count: a Vector3 (Int) that describes how many sub blocks in each dimension are contained within each parent block
compressed_block_index: a UInt32 array of length (i * j * k + 1) that defines the sub block topology
End of explanation
"""
osbm = cbi.OctreeSubBlockModel()
osbm.parent_block_size = [1.5, 2.5, 10.]
osbm.parent_block_count = [3, 2, 1]
osbm.validate();
print('cbi: ', osbm.compressed_block_index)
print('z_order_curves: ', osbm.z_order_curves)
print('num_blocks: ', osbm.num_blocks)
cbi_plot.plot_osbm(osbm)
# This part needs work in the implementation for a high level wrapper
osbm._refine_child((0, 1, 0), 0)
osbm._refine_child((0, 1, 0), 1)
print('cbi: ', osbm.compressed_block_index)
print('z_order_curves: ', osbm.z_order_curves)
print('num_blocks: ', osbm.num_blocks)
cbi_plot.plot_osbm(osbm)
"""
Explanation: Octree Sub Block Model
The Octree Sub Block Model is a "forest" of individual octrees, with the "root" of every octree positioned at the center of each parent block within a Regular Block Model. Each octree is stored as a Linear Octree with the space-filling curve chosen to be a Z-Order Curve (also known as a Morton curve). The Z-Order curve was chosen based on the efficient properties of bit-interleaving to produce a sorted integer array that defines both the attribute ordering and the topology of the sub blocks; this has been used successfully in HPC algorithms for "forests of octrees" (e.g. Parallel Forests of Octrees, PDF). Note that the maximum level necessary for each octree must be decided upon in OMF.v2; the industry standard is up to eight refinements, and that is what has been proposed. The level information is stored in this integer through a left-shift binary operation (i.e. (z_order << 3) + level). For efficient access to the attributes, the Compressed Block Index is also stored.
Stored Properties
parent_block_size: a Vector3 (Float64) that describes how large each parent block is
parent_block_count: a Vector3 (Int16) that describes how many parent blocks in each dimension
compressed_block_index: a UInt32 array of length (i * j * k + 1) that defines delineation between octrees in the forest
z_order_curves: a UInt32 array of length num_blocks containing the Z-Order curves for all octrees. Unrefined parents have z-order curve of 0
See first three functions of discretize tree mesh for an implementation of z-order curve.
End of explanation
"""
osbm = cbi.OctreeSubBlockModel()
osbm.parent_block_size = [1.5, 2.5, 10.]
osbm.parent_block_count = [3, 2, 1]
print('Refine the (0, 0, 0) parent block.')
children = osbm._refine_child((0, 0, 0), 0)
print('The children are:')
print(children)
print('Refine the (0, 0, 0) parent block, sub-block (0, 0, 0, 1).')
children = osbm._refine_child((0, 0, 0), 1)
print('The children are:')
print(children)
"""
Explanation: Octree Pointers and Level
A Z-Order curve is used to encode each octree into a linear array. The example below shows visually how the pointer and level information is encoded into a single 32-bit integer. The key piece is to decide how many levels are possible within each tree. Choosing the current industry standard of 8 levels allows for 256 sub-blocks in each dimension. This can accommodate 16.7 million sub-blocks within each parent block. Note that the actual block model may have many more blocks than those in a single parent block.
The pointer of an octree sub-block has an ijk index, which is the sub-block corner relative to the parent block corner; the max dimension of each is 256. There is also a level that corresponds to the level of the octree -- 0 corresponds to the largest block size (i.e. same as the parent block) and 7 corresponds to the smallest block size.
The sub-blocks must be refined as an octree. That is, the root block has level=0 and width=256, and can be refined into 8 children - each with level=1 and width=128.
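A sketch of that refinement rule (a hypothetical helper for illustration only, independent of the cbi implementation used in the code cells):
python
def child_pointers(pointer, level, max_level=8):
    # Split the block at corner `pointer` (in 0..255 units) and `level` into its 8 octree children.
    assert level < max_level
    width = 256 >> (level + 1)  # child width: 128 at level 1, 64 at level 2, ...
    i, j, k = pointer
    return [((i + di, j + dj, k + dk), level + 1)
            for dk in (0, width) for dj in (0, width) for di in (0, width)]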
End of explanation
"""
pointer = [0, 128, 0]
level = 1
ind = z_order_utils.get_index(pointer, level)
pnt, lvl = z_order_utils.get_pointer(ind)
# assert that you get back what you put in:
assert (pointer == pnt) & (level == lvl)
print(ind)
print(pnt, lvl)
"""
Explanation: Linear Octree Encoding
The encoding into a linear octree is done through bit-interleaving of each location integer. This produces a Z-Order Curve, which is a space filling curve - it guarantees a unique index for each block, and has the nice property that blocks close together are stored close together in the attribute arrays.
<center><img src="zordercurve.png" style="width:250px"><br>Visualization of the space filling Z-Order Curve</center>
End of explanation
"""
z_order_utils._print_example(pointer, level);
"""
Explanation: The actual encoding is completed through bit-interleaving of the three ijk-indices and then adding the level via left-shifting the integer. This is what the _print_example call visualizes in text; a standalone sketch of the encoding is also given below.
End of explanation
"""
asbm = cbi.ArbitrarySubBlockModel()
asbm.parent_block_size = [1.5, 2.5, 10.]
asbm.parent_block_count = [3, 2, 1]
asbm.validate();
print('cbi: ', asbm.compressed_block_index)
print('num_blocks: ', asbm.num_blocks)
print('num_parent_blocks: ', asbm.num_parent_blocks)
# Nothing to plot to start with
def add_parent_block(asbm, ijk):
"""Nothing special about these, they are just sub-blocks."""
pbs = np.array(asbm.parent_block_size)
half = pbs / 2.0
offset = half + pbs * ijk
asbm._add_sub_blocks(ijk, offset, half*2)
# Something special for the first ones
asbm._add_sub_blocks(
(0, 0, 0), [0.75, 1.25, 2.5], [1.5, 2.5, 5.]
)
asbm._add_sub_blocks(
(0, 0, 0), [0.375, 1.25, 7.5], [0.75, 2.5, 5.]
)
asbm._add_sub_blocks(
(0, 0, 0), [1.175, 1.25, 7.5], [0.75, 2.5, 5.]
)
add_parent_block(asbm, (1, 0, 0))
add_parent_block(asbm, (2, 0, 0))
add_parent_block(asbm, (0, 1, 0))
add_parent_block(asbm, (1, 1, 0))
add_parent_block(asbm, (2, 1, 0))
print('cbi: ', asbm.compressed_block_index)
print('num_blocks: ', asbm.num_blocks)
print('num_parent_blocks: ', asbm.num_parent_blocks)
cbi_plot.plot_asbm(asbm)
"""
Explanation: Octree Storage Summary
The overall storage format reduces to two arrays: (1) the compressed block index (cbi), with length equal to the number of parent blocks plus one; (2) z_order_curves, with length equal to the total number of sub-blocks. This parallels standard storage formats for sparse matrices as well as standard octree storage formats. The outcome is a storage format that is compact and allows for efficient access of, for example, all sub-blocks in a parent block. The contiguous data access allows for memory-mapped arrays, among other efficiencies. The format is also twelve times more efficient than the equivalent storage of an Arbitrary Sub Block Model (one UInt32 vs six Float64 arrays). For example, a 10M cell block model saves 3.52 GB of space stored in this format. The format also enforces consistency on the indexing of the attributes. These efficiencies, as well as the classic algorithms available for searching octrees, can be taken advantage of in vendor applications both for visualization and for evaluation of other attributes.
Arbitrary Sub Block Model
The Arbitrary Sub Block Model is the most flexible and also least efficient storage format. The format allows for storage of arbitrary blocks that are contained within the parent block. The Arbitrary Sub Block Model does not enforce that sub-blocks fill the entire space of the parent block.
Stored Properties
parent_block_size: a Vector3 (Float64) that describes how large each parent block is
parent_block_count: a Vector3 (Int16) that describes how many parent blocks in each dimension
compressed_block_index: a UInt32 array of length (i * j * k + 1) that defines the sub block count
sub_block_centroids: a Float64 array containing the sub block centroids for all parent blocks - there are no assumptions about how the sub-blocks are ordered within each parent block
sub_block_sizes: a Float64 array containing the sub block sizes for all parent blocks
Centroids and Sizes
These are stored as two Float64 arrays in the form [x_1, y_1, z_1, x_2, y_2, z_2, ...] to ensure centroids can easily be accessed through the cbi indexing as well as memory mapped per parent block. The sizes and centroids are normalized within the parent block, that is, 0 < centroid < 1 and 0 < size <= 1. This has two advantages: (1) it is easy to tell if values are outside the parent block, and (2) given a large offset, this may allow a smaller storage size. (A conversion sketch back to world coordinates is given at the end of this section.)
Parent blocks without sub blocks
Since the cbi is used to index into sub block centroid/size arrays, non-sub-blocked parents require an entry in these arrays. Likely this is centroid [.5, .5, .5] and size [1, 1, 1].
Question: Centroid vs. Corner
Should we store the corner instead to be consistent with the orientation of the block model storage? Storing the corner means three fewer operations per centroid for checking if it is contained by the parent (although one more for access; centroid seems to be the industry standard). We could even store opposing corners, instead of corner and size, which would enable exact comparisons to determine adjacent sub blocks.
There is no storage advantage to corners/corners vs. corners/sizes vs. centroids/sizes, especially if these are all normalized. Corners/sizes gives the most API consistency, since we store block model corner and block size. Regardless of which we store, all these properties should be exposed in the client libraries.
End of explanation
"""
|
shwsun/spot-analysis
|
plot_stock_market.ipynb
|
apache-2.0
|
print(__doc__)
# Author: Gael Varoquaux gael.varoquaux@normalesup.org
# License: BSD 3 clause
import datetime
import numpy as np
import matplotlib.pyplot as plt
try:
from matplotlib.finance import quotes_historical_yahoo_ochl
except ImportError:
# quotes_historical_yahoo_ochl was named quotes_historical_yahoo before matplotlib 1.4
from matplotlib.finance import quotes_historical_yahoo as quotes_historical_yahoo_ochl
from matplotlib.collections import LineCollection
from sklearn import cluster, covariance, manifold
"""
Explanation: Visualizing the stock market structure
This example employs several unsupervised learning techniques to extract
the stock market structure from variations in historical quotes.
The quantity that we use is the daily variation in quote price: quotes
that are linked tend to cofluctuate during a day.
.. _stock_market:
Learning a graph structure
We use sparse inverse covariance estimation to find which quotes are
correlated conditionally on the others. Specifically, sparse inverse
covariance gives us a graph, that is, a list of connections. For each
symbol, the symbols that it is connected to are those useful to explain
its fluctuations.
Clustering
We use clustering to group together quotes that behave similarly. Here,
amongst the :ref:various clustering techniques <clustering> available
in scikit-learn, we use :ref:affinity_propagation as it does
not enforce equal-size clusters, and it can automatically choose the
number of clusters from the data.
Note that this gives us a different indication than the graph, as the
graph reflects conditional relations between variables, while the
clustering reflects marginal properties: variables clustered together can
be considered as having a similar impact at the level of the full stock
market.
Embedding in 2D space
For visualization purposes, we need to lay out the different symbols on a
2D canvas. For this we use :ref:manifold techniques to retrieve 2D
embedding.
Visualization
The output of the 3 models is combined in a 2D graph where the nodes
represent the stocks and the edges the links between them:
cluster labels are used to define the color of the nodes
the sparse covariance model is used to display the strength of the edges
the 2D embedding is used to position the nodes in the plane
This example has a fair amount of visualization-related code, as
visualization is crucial here to display the graph. One of the challenges
is to position the labels while minimizing overlap. For this we use a
heuristic based on the direction of the nearest neighbor along each
axis.
End of explanation
"""
# Choose a time period reasonably calm (not too long ago so that we get
# high-tech firms, and before the 2008 crash)
d1 = datetime.datetime(2003, 1, 1)
d2 = datetime.datetime(2008, 1, 1)
# kraft symbol has now changed from KFT to MDLZ in yahoo
symbol_dict = {
'TOT': 'Total',
'XOM': 'Exxon',
'CVX': 'Chevron',
'COP': 'ConocoPhillips',
'VLO': 'Valero Energy',
'MSFT': 'Microsoft',
'IBM': 'IBM',
'TWX': 'Time Warner',
'CMCSA': 'Comcast',
'CVC': 'Cablevision',
'YHOO': 'Yahoo',
'DELL': 'Dell',
'HPQ': 'HP',
'AMZN': 'Amazon',
'TM': 'Toyota',
'CAJ': 'Canon',
'MTU': 'Mitsubishi',
'SNE': 'Sony',
'F': 'Ford',
'HMC': 'Honda',
'NAV': 'Navistar',
'NOC': 'Northrop Grumman',
'BA': 'Boeing',
'KO': 'Coca Cola',
'MMM': '3M',
'MCD': 'Mc Donalds',
'PEP': 'Pepsi',
'MDLZ': 'Kraft Foods',
'K': 'Kellogg',
'UN': 'Unilever',
'MAR': 'Marriott',
'PG': 'Procter Gamble',
'CL': 'Colgate-Palmolive',
'GE': 'General Electrics',
'WFC': 'Wells Fargo',
'JPM': 'JPMorgan Chase',
'AIG': 'AIG',
'AXP': 'American express',
'BAC': 'Bank of America',
'GS': 'Goldman Sachs',
'AAPL': 'Apple',
'SAP': 'SAP',
'CSCO': 'Cisco',
'TXN': 'Texas instruments',
'XRX': 'Xerox',
'LMT': 'Lockheed Martin',
'WMT': 'Wal-Mart',
'WBA': 'Walgreen',
'HD': 'Home Depot',
'GSK': 'GlaxoSmithKline',
'PFE': 'Pfizer',
'SNY': 'Sanofi-Aventis',
'NVS': 'Novartis',
'KMB': 'Kimberly-Clark',
'R': 'Ryder',
'GD': 'General Dynamics',
'RTN': 'Raytheon',
'CVS': 'CVS',
'CAT': 'Caterpillar',
'DD': 'DuPont de Nemours'}
symbols, names = np.array(list(symbol_dict.items())).T
quotes = [quotes_historical_yahoo_ochl(symbol, d1, d2, asobject=True)
for symbol in symbols]
print(quotes)
open = np.array([q.open for q in quotes]).astype(float)
close = np.array([q.close for q in quotes]).astype(float)
# The daily variations of the quotes are what carry most information
variation = close - open
"""
Explanation: Retrieve the data from Internet
End of explanation
"""
edge_model = covariance.GraphLassoCV()
# standardize the time series: using correlations rather than covariance
# is more efficient for structure recovery
X = variation.copy().T
X /= X.std(axis=0)
edge_model.fit(X)
"""
Explanation: Learn a graphical structure from the correlations
End of explanation
"""
_, labels = cluster.affinity_propagation(edge_model.covariance_)
n_labels = labels.max()
for i in range(n_labels + 1):
print('Cluster %i: %s' % ((i + 1), ', '.join(names[labels == i])))
"""
Explanation: Quote: "More precisely, if one uses assume_centered=False, then the test set is supposed to have the same mean vector as the training set. If not so, both should be centered by the user, and assume_centered=True should be used."
Jethro: It means that here the test set should have the same mean vector as the training set.
Cluster using affinity propagation
End of explanation
"""
# We use a dense eigen_solver to achieve reproducibility (arpack is
# initiated with random vectors that we don't control). In addition, we
# use a large number of neighbors to capture the large-scale structure.
node_position_model = manifold.LocallyLinearEmbedding(
n_components=2, eigen_solver='dense', n_neighbors=6)
embedding = node_position_model.fit_transform(X.T).T
"""
Explanation: Find a low-dimension embedding for visualization: find the best position of
the nodes (the stocks) on a 2D plane
End of explanation
"""
plt.figure(1, facecolor='w', figsize=(10, 8))
plt.clf()
ax = plt.axes([0., 0., 1., 1.])
plt.axis('off')
# Display a graph of the partial correlations
partial_correlations = edge_model.precision_.copy()
d = 1 / np.sqrt(np.diag(partial_correlations))
partial_correlations *= d
partial_correlations *= d[:, np.newaxis]
non_zero = (np.abs(np.triu(partial_correlations, k=1)) > 0.02)
# Plot the nodes using the coordinates of our embedding
plt.scatter(embedding[0], embedding[1], s=100 * d ** 2, c=labels,
cmap=plt.cm.spectral)
# Plot the edges
start_idx, end_idx = np.where(non_zero)
#a sequence of (*line0*, *line1*, *line2*), where::
# linen = (x0, y0), (x1, y1), ... (xm, ym)
segments = [[embedding[:, start], embedding[:, stop]]
for start, stop in zip(start_idx, end_idx)]
values = np.abs(partial_correlations[non_zero])
lc = LineCollection(segments,
zorder=0, cmap=plt.cm.hot_r,
norm=plt.Normalize(0, .7 * values.max()))
lc.set_array(values)
lc.set_linewidths(15 * values)
ax.add_collection(lc)
# Add a label to each node. The challenge here is that we want to
# position the labels to avoid overlap with other labels
for index, (name, label, (x, y)) in enumerate(
zip(names, labels, embedding.T)):
dx = x - embedding[0]
dx[index] = 1
dy = y - embedding[1]
dy[index] = 1
this_dx = dx[np.argmin(np.abs(dy))]
this_dy = dy[np.argmin(np.abs(dx))]
if this_dx > 0:
horizontalalignment = 'left'
x = x + .002
else:
horizontalalignment = 'right'
x = x - .002
if this_dy > 0:
verticalalignment = 'bottom'
y = y + .002
else:
verticalalignment = 'top'
y = y - .002
plt.text(x, y, name, size=10,
horizontalalignment=horizontalalignment,
verticalalignment=verticalalignment,
bbox=dict(facecolor='w',
edgecolor=plt.cm.spectral(label / float(n_labels)),
alpha=.6))
plt.xlim(embedding[0].min() - .15 * embedding[0].ptp(),
embedding[0].max() + .10 * embedding[0].ptp(),)
plt.ylim(embedding[1].min() - .03 * embedding[1].ptp(),
embedding[1].max() + .03 * embedding[1].ptp())
plt.show()
"""
Explanation: Visualization
End of explanation
"""
|
google-research/google-research
|
group_agnostic_fairness/data_utils/CreateUCIAdultDatasetFiles.ipynb
|
apache-2.0
|
from __future__ import division
import pandas as pd
import numpy as np
import json
import os,sys
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import numpy as np
"""
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
End of explanation
"""
pd.options.display.float_format = '{:,.2f}'.format
dataset_base_dir = './group_agnostic_fairness/data/uci_adult/'
"""
Explanation: Overview
Pre-processes UCI Adult (Census Income) dataset:
The Adult train and test data files can be downloaded from:
https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data
https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test
and save them in the ./group_agnostic_fairness/data/uci_adult folder.
Input:
./group_agnostic_fairness/data/uci_adult/adult.data
./group_agnostic_fairness/data/uci_adult/adult.test
Outputs: train.csv, test.csv, mean_std.json, vocabulary.json, IPS_exampleweights_with_label.json, IPS_exampleweights_without_label.json
End of explanation
"""
def convert_object_type_to_category(df):
"""Converts columns of type object to category."""
df = pd.concat([df.select_dtypes(include=[], exclude=['object']),
df.select_dtypes(['object']).apply(pd.Series.astype, dtype='category')
], axis=1).reindex(df.columns, axis=1)
return df
TRAIN_FILE = os.path.join(dataset_base_dir,'adult.data')
TEST_FILE = os.path.join(dataset_base_dir,'adult.test')
columns = [
"age", "workclass", "fnlwgt", "education", "education-num",
"marital-status", "occupation", "relationship", "race", "sex",
"capital-gain", "capital-loss", "hours-per-week", "native-country", "income"
]
target_variable = "income"
target_value = ">50K"
with open(TRAIN_FILE, "r") as TRAIN_FILE:
train_df = pd.read_csv(TRAIN_FILE,sep=',',names=columns)
with open(TEST_FILE, "r") as TEST_FILE:
test_df = pd.read_csv(TEST_FILE,sep=',',names=columns)
# Convert columns of type ``object`` to ``category``
train_df = convert_object_type_to_category(train_df)
test_df = convert_object_type_to_category(test_df)
"""
Explanation: Load original dataset
End of explanation
"""
IPS_example_weights_without_label = {
0: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex != 'Female')])), # 00: White Male
1: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex == 'Female')])), # 01: White Female
2: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex != 'Female')])), # 10: Black Male
3: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex == 'Female')])) # 11: Black Female
}
output_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_without_label.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(IPS_example_weights_without_label))
output_file.close()
print(IPS_example_weights_without_label)
IPS_example_weights_with_label = {
0: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 000: Negative White Male
1: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 001: Negative White Female
2: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 010: Negative Black Male
3: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 011: Negative Black Female
4: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 100: Positive White Male
5: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 101: Positive White Female
6: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 110: Positive Black Male
7: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 111: Positive Black Female
}
output_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_with_label.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(IPS_example_weights_with_label))
output_file.close()
print(IPS_example_weights_with_label)
"""
Explanation: Computing inverse propensity weights for each subgroup, and writing them to the directory.
IPS_example_weights_with_label.json: json dictionary of the format
{subgroup_id : inverse_propensity_score,...}. Used by IPS_reweighting_model approach.
End of explanation
"""
cat_cols = train_df.select_dtypes(include='category').columns
vocab_dict = {}
for col in cat_cols:
vocab_dict[col] = list(set(train_df[col].cat.categories)-{"?"})
output_file_path = os.path.join(dataset_base_dir,'vocabulary.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(vocab_dict))
output_file.close()
print(vocab_dict)
"""
Explanation: Construct vocabulary.json, and write to directory.
vocabulary.json: json dictionary of the format {feature_name: [feature_vocabulary]}, containing vocabulary for categorical features.
End of explanation
"""
temp_dict = train_df.describe().to_dict()
mean_std_dict = {}
for key, value in temp_dict.items():
mean_std_dict[key] = [value['mean'],value['std']]
output_file_path = os.path.join(dataset_base_dir,'mean_std.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(mean_std_dict))
output_file.close()
print(mean_std_dict)
"""
Explanation: Construct mean_std.json, and write to directory
mean_std.json: json dictionary of the format {feature_name: [mean, std]},
containing mean and std for numerical features.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_read_noise_covariance_matrix.ipynb
|
bsd-3-clause
|
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
from os import path as op
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_evo = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
cov = mne.read_cov(fname_cov)
print(cov)
evoked = mne.read_evokeds(fname_evo)[0]
"""
Explanation: =========================================
Reading/Writing a noise covariance matrix
=========================================
Plot a noise covariance matrix.
End of explanation
"""
cov.plot(evoked.info, exclude='bads', show_svd=False)
"""
Explanation: Show covariance
End of explanation
"""
|
ibm-cds-labs/pixiedust
|
notebook/data-load-samples/Load from Object Storage - Python.ipynb
|
apache-2.0
|
import pixiedust
pixiedust.enableJobMonitor()
"""
Explanation: Loading data from Object Storage
You can load data from cloud storage such as Object Storage.
Prerequisites
Collect your Object Storage connection information:
Authorization URL (auth_url), e.g. https://identity.open.softlayer.com
Project ID (projectId)
Region (region), e.g. dallas
User id (userId)
Password (password)
<div class="alert alert-block alert-info">
If your Object Storage instance was provisioned in Bluemix you can find the connectivity information in the _Service Credentials_ tab.
</div>
Collect your data set information
Container name, e.g. my_sample_data
File name, e.g. my_data_set.csv
Import PixieDust and enable the Spark Job monitor
End of explanation
"""
# @hidden_cell
# Enter your ...
OS_AUTH_URL = 'https://identity.open.softlayer.com'
OS_USERID = '...'
OS_PASSWORD = '...'
OS_PROJECTID = '...'
OS_REGION = '...'
OS_SOURCE_CONTAINER = '...'
OS_FILENAME = '....csv'
"""
Explanation: Configure Object Storage connectivity
Customize this cell with your Object Storage connection information
End of explanation
"""
# no changes are required to this cell
from ingest import Connectors
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
objectstoreloadOptions = {
Connectors.BluemixObjectStorage.AUTH_URL : OS_AUTH_URL,
Connectors.BluemixObjectStorage.USERID : OS_USERID,
Connectors.BluemixObjectStorage.PASSWORD : OS_PASSWORD,
Connectors.BluemixObjectStorage.PROJECTID : OS_PROJECTID,
Connectors.BluemixObjectStorage.REGION : OS_REGION,
Connectors.BluemixObjectStorage.SOURCE_CONTAINER : OS_SOURCE_CONTAINER,
Connectors.BluemixObjectStorage.SOURCE_FILE_NAME : OS_FILENAME,
Connectors.BluemixObjectStorage.SOURCE_INFER_SCHEMA : '1'}
os_data = sqlContext.read.format("com.ibm.spark.discover").options(**objectstoreloadOptions).load()
"""
Explanation: Load CSV data
Load csv file from Object Storage into a Spark DataFrame.
End of explanation
"""
display(os_data)
"""
Explanation: Explore the loaded data using PixieDust
End of explanation
"""
|
mdeff/ntds_2016
|
project/reports/global_warming/E_Simou.ipynb
|
mit
|
import numpy as np
# Show matplotlib graphs inside the notebook.
%matplotlib inline
import os.path
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import plotly
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
from sklearn import linear_model
from statsmodels.tsa.arima_model import ARIMA
from myutils import makeTimeSeries
from myutils import differenciate
from myutils import test_stationarity
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: Final Project for "A Network Tour of Data Science"- Global Warming
Effrosyni Simou
1. Aim of the Project
Since the 2016 Presidential Elections in the USA, the interest of people with regards to climate change and the correct environmental policy has reached an all-time high. In this project the aim is to use a dataset with temperature data from 1750 to 2015 [1] and check whether global warming is a fact or a speculation. The dataset is nicely packaged and allows for slicing into interesting subsets (by country, by city, global temperatures e.t.c.). It was put together by Berkeley Earth, which is affiliated with Lawrence Berkeley National Laboratory.
2. Data Acquisition
End of explanation
"""
folder = os.path.join('data', 'temperatures','GlobalLandTemperatures')
filename_ByCity = os.path.join(folder, 'GlobalLandTemperaturesByCity.csv')
filename_ByCountry = os.path.join(folder, 'GlobalLandTemperaturesByCountry.csv')
filename_ByMajorCity = os.path.join(folder, 'GlobalLandTemperaturesByMajorCity.csv')
filename_ByState = os.path.join(folder, 'GlobalLandTemperaturesByState.csv')
filename_Global = os.path.join(folder, 'GlobalTemperatures.csv')
ByCity=pd.read_csv(filename_ByCity)
ByCountry=pd.read_csv(filename_ByCountry)
ByMajorCity=pd.read_csv(filename_ByMajorCity)
ByState=pd.read_csv(filename_ByState)
Global=pd.read_csv(filename_Global)
"""
Explanation: 2.1 Importing the data
End of explanation
"""
ByCity[:10000].to_html('ByCity.html')
ByCountry[:10000].to_html('ByCountry.html')
ByMajorCity[:10000].to_html('ByMajorCity.html')
ByState[:10000].to_html('ByState.html')
Global.to_html('Global.html')
"""
Explanation: 2.2 Looking at the data
End of explanation
"""
#Removing duplicates from ByCountry
ByCountry_clear = ByCountry[~ByCountry['Country'].isin(
['Denmark', 'France', 'Europe', 'Netherlands',
'United Kingdom'])]
#ByCountry_clear.loc[ByCountry_clear['Country'] == 'Denmark (Europe)']
ByCountry_clear = ByCountry_clear.replace(
['Denmark (Europe)', 'France (Europe)', 'Netherlands (Europe)', 'United Kingdom (Europe)'],
['Denmark', 'France', 'Netherlands', 'United Kingdom'])
#countries = np.unique(ByCountry_clear['Country'])
#np.set_printoptions(threshold=np.inf)
#print(countries)
#Removing duplicates from ByCity
ByCity_clear = ByCity[~ByCity['City'].isin(
['Guatemala'])]
ByCity_clear = ByCity_clear.replace(['Guatemala City'],['Guatemala'])
#cities = np.unique(ByCity_clear['City'])
#print(cities)
"""
Explanation: Export part of the dataset as HTML files for inspection: ByCity, ByCountry,
ByMajorCity, ByState, Global.
As we can see by following the links above, there is a need to clean our data:
* There are missing data. For instance, in the case of the global temperatures there are no measurements for maximum/minimum land temperatures as well as no measurements for land and ocean temperatures before 1850.
There are duplicates in our data. This makes sense since the dataset was created by combining 16 pre-existing archives. For instance, in the case of temperatures by country, the temperatures for Denmark, France, Netherlands and United Kingdom are duplicated. Also, in the case of temperatures by city, the temperatures for Guatemala City are duplicated.
Older measurements are less reliable. The measurements in this dataset date as far back as 1743. It is expected that older measurements will be noisy and therefore less reliable. We will visualize the uncertainty of the measurements in the next section.
2.3 Cleaning the data
2.3.1 Removing duplicates
End of explanation
"""
Global.dropna(subset=['LandAverageTemperature']).head()
"""
Explanation: 2.3.2 Working with the missing data
As far as the missing data is concerned, we can choose to either:
* Ignore the missing values
* Use the values we have in order to fill in the missing values (e.g. pad, interpolate e.t.c.).
For example, if we choose to ignore the global temperature measurement for a month where the value for LandAverageTemperature is missing, we can do it as follows:
End of explanation
"""
Global.dropna(axis=0).head()
"""
Explanation: Or, if we choose to ignore the global temperature measurements for which we don't have all of the 8 fields:
End of explanation
"""
Global.fillna(method='pad').head()
"""
Explanation: If we choose to fill in the missing values with the values of the previous corresponding measurement:
End of explanation
"""
mean_Global= []
mean_Global_uncertainty = []
years = np.unique(Global['dt'].apply(lambda x: x[:4]))
for year in years:
mean_Global.append(Global[Global['dt'].apply(
lambda x: x[:4]) == year]['LandAverageTemperature'].mean())
mean_Global_uncertainty.append(Global[Global['dt'].apply(
lambda x: x[:4]) == year]['LandAverageTemperatureUncertainty'].mean())
#print(years.dtype)
x=years.astype(int)
minimum=np.array(mean_Global) + np.array(mean_Global_uncertainty)
y=np.array(mean_Global)
maximum=np.array(mean_Global) - np.array(mean_Global_uncertainty)
plt.figure(figsize=(16,8))
# successive plot calls draw on the same axes, so the deprecated plt.hold is not needed
plt.plot(x,minimum,'b')
plt.plot(x,y,'r')
plt.plot(x,maximum,'b')
plt.fill_between(x,y1=minimum,y2=maximum)
plt.xlabel('years',fontsize=16)
plt.xlim(1748,2017)
plt.ylabel('Temperature, °C',fontsize=16)
plt.title('Yearly Global Temperature',fontsize=24)
"""
Explanation: The method we will use will depend on the problem we will try to solve with our data.
2.3.3 Uncertainty of measurements with time
End of explanation
"""
countries = np.unique(ByCountry_clear['Country'])
mean_temp = []
for country in countries:
mean_temp.append(ByCountry_clear[ByCountry_clear['Country'] == country]['AverageTemperature'].mean())
#when taking the mean the missing data are automatically ignored=>see data cleaning section
#use choropleth map provided by plotly
data = [ dict(
type = 'choropleth',
locations = countries,
z = mean_temp,
locationmode = 'country names',
text = countries,
colorbar = dict(autotick = True, tickprefix = '',
title = '\n °C')
)
]
layout = dict(
title = 'Average Temperature in Countries',
geo = dict(
showframe = False,
showocean = True,
oceancolor = 'rgb(0,255,255)',
),
)
fig = dict(data=data, layout=layout)
py.iplot(fig,validate=False)
"""
Explanation: As can be observed, the uncertainty of the measurements in the 18th and 19th centuries was very high. Early data was collected by technicians using mercury thermometers, where any variation in the visit time impacted measurements. In the 1940s, the construction of airports caused many weather stations to be moved. In the 1980s, there was a move to electronic thermometers that are said to have a cooling bias. One can choose to ignore or give smaller weights to older, less reliable measurements. For the data exploitation part we will consider data from 1900 onward.
3. Data Exploration
3.1 Which countries are warmer?
We now draw a map with the average temperature of each country over all years. This serves as a quick way to check that our data make sense. We can see that the warmest countries are the ones along the Equator and that the coldest countries are Greenland, Canada and Russia. Countries for which the data was missing are depicted in white. One can hover over countries to see their names and average temperatures.
End of explanation
"""
years_in_MajorCities=np.unique(ByMajorCity['dt'].apply(lambda x: x[:4]))
cities = np.unique(ByMajorCity['City'])
dt=[years_in_MajorCities[-51],years_in_MajorCities[-1]]
T1=[]
T2=[]
lon=[]
lat=[]
for city in cities:
T1.append(ByMajorCity[(ByMajorCity['City'] == city) & (ByMajorCity['dt'].apply(lambda x: x[:4]) == dt[0])]['AverageTemperature'].mean())
T2.append(ByMajorCity[(ByMajorCity['City'] == city) & (ByMajorCity['dt'].apply(lambda x: x[:4]) == dt[1])]['AverageTemperature'].mean())
lon.append(ByMajorCity[ByMajorCity['City'] == city]['Longitude'].iloc[1])
lat.append(ByMajorCity[ByMajorCity['City'] == city]['Latitude'].iloc[1])
lon=np.array(lon)
lat=np.array(lat)
for i in range(0,lon.size):
if lon[i].endswith('W'):
west=lon[i]
west=float(west[:-1])
east=str(360-west)
lon[i]=east+'E'
for i in range(0,lat.size):
if lat[i].endswith('S'):
south=lat[i]
south=float(south[:-1])
north=str(1-south)
lat[i]=north+'N'
lon=pd.DataFrame(lon)
lat=pd.DataFrame(lat)
long=lon[0].apply(lambda x: x[:-1])
lati=lat[0].apply(lambda x: x[:-1])
dT=np.array(T2)-np.array(T1)
data = [ dict(
type = 'scattergeo',
lon = long,
lat = lati,
text=cities,
mode = 'markers',
marker = dict(
size = 8,
opacity = 0.8,
reversescale = True,
autocolorscale = False,
symbol = 'square',
line = dict(
width=1,
color='rgba(102, 102, 102)'
),
color = dT,
colorbar=dict(
title="\n °C"
)
))]
layout = dict(
title = 'Change in the temperature the last 50 years',
colorbar = True,
geo = dict(
showland = True,
landcolor = "rgb(250, 250, 250)",
subunitcolor = "rgb(217, 217, 217)",
countrycolor = "rgb(217, 217, 217)",
showocean = True,
oceancolor = 'rgb(0,255,255)',
),
)
fig = dict( data=data, layout=layout )
py.iplot( fig, validate=False)
"""
Explanation: 3.2 Which cities have experienced the biggest change of temperature the last 50 years?
We now look at the change of temperature in the major cities over the last 50 years. We subtract the oldest temperature $T_{old }$ from the most recent temperature $T_{new}$. Therefore if $dT=T_{new}-T_{old }>0 \rightarrow$ the temperature has increased. It can be observed that for almost all (95%) of the major cities there has been an increase in the temperature in the last 50 years.
One can zoom into the map and see the name and the coordinates of the cities.
End of explanation
"""
mean_Global=pd.DataFrame(mean_Global)
mean_Global['dt']=years
ts=makeTimeSeries(mean_Global)
#print(ts)
plt.figure(figsize=(16,8))
plt.plot(ts)
plt.xlabel('time',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
plt.title('Yearly Global Temperature',fontsize=24)
"""
Explanation: 4. Data Exploitation
We now want to build a model that predicts the global temperature based on the temperatures of the previous years. It can be observed from the figure below that the mean of the global temperature data has a positive trend. Therefore the yearly global temperature is a non-stationary process (the joint probability distribution changes when shifted in time). In order to produce reliable results and good predictions, the process must be converted to a stationary process.
End of explanation
"""
X = ts[0]['1900':'2000'] #training set, temporal split
#print(X)
X_diff=differenciate(X)
#print(X_diff)
plt.figure(figsize=(16,8))
plt.plot(X_diff)
plt.xlabel('years',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
plt.title('Yearly Global Temperature (after differencing)',fontsize=24)
"""
Explanation: 4.1 Making the process a stationary process
4.1.1 Differencing
An easy way to detrend a time series is by differencing. For a non-stationary time series $X$, its corresponding time series after differencing $X_{diff}$ can be calculated as:<br><br>
$$
X_{diff}(i)=X(i)-X(i-1)
$$
$X_{diff}$ will obviously have one sample fewer than $X$ (a minimal pandas sketch of this operation is given below).
End of explanation
"""
test_stationarity(X_diff)
"""
Explanation: Now we can check whether the process after differencing is in fact stationary with the Dickey-Fuller Test. Here the null hypothesis is that the time series is non-stationary. The test results comprise a Test Statistic and some Critical Values for different confidence levels. If the ‘Test Statistic’ is less than the ‘Critical Value’, we can reject the null hypothesis and say that the series is stationary (a minimal sketch of this check using statsmodels is given below).
End of explanation
"""
regresor = linear_model.LinearRegression()
y=np.array(X.dropna())
t=np.arange(y.size)
y=y.reshape(-1,1)
t=t.reshape(-1,1)
regresor.fit(t,y)
trend=regresor.predict(t)
# detrend
detrended = [y[i]-trend[i] for i in range(0, y.size)]
y=pd.DataFrame(y)
y.index=X.index
trend=pd.DataFrame(trend)
trend.index=X.index
detrended=pd.DataFrame(detrended)
detrended.index=X.index
print('Coefficients: \n', regresor.coef_)
print("Mean of error: %.2f" % np.mean((trend - y) ** 2))
# plot trend
plt.figure(figsize=(16,8))
plt.plot(y,color='blue',label='time series')
plt.plot(trend,color='green',label='trend')
plt.xlabel('years',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
plt.title('Trend of Yearly Global Temperature',fontsize=24)
plt.legend()
plt.show()
# plot detrended
plt.figure(figsize=(16,8))
plt.plot(detrended)
plt.xlabel('years',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
plt.title('Detrended Yearly Global Temperature',fontsize=24)
plt.show()
test_stationarity(detrended[0])
"""
Explanation: 4.1.2 Detrend by Model Fitting
Another way is to try to model the trend and then subtract it from the data.
End of explanation
"""
model = ARIMA(ts[0]['1900':'2000'], order=(1, 1, 2))
results_ARIMA = model.fit(disp=-1)
plt.figure(figsize=(16,8))
plt.plot(X_diff,color='blue',label='original')
plt.plot(results_ARIMA.fittedvalues, color='red',label='predicted')
plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-X_diff)**2),fontsize=20)
plt.legend(loc='best')
plt.xlabel('years',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
#print(predictions_ARIMA_diff.head())
"""
Explanation: Looking at the results we get from the Dickey-Fuller Test for the two methods of making the time series stationary, we can see that we got better results here through the method of differencing. Therefore, in what follows we will use $X_{diff}$. We could have gotten better results for the method based on modeling the trend if we had allowed a more complex model than the linear one.
4.2 Modeling
For the modeling we will use the Auto-Regressive Integrated Moving Average (ARIMA) model. For our training set we will use the global temperatures from 1900 to 2000.
The ARIMA implementation provided by statsmodels differences the time series. Therefore, the figure below first shows the results for the differenced time series.
End of explanation
"""
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
#print (predictions_ARIMA_diff_cumsum.head())
predictions_ARIMA = pd.Series(X.iloc[0], index=X.index)
predictions_ARIMA = predictions_ARIMA.add(predictions_ARIMA_diff_cumsum,fill_value=0)
#predictions_ARIMA.head()
"""
Explanation: We now take it back to the original scale (no differencing).
End of explanation
"""
plt.figure(figsize=(16,8))
plt.plot(X,color='blue',label='original')
plt.plot(predictions_ARIMA,color='green',label='predicted')
plt.title('RMSE= %.4f'% np.sqrt(sum((predictions_ARIMA-X)**2)/len(X)),fontsize=24)
plt.legend(loc='best')
plt.xlabel('years',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
"""
Explanation: 5. Evaluation
5.1 In-sample performance
We now plot the actual time series and the one predicted by our model on our training set. It is not a perfect prediction, but the root mean square error is relatively small.
End of explanation
"""
X_test = ts[0]['2001':] #test set, temporal split
#print(X_test)
preds=results_ARIMA.predict('2001-01-01','2015-01-01')
#preds.head
preds_cumsum = preds.cumsum()
preds=preds_cumsum+X[-1]
#print (preds)
#print(X_test)
plt.figure(figsize=(16,8))
plt.plot(X_test,color='blue',label='original')
plt.plot(preds, color='red',label='predicted')
plt.title('RMSE= %.4f'% np.sqrt(sum((preds-X_test)**2)/len(X_test)),fontsize=24)
plt.legend(loc='best')
plt.xlabel('years',fontsize=16)
plt.ylabel('Temperature, °C',fontsize=16)
"""
Explanation: 5.2 Out-of-sample performance
We now look at the accuracy of our model in predicting the future. We test on the temperatures from 2001 to 2015. Again, the model is not perfectly accurate, but the root mean square error is relatively small.
End of explanation
"""
|
ptpro3/ptpro3.github.io
|
Projects/Challenges/challenge_set_5_prashant.ipynb
|
mit
|
import pandas as pd
import patsy
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
%matplotlib inline
df = pd.read_csv('2013_movies.csv')
df.head()
y, X = patsy.dmatrices('DomesticTotalGross ~ Budget + Runtime', data=df, return_type="dataframe")
X.head()
"""
Explanation: Topic: Challenge Set 5
Subject: Linear Regression and Train/Test Split
Date: 02/07/2017
Name: Prashant Tatineni
End of explanation
"""
model = sm.OLS(y, X['Intercept'])
fit = model.fit()
fit.summary()
"""
Explanation: Challenge 1
End of explanation
"""
records = range(89)
plt.scatter(records, y, color='g')
plt.scatter(records, fit.predict(X['Intercept']))
plt.hist((y['DomesticTotalGross'] - fit.predict(X['Intercept'])));
"""
Explanation: This model is representing the null hypothesis.
End of explanation
"""
model = sm.OLS(y, X[['Intercept','Budget']])
fit = model.fit()
fit.summary()
plt.scatter(X['Budget'], y, color='g')
plt.scatter(X['Budget'], fit.predict(X[['Intercept','Budget']]))
plt.scatter(X['Budget'], fit.predict(X[['Intercept','Budget']]) - y['DomesticTotalGross'])
"""
Explanation: Challenge 2
End of explanation
"""
y3, X3 = patsy.dmatrices('DomesticTotalGross ~ Rating', data=df, return_type="dataframe")
X3.head()
model = sm.OLS(y3, X3)
fit = model.fit()
fit.summary()
records3 = range(100)
plt.scatter(records3, y3, color='g')
plt.scatter(records3, fit.predict(X3))
plt.hist((y3['DomesticTotalGross'] - fit.predict(X3)));
"""
Explanation: For higher-budget, higher-grossing movies there is some spread in the data and the model's residuals are higher.
Challenge 3
End of explanation
"""
y4, X4 = patsy.dmatrices('DomesticTotalGross ~ Budget + Runtime + Rating', data=df, return_type="dataframe")
X4.head()
model = sm.OLS(y4, X4)
fit = model.fit()
fit.summary()
plt.scatter(records, y4, color='g')
plt.scatter(records, fit.predict(X4))
"""
Explanation: Here, the model is using the 'rating' to predict domestic gross. Since there are 4 ratings, it predicts one of 4 domestic gross values.
Challenge 4
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X4, y4, test_size=0.25)
y_test.shape
model = sm.OLS(y_train, X_train)
fit = model.fit()
fit.summary()
records5 = range(23)
plt.scatter(records5, y_test, color='g')
plt.scatter(records5, fit.predict(X_test))
"""
Explanation: Challenge 5
End of explanation
"""
|
ProfessorKazarinoff/staticsite
|
content/code/sympy/sympy_solving_equations-polymer-density-problem-different-values.ipynb
|
gpl-3.0
|
from sympy import symbols, nonlinsolve
"""
Explanation: Sympy (sympy.org) is a Python package used for solving equations with symbolic math.
Using Python and SymPy we can write and solve equations that come up in Engineering.
The example problem below contains two equations with two unknown variables. You could solve it with pencil and paper, but we are going to solve it with Python.
Given:
The density of two different samples of a polymer $\rho_1$ and $\rho_2$ are measured.
$$ \rho_1 = 0.904 \ g/cm^3 $$
$$ \rho_2 = 0.895 \ g/cm^3 $$
The percent crystallinity of the two samples ($\%c_1 $ and $\%c_2$) is known.
$$ \%c_1 = 62.8 \% $$
$$ \%c_2 = 54.4 \% $$
The percent crystallinity of a polymer sample is related to the density of 100% amorphous regions ($\rho_a$) and 100% crystalline regions ($\rho_c$) according to:
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
Find:
Find the density of 100% amorphous regions ($\rho_a$) and the density of 100% crystalline regions ($\rho_c$) for this polymer.
Solution:
We are going to use Python and a package called SymPy to solve this problem. I recommend installing the Anaconda distribution of Python. If you install Anaconda, SymPy is included. If you downloaded Python from Python.org or if you are using a virtual environment, SymPy can be installed at a terminal using pip with the command below.
text
$ pip install sympy
We need a couple of functions from the SymPy package to solve this problem. We need the symbols() function to create symbolic math variables for the density of 100% amorphous and 100% crystalline regions ($\rho_a$ and $\rho_c$) and variables for the given information in the problem ($\%c_1 $, $\%c_2$, $\rho_1$ and $\rho_2$ ). We also need SymPy's nonlinsolve() function to solve a system of non-linear equations.
The symbols() function and the nonlinsolve() function can be imported from SymPy using the line below.
End of explanation
"""
pc, pa, p1, p2, c1, c2 = symbols('pc pa p1 p2 c1 c2')
"""
Explanation: Next we need to define six different variables:
$$\rho_c, \rho_a, \rho_1, \rho_2, c_1, c_2$$
Note commas are included in the symbols output, but there are no commas in the symbols input.
End of explanation
"""
expr1 = ( (pc*(p1-pa) ) / (p1*(pc-pa)) - c1)
expr2 = ( (pc*(p2-pa) ) / (p2*(pc-pa)) - c2)
"""
Explanation: Now we can create two SymPy expressions that represent our two equations. We can subtract the %crystallinity term from both sides of the equation to set the equation equal to zero. The result of moving the %crystallinity term to the other side of the equation is shown below. Note how the second equation equals zero.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
$$ \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% - \%crystallinity = 0 $$
Substitute $\rho_s = \rho_1$ and $\rho_s = \rho_2$ into the expression above. Also substitute $\%crystallinity = \%c_1$ and $\%crystallinity = \%c_2$. The result is two equations, each equal to zero.
$$ \frac{ \rho_c(\rho_1 - \rho_a) }{\rho_1(\rho_c - \rho_a) } \times 100 \% - \%c_1 = 0 $$
$$ \frac{ \rho_c(\rho_2 - \rho_a) }{\rho_2(\rho_c - \rho_a) } \times 100 \% - \%c_2 = 0 $$
Now we have two equations (the two equations above) which we can solve for two unknowns ($\rho_a$ and $\rho_c$). The two equations can be coded into SymPy expressions, which contain the variables we defined earlier.
End of explanation
"""
expr1 = expr1.subs(p1, 0.904)
expr1 = expr1.subs(c1, 0.628)
print(expr1)
"""
Explanation: Next, we'll substitute the known values $\rho_1 = 0.904$ and $c_1 = 0.628$ into our first expression expr1. Note that SymPy expressions are not modified in place; you need to capture the output of the .subs method in a variable.
End of explanation
"""
expr2 = expr2.subs(p2, 0.895)
expr2 = expr2.subs(c2, 0.544)
print(expr2)
"""
Explanation: Now we'll substitute the second set of given values $\rho_2 = 0.895$ and $c_2 = 0.544$ into our second expression expr2.
End of explanation
"""
sol = nonlinsolve([expr1,expr2],[pa,pc])
print(sol)
"""
Explanation: We'll use SymPy's nonlinsolve() function to solve the two equations expr1 and expr2 for the two unknowns pa and pc. SymPy's nonlinsolve() function expects a list of expressions [expr1,expr2] followed by a list of variables [pa,pc] to solve for.
"""
print(type(sol))
pa = sol.args[0][0]
pc = sol.args[0][1]
print(f' Density of 100% amorphous polymer, pa = {round(pa,2)} g/cm3')
print(f' Density of 100% crystaline polymer, pc = {round(pc,2)} g/cm3')
"""
Explanation: We see that the value of $\rho_a = 0.84079$ and $\rho_c = 0.94613$.
The solution is a SymPy FiniteSet object. To pull the values of $\rho_a$ and $\rho_c$ out of the FiniteSet, use the syntax sol.args[0][<var num>].
End of explanation
"""
print(pa)
print(pc)
"""
Explanation: Use SymPy to calculate a numerical result
Besides solving equations, SymPy expressions can also be used to calculate a numerical result. A numerical result can be calculated if all of the variables in an expression are set to floats or integers.
Let's solve the following problem with SymPy and calculate a numerical result.
Given:
The density of a 100\% amorphous polymer sample $\rho_a$ and the density of a 100% crystalline sample $\rho_c$ of the same polymer are measured.
$$ \rho_a = 0.84 \ g/cm^3 $$
$$ \rho_c = 0.95 \ g/cm^3 $$
The density of a sample $\rho_s$ of the same polymer is measured.
$$ \rho_s = 0.921 \ g/cm^3 $$
Find:
What is the \% crystallinity of the sample with a measured density $ \rho_s = 0.921 \ g/cm^3 $?
Solution
We have precise values for $ \rho_a $ and $ \rho_c $ from the previous problem. Let's see what the values of $ \rho_a $ and $ \rho_c $ are. We will use these more precise values that we calculated earlier to solve the problem.
End of explanation
"""
pc, pa, ps = symbols('pc pa ps')
"""
Explanation: Next, we will create three SymPy symbols objects. These three symbols objects will be used to build our expression.
End of explanation
"""
expr = ( pc*(ps-pa) ) / (ps*(pc-pa))
"""
Explanation: The expression that relates the % crystallinity of a polymer sample to the density of 100% amorphous and 100% crystalline versions of the same polymer is below.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
We can build a SymPy expression that represents the equation above using the symbols objects (variables) we just defined.
End of explanation
"""
expr = expr.subs(pa, 0.840789786223278)
expr = expr.subs(pc, 0.946134313397929)
expr = expr.subs(ps, 0.921)
print(expr.evalf())
"""
Explanation: Now we can substitute our $ \rho_a $ and $ \rho_c $ from above. Note that SymPy's .subs() method does not modify an expression in place; we have to set the modified expression to a new variable before we can make another substitution. After the substitutions are complete, we can print out the numerical value of the expression. This is accomplished with SymPy's .evalf() method.
End of explanation
"""
print(f'The percent crystallinity of the sample is {round(expr*100,1)} percent')
"""
Explanation: As a final step, we can print out the answer using a Python f-string.
End of explanation
"""
|
EBIvariation/eva-cttv-pipeline
|
data-exploration/complex-events/notebooks/hgvs-follow-up-part2.ipynb
|
apache-2.0
|
from collections import defaultdict, Counter
from itertools import zip_longest
import json
import os
import re
import sys
import urllib
import numpy as np
import requests
from consequence_prediction.vep_mapping_pipeline.consequence_mapping import *
from eva_cttv_pipeline.clinvar_xml_io.clinvar_xml_io import *
from eva_cttv_pipeline.evidence_string_generation.clinvar_to_evidence_strings import convert_allele_origins
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Follow-up questions - 02/02/2022
Table of contents
Phenotypes
Summary
Uncertain ranges
Spans
Genes
Summary
End of explanation
"""
PROJECT_ROOT = '/home/april/projects/opentargets/complex-events'
# dump of all records with no functional consequences and no complete coordinates
# uses June consequence pred + ClinVar 6/26/2021
no_consequences_path = os.path.join(PROJECT_ROOT, 'no-conseq_no-coords.xml.gz')
dataset = ClinVarDataset(no_consequences_path)
def get_somatic_germline_counts(dataset):
all_allele_origins = [convert_allele_origins(record.valid_allele_origins) for record in dataset]
# Our pipeline's definition for distinguishing somatic & germline
def is_somatic(allele_origins):
return allele_origins == ['somatic']
phenotypes_counts = Counter()
for allele_origins in all_allele_origins:
germline = False
somatic = False
for ao in allele_origins:
if is_somatic(ao):
somatic = True
else:
germline = True
if germline and somatic:
phenotypes_counts['both'] += 1
if germline and not somatic:
phenotypes_counts['germline'] += 1
if somatic and not germline:
phenotypes_counts['somatic'] += 1
# flat count of allele origins
flattened_allele_origins = [x for allele_origins in all_allele_origins for ao in allele_origins for x in ao]
flat_pheno_counts = Counter(flattened_allele_origins)
return phenotypes_counts, flat_pheno_counts
complex_phenotypes, complex_flat_aos = get_somatic_germline_counts(dataset)
complex_phenotypes
# check if these are enriched in somatic relative to full set
full_dataset = ClinVarDataset(os.path.join(PROJECT_ROOT, 'ClinVarFullRelease_2021-07.xml.gz'))
full_phenotypes, full_flat_aos = get_somatic_germline_counts(full_dataset)
full_phenotypes
def percent_somatic(c):
return c['somatic'] / sum(c.values()) * 100.0
print('percent somatic for complex:', percent_somatic(complex_phenotypes))
print('percent somatic for all:', percent_somatic(full_phenotypes))
"""
Explanation: Phenotypes
What are the phenotypes of the structural variants? Are they somatic or germline?
Top of page
End of explanation
"""
(67974756 - 67967551) / (67974774 - 67967534)
(67967551 - 67967534) + (67974774 - 67974756)
"""
Explanation: Summary for phenotypes
somatic slightly enriched in complex variants compared to germline, but not much
complex are 1.77% somatic (vs. 0.878% overall)
Top of page
Uncertain ranges
It would be interesting to know the size of the uncertainty when it is known and compare it to the known inner range (ratio between certain range and uncertain range).
Top of page
Imprecise - known bounds
Ex. NC_000011.8:g.(67967534_67967551)_(67974756_67974774)del
Certainty ratio = smallest possible span divided by largest possible span
Also interested in absolute size of uncertain bounds region = range of possible start points + range of possible end points
End of explanation
"""
(26775295 - 26547773) / 101991189 # length of chr 15
"""
Explanation: Precise
Ex. NC_000016.10:g.12595039_12636793del
Certainty of 1 (no uncertainty)
Imprecise - unknown bounds
Ex. NC_000015.10:g.(?_26547773)_(26775295_?)del
In theory we could compute the same numbers using the full length of the reference sequence, but not sure it's meaningful.
End of explanation
"""
sequence_identifier = r'[a-zA-Z0-9_.]+'
genomic_sequence = f'^({sequence_identifier}):g\.'
# only INS, DEL, DUP supported by VEP
variant_type_regex = {
re.compile(f'{genomic_sequence}.*?del(?!ins).*?') : 'DEL',
re.compile(f'{genomic_sequence}.*?dup.*?') : 'DUP',
re.compile(f'{genomic_sequence}.*?(?<!del)ins.*?') : 'INS',
}
# for this we EXCLUDE unknown bounds, and capture all numeric bounds on endpoints
def_range = r'([0-9]+)_([0-9]+)'
var_range = r'\(([0-9]+)_([0-9]+)\)_\(([0-9]+)_([0-9]+)\)'
ch = r'[^?_+-]'
def_span_regex = re.compile(f'{genomic_sequence}{ch}*?{def_range}{ch}*?$')
var_span_regex = re.compile(f'{genomic_sequence}{ch}*?{var_range}{ch}*?$')
def endpoint_bounds(dataset, include_precise=False, limit=None):
"""Returns inner and outer bounds on endpoints (duplicating inner/outer if precise)."""
n = 0
all_bounds = []
all_hgvs = []
for record in dataset:
if not record.measure or not record.measure.hgvs:
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
for h in hs:
# NC_000011.8:g.(67967534_67967551)_(67974756_67974774)del
var_match = var_span_regex.match(h)
if var_match and all(var_match.group(i) for i in range(2,6)):
# use terminology from dbVar data model
# see https://www.ncbi.nlm.nih.gov/core/assets/dbvar/files/dbVar_VCF_Submission.pdf
outer_start = int(var_match.group(2))
inner_start = int(var_match.group(3))
inner_stop = int(var_match.group(4))
outer_stop = int(var_match.group(5))
all_bounds.append(((outer_start, inner_start), (inner_stop, outer_stop)))
all_hgvs.append(h)
# presumably all hgvs expressions for one record have the same span, don't double count
break
elif include_precise:
# NC_000016.10:g.12595039_12636793del
def_match = def_span_regex.match(h)
if def_match and def_match.group(2) and def_match.group(3):
outer_start = inner_start = int(def_match.group(2))
inner_stop = outer_stop = int(def_match.group(3))
all_bounds.append(((outer_start, inner_start), (inner_stop, outer_stop)))
all_hgvs.append(h)
break
return all_hgvs, all_bounds
all_hgvs, all_bounds = endpoint_bounds(dataset)
def is_valid(bounds):
# invalid if any range is negative
return (bounds[0][1] >= bounds[0][0]
and bounds[1][1] >= bounds[1][0]
and bounds[1][0] >= bounds[0][1])
def certainty_ratio(bounds):
"""For an HGVS range (A_B)_(C_D), this computes (C-B) / (D-A)"""
return (bounds[1][0] - bounds[0][1]) / (bounds[1][1] - bounds[0][0])
def uncertain_bounds_region(bounds):
"""For an HGVS range (A_B)_(C_D), this computes (A-B) + (D-C)"""
return (bounds[0][1] - bounds[0][0]) + (bounds[1][1] - bounds[1][0])
len(all_bounds)
all_valid_bounds = [bounds for bounds in all_bounds if is_valid(bounds)]
len(all_valid_bounds)
all_certainty_ratios = [certainty_ratio(bounds) for bounds in all_valid_bounds]
all_uncertain_ranges = [uncertain_bounds_region(bounds) for bounds in all_valid_bounds]
# 1.0 is the most certain
print(min(all_certainty_ratios))
print(max(all_certainty_ratios))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title(f'Variants per certainty ratio (imprecise, known bounds)')
plt.hist(all_certainty_ratios, bins=100)
print(min(all_uncertain_ranges))
print(max(all_uncertain_ranges))
# exclude the max
i = all_uncertain_ranges.index(max(all_uncertain_ranges))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title('Variants per total size of uncertain bounds region')
plt.hist(all_uncertain_ranges[:i] + all_uncertain_ranges[i+1:], bins=100)
# the max is screwing up all my plots, get rid of it
i = all_uncertain_ranges.index(max(all_uncertain_ranges))
xs = all_certainty_ratios[:i] + all_certainty_ratios[i+1:]
ys = all_uncertain_ranges[:i] + all_uncertain_ranges[i+1:]
print(all_uncertain_ranges[i])
print(all_certainty_ratios[i])
plt.figure(figsize=(12,10))
plt.grid(visible=True)
plt.title('Certainty ratio vs. size of uncertain bounds region')
plt.xlabel('Certainty ratio')
plt.ylabel('Size of uncertain bounds region')
plt.scatter(xs, ys, marker='.')
plt.figure(figsize=(12,10))
plt.grid(visible=True)
plt.title('Certainty ratio vs. size of uncertain bounds region (log scale)')
plt.xlabel('Certainty ratio')
plt.ylabel('Size of uncertain bounds region')
plt.yscale('log')
plt.scatter(xs, ys, marker='.')
"""
Explanation: Uncertainty from spans
For this we only include the imprecise, known bounds case. Note numbers from the previous notebook:
- precise: 1735 (uncertainty is 0)
- imprecise (known bounds): 559 (measures make sense, numbers below)
- imprecise (unknown bounds, mostly unknown outer bounds): 9311 (measures might not make sense)
Top of page
End of explanation
"""
def hgvs_and_bounds_to_vep_identifier(all_hgvs, all_bounds):
for hgvs, bounds in zip(all_hgvs, all_bounds):
m = def_span_regex.match(hgvs)
if not m:
m = var_span_regex.match(hgvs)
if not m:
continue
seq = m.group(1)
# not everything accepted by VEP, for now we'll be lazy
if not (seq.startswith('NC') or seq.startswith('LRG') or seq.startswith('NW') or seq.startswith('AC')):
continue
variant_type = None
for r, s in variant_type_regex.items():
if r.match(hgvs):
variant_type = s
break
if not variant_type:
continue
# yield both inner and outer bounds
# include inner/outer in the identifier so we can connect them later
yield f'{seq} {bounds[0][1]} {bounds[1][0]} {variant_type} + {hgvs}###INNER'
yield f'{seq} {bounds[0][0]} {bounds[1][1]} {variant_type} + {hgvs}###OUTER'
# modified from previous notebook...
def grouper(iterable, n):
args = [iter(iterable)] * n
return [tuple(x for x in group if x is not None) for group in zip_longest(*args, fillvalue=None)]
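# Tiny illustration (hypothetical values): the final partial batch is kept without None padding,
# e.g. grouper(['a', 'b', 'c'], 2) -> [('a', 'b'), ('c',)]
print(grouper(['a', 'b', 'c'], 2))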
def get_vep_results(all_hgvs, all_bounds):
variants = [v for v in hgvs_and_bounds_to_vep_identifier(all_hgvs, all_bounds) if v]
print(f'{len(variants)} parsed into chrom/start/end/type')
# VEP only accepts batches of 200
vep_results = []
for group in grouper(variants, n=200):
vep_results.extend(query_vep(variants=group, search_distance=VEP_SHORT_QUERY_DISTANCE))
return vep_results
def extract_genes(vep_results):
results_by_variant = defaultdict(list)
for result in vep_results:
variant_identifier = result['id']
consequences = result.get('transcript_consequences', [])
results_by_variant[variant_identifier].extend({c['gene_id'] for c in consequences})
return results_by_variant
def gene_counts(all_hgvs, all_bounds, limit=None):
"""Return a map: hgvs -> (num affected genes inner, num affected genes outer)"""
if limit:
vep_results = get_vep_results(all_hgvs[:limit], all_bounds[:limit])
else:
vep_results = get_vep_results(all_hgvs, all_bounds)
identifiers_to_genes = extract_genes(vep_results)
print(f'{len(identifiers_to_genes)} successfully mapped by VEP')
result = defaultdict(dict)
for identifier, genes in identifiers_to_genes.items():
hgvs, inner_or_outer = identifier.split('###')
result[hgvs][inner_or_outer] = len(genes)
return result
result = gene_counts(all_hgvs, all_bounds)
def certainty_ratio_genes(genes_dict):
return genes_dict['INNER'] / genes_dict['OUTER']
# don't think the UBR size measurement makes sense
all_genes_ratios = [certainty_ratio_genes(x) for x in result.values()]
print(len(all_genes_ratios))
print(min(all_genes_ratios))
print(max(all_genes_ratios))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title(f'Variants per certainty ratio (target genes version)')
plt.hist(all_genes_ratios, bins=100)
"""
Explanation: Uncertainty from genes
Top of page
End of explanation
"""
|
ericmjl/Network-Analysis-Made-Simple
|
archive/4-cliques-triangles-structures-instructor.ipynb
|
mit
|
# Load the network. This network, while in reality a directed graph,
# is intentionally converted to an undirected one for simplification.
from itertools import combinations
import matplotlib.pyplot as plt
import networkx as nx
G = cf.load_physicians_network()
# Make a Circos plot of the graph
from nxviz import CircosPlot
c = CircosPlot(G)
c.draw()
"""
Explanation: Load Data
As usual, let's start by loading some network data. This time round, we have a physician trust network, but slightly modified such that it is undirected rather than directed.
This directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesburg. The data was collected in 1966. A node represents a physician, and an edge between two physicians shows that the left physician said that the right physician is his friend, or that he turns to the right physician if he needs advice or is interested in a discussion. There always exists only one edge between two nodes, even if more than one of the listed conditions is true.
End of explanation
"""
# Example code.
def in_triangle(G, node):
"""
Returns whether a given node is present in a triangle relationship or not.
"""
# Then, iterate over every pair of the node's neighbors.
for nbr1, nbr2 in combinations(G.neighbors(node), 2):
# Check to see if there is an edge between the node's neighbors.
# If there is an edge, then the given node is present in a triangle.
if G.has_edge(nbr1, nbr2):
# We return because any triangle that is present automatically
# satisfies the problem requirements.
return True
return False
in_triangle(G, 3)
"""
Explanation: Question
What can you infer about the structure of the graph from the Circos plot?
My answer: The structure is interesting. The graph looks like the physician trust network is composed of discrete subnetworks.
Structures in a Graph
We can leverage what we have learned in the previous notebook to identify special structures in a graph.
In a network, cliques are one of these special structures.
Cliques
In a social network, cliques are groups of people in which everybody knows everybody.
Questions:
1. What is the simplest clique?
1. What is the simplest complex clique?
Let's try implementing a simple algorithm that finds out whether a node is present in a simple complex clique.
End of explanation
"""
nx.triangles(G, 3)
"""
Explanation: In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
End of explanation
"""
# Possible answer
def get_triangles(G, node):
neighbors1 = set(G.neighbors(node))
triangle_nodes = set()
triangle_nodes.add(node)
"""
Fill in the rest of the code below.
"""
for nbr1, nbr2 in combinations(neighbors1, 2):
if G.has_edge(nbr1, nbr2):
triangle_nodes.add(nbr1)
triangle_nodes.add(nbr2)
return triangle_nodes
# Verify your answer with the following function call. Should return something of the form:
# {3, 9, 11, 41, 42, 67}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = list(G.neighbors(3))
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
"""
Explanation: Exercise
Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with? Do not return the triplets, but the set/list of nodes. (5 min.)
Possible Implementation: If I check every pair of my neighbors, any pair that are also connected in the graph are in a triangle relationship with me.
Hint: Python's itertools module has a combinations function that may be useful.
Hint: NetworkX graphs have a .has_edge(node1, node2) function that checks whether an edge exists between two nodes.
Verify your answer by drawing out the subgraph composed of those nodes.
End of explanation
"""
def get_open_triangles(G, node):
"""
There are many ways to represent this. One may choose to represent
only the nodes involved in an open triangle; this is not the
approach taken here.
Rather, we have code that explicitly enumerates every open triangle present.
"""
open_triangle_nodes = []
neighbors = list(G.neighbors(node))
for n1, n2 in combinations(neighbors, 2):
if not G.has_edge(n1, n2):
open_triangle_nodes.append([n1, node, n2])
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
# fig = plt.figure(i)
# nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
"""
Explanation: Friend Recommendation: Open Triangles
Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.
Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph.
What are the two general scenarios for finding open triangles that a given node is involved in?
The given node is the centre node.
The given node is one of the termini nodes.
Exercise
Can you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one? (5 min.)
Note: For this exercise, only consider the case when the node of interest is the centre node.
Possible Implementation: Check every pair of my neighbors, and if they are not connected to one another, then we are in an open triangle relationship.
End of explanation
"""
list(nx.find_cliques(G))[0:20]
"""
Explanation: Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here.
Cliques
We have figured out how to find triangles. Now, let's find out what cliques are present in the network. Recall: what is the definition of a clique?
NetworkX has a clique-finding algorithm implemented.
This algorithm finds all maximal cliques in the graph.
Note that a maximal clique of size n contains, as subsets, cliques of every size smaller than n.
End of explanation
"""
def maximal_cliques_of_size(size, G):
# Defensive programming check.
assert isinstance(size, int), "size has to be an integer"
assert size >= 2, "cliques are of size 2 or greater."
return [i for i in list(nx.find_cliques(G)) if len(i) == size]
maximal_cliques_of_size(2, G)[0:20]
"""
Explanation: Exercise
Try writing a function maximal_cliques_of_size(size, G) that implements a search for all maximal cliques of a given size. (3 min.)
End of explanation
"""
ccsubgraph_nodes = list(nx.connected_components(G))
ccsubgraph_nodes
"""
Explanation: Connected Components
From Wikipedia:
In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.
NetworkX also implements a function that identifies connected component subgraphs.
Remember how based on the Circos plot above, we had this hypothesis that the physician trust network may be divided into subgraphs. Let's check that, and see if we can redraw the Circos visualization.
End of explanation
"""
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, nodeset in enumerate(ccsubgraph_nodes):
for n in nodeset:
G.nodes[n]['subgraph'] = i
c = CircosPlot(G, node_color='subgraph', node_order='subgraph')
c.draw()
plt.savefig('images/physicians.png', dpi=300)
"""
Explanation: Exercise
Draw a circos plot of the graph, but now colour and order the nodes by their connected component subgraph. (5 min.)
Recall Circos API:
python
c = CircosPlot(G, node_order='...', node_color='...')
c.draw()
plt.show() # or plt.savefig(...)
End of explanation
"""
|
m2dsupsdlclass/lectures-labs
|
labs/04_conv_nets/01_Convolutions.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from skimage.io import imread
from skimage.transform import resize
sample_image = imread("bumblebee.png")
sample_image= sample_image.astype("float32")
size = sample_image.shape
print("sample image shape: ", sample_image.shape)
plt.imshow(sample_image.astype('uint8'));
"""
Explanation: Convolutions
Objectives:
- Application of convolution on images
Reading and opening images
The following code reads an image, puts it in a numpy array and displays it in the notebook.
End of explanation
"""
import tensorflow as tf
print(tf.__version__)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D
conv = Conv2D(filters=3, kernel_size=(5, 5), padding="same",
input_shape=(None, None, 3))
"""
Explanation: A simple convolution filter
The goal of this section is to use tensorflow / Keras to perform individual convolutions on images. This section does not involve training any model yet.
End of explanation
"""
sample_image.shape
img_in = np.expand_dims(sample_image, 0)
img_in.shape
"""
Explanation: Remember: in Keras, None is used as a marker for tensor dimensions with dynamic size. In this case batch_size, width and height are all dynamic: they can depend on the input. Only the number of input channels (3 colors) has been fixed.
End of explanation
"""
img_out = conv(img_in)
print(type(img_out), img_out.shape)
"""
Explanation: Questions:
If we apply this convolution to this image, what will be the shape of the generated feature map?
Hints:
in Keras padding="same" means that convolutions uses as much padding as necessary so has to preserve the spatial dimension of the input maps or image;
in Keras, convolutions have no strides by default.
Bonus: how much padding Keras has to use to preserve the spatial dimensions in this particular case?
End of explanation
"""
np_img_out = img_out[0].numpy()
print(type(np_img_out))
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(10, 5))
ax0.imshow(sample_image.astype('uint8'))
ax1.imshow(np_img_out.astype('uint8'));
"""
Explanation: The output is a tensorflow Eager Tensor, which can be converted to obtain a standard numpy array:
End of explanation
"""
conv.count_params()
"""
Explanation: The output has 3 channels, hence can also be interpreted as an RGB image with matplotlib. However it is the result of a random convolutional filter applied to the original one.
Let's look at the parameters:
End of explanation
"""
len(conv.get_weights())
weights = conv.get_weights()[0]
weights.shape
"""
Explanation: Question: can you compute the number of trainable parameters from the layer hyperparameters?
Hints:
the input image has 3 colors and a single convolution kernel mixes information from all the three input channels to compute its output;
a convolutional layer outputs many channels at once: each channel is the output of a distinct convolution operation (aka unit) of the layer;
do not forget the biases!
Solution: let's introspect the keras model:
End of explanation
"""
biases = conv.get_weights()[1]
biases.shape
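# Cross-check the parameter count by hand: 3 filters, each 5x5 over 3 input channels,
# plus one bias per filter -> 5*5*3*3 + 3 = 228 trainable parameters.
assert conv.count_params() == 5 * 5 * 3 * 3 + 3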
"""
Explanation: Eeach of the 3 output channels is generated by a distinct convolution kernel.
Each convolution kernel has a spatial size of 5x5 and operates across 3 input channels.
End of explanation
"""
def my_init(shape=(5, 5, 3, 3), dtype=None):
array = np.zeros(shape=shape, dtype="float32")
array[:, :, 0, 0] = 1 / 25
array[:, :, 1, 1] = 1 / 25
array[:, :, 2, 2] = 1 / 25
return array
"""
Explanation: One bias per output channel.
We can instead build a kernel ourselves, by defining a function which will be passed to the Conv2D layer.
We'll create an array with 1/25 for the filter weights, with each channel kept separate.
End of explanation
"""
np.transpose(my_init(), (2, 3, 0, 1))
conv = Conv2D(filters=3, kernel_size=(5, 5), padding="same",
input_shape=(None, None, 3), kernel_initializer=my_init)
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(10, 5))
ax0.imshow(img_in[0].astype('uint8'))
img_out = conv(img_in)
np_img_out = img_out[0].numpy()
ax1.imshow(np_img_out.astype('uint8'));
"""
Explanation: We can display the numpy filters by moving the spatial dimensions in the end (using np.transpose):
End of explanation
"""
# %load solutions/strides_padding.py
"""
Explanation: Exercise
- Define a Conv2D layer with 3 filters (5x5) that compute the identity function (preserve the input image without mixing the colors).
- Change the stride to 2. What is the size of the output image?
- Change the padding to 'VALID'. What do you observe?
End of explanation
"""
# convert image to greyscale
grey_sample_image = sample_image.mean(axis=2)
# add the channel dimension even if it's only one channel so
# as to be consistent with Keras expectations.
grey_sample_image = grey_sample_image[:, :, np.newaxis]
# matplotlib does not like the extra dim for the color channel
# when plotting gray-level images. Let's use squeeze:
plt.imshow(np.squeeze(grey_sample_image.astype(np.uint8)),
cmap=plt.cm.gray);
"""
Explanation: Working on edge detection on Grayscale image
End of explanation
"""
# %load solutions/edge_detection
"""
Explanation: Exercise
- Build an edge detector using Conv2D on greyscale image
- You may experiment with several kernels to find a way to detect edges
- https://en.wikipedia.org/wiki/Kernel_(image_processing)
Try Conv2D? or press shift-tab to get the documentation. You may get help at https://keras.io/layers/convolutional/
End of explanation
"""
from tensorflow.keras.layers import MaxPool2D, AvgPool2D
# %load solutions/pooling.py
# %load solutions/average_as_conv.py
"""
Explanation: Pooling and strides with convolutions
Exercise
- Use MaxPool2D to apply a 2x2 max pool with strides 2 to the image. What is the impact on the shape of the image?
- Use AvgPool2D to apply an average pooling.
- Is it possible to compute a max pooling and an average pooling with well chosen kernels?
Bonus
- Implement a 3x3 average pooling with a regular convolution Conv2D, with well chosen strides, kernel and padding
End of explanation
"""
|
charmasaur/digbeta
|
tour/traj_visualisation.ipynb
|
gpl-3.0
|
%matplotlib inline
import os
import re
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
from fastkml import kml, styles
from shapely.geometry import Point, LineString
random.seed(123456789)
data_dir = 'data/data-ijcai15'
#fvisit = os.path.join(data_dir, 'userVisits-Osak.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Osak.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Glas.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Glas.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Edin.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Edin.csv')
fvisit = os.path.join(data_dir, 'userVisits-Toro.csv')
fcoord = os.path.join(data_dir, 'photoCoords-Toro.csv')
suffix = fvisit.split('-')[-1].split('.')[0]
visits = pd.read_csv(fvisit, sep=';')
visits.head()
coords = pd.read_csv(fcoord, sep=';')
coords.head()
# merge data frames according to column 'photoID'
assert(visits.shape[0] == coords.shape[0])
traj = pd.merge(visits, coords, on='photoID')
traj.head()
num_photo = traj['photoID'].unique().shape[0]
num_user = traj['userID'].unique().shape[0]
num_seq = traj['seqID'].unique().shape[0]
num_poi = traj['poiID'].unique().shape[0]
pd.DataFrame([num_photo, num_user, num_seq, num_poi, num_photo/num_user, num_seq/num_user], \
index = ['#photo', '#user', '#seq', '#poi', '#photo/user', '#seq/user'], columns=[str(suffix)])
"""
Explanation: Trajectory Visualisation
NOTE: Before running this notebook, please run script src/ijcai15_setup.py to setup data properly.
Visualise trajectories on maps by generating a KML file for each trajectory.
Prepare Data
Load Trajectory Data
Compute POI Info
Construct Travelling Sequences
Generate KML File for Trajectory
Trajectory with same (start, end)
Trajectory with more than one occurrence
Visualise Trajectory
Visualise Trajectories with more than one occurrence
Visualise Trajectories with same (start, end) but different paths
Visualise the Most Common Edges
Count the occurrence of edges
<a id='sec1'></a>
1. Prepare Data
<a id='sec1.1'></a>
1.1 Load Trajectory Data
End of explanation
"""
poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)
poi_coords.reset_index(inplace=True)
poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)
poi_coords.head()
"""
Explanation: <a id='sec3.2'></a>
<a id='sec1.2'></a>
1.2 Compute POI Info
Compute POI (Longitude, Latitude) as the average coordinates of the assigned photos.
End of explanation
"""
poi_catfreq = traj[['poiID', 'poiTheme', 'poiFreq']].groupby('poiID').first()
poi_catfreq.reset_index(inplace=True)
poi_catfreq.head()
poi_all = pd.merge(poi_catfreq, poi_coords, on='poiID')
poi_all.set_index('poiID', inplace=True)
poi_all.head()
"""
Explanation: Extract POI category and visiting frequency.
End of explanation
"""
seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\
.groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])
seq_all.columns = seq_all.columns.droplevel()
seq_all.reset_index(inplace=True)
seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)
seq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']
seq_all.head()
"""
Explanation: <a id='sec1.3'></a>
1.3 Construct Travelling Sequences
End of explanation
"""
def generate_kml(fname, seqid_set, seq_all, poi_all):
k = kml.KML()
ns = '{http://www.opengis.net/kml/2.2}'
styid = 'style1'
# colors in KML: aabbggrr, aa=00 is fully transparent
sty = styles.Style(id=styid, styles=[styles.LineStyle(color='9f0000ff', width=2)]) # transparent red
doc = kml.Document(ns, '1', 'Trajectory', 'Trajectory visualization', styles=[sty])
k.append(doc)
poi_set = set()
seq_dict = dict()
for seqid in seqid_set:
# ordered POIs in sequence
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort_values(by=['arrivalTime'], ascending=True, inplace=True)
seq = seqi['poiID'].tolist()
seq_dict[seqid] = seq
for poi in seq: poi_set.add(poi)
# Placemark for trajectory
for seqid in sorted(seq_dict.keys()):
seq = seq_dict[seqid]
desc = 'Trajectory: ' + str(seq[0]) + '->' + str(seq[-1])
pm = kml.Placemark(ns, str(seqid), 'Trajectory ' + str(seqid), desc, styleUrl='#' + styid)
pm.geometry = LineString([(poi_all.loc[x, 'poiLon'], poi_all.loc[x, 'poiLat']) for x in seq])
doc.append(pm)
# Placemark for POI
for poi in sorted(poi_set):
desc = 'POI of category ' + poi_all.loc[poi, 'poiTheme']
pm = kml.Placemark(ns, str(poi), 'POI ' + str(poi), desc, styleUrl='#' + styid)
pm.geometry = Point(poi_all.loc[poi, 'poiLon'], poi_all.loc[poi, 'poiLat'])
doc.append(pm)
# save to file
kmlstr = k.to_string(prettyprint=True)
with open(fname, 'w') as f:
f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
f.write(kmlstr)
"""
Explanation: <a id='sec1.4'></a>
1.4 Generate KML File for Trajectory
Visualise Trajectory on map by generating a KML file for a trajectory and its associated POIs.
End of explanation
"""
seq_user = seq_all[['userID', 'seqID', 'poiID']].copy().groupby(['userID', 'seqID']).agg(np.size)
seq_user.reset_index(inplace=True)
seq_user.rename(columns={'size':'seqLen'}, inplace=True)
seq_user.set_index('seqID', inplace=True)
seq_user.head()
def extract_seq(seqid, seq_all):
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort_values(by=['arrivalTime'], ascending=True, inplace=True)
return seqi['poiID'].tolist()
startend_dict = dict()
for seqid in seq_all['seqID'].unique():
seq = extract_seq(seqid, seq_all)
if (seq[0], seq[-1]) not in startend_dict:
startend_dict[(seq[0], seq[-1])] = [seqid]
else:
startend_dict[(seq[0], seq[-1])].append(seqid)
indices = sorted(startend_dict.keys())
columns = ['#traj', '#user']
startend_seq = pd.DataFrame(data=np.zeros((len(indices), len(columns))), index=indices, columns=columns)
for pair, seqid_set in startend_dict.items():
users = set([seq_user.loc[x, 'userID'] for x in seqid_set])
startend_seq.loc[pair, '#traj'] = len(seqid_set)
startend_seq.loc[pair, '#user'] = len(users)
startend_seq.sort_values(by=['#traj'], ascending=True, inplace=True)
startend_seq.index.name = '(start, end)'
startend_seq.sort_index(inplace=True)
print(startend_seq.shape)
startend_seq
"""
Explanation: <a id='sec2'></a>
2. Trajectory with same (start, end)
End of explanation
"""
distinct_seq = dict()
for seqid in seq_all['seqID'].unique():
seq = extract_seq(seqid, seq_all)
#if len(seq) < 2: continue # drop trajectory with single point
if str(seq) not in distinct_seq:
distinct_seq[str(seq)] = [(seqid, seq_user.loc[seqid].iloc[0])] # (seqid, user)
else:
distinct_seq[str(seq)].append((seqid, seq_user.loc[seqid].iloc[0]))
print(len(distinct_seq))
#distinct_seq
distinct_seq_df = pd.DataFrame.from_dict({k:len(distinct_seq[k]) for k in sorted(distinct_seq.keys())}, orient='index')
distinct_seq_df.columns = ['#occurrence']
distinct_seq_df.index.name = 'trajectory'
distinct_seq_df['seqLen'] = [len(x.split(',')) for x in distinct_seq_df.index]
distinct_seq_df.sort_index(inplace=True)
print(distinct_seq_df.shape)
distinct_seq_df.head()
plt.figure(figsize=[9, 9])
plt.xlabel('sequence length')
plt.ylabel('#occurrence')
plt.scatter(distinct_seq_df['seqLen'], distinct_seq_df['#occurrence'], marker='+')
"""
Explanation: <a id='sec3'></a>
3. Trajectory with more than one occurrence
Contruct trajectories with more than one occurrence (can be same or different user).
End of explanation
"""
distinct_seq_df2 = distinct_seq_df[distinct_seq_df['seqLen'] > 1]
distinct_seq_df2 = distinct_seq_df2[distinct_seq_df2['#occurrence'] > 1]
distinct_seq_df2.head()
plt.figure(figsize=[9, 9])
plt.xlabel('sequence length')
plt.ylabel('#occurrence')
plt.scatter(distinct_seq_df2['seqLen'], distinct_seq_df2['#occurrence'], marker='+')
"""
Explanation: Filter out sequences with a single point, as well as sequences that occur only once.
End of explanation
"""
for seqstr in distinct_seq_df2.index:
assert(seqstr in distinct_seq)
seqid = distinct_seq[seqstr][0][0]
fname = re.sub(',', '_', re.sub('[ \[\]]', '', seqstr))
fname = os.path.join(data_dir, suffix + '-seq-occur-' + str(len(distinct_seq[seqstr])) + '_' + fname + '.kml')
generate_kml(fname, [seqid], seq_all, poi_all)
"""
Explanation: <a id='sec4'></a>
4. Visualise Trajectory
<a id='sec4.1'></a>
4.1 Visualise Trajectories with more than one occurrence
End of explanation
"""
startend_distinct_seq = dict()
distinct_seqid_set = [distinct_seq[x][0][0] for x in distinct_seq_df2.index]
for seqid in distinct_seqid_set:
seq = extract_seq(seqid, seq_all)
if (seq[0], seq[-1]) not in startend_distinct_seq:
startend_distinct_seq[(seq[0], seq[-1])] = [seqid]
else:
startend_distinct_seq[(seq[0], seq[-1])].append(seqid)
for pair in sorted(startend_distinct_seq.keys()):
if len(startend_distinct_seq[pair]) < 2: continue
fname = suffix + '-seq-start_' + str(pair[0]) + '_end_' + str(pair[1]) + '.kml'
fname = os.path.join(data_dir, fname)
print(pair, len(startend_distinct_seq[pair]))
generate_kml(fname, startend_distinct_seq[pair], seq_all, poi_all)
"""
Explanation: <a id='sec4.2'></a>
4.2 Visualise Trajectories with same (start, end) but different paths
End of explanation
"""
edge_count = pd.DataFrame(data=np.zeros((poi_all.index.shape[0], poi_all.index.shape[0]), dtype=np.int), \
index=poi_all.index, columns=poi_all.index)
for seqid in seq_all['seqID'].unique():
seq = extract_seq(seqid, seq_all)
for j in range(len(seq)-1):
edge_count.loc[seq[j], seq[j+1]] += 1
edge_count
k = kml.KML()
ns = '{http://www.opengis.net/kml/2.2}'
width_set = set()
# Placemark for edges
pm_list = []
for poi1 in poi_all.index:
for poi2 in poi_all.index:
width = edge_count.loc[poi1, poi2]
if width < 1: continue
width_set.add(width)
sid = str(poi1) + '_' + str(poi2)
desc = 'Edge: ' + str(poi1) + '->' + str(poi2) + ', #occurrence: ' + str(width)
pm = kml.Placemark(ns, sid, 'Edge_' + sid, desc, styleUrl='#sty' + str(width))
pm.geometry = LineString([(poi_all.loc[x, 'poiLon'], poi_all.loc[x, 'poiLat']) for x in [poi1, poi2]])
pm_list.append(pm)
# Placemark for POIs
for poi in poi_all.index:
sid = str(poi)
desc = 'POI of category ' + poi_all.loc[poi, 'poiTheme']
pm = kml.Placemark(ns, sid, 'POI_' + sid, desc, styleUrl='#sty1')
pm.geometry = Point(poi_all.loc[poi, 'poiLon'], poi_all.loc[poi, 'poiLat'])
pm_list.append(pm)
# Styles
stys = []
for width in width_set:
sid = 'sty' + str(width)
# colors in KML: aabbggrr, aa=00 is fully transparent
stys.append(styles.Style(id=sid, styles=[styles.LineStyle(color='3f0000ff', width=width)])) # transparent red
doc = kml.Document(ns, '1', 'Edges', 'Edge visualization', styles=stys)
for pm in pm_list: doc.append(pm)
k.append(doc)
# save to file
fname = suffix + '-common_edges.kml'
fname = os.path.join(data_dir, fname)
kmlstr = k.to_string(prettyprint=True)
with open(fname, 'w') as f:
f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
f.write(kmlstr)
"""
Explanation: <a id='sec5'></a>
5. Visualise the Most Common Edges
<a id='sec5.1'></a>
5.1 Count the occurrence of edges
End of explanation
"""
|
jvbalen/cover_id
|
draft_notebooks/SHS_data_draft.ipynb
|
mit
|
%matplotlib inline
from __future__ import division, print_function
import numpy as np
import os
"""
Explanation: Sketches and progress for SHS I/O
End of explanation
"""
import SHS_data
uris, ids = SHS_data.read_uris()
"""
Explanation: Read a list of all available URI's
Python
def read_uris():
...
End of explanation
"""
reload(SHS_data)
cliques_by_name, cliques_by_id = SHS_data.read_cliques()
"""
Explanation: Read cliques
Python
def read_cliques(clique_file='shs_pruned.txt'):
...
End of explanation
"""
reload(SHS_data)
train_cliques, test_cliques, val_cliques = SHS_data.split_train_test_validation(cliques_by_name)
"""
Explanation: Split cliques into train, test & evaluation sets
Python
def split_train_test_validation(clique_dict, ratio=(50,20,30),
random_state=1988):
...
End of explanation
"""
reload(SHS_data)
train_uris = SHS_data.uris_from_clique_dict(train_cliques)
"""
Explanation: Get URI's for a clique
First idea: get URI's and a ground truth matrix.
But maybe that's not what we want:
18K x 18K ground truth matrix in dense form = 2Gb.
Therefore: just URI's for now.
Open Question
Should this function be in SHS_data or somewhere more general?
Probably somewhere more general, but there is no such somewhere for now, so leaving it in place.
End of explanation
"""
|
shareactorIO/pipeline
|
source.ml/jupyterhub.ml/notebooks/talks/ODSC/MasterClass/Mar-01-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb
|
apache-2.0
|
import numpy as np
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import time
# make things wide
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = ("<stripped %d bytes>" % size).encode()
return strip_def
def show_graph(graph_def=None, width=1200, height=800, max_const_size=32, ungroup_gradients=False):
if not graph_def:
graph_def = tf.get_default_graph().as_graph_def()
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
data = str(strip_def)
if ungroup_gradients:
data = data.replace('"gradients/', '"b_')
#print(data)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(data), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:{}px;height:{}px;border:0" srcdoc="{}"></iframe>
""".format(width, height, code.replace('"', '"'))
display(HTML(iframe))
# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer("batch_size", 10, "The batch size to train")
flags.DEFINE_integer("epoch_number", 10, "Number of epochs to run trainer")
flags.DEFINE_integer("steps_to_validate", 1,
"Steps to validate and print loss")
flags.DEFINE_string("checkpoint_dir", "./checkpoint/",
"indicates the checkpoint directory")
#flags.DEFINE_string("model_path", "./model/", "The export path of the model")
flags.DEFINE_string("model_path", "/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/", "The export path of the model")
flags.DEFINE_integer("export_version", 27, "The version number of the model")
# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run
def main():
# Define training data
x = np.ones(FLAGS.batch_size)
y = np.ones(FLAGS.batch_size)
# Define the model
X = tf.placeholder(tf.float32, shape=[None], name="X")
Y = tf.placeholder(tf.float32, shape=[None], name="yhat")
w = tf.Variable(1.0, name="weight")
b = tf.Variable(1.0, name="bias")
loss = tf.square(Y - tf.mul(X, w) - b)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
predict_op = tf.mul(X, w) + b
saver = tf.train.Saver()
checkpoint_dir = FLAGS.checkpoint_dir
checkpoint_file = checkpoint_dir + "/checkpoint.ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
# Start the session
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
print("Continue training from the model {}".format(ckpt.model_checkpoint_path))
saver.restore(sess, ckpt.model_checkpoint_path)
saver_def = saver.as_saver_def()
print(saver_def.filename_tensor_name)
print(saver_def.restore_op_name)
# Start training
start_time = time.time()
for epoch in range(FLAGS.epoch_number):
sess.run(train_op, feed_dict={X: x, Y: y})
# Start validating
if epoch % FLAGS.steps_to_validate == 0:
end_time = time.time()
print("[{}] Epoch: {}".format(end_time - start_time, epoch))
saver.save(sess, checkpoint_file)
tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.pb', as_text=False)
tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.txt', as_text=True)
start_time = end_time
# Print model variables
w_value, b_value = sess.run([w, b])
print("The model of w: {}, b: {}".format(w_value, b_value))
# Export the model
print("Exporting trained model to {}".format(FLAGS.model_path))
model_exporter = exporter.Exporter(saver)
model_exporter.init(
sess.graph.as_graph_def(),
named_graph_signatures={
'inputs': exporter.generic_signature({"features": X}),
'outputs': exporter.generic_signature({"prediction": predict_op})
})
model_exporter.export(FLAGS.model_path, tf.constant(FLAGS.export_version), sess)
print('Done exporting!')
if __name__ == "__main__":
main()
show_graph()
"""
Explanation: Where Am I?
ODSC Masterclass Summit - San Francisco - Mar 01, 2017
Who Am I?
Chris Fregly
Research Scientist @ PipelineIO
Video Series Author "High Performance Tensorflow in Production" @ OReilly (Coming Soon)
Founder @ Advanced Spark and Tensorflow Meetup
Github Repo
DockerHub Repo
Slideshare
YouTube
Who Was I?
Software Engineer @ Netflix, Databricks, IBM Spark Tech Center
1. Infrastructure and Tools
Docker
Images, Containers
Useful Docker Image: AWS + GPU + Docker + Tensorflow + Spark
Kubernetes
Container Orchestration Across Clusters
Weavescope
Kubernetes Cluster Visualization
Jupyter Notebooks
What We're Using Here for Everything!
Airflow
Invoke Any Type of Workflow on Any Type of Schedule
Github
Commit New Model to Github, Airflow Workflow Triggered for Continuous Deployment
DockerHub
Maintains Docker Images
Continuous Deployment
Not Just for Code, Also for ML/AI Models!
Canary Release
Deploy and Compare New Model Alongside Existing
Metrics and Dashboards
Not Just System Metrics, ML/AI Model Prediction Metrics
NetflixOSS-based
Prometheus
Grafana
Elasticsearch
Separate Cluster Concerns
Training/Admin Cluster
Prediction Cluster
Hybrid Cloud Deployment for eXtreme High Availability (XHA)
AWS and Google Cloud
Apache Spark
Tensorflow + Tensorflow Serving
2. Model Deployment Bundles
KeyValue
ie. Recommendations
In-memory: Redis, Memcache
On-disk: Cassandra, RocksDB
First-class Servable in Tensorflow Serving
PMML
It's Useful and Well-Supported
Apple, Cisco, Airbnb, HomeAway, etc
Please Don't Re-build It - Reduce Your Technical Debt!
Native Code
Hand-coded (Python + Pickling)
Generate Java Code from PMML?
Tensorflow Model Exports
freeze_graph.py: Combine Tensorflow Graph (Static) with Trained Weights (Checkpoints) into Single Deployable Model
3. Model Deployments and Rollbacks
Mutable
Each New Model is Deployed to Live, Running Container
Immutable
Each New Model is a New Docker Image
4. Optimizing Tensorflow Models for Serving
Python Scripts
optimize_graph_for_inference.py
Pete Warden's Blog
Graph Transform Tool
Compile (Tensorflow 1.0+)
XLA Compiler
Compiles 3 graph operations (input, operation, output) into 1 operation
Removes need for Tensorflow Runtime (20 MB is significant on tiny devices)
Allows new backends for hardware-specific optimizations (better portability)
tfcompile
Convert Graph into executable code
Compress/Distill Ensemble Models
Convert ensembles or other complex models into smaller models
Re-score training data with output of model being distilled
Train smaller model to produce same output
Output of smaller model learns more information than original label
5. Optimizing Serving Runtime Environment
Throughput
Option 1: Add more Tensorflow Serving servers behind load balancer
Option 2: Enable request batching in each Tensorflow Serving
Option Trade-offs: Higher Latency (bad) for Higher Throughput (good)
$TENSORFLOW_SERVING_HOME/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
--port=9000
--model_name=tensorflow_minimal
--model_base_path=/root/models/tensorflow_minimal/export
--enable_batching=true
--max_batch_size=1000000
--batch_timeout_micros=10000
--max_enqueued_batches=1000000
Latency
The deeper the model, the longer the latency
Start inference in parallel where possible (ie. user inference in parallel with item inference)
Pre-load common inputs from database (ie. user attributes, item attributes)
Pre-compute/partial-compute common inputs (ie. popular word embeddings)
Memory
Word embeddings are huge!
Use hashId for each word
Off-load embedding matrices to parameter server and share between serving servers
6. Demos!!
Train and Deploy Tensorflow AI Model (Simple Model, Immutable Deploy)
Train Tensorflow AI Model
End of explanation
"""
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027
!git status
!git add --all /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027/
!git status
!git commit -m "updated tensorflow model"
!git status
# If this fails with "Permission denied", use terminal within jupyter to manually `git push`
!git push
"""
Explanation: Commit and Deploy New Tensorflow AI Model
Commit Model to Github
End of explanation
"""
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://demo.pipeline.io:8080/admin">'
display(HTML(html))
"""
Explanation: Airflow Workflow Deploys New Model through Github Post-Commit Webhook to Triggers
End of explanation
"""
!kubectl scale --context=awsdemo --replicas=2 rc spark-worker-2-0-1
!kubectl get pod --context=awsdemo
"""
Explanation: Train and Deploy Spark ML Model (Airbnb Model, Mutable Deploy)
Scale Out Spark Training Cluster
Kubernetes CLI
End of explanation
"""
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://kubernetes-aws.demo.pipeline.io">'
display(HTML(html))
"""
Explanation: Weavescope Kubernetes AWS Cluster Visualization
End of explanation
"""
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.feature import OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
"""
Explanation: Generate PMML from Spark ML Model
End of explanation
"""
df = spark.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("s3a://datapalooza/airbnb/airbnb.csv.bz2")
df.registerTempTable("df")
print(df.head())
print(df.count())
"""
Explanation: Step 0: Load Libraries and Data
End of explanation
"""
df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null")
df_filtered.registerTempTable("df_filtered")
df_final = spark.sql("""
select
id,
city,
case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')
then state
else 'Other'
end as state,
space,
cast(price as double) as price,
cast(bathrooms as double) as bathrooms,
cast(bedrooms as double) as bedrooms,
room_type,
host_is_super_host,
cancellation_policy,
cast(case when security_deposit is null
then 0.0
else security_deposit
end as double) as security_deposit,
price_per_bedroom,
cast(case when number_of_reviews is null
then 0.0
else number_of_reviews
end as double) as number_of_reviews,
cast(case when extra_people is null
then 0.0
else extra_people
end as double) as extra_people,
instant_bookable,
cast(case when cleaning_fee is null
then 0.0
else cleaning_fee
end as double) as cleaning_fee,
cast(case when review_scores_rating is null
then 80.0
else review_scores_rating
end as double) as review_scores_rating,
cast(case when square_feet is not null and square_feet > 100
then square_feet
when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)
then 350.0
else 380 * bedrooms
end as double) as square_feet
from df_filtered
""").persist()
df_final.registerTempTable("df_final")
df_final.select("square_feet", "price", "bedrooms", "bathrooms", "cleaning_fee").describe().show()
print(df_final.count())
print(df_final.schema)
# Most popular cities
spark.sql("""
select
state,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by state
order by count(*) desc
""").show()
# Most expensive popular cities
spark.sql("""
select
city,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by city
order by avg(price) desc
""").filter("ct > 25").show()
"""
Explanation: Step 1: Clean, Filter, and Summarize the Data
End of explanation
"""
continuous_features = ["bathrooms", \
"bedrooms", \
"security_deposit", \
"cleaning_fee", \
"extra_people", \
"number_of_reviews", \
"square_feet", \
"review_scores_rating"]
categorical_features = ["room_type", \
"host_is_super_host", \
"cancellation_policy", \
"instant_bookable", \
"state"]
"""
Explanation: Step 2: Define Continous and Categorical Features
End of explanation
"""
[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])
"""
Explanation: Step 3: Split Data into Training and Validation
End of explanation
"""
continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol="unscaled_continuous_features")
continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features", outputCol="scaled_continuous_features", \
withStd=True, withMean=False)
"""
Explanation: Step 4: Continous Feature Pipeline
End of explanation
"""
categorical_feature_indexers = [StringIndexer(inputCol=x, \
outputCol="{}_index".format(x)) \
for x in categorical_features]
categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \
outputCol="oh_encoder_{}".format(x.getOutputCol() )) \
for x in categorical_feature_indexers]
"""
Explanation: Step 5: Categorical Feature Pipeline
End of explanation
"""
feature_cols_lr = [x.getOutputCol() \
for x in categorical_feature_one_hot_encoders]
feature_cols_lr.append("scaled_continuous_features")
feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \
outputCol="features_lr")
"""
Explanation: Step 6: Assemble our Features and Feature Pipeline
End of explanation
"""
linear_regression = LinearRegression(featuresCol="features_lr", \
labelCol="price", \
predictionCol="price_prediction", \
maxIter=10, \
regParam=0.3, \
elasticNetParam=0.8)
estimators_lr = \
[continuous_feature_assembler, continuous_feature_scaler] \
+ categorical_feature_indexers + categorical_feature_one_hot_encoders \
+ [feature_assembler_lr] + [linear_regression]
pipeline = Pipeline(stages=estimators_lr)
pipeline_model = pipeline.fit(training_dataset)
print(pipeline_model)
"""
Explanation: Step 7: Train a Linear Regression Model
End of explanation
"""
from jpmml import toPMMLBytes
pmmlBytes = toPMMLBytes(spark, training_dataset, pipeline_model)
print(pmmlBytes.decode("utf-8"))
"""
Explanation: Step 8: Convert PipelineModel to PMML
End of explanation
"""
import urllib.request
update_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.request
update_url = 'http://prediction-pmml-gcp.demo.pipeline.io/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.parse
import json
evaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
import urllib.parse
import json
evaluate_url = 'http://prediction-pmml-gcp.demo.pipeline.io/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
"""
Explanation: Push PMML to Live, Running Spark ML Model Server (Mutable)
End of explanation
"""
from urllib import request
sourceBytes = ' \n\
private String str; \n\
\n\
public void initialize(Map<String, Object> args) { \n\
} \n\
\n\
public Object predict(Map<String, Object> inputs) { \n\
String id = (String)inputs.get("id"); \n\
\n\
return id.equals("21619"); \n\
} \n\
'.encode('utf-8')
from urllib import request
name = 'codegen_equals'
update_url = 'http://prediction-codegen-aws.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_equals'
update_url = 'http://prediction-codegen-gcp.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_equals'
evaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"id":"21618"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return false, since the model checks id.equals("21619")
from urllib import request
name = 'codegen_equals'
evaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"id":"21619"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return true
"""
Explanation: Deploy Java-based Model (Simple Model, Mutable Deploy)
End of explanation
"""
from urllib import request
sourceBytes = ' \n\
public Map<String, Object> data = new HashMap<String, Object>(); \n\
\n\
public void initialize(Map<String, Object> args) { \n\
data.put("url", "http://demo.pipeline.io:9040/prediction/"); \n\
} \n\
\n\
public Object predict(Map<String, Object> inputs) { \n\
try { \n\
String userId = (String)inputs.get("userId"); \n\
String itemId = (String)inputs.get("itemId"); \n\
String url = data.get("url") + "/" + userId + "/" + itemId; \n\
\n\
return org.apache.http.client.fluent.Request \n\
.Get(url) \n\
.execute() \n\
.returnContent(); \n\
\n\
} catch(Exception exc) { \n\
System.out.println(exc); \n\
throw exc; \n\
} \n\
} \n\
'.encode('utf-8')
from urllib import request
name = 'codegen_httpclient'
# Note: Must have trailing '/'
update_url = 'http://prediction-codegen-aws.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_httpclient'
# Note: Must have trailing '/'
update_url = 'http://prediction-codegen-gcp.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_httpclient'
evaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"userId":"21619", "itemId":"10006"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return float
from urllib import request
name = 'codegen_httpclient'
evaluate_url = 'http://prediction-codegen-gcp.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"userId":"21619", "itemId":"10006"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return float
"""
Explanation: Deploy Java Model (HttpClient Model, Mutable Deploy)
End of explanation
"""
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Predictions%20-%20AWS%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-aws.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22Predictions%20-%20GCP%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-gcp.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D">'
display(HTML(html))
"""
Explanation: Load Test and Compare Cloud Providers (AWS and Google)
Monitor Performance Across Cloud Providers
NetflixOSS Services Dashboard (Hystrix)
End of explanation
"""
# Spark ML - PMML - Airbnb
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
# Codegen - Java - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml
# Tensorflow AI - Tensorflow Serving - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml
"""
Explanation: Start Load Tests
Run JMeter Tests from Local Laptop (Limited by Laptop)
Run Headless JMeter Tests from Training Clusters in Cloud
End of explanation
"""
!kubectl delete --context=awsdemo rc loadtest-aws-airbnb
!kubectl delete --context=gcpdemo rc loadtest-aws-airbnb
!kubectl delete --context=awsdemo rc loadtest-aws-equals
!kubectl delete --context=gcpdemo rc loadtest-aws-equals
!kubectl delete --context=awsdemo rc loadtest-aws-minimal
!kubectl delete --context=gcpdemo rc loadtest-aws-minimal
"""
Explanation: End Load Tests
End of explanation
"""
!kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=awsdemo
!kubectl rolling-update prediction-tensorflow --context=gcpdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=gcpdemo
"""
Explanation: Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy)
Kubernetes CLI
End of explanation
"""
|
4dsolutions/Python5
|
BellCurve.ipynb
|
mit
|
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import math
"""
Explanation: Gaussian Distribution (Normal or Bell Curve)
Think of a Jupyter Notebook file as a Python script, but with comments given the seriousness they deserve, meaning inserted Youtubes if necessary. We also adopt a more conversational style with the reader, and with Python, pausing frequently to take stock, because we're telling a story.
One might ask, what is the benefit of computer programs if we read through them this slowly? Isn't the whole point that they run blazingly fast, and nobody needs to read them except those tasked with maintaining them, the programmer cast?
First, let's point out the obvious: even when reading slowly, we're not keeping Python from doing its part as fast as it can, and what it does would have taken a single human ages to do, and would have occupied a team of secretaries for ages. Were you planning to pay them? Python effectively puts a huge staff at your disposal, ready to do your bidding. But that doesn't let you off the hook. They need to be managed, told what to do.
Here's what you'll find at the top of your average script. A litany of players, a congress of agents, need to be assembled and made ready for the job at hand. But don't worry, as you remember to include necessary assets, add them at will as you need them. We rehearse the script over and over while building it. Nobody groans, except maybe you, when the director says "take it from the top" once again.
End of explanation
"""
domain = np.linspace(-5, 5, 100)
"""
Explanation: You'll be glad to have np.linspace as a friend, as so often you know exactly what the upper and lower bounds, of a domain, might be. You'll be computing a range. Do you remember these terms from high school? A domain is like a pile of cannon balls that we feed to our cannon, which them fires them, testing our knowledge of ballistics. It traces a parabola. We plot that in our tables. A lot of mathematics traces to developing tables for battle field use. Leonardo da Vinci, a great artist, was also an architect of defensive fortifications.
Anyway, np.linspace lets to give exactly the number of points you would like of this linear one dimensional array space, as a closed set, meaning -5 and 5 are included, the minimum and maximum you specify. Ask for a healthy number of points, as points are cheap. All they require is memory. But then it's up to you not to overdo things. Why waste CPU cycles on way too many points?
I bring up this niggling detail about points as a way of introducing what they're calling "hyperparameters" in Machine Learning, meaning settings or values that come from outside the data, so also "metadata" in some ways. You'll see in other notebooks how we might pick a few hyperparameters and ask scikit-learn to try all combinations of same.
Here's what you'll be saying then:
from sklearn.model_selection import GridSearchCV #CV = cross-validation
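Just to make that concrete, here's a tiny sketch of the kind of grid search we mean (the estimator and the parameter values are placeholders for illustration, not something this notebook runs or depends on):
```python
# A minimal sketch of a hyperparameter grid search (illustrative values only).
from sklearn.model_selection import GridSearchCV   # CV = cross-validation
from sklearn.svm import SVC
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# The hyperparameter combinations we ask scikit-learn to try, one by one.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

search = GridSearchCV(SVC(), param_grid, cv=5)   # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
```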
End of explanation
"""
mu = 0 # might be x-bar if discrete
sigma = 1 # standard deviation, more below
"""
Explanation: I know mu sounds like "mew", the sound a kitten makes, and that's sometimes insisted upon by sticklers, for when we have a continuous function, versus one that's discrete. Statisticians make a big deal about the difference between digital and analog, where the former is seen as a "sampling" of the latter. Complete data may be an impossibility. We're always stuck with something digital trying to approximate something analog, or so it seems. Turn that around in your head sometimes: we smooth it over as an approximation, because a discrete treatment would require too high a level of precision.
The sticklers say "mu" for continuous, but "x-bar" (an x with a bar over it) for the plain old "average" of discrete sets. I don't see this convention necessarily holding water, for one thing because it's inconvenient to always reach for the fanciest typography. Python does have full access to Unicode, and to LaTeX, but do we have to bother? Let's leave that question for another day and move on to...
The Gaussian (Binomial if Discrete)
End of explanation
"""
from IPython.display import display, Latex
ltx = '$ pdf(x,\\mu,\\sigma) = \\frac{1}{ \\sigma' + \
'\\sqrt{2 \\pi}} e^{\\left(-\\frac{{\\left(\\mu - ' + \
'x\\right)}^{2}}{2 \\, \\sigma^{2}}\\right)} $'
display(Latex(ltx))
"""
Explanation: What we have here (below) is a typical Python numeric function, although it does get its pi from numpy instead of math. That won't matter. The sigma and mu in this function are globals, set above. Some LaTeX would be in order here, I realize. Let me scavenge the internet for something appropriate...
$pdf(x,\mu,\sigma) = \frac{1}{ \sigma \sqrt{2 \pi}} e^{\left(-\frac{{\left(\mu - x\right)}^{2}}{2 \, \sigma^{2}}\right)}$
Use of dollar signs is key.
Here's another way, in a code cell instead of a Markdown cell.
End of explanation
"""
def g(x):
return (1/(sigma * math.sqrt(2 * np.pi))) * math.exp(-0.5 * ((mu - x)/sigma)**2)
"""
Explanation: I'm really tempted to try out PrettyPy.
End of explanation
"""
%timeit vg = np.vectorize(g)
"""
Explanation: What I do below is semi-mysterious, and something I'd like to get to in numpy in more detail. The whole idea behind numpy is that every function, or at least the unary ones, is vectorized, meaning they work element-wise through every cell, with no need for any for loops.
My Gaussian formula above won't natively understand how to have relations with a numpy array unless we store it in vectorized form. I'm not claiming this will make it run any faster than under the control of for loops; we can test that. Even without a speedup, here we have a recipe for shortening our code.
As many have proclaimed around numpy: one of its primary benefits is it allows one to "lose the loops".
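One more sketch along those lines: if we write the same formula with numpy's own exp and sqrt instead of the math module's, the function happily accepts the whole domain array in a single call, and no vectorizing wrapper or for loop is needed at all.
```python
# A sketch of a natively vectorized Gaussian: numpy ufuncs broadcast over arrays.
def g_arr(x, mu=0, sigma=1):
    return (1 / (sigma * np.sqrt(2 * np.pi))) * np.exp(-0.5 * ((mu - x) / sigma) ** 2)

g_arr(domain)   # one call, the whole array, no loop
```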
End of explanation
"""
%timeit vg2 = np.array([g(x) for x in domain])
vg = np.vectorize(g)
%matplotlib inline
%timeit plt.plot(domain, vg(domain))
"""
Explanation: At any rate, this way, with a list comprehension, is orders of magnitude slower:
End of explanation
"""
%timeit plt.plot(domain, st.norm.pdf(domain))
mu = 0
sigma = math.sqrt(0.2)
plt.plot(domain, vg(domain), color = 'blue')
sigma = math.sqrt(1)
plt.plot(domain, vg(domain), color = 'red')
sigma = math.sqrt(5)
plt.plot(domain, vg(domain), color = 'orange')
mu = -2
sigma = math.sqrt(.5)
plt.plot(domain, vg(domain), color = 'green')
plt.title("Gaussian Distributions")
"""
Explanation: I bravely built my own version of the Gaussian distribution, a continuous function (any real number input is OK, from negative infinity to infinity, though not those extremes themselves; keep it in between). The thing about a Gaussian is you can shrink it and grow it while keeping the curve itself self-similar. Remember "hyperparameters"? They control the shape. We should be sure to play around with those parameters.
Of course the stats.norm section of scipy comes pre-equipped with the same PDF (probability density function). You'll see this curve called many things in the literature.
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo("xgQhefFOXrM")
a = st.norm.cdf(-1) # Cumulative distribution function
b = st.norm.cdf(1)
b - a
a = st.norm.cdf(-2)
b = st.norm.cdf(2)
b - a
# 99.73% is more correct than 99.72%
a = st.norm.cdf(-3)
b = st.norm.cdf(3)
b - a
# 95%
a = st.norm.cdf(-1.96)
b = st.norm.cdf(1.96)
b - a
# 99%
a = st.norm.cdf(-2.58)
b = st.norm.cdf(2.58)
b - a
from IPython.display import YouTubeVideo
YouTubeVideo("zZWd56VlN7w")
"""
Explanation: see Wikipedia figure
These are Gaussian PDFs or Probability Density Functions.
68.26% of values happen within -1 and 1.
End of explanation
"""
st.norm.cdf(-1.32)
"""
Explanation: What are the chances a value is less than -1.32?
End of explanation
"""
1 - st.norm.sf(-0.21) # filling in from the right (survival function)
a = st.norm.cdf(0.85) # filling in from the left
a
b = st.norm.cdf(-0.21) # from the left
b
a-b # getting the difference (per the Youtube)
"""
Explanation: What are the chances a value is between -0.21 and 0.85?
End of explanation
"""
plt.plot(domain, st.norm.cdf(domain))
"""
Explanation: Let's plot the integral of the Bell Curve. This curve somewhat describes the temporal pattern whereby a new technology is adopted, first by early adopters, then comes the bandwagon effect, then come the stragglers. Not every technology gets adopted in this way. Only some do.
End of explanation
"""
x = st.norm.cdf(domain)
diff = st.norm.cdf(domain + 0.01)
plt.plot(domain, (diff-x)/0.01)
x = st.norm.pdf(domain)
diff = st.norm.pdf(domain + 0.01)
plt.plot(domain, (diff-x)/0.01)
x = st.norm.pdf(domain)
plt.plot(domain, x, color = "red")
x = st.norm.pdf(domain)
diff = st.norm.pdf(domain + 0.01)
plt.plot(domain, (diff-x)/0.01, color = "blue")
"""
Explanation: Standard Deviation
Above is the Bell Curve integral.
Remember the derivative is obtained from small differences: (f(x+h) - f(x))/h
Given x is our entire domain and operations are vectorized, it's easy enough to plot said derivative.
End of explanation
"""
from sympy import var, Lambda, integrate, sqrt, pi, exp, latex
fig = plt.gcf()
fig.set_size_inches(8,5)
var('a b x sigma mu')
pdf = Lambda((x,mu,sigma),
(1/(sigma * sqrt(2*pi)) * exp(-(mu-x)**2 / (2*sigma**2)))
)
cdf = Lambda((a,b,mu,sigma),
integrate(
pdf(x,mu,sigma),(x,a,b)
)
)
display(Latex('$ cdf(a,b,\mu,\sigma) = ' + latex(cdf(a,b,mu,sigma)) + '$'))
"""
Explanation: Integrating the Gaussian
Apparently there's no closed form; however, sympy is able to do an integration somehow.
End of explanation
"""
x = np.linspace(50,159,100)
y = np.array([cdf(-1e99,v,100,15) for v in x],dtype='float')
plt.grid(True)
plt.title('Cumulative Distribution Function')
plt.xlabel('IQ')
print(type(plt.xlabel))
plt.ylabel('Y')
plt.text(65,.75,'$\mu = 100$',fontsize=16)
plt.text(65,.65,'$\sigma = 15$',fontsize=16)
plt.plot(x,y,color='gray')
plt.fill_between(x,y,0,color='#c0f0c0')
plt.show()
"""
Explanation: Let's stop right here and note that the pdf and cdf have been defined, using sympy's Lambda and integrate, and that the cdf will be fed a lot of data, one hundred points, along with mu and sigma. Then it's simply a matter of plotting.
What's amazing is our ability to get something from sympy that works to give a cdf, independently of scipy.stats.norm.
End of explanation
"""
domain = np.linspace(0, 200, 3000)
IQ = st.norm.pdf(domain, 100, 15)
plt.plot(domain, IQ, color = "red")
domain = np.linspace(0, 200, 3000)
mu = 100
sigma = 15
IQ = vg(domain)
plt.plot(domain, IQ, color = "green")
"""
Explanation: The above is truly a testament to Python's power, or the Python ecosystem's power. We've brought in sympy, able to do symbolic integration, and talk LaTeX at the same time. That's impressive. Here's the high IQ source for the original version of the above code.
There's no indefinite integral of the Gaussian, but there's a definite one. sympy comes with its own generic sympy.stats.cdf function which produces Lambdas (symbolic expressions) when used to integrate different types of probability spaces, such as Normal (a continuous PDF). It accepts discrete PMFs as well.
<pre>
Examples
========
>>> from sympy.stats import density, Die, Normal, cdf
>>> from sympy import Symbol
>>> D = Die('D', 6)
>>> X = Normal('X', 0, 1)
>>> density(D).dict
{1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}
>>> cdf(D)
{1: 1/6, 2: 1/3, 3: 1/2, 4: 2/3, 5: 5/6, 6: 1}
>>> cdf(3*D, D > 2)
{9: 1/4, 12: 1/2, 15: 3/4, 18: 1}
>>> cdf(X)
Lambda(_z, -erfc(sqrt(2)*_z/2)/2 + 1)
</pre>
LAB: convert the Normal Distribution Below to IQ Curve...
That means domain is 0-200, standard deviation 15, mean = 100.
End of explanation
"""
|
jhconning/Dev-II
|
notebooks/SavingsCommit.ipynb
|
bsd-3-clause
|
import Contract
"""
Explanation: Time-inconsistent preferences and the demand for commitment services
The 'rational' or exponential discounter benchmark
Consider a simple extension of the standard intertemporal optimization problem (seen in an earlier notebook) from two to three periods. A time-consistent exponential discounter wishes to use own-savings strategies and/or the services of a competitive financial service sector to obtain an optimally 'smooth' consumption stream from a possibly more variable income stream. Formally we can think of the consumer as using financial markets to exchange the existing endowment income stream $\textbf{y} = (y_0,y_1,y_2)$ for a preferred smoothed consumption stream $\textbf{c}= (c_0, c_1, c_2)$.
The consumer optimization problem is to choose $\textbf{c}$ to solve:
$$\max_{c_0,c_1,c_2} u(c_0) + \delta u(c_1) + \delta^2 u(c_2)$$
$$ s.t. \ c_0 + \frac{c_1}{1+r} + \frac{c_2}{(1+r)^2} = y_0 + \frac{y_1}{1+r} + \frac{y_2}{(1+r)^2}$$
where $\delta$ is the consumer's own personal psychic discount factor and $r$ is the financial sector's opportunity cost of funds.
Suppose the financial market is competitive. Banks compete to offer consumption contract $c^*=(c_0^*,c_1^*,c_2^*)$ in exchange for the consumer's original more volatile income stream $\textbf{y}$ of equal monetary present value.
Setting this problem up as a Lagrangean leads to:
$$ u(c_0) + \delta u(c_1) + \delta^2 u(c_2) + \lambda \left [ Ey - c_0 - \frac{c_1}{1+r} - \frac{c_2}{(1+r)^2} \right ]$$
The first-order necessary conditions for an interior optimum are:
$$u'(c_0^*) = \lambda$$
$$\delta u'(c_1^*) = \lambda \frac{1}{(1+r)}$$
$$\delta^2 u'(c_2^*) = \lambda \frac{1}{(1+r)^2}$$
If the consumer's discount factor $\delta$ just happens to stand in this relationship to the bank's opportunity cost of funds $r$:
$$\delta = \frac{1}{1+r}$$
the first order conditions collapse down to:
$$u'(c_0^*) = u'(c_1^*) = u'(c_2^*) = \lambda$$
and the consumer will aim to keep consumption constant across periods: $c_0^* = c_1^* = c_2^*$
If the consumer is sufficiently patient and/or the return to saving is high enough then the consumer prefers rising consumption:
$$\delta > \frac{1}{1+r} \text{ then } c_0^* < c_1^* < c_2^* $$
and if instead the consumer is relatively impatient and/or the return to savings is low, they will prefer to consume more in earlier periods:
$$\delta < \frac{1}{1+r} \text{ then } c_0^* > c_1^* > c_2^* $$
The CRRA utility case
A Constant-Relative Risk Aversion (CRRA) felicity function is given by:
$$
\begin{equation}
u\left(c_{t}\right)=\begin{cases}
\frac{c^{1-\rho}}{1-\rho}, & \text{if } \rho>0 \text{ and } \rho \neq 1 \\
\ln\left(c\right) & \text{if } \rho=1
\end{cases}
\end{equation}
$$
The Arrow-Pratt measure of relative risk aversion
$$R(c) =\frac{-cu''(c)}{u'(c)} = \rho$$
is a measure of the curvature of the consumer's felicity function, or how averse consumers are to risk. As its name implies, the CRRA function has a constant measure of relative risk aversion, given simply by $\rho$, that doesn't change as consumer income increases.
For a CRRA function the elasticity of intertemporal substitution $\sigma$, which measures the responsiveness of the slope of the consumption path to changes in the interest rate, is also constant and given by:
$$\sigma = \frac{1}{\rho}$$
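As a quick symbolic check of that claim (a sketch using sympy, which is an extra import and not something the Contract module below needs), differentiating the CRRA form twice and forming the Arrow-Pratt ratio returns the constant $\rho$:
```python
# Sketch: verify that CRRA utility has constant relative risk aversion rho.
import sympy as sp

c, rho = sp.symbols('c rho', positive=True)
u = c**(1 - rho) / (1 - rho)                      # CRRA felicity, rho != 1 branch
R = sp.simplify(-c * sp.diff(u, c, 2) / sp.diff(u, c))
print(R)                                          # prints rho
```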
With a CRRA felicity function the earlier derived FOC allow us to solve to find:
$$\frac{c_1^*}{c_0^*} = \frac{c_2^*}{c_1^*} = \left [\delta (1+r) \right ]^\frac{1}{\rho} $$
If we substitute these into the bank's zero profit condition we can find closed form solutions for the consumption path.
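To sketch that substitution in plain Python (the discount factor, interest rate, and $\rho$ below are illustrative; only the endowment matches the one used later in this notebook): with per-period growth factor $g = [\delta(1+r)]^{1/\rho}$, consumption is $c_t = g^t c_0$, so the budget constraint pins down $c_0$ as the present value of income divided by $\sum_t g^t/(1+r)^t$.
```python
# Sketch: closed-form consumption path for the exponential discounter with CRRA utility.
delta, r, rho = 0.96, 0.05, 0.5        # illustrative parameter values
y = [150, 100, 50]                     # endowment used later in this notebook

growth = (delta * (1 + r)) ** (1 / rho)                   # c_{t+1}/c_t
pv_y = sum(y_t / (1 + r) ** t for t, y_t in enumerate(y))
c0 = pv_y / sum(growth ** t / (1 + r) ** t for t in range(3))
c = [c0 * growth ** t for t in range(3)]
print(c)   # a smooth path; here delta*(1+r) > 1, so it rises slightly
```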
'Beta-delta' quasi-hyperbolic time-inconsistent preferences
A tractable way to model present-biased and time-inconsistent preferences is by using quasi-hyperbolic preferences. The key idea is that the consumer's preferences change from period to period. The consumer is modeled as having multiple selves.
The period 0 self wants to consume some amount in period 0 and then keep consumption smoothly balanced between periods 1 and 2 as summarized by their preferences. When period 1 arrives however the consumer's preferences change in a way that makes him/her more impatient or present biased. The new period 1 self consumer would now like to undo the optimal consumption plans laid out by her earlier period 0 self and replace it with a new consumption plan that boosts present (period 1) consumption at the expense of future (period 2) consumption. From the preference standpoint of the period 0 self this will look like an effort by the period 1 self to 'raid savings' or 'ramp up debt' at the expense of later period selves.
A 'sophisticated' (quasi-)hyperbolic discounter anticipates his own future self's changing preferences and is likely to respond by strategically altering their original period 0 consumption plan. The period 0 self acts strategically, anticipating their own later period's optimal responses.
The demand for commitment services
Let's see this formally. The consumer's "period-0" self wants to choose consumption path $(c_0,c_1,c_2)$ to maximize:
$$u(c_0)+\beta[\delta u(c_1)+\delta^2 u(c_2)]$$
subject to the earlier described intertemporal budget constraint.
This is as before except for the fact that we've now introduced a 'present-bias' parameter $\beta$ (where $0<\beta \leq 1$) that tilts the consumer's preferences toward period zero consumption. Whenever $\beta$ is strictly less than one the consumer wants to smooth consumption in a way that tilts toward period 0, while keeping consumption between periods 1 and 2 more balanced.
When period 1 rolls around however the consumer's preferences over remaining consumption bundles change. The period 1 self now wants to re-arrange remaining consumption to now maximize:
$$u(c_1) +\beta\delta u(c_2)$$
While the period-0 self wanted to trade off period 1 and period 2 consumption to equalize marginal utilities like this:
$$u'(c_1^*) = \delta (1+r) u'(c_2^*)$$
the new period-1 self now prefers to trade off like this:
$$u'(c_1^*) = \beta \delta (1+r) u'(c_2^*)$$
The $\beta$ on the right-hand side of the last equation means the period 1 consumer now values period 2 consumption less relative to period 1 consumption compared to his period 0 self. Compared to the period-0 self's plans the new period-1 consumer wants to 'raid-savings' and/or 'take out a new loan'.
Let's look again at the CRRA case under the special assumption that $\delta = \frac{1}{1+r}$ and $r=0$ (and hence $\delta = 1$). These last assumptions are without loss of generality and done only to simplify the math and spotlight the key mechanisms at work.
Let's assume also that the period zero consumer can contract with a bank and that the bank can credibly commit to not renegotiating the terms of the contract even when the consumer's period 1 self comes begging for the bank to do so.
In this case the problem is exactly as described in the earlier section. The period 0 consumer maximizes
$$u(c_0)+\beta[\delta u(c_1)+\delta^2 u(c_2)]$$
subject to the intertemporal budget constraint.
When $\delta = \frac{1}{1+r}$ the period 0 consumer would like to keep consumption between period 1 and period 2 flat. The first order conditions now reduce to:
$$c_1^* = \beta^\frac{1}{\rho} c_0^*$$
$$c_2^* = c_1^*$$
Substituting these into the binding budget constraint yields:
$$c_0^* = \frac{E[y]}{1+2\beta^\frac{1}{\rho}}$$
$$c_1^* = c_2^* = \beta^\frac{1}{\rho}c_0^*$$
We call this the full commitment contract.
The financial intermediary who offers such a contract is really providing two services to the consumer's period 0 self: they're helping the consumer to smooth consumption between period zero and later periods, and they're also helping the consumer resist his period 1 self's temptation to disrupt this optimal consumption plan.
For example, suppose that the consumer's endowment income is such that she finds it optimal to save in period 0 and then finance constant consumption out of endowment income and savings in periods 1 and 2. When period 1 rolls around, the period 1 self's present bias would tempt them to 'raid savings' and/or take out a new loan to boost period 1 consumption at the expense of period 2 consumption.
To see this formally note that under the full commitment contract the consumer enters the period with contractual claims to the remaining consumption stream $(c_1^*,c_2^*) =({\bar c}^*, {\bar c}^*)$. If they could however the period 1 self would re-contract with this bank (or another) to choose another contract $(\hat c_1, \hat c_2)$ that solves:
$$\max_{c_1, c_2} u(c_1) + \beta u(c_2)$$
subject to
$$c_1 + \frac{c_2}{1+r} \leq {\bar c}^* + \frac{{\bar c}^*}{1+r} $$
The first order conditions for this problem are
$$u'(c_1) = \beta u'(c_2)$$
which in the CRRA case requires $c_2 =\beta ^\frac{1}{\rho} c_1$
which is clearly not satisfied along the original contract, which had $(c_1^*,c_2^*) =({\bar c}^*, {\bar c}^*)$
Substituting the FOC into the new period 1 budget constraint we can arrive at a solution or reaction function that states that when the period 1 self enters the period with claims $(c_1, c_2)$ they will want to renegotiate to:
$$\hat c_1 = \frac{ c_1 +c_2 }{1+\beta^\frac{1}{\rho} }$$
$$\hat c_2 =\beta ^\frac{1}{\rho} \hat c_1$$
If this renegotiation takes place then the period 1's welfare would increase but at the expense of period 0's welfare, since period 0's optimal consumption plan would have been undone.
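A quick hand check of these formulas, using the $\beta = 0.8$, $\rho = 0.5$ and endowment $(150, 100, 50)$ that appear later in this notebook (a sketch only; the Contract module below implements the renegotiation more carefully, so its numbers can differ slightly):
```python
# Sketch: full-commitment consumption and the period-1 renegotiation, by hand (r = 0).
beta, rho = 0.8, 0.5
Ey = 150 + 100 + 50                    # with r = 0 the present value is just the sum
B = beta ** (1 / rho)                  # beta^(1/rho) = 0.64 here

c0 = Ey / (1 + 2 * B)                  # full-commitment period-0 consumption
c1 = c2 = B * c0                       # flat consumption across periods 1 and 2
print(c0, c1, c2)                      # roughly 131.6, 84.2, 84.2

c1_hat = (c1 + c2) / (1 + B)           # what the period-1 self would renegotiate to
c2_hat = B * c1_hat
print(c1_hat, c2_hat)                  # period-1 consumption jumps, period-2 falls
```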
From this discussion it should be clear that the period 0 self would like to have banks compete for her business in period 0 but offer a full-commitment contract, which in practice means being locked into an exclusive relationship (if the relationship were not exclusive then the period 1 self would approach a second bank in period 1 and ask them to 'buy out' and then renegotiate their contract with the first bank).
More detailed analysis
Much more detail on the analysis of contracts like this can be found in our 2016 working paper (with Karna Basu) entitled "Sticking to the Contract: Nonprofit vs For-profit Banks and Hyperbolic Discounters.” A github repository is available with code and jupyter notebooks describing some of that analysis. I draw on that here, but most of the details of the methods are in those other notebooks.
Commitment Savings products
Many economists believe that time-inconsistent preferences drive people to struggle with important issues of self-control that affect their ability to reach savings goals, and that this generates a demand for commitment services. Indeed some economists have gone so far as to argue that these problems are ubiquitous and that one reason for the demand for microfinance services is that they provide individuals with the discipline and commitment mechanisms that can help them with these problems of self-control.
Rotating Savings and Credit Associations (ROSCAs) for example place individuals into savings groups and then create pressures and social sanctions to help individuals achieve savings goals. Itinerant savings collectors may serve a similar purpose. Some have even suggested that many microcredit loans, which advance loans to individuals and then put them on tight weekly repayment schedules, may be popular not so much because they are releasing the individual from a binding credit constraint as because they are helping the individual 'borrow to save.' In this type of scenario the individual wants to save up to buy, say, a refrigerator or other household appliance but cannot find the discipline to save up for the item themselves. A microcredit loan allows them to buy the refrigerator outright and then places them on a strict repayment schedule that commits them to putting aside the fraction of weekly income toward the item that they might otherwise have struggled to set aside on their own.
In all these stories the availability of commitment contracts provides additional value to the consumer, value that they may be willing to pay for.
In the following section I will illustrate some of these ideas with contracts that we will solve for and illustrate for the CRRA case. The code for much of what follows is in the Contract.py module described in greater detail in the github repo.
End of explanation
"""
cC = Contract.Competitive(beta = 0.8)
cC.rho = 0.5
"""
Explanation: Let's look at a competitive contracting situation where $\beta = 0.8$ and $\rho = 0.5$
End of explanation
"""
cC.y = [150,100,50]
"""
Explanation: Now let's give the individual an endowment income of $(y_0,y_1,y_2) = (150, 100, 50)$
End of explanation
"""
cCF = cC.fcommit(); cCF
"""
Explanation: The full commitment contract is:
End of explanation
"""
cCR = cC.reneg(cCF); cCR
"""
Explanation: Note how this involves some saving in period 0 ($y_0=150$ and consumption $c_0=131.6$) as well as additional saving for period 2.
If for some reason period 0 self agreed to this contract (sincerely believing that it would never be renegotiated) but then suddenly to everyone's surprise period 1 self had an opportunity to renegotiate the continuation of this contract, they would renegotiate to the following contract:
End of explanation
"""
cCRP = cC.reneg_proof().x; cCRP
"""
Explanation: We can see here how period 1 self 'raids' the savings that the period 0 self had intended should be passed onto period 2 consumption. The period 1 self boosts period 1 consumption from 84 to 101. Rather than pass 34 units of consumption into period 2 only 20 units are now passed.
If the bank cannot credibly commit to not renegotiate the contract then the sophisticated time-inconsistent consumer will defensively alter their consumption plans to try to thwart their future self's strategic renegotiations. The period 0 consumer will insist on renegotiation proof contracts which will impose an additional constraint on the problem, namely that no bank find it profitable to renegotiate the contract. In general this will be achieved by in a sense 'surrendering' to their future self's bias tilt toward period 1 consumption.
Details in the paper and in class lecture on the shape of this constraint.
The important thing to understand is that adding any constraint to the problem can only lower consumer welfare and push us away from the full-commitment optimum.
For the present setup the renegotiation proof contract can be solved for to find:
End of explanation
"""
%matplotlib inline
import seaborn as sns
import pandas as pd
import ipystata
import statsmodels.formula.api as sm
"""
Explanation: Compared to the full commitment contract, this contract involves more savings passed from period 0 to period 1 but much less savings passed from period 1 to period 2. Total savings (period 0 plus period 1 savings) are higher when credible commitment savings devices are available compared to when not.
Replicating Ashraf, Karlan and Yin (2006) Commitment Savings paper
Ashraf, Karlan and Yin (2006) "Tying Odysseus to the Mast: Evidence from a Commitment Savings Product in the Philippines," *Quarterly Journal of Economics.
The Stata dataset and code replication files have been made available by the authors via the Harvard Dataverse here. Below we just replicate a few of the regressions. A few more sections are replicated in this notebook.
End of explanation
"""
#df = pd.read_stata(r"G:\GC\Dev-II\notebooks\seedanalysis_011204_1_v12.dta")
df = pd.read_stata(r"G:\GC\Dev-II\notebooks\data\savings1.dta")
regVI_1 = 'balchange ~ treatment + marketing'
model = sm.ols(regVI_1, df)
fitted = model.fit(cov_type='HC1')
print(fitted.summary())
model = sm.ols('balchange ~ treatment', df[(df.treatment ==1) | (df.marketing ==1)])
fitted = model.fit(cov_type='HC1')
print(fitted.summary())
"""
Explanation: Open the datasets
One disadvantage of proprietary software is that you often cannot open a dataset saved with a later version of the software unless you pay to upgrade the software. I'm going to load the Stata dataset into a python pandas dataframe using its read_stata method and then pass the dataset into the running Stata session (which on my home machine is Stata 11).
End of explanation
"""
|
DavidObando/carnd
|
Term1/Labs/CarND-Keras-Lab/traffic-sign-classification-with-keras.ipynb
|
apache-2.0
|
from urllib.request import urlretrieve
from os.path import isfile
from tqdm import tqdm
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('train.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Train Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/train.p',
'train.p',
pbar.hook)
if not isfile('test.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Test Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/test.p',
'test.p',
pbar.hook)
print('Training and Test data downloaded.')
"""
Explanation: Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here:
End of explanation
"""
import pickle
import numpy as np
import math
# Fix error with TF and Keras
import tensorflow as tf
tf.python.control_flow_ops = tf
print('Modules loaded.')
"""
Explanation: Overview
Here are the steps you'll take to build the network:
Load the training data.
Preprocess the data.
Build a feedforward neural network to classify traffic signs.
Build a convolutional neural network to classify traffic signs.
Evaluate the final neural network on testing data.
Keep an eye on the network’s accuracy over time. Once the accuracy reaches the 98% range, you can be confident that you’ve built and trained an effective model.
End of explanation
"""
with open('train.p', 'rb') as f:
data = pickle.load(f)
# TODO: Load the feature data to the variable X_train
X_train = data['features']
# TODO: Load the label data to the variable y_train
y_train = data['labels']
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert np.array_equal(X_train, data['features']), 'X_train not set to data[\'features\'].'
assert np.array_equal(y_train, data['labels']), 'y_train not set to data[\'labels\'].'
print('Tests passed.')
"""
Explanation: Load the Data
Start by importing the data from the pickle file.
End of explanation
"""
# TODO: Shuffle the data
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert X_train.shape == data['features'].shape, 'X_train has changed shape. The shape shouldn\'t change when shuffling.'
assert y_train.shape == data['labels'].shape, 'y_train has changed shape. The shape shouldn\'t change when shuffling.'
assert not np.array_equal(X_train, data['features']), 'X_train not shuffled.'
assert not np.array_equal(y_train, data['labels']), 'y_train not shuffled.'
print('Tests passed.')
"""
Explanation: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint: You can use the scikit-learn shuffle function to shuffle the data.
End of explanation
"""
# TODO: Normalize the data features to the variable X_normalized
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = -0.5
b = 0.5
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
X_normalized = normalize_grayscale(X_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert math.isclose(np.min(X_normalized), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_normalized), 0.5, abs_tol=1e-5), 'The range of the training data is: {} to {}. It must be -0.5 to 0.5'.format(np.min(X_normalized), np.max(X_normalized))
print('Tests passed.')
"""
Explanation: Normalize the features
Hint: You solved this in TensorFlow lab Problem 1.
End of explanation
"""
# TODO: One Hot encode the labels to the variable y_one_hot
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
y_one_hot = lb.fit_transform(y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
import collections
assert y_one_hot.shape == (39209, 43), 'y_one_hot is not the correct shape. It\'s {}, it should be (39209, 43)'.format(y_one_hot.shape)
assert next((False for y in y_one_hot if collections.Counter(y) != {0: 42, 1: 1}), True), 'y_one_hot not one-hot encoded.'
print('Tests passed.')
"""
Explanation: One-Hot Encode the labels
Hint: You can use the scikit-learn LabelBinarizer function to one-hot encode the labels.
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
model = Sequential()
# TODO: Build a Multi-layer feedforward neural network with Keras here.
# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
# 2nd Layer - Add a fully connected layer
model.add(Dense(128))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a fully connected layer
model.add(Dense(43))
# 5th Layer - Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.activations import relu, softmax
def check_layers(layers, true_layers):
assert len(true_layers) != 0, 'No layers found'
for layer_i in range(len(layers)):
assert isinstance(true_layers[layer_i], layers[layer_i]), 'Layer {} is not a {} layer'.format(layer_i+1, layers[layer_i].__name__)
assert len(true_layers) == len(layers), '{} layers found, should be {} layers'.format(len(true_layers), len(layers))
check_layers([Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[1].output_shape == (None, 128), 'Second layer output is wrong, it should be (128)'
assert model.layers[2].activation == relu, 'Third layer not a relu activation layer'
assert model.layers[3].output_shape == (None, 43), 'Fourth layer output is wrong, it should be (43)'
assert model.layers[4].activation == softmax, 'Fifth layer not a softmax activation layer'
print('Tests passed.')
"""
Explanation: Keras Sequential Model
```python
from keras.models import Sequential
# Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
# Create the Sequential model
model = Sequential()
# 1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
# 2nd Layer - Add a fully connected layer
model.add(Dense(100))
# 3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 4th Layer - Add a fully connected layer
model.add(Dense(60))
# 5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
```
Keras will automatically infer the shape of all layers after the first layer. This means you only have to set the input dimensions for the first layer.
The first layer from above, model.add(Flatten(input_shape=(32, 32, 3))), sets the input dimension to (32, 32, 3) and the output dimension to 3072 (32 × 32 × 3). The second layer takes in the output of the first layer and sets the output dimensions to (100). This chain of passing output to the next layer continues until the last layer, which is the output of the model.
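To see that inference at work, here's a tiny sketch (not part of the lab's required code) that builds the first few layers of the example above and prints the shapes Keras filled in:
```python
# Sketch: only the first layer gets input_shape; Keras infers the rest.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten

m = Sequential()
m.add(Flatten(input_shape=(32, 32, 3)))
m.add(Dense(100))
m.add(Activation('relu'))

for layer in m.layers:
    print(layer.__class__.__name__, layer.output_shape)
# Flatten (None, 3072), Dense (None, 100), Activation (None, 100)
```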
Build a Multi-Layer Feedforward Network
Build a multi-layer feedforward neural network to classify the traffic sign images.
Set the first layer to a Flatten layer with the input_shape set to (32, 32, 3)
Set the second layer to Dense layer width to 128 output.
Use a ReLU activation function after the second layer.
Set the output layer width to 43, since there are 43 classes in the dataset.
Use a softmax activation function after the output layer.
To get started, review the Keras documentation about models and layers.
The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.
End of explanation
"""
# TODO: Compile and train the model here.
# Configures the learning process and metrics
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
print(model.summary())
# Train the model
# History is a record of training loss and metrics
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=10, validation_split=0.2)
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.optimizers import Adam
assert model.loss == 'categorical_crossentropy', 'Not using categorical_crossentropy loss function'
assert isinstance(model.optimizer, Adam), 'Not using adam optimizer'
assert len(history.history['acc']) == 10, 'You\'re using {} epochs when you need to use 10 epochs.'.format(len(history.history['acc']))
assert history.history['acc'][-1] > 0.92, 'The training accuracy was: %.3f. It shoud be greater than 0.92' % history.history['acc'][-1]
assert history.history['val_acc'][-1] > 0.85, 'The validation accuracy is: %.3f. It shoud be greater than 0.85' % history.history['val_acc'][-1]
print('Tests passed.')
"""
Explanation: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
# Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])
# Train the model
# History is a record of training loss and metrics
history = model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2)
# Calculate test score
test_score = model.evaluate(x_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
End of explanation
"""
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
model = Sequential()
# 1st Layer - Add a convolution with 32 filters, 3x3 kernel, and valid padding
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# 2nd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 3rd Layer - Add a flatten layer
model.add(Flatten())
# 4th Layer - Add a fully connected layer
model.add(Dense(128))
# 5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
# 6th Layer - Add a fully connected layer
model.add(Dense(43))
# 7th Layer - Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
check_layers([Convolution2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[0].nb_filter == 32, 'Wrong number of filters, it should be 32'
assert model.layers[0].nb_col == model.layers[0].nb_row == 3, 'Kernel size is wrong, it should be a 3x3'
assert model.layers[0].border_mode == 'valid', 'Wrong padding, it should be valid'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
"""
Explanation: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.
End of explanation
"""
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
# Add a convolution with 32 filters, 3x3 kernel, and valid padding
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# Add a max pooling of 2x2
model.add(MaxPooling2D(pool_size=(2, 2)))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a flatten layer
model.add(Flatten())
# Add a fully connected layer
model.add(Dense(128))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a fully connected layer
model.add(Dense(43))
# Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[1].pool_size == (2, 2), 'Second layer must be a max pool layer with pool size of 2x2'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=4, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
"""
Explanation: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
End of explanation
"""
# TODO: Re-construct the network and add dropout after the pooling layer.
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
# Add a convolution with 32 filters, 3x3 kernel, and valid padding
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# Add a max pooling of 2x2
model.add(MaxPooling2D(pool_size=(2, 2)))
# Add a dropout of 50%
model.add(Dropout(0.5))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a flatten layer
model.add(Flatten())
# Add a fully connected layer
model.add(Dense(128))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a fully connected layer
model.add(Dense(43))
# Add a softmax activation layer
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Dropout, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[2].p == 0.5, 'Third layer should be a Dropout of 50%'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
"""
Explanation: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
End of explanation
"""
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
from keras.regularizers import l2, activity_l2
model = Sequential()
# Add a convolution with 32 filters, 3x3 kernel, and valid padding
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(32, 32, 3)))
# Add a max pooling of 2x2
model.add(MaxPooling2D(pool_size=(2, 2)))
# Add a dropout of 50%
model.add(Dropout(0.5))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a convolution with 64 filters, 2x2 kernel, and valid padding
model.add(Convolution2D(64, 2, 2, border_mode='valid'))
# Add a max pooling of 2x2
model.add(MaxPooling2D(pool_size=(2, 2)))
# Add a dropout of 50%
model.add(Dropout(0.5))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a flatten layer
model.add(Flatten())
# Add a fully connected layer
model.add(Dense(128, W_regularizer=l2(0.0001), activity_regularizer=activity_l2(0.0001)))
# Add a ReLU activation layer
model.add(Activation('relu'))
# Add a fully connected layer
model.add(Dense(43, W_regularizer=l2(0.0001), activity_regularizer=activity_l2(0.0001)))
# Add a softmax activation layer
model.add(Activation('softmax'))
print(model.summary())
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=256, nb_epoch=20, validation_split=0.2)
"""
Explanation: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
End of explanation
"""
# TODO: Load test data
with open('test.p', 'rb') as f:
data = pickle.load(f)
X_test = data['features']
y_test = data['labels']
# TODO: Preprocess data & one-hot encode the labels
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = -0.5
b = 0.5
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
X_test_normalized = normalize_grayscale(X_test)
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
y_test_one_hot = lb.fit_transform(y_test)
# TODO: Evaluate model on test data
print(model.metrics_names)
model.evaluate(X_test_normalized, y_test_one_hot, batch_size=256, verbose=1, sample_weight=None)
"""
Explanation: Best Validation Accuracy:
So far I've achieved 98.3%.
Testing
Once you've picked out your best model, it's time to test it.
Load up the test data and use the evaluate() method to see how well it does.
Hint 1: The evaluate() method should return an array of numbers. Use the metrics_names property to get the labels.
End of explanation
"""
|
rflamary/POT
|
notebooks/plot_gromov.ipynb
|
mit
|
# Author: Erwan Vautier <erwan.vautier@gmail.com>
# Nicolas Courty <ncourty@irisa.fr>
#
# License: MIT License
import scipy as sp
import numpy as np
import matplotlib.pylab as pl
from mpl_toolkits.mplot3d import Axes3D # noqa
import ot
"""
Explanation: Gromov-Wasserstein example
This example is designed to show how to use the Gromov-Wasserstein distance
computation in POT.
End of explanation
"""
n_samples = 30 # nb samples
mu_s = np.array([0, 0])
cov_s = np.array([[1, 0], [0, 1]])
mu_t = np.array([4, 4, 4])
cov_t = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
xs = ot.datasets.make_2D_samples_gauss(n_samples, mu_s, cov_s)
P = sp.linalg.sqrtm(cov_t)
xt = np.random.randn(n_samples, 3).dot(P) + mu_t
"""
Explanation: Sample two Gaussian distributions (2D and 3D)
The Gromov-Wasserstein distance allows one to compute distances between samples that
do not belong to the same metric space. For demonstration purposes, we sample
two Gaussian distributions in 2- and 3-dimensional spaces.
End of explanation
"""
fig = pl.figure()
ax1 = fig.add_subplot(121)
ax1.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')
ax2 = fig.add_subplot(122, projection='3d')
ax2.scatter(xt[:, 0], xt[:, 1], xt[:, 2], color='r')
pl.show()
"""
Explanation: Plotting the distributions
End of explanation
"""
C1 = sp.spatial.distance.cdist(xs, xs)
C2 = sp.spatial.distance.cdist(xt, xt)
C1 /= C1.max()
C2 /= C2.max()
pl.figure()
pl.subplot(121)
pl.imshow(C1)
pl.subplot(122)
pl.imshow(C2)
pl.show()
"""
Explanation: Compute distance kernels, normalize them and then display
End of explanation
"""
p = ot.unif(n_samples)
q = ot.unif(n_samples)
gw0, log0 = ot.gromov.gromov_wasserstein(
C1, C2, p, q, 'square_loss', verbose=True, log=True)
gw, log = ot.gromov.entropic_gromov_wasserstein(
C1, C2, p, q, 'square_loss', epsilon=5e-4, log=True, verbose=True)
print('Gromov-Wasserstein distances: ' + str(log0['gw_dist']))
print('Entropic Gromov-Wasserstein distances: ' + str(log['gw_dist']))
pl.figure(1, (10, 5))
pl.subplot(1, 2, 1)
pl.imshow(gw0, cmap='jet')
pl.title('Gromov Wasserstein')
pl.subplot(1, 2, 2)
pl.imshow(gw, cmap='jet')
pl.title('Entropic Gromov Wasserstein')
pl.show()
"""
Explanation: Compute Gromov-Wasserstein plans and distance
End of explanation
"""
|
csdms/pymt
|
notebooks/frost_number.ipynb
|
mit
|
# Import standard Python modules
import numpy as np
import pandas
import matplotlib.pyplot as plt
# Import the FrostNumber PyMT model
import pymt.models
frost_number = pymt.models.FrostNumber()
"""
Explanation: Frost Number Model
Link to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/frost_number.ipynb
Install command:
$ conda install notebook pymt_permamodel
Download a local copy of the notebook:
$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/notebooks/frost_number.ipynb
Start a Jupyter Notebook session in the current directory:
$ jupyter notebook
Introduction to Permafrost Processes - Lesson 1
This lab has been designed and developed by Irina Overeem and Mark Piper, CSDMS, University of Colorado, CO
with assistance of Kang Wang, Scott Stewart at CSDMS, University of Colorado, CO, and Elchin Jafarov, at Los Alamos National Labs, NM.
These labs are developed with support from NSF Grant 1503559, ‘Towards a Tiered Permafrost Modeling Cyberinfrastructure’
Classroom organization
This lab is the first in a series of introduction to permafrost process modeling, designed for inexperienced users. In this first lesson, we explore the Air Frost Number model and learn to use the CSDMS Python Modeling Toolkit (PyMT). We implemented a basic configuration of the Air Frost Number (as formulated by Nelson and Outcalt in 1987). This series of labs is designed for inexperienced modelers to gain some experience with running a numerical model, changing model inputs, and analyzing model output. Specifically, this first lab looks at what controls permafrost occurrence and compares the occurrence of permafrost in Russia.
Basic theory on the Air Frost Number is presented in Frost Number Model Lecture 1.
This lab will likely take ~1.5 hours to complete in the classroom. This time assumes you are unfamiliar with PyMT and need to learn how to set parameters, save runs, download data, and look at output (otherwise it will be much faster).
We will use NetCDF files for output; this is a standard output format for CSDMS models. If you have no experience with visualizing these files, the Panoply software will be helpful. Find instructions on how to use this software.
Learning objectives
Skills
familiarize with a basic configuration of the Air Frost Number Model
hands-on experience with visualizing NetCDF output with Panoply.
Topical learning objectives:
what is the primary control on the occurrence of permafrost
freezing and thawing day indices and how to approximate these
where in Russia permafrost occurs
References and More information
Nelson, F.E., Outcalt, S.I., 1987. A computational method for prediction and regionalization of permafrost. Arct. Alp. Res. 19, 279–288.
Janke, J., Williams, M., Evans, A., 2012. A comparison of permafrost prediction models along a section of Trail Ridge Road, RMNP, CO. Geomorphology 138, 111-120.
The Air Frost number
The Air Frost number uses the mean annual air temperature of a location (MAAT), as well as the yearly temperature amplitude. In the Air Frost parametrization the mean monthly temperatures of the warmest month (Tw) and coldest month (Tc) set that amplitude. The 'degree thawing days' are above 0 C, the 'degree freezing days' are below 0 C. To arrive at the cumulative freezing degree days and thawing degree days, the annual temperature curve is approximated by a cosine as defined by the warmest and coldest months, and one can integrate under the cosine curve (see figure, and more detailed notes in the associated presentation).
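As a rough numerical sketch of that integration (an illustration of the idea only, not the exact Nelson and Outcalt formulation the PyMT component implements), one can build the cosine curve from Tw and Tc, sum the degree days above and below 0 C, and combine them into an air frost number:
```python
# Sketch: cosine annual temperature curve -> degree days -> air frost number.
import numpy as np

def air_frost_number_sketch(t_cold, t_warm, days=365):
    maat = (t_warm + t_cold) / 2.0              # mean annual air temperature
    amplitude = (t_warm - t_cold) / 2.0         # yearly temperature amplitude
    temps = maat + amplitude * np.cos(2 * np.pi * np.arange(days) / days)
    ddt = temps[temps > 0].sum()                # thawing degree days
    ddf = -temps[temps < 0].sum()               # freezing degree days
    return np.sqrt(ddf) / (np.sqrt(ddf) + np.sqrt(ddt))

print(air_frost_number_sketch(-13.0, 19.5))     # Vladivostok-like regime
print(air_frost_number_sketch(-40.9, 19.5))     # Yakutsk-like regime
```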
End of explanation
"""
config_file, config_folder = frost_number.setup(T_air_min=-13., T_air_max=19.5)
frost_number.initialize(config_file, config_folder)
frost_number.update()
frost_number.output_var_names
frost_number.get_value('frostnumber__air')
"""
Explanation: Part 1
Adapt the base case configuration to a mean temperature of the coldest month of -13C, and of the warmest month +19.5C (the actual values for Vladivostok in Far East Russia).
End of explanation
"""
args = frost_number.setup(T_air_min=-40.9, T_air_max=19.5)
frost_number.initialize(*args)
frost_number.update()
frost_number.get_value('frostnumber__air')
"""
Explanation: Part 2
Now run the same simulation for Yakutsk on the Lena River in Siberia. There the warmest month is again 19.5C, but the coldest month is -40.9C.
End of explanation
"""
data = pandas.read_csv("https://raw.githubusercontent.com/mcflugen/pymt_frost_number/master/data/t_air_min_max.csv")
data
frost_number = pymt.models.FrostNumber()
config_file, run_folder = frost_number.setup()
frost_number.initialize(config_file, run_folder)
t_air_min = data["atmosphere_bottom_air__time_min_of_temperature"]
t_air_max = data["atmosphere_bottom_air__time_max_of_temperature"]
fn = np.empty(6)
for i in range(6):
frost_number.set_value("atmosphere_bottom_air__time_min_of_temperature", t_air_min.values[i])
frost_number.set_value("atmosphere_bottom_air__time_max_of_temperature", t_air_max.values[i])
frost_number.update()
fn[i] = frost_number.get_value('frostnumber__air')
years = range(2000, 2006)
plt.subplot(211)
plt.plot(years, t_air_min, years, t_air_max)
plt.subplot(212)
plt.plot(years, fn)
"""
Explanation: Questions
Please answer the following questions in each box (double click the box to edit).
Q1: What is the Frost Number the model returned for each of the Vladivostok and Yakutsk temperature regimes?
A1: the answer in here.
Q2: What do these specific Frost numbers imply for the likelihood of permafrost occurrence?
A2:
Q3: How do you think the annual temperature distribution would look in regions of Russia bordering the Barents Sea?
A3:
Q4: Devise a scenario and run it; was the calculated Frost number what you expected?
A4:
Q5: On the map below, find how permafrost is mapped in far west coastal Russia at high latitude (e.g. Murmansk).
A5:
Q6: Discuss the factors that would make this first-order approach problematic?
A6:
Q7: When would the temperature in the first cm in the soil be significantly different from the air temperature?
A7:
Extra Credit
Now run a time series.
End of explanation
"""
|
manipopopo/tensorflow
|
tensorflow/contrib/autograph/examples/notebooks/rnn_keras_estimator.ipynb
|
apache-2.0
|
def parse(line):
"""Parses a line from the colors dataset."""
items = tf.string_split([line], ",").values
rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
color_name = items[0]
chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)
length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)
return rgb, chars, length
def set_static_batch_shape(batch_size):
def apply(rgb, chars, length):
rgb.set_shape((batch_size, None))
chars.set_shape((batch_size, None, 256))
length.set_shape((batch_size,))
return rgb, chars, length
return apply
def load_dataset(data_dir, url, batch_size, training=True):
"""Loads the colors data at path into a tf.PaddedDataset."""
path = tf.keras.utils.get_file(os.path.basename(url), url, cache_dir=data_dir)
dataset = tf.data.TextLineDataset(path)
dataset = dataset.skip(1)
dataset = dataset.map(parse)
dataset = dataset.cache()
dataset = dataset.repeat()
if training:
dataset = dataset.shuffle(buffer_size=3000)
dataset = dataset.padded_batch(
batch_size, padded_shapes=((None,), (None, 256), ()))
# To simplify the model code, we statically set as many of the shapes that we
# know.
dataset = dataset.map(set_static_batch_shape(batch_size))
return dataset
"""
Explanation: Case study: training a custom RNN, using Keras and Estimators
In this section, we show how you can use AutoGraph to build RNNColorbot, an RNN that takes as input names of colors and predicts their corresponding RGB tuples. The model will be trained by a custom Estimator.
To get started, set up the dataset. The following cells define functions that download and format the data needed for RNNColorbot; the details aren't important (read them in the privacy of your own home if you so wish), but make sure to run the cells before proceeding.
End of explanation
"""
@autograph.convert()
class RnnColorbot(tf.keras.Model):
"""RNN Colorbot model."""
def __init__(self):
super(RnnColorbot, self).__init__()
self.lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
self.upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
self.relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
def _rnn_layer(self, chars, cell, batch_size, training):
"""A single RNN layer.
Args:
chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
cell: An object of type tf.contrib.rnn.LSTMBlockCell
batch_size: Int, the batch size to use
training: Boolean, whether the layer is used for training
Returns:
A Tensor of shape (max_sequence_length, batch_size, output_size).
"""
hidden_outputs = tf.TensorArray(tf.float32, 0, True)
state, output = cell.zero_state(batch_size, tf.float32)
for ch in chars:
cell_output, (state, output) = cell.call(ch, (state, output))
hidden_outputs.append(cell_output)
hidden_outputs = autograph.stack(hidden_outputs)
if training:
hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
return hidden_outputs
def build(self, _):
"""Creates the model variables. See keras.Model.build()."""
self.lower_cell.build(tf.TensorShape((None, 256)))
self.upper_cell.build(tf.TensorShape((None, 256)))
self.relu_layer.build(tf.TensorShape((None, 128)))
self.built = True
def call(self, inputs, training=False):
"""The RNN model code. Uses Eager.
The model consists of two RNN layers (made by lower_cell and upper_cell),
followed by a fully connected layer with ReLU activation.
Args:
inputs: A tuple (chars, length)
training: Boolean, whether the layer is used for training
Returns:
A Tensor of shape (batch_size, 3) - the model predictions.
"""
chars, length = inputs
batch_size = chars.shape[0]
seq = tf.transpose(chars, (1, 0, 2))
seq = self._rnn_layer(seq, self.lower_cell, batch_size, training)
seq = self._rnn_layer(seq, self.upper_cell, batch_size, training)
# Grab just the end-of-sequence from each output.
indices = (length - 1, range(batch_size))
indices = tf.stack(indices, 1)
sequence_ends = tf.gather_nd(seq, indices)
return self.relu_layer(sequence_ends)
@autograph.convert()
def loss_fn(labels, predictions):
return tf.reduce_mean((predictions - labels) ** 2)
"""
Explanation: To show the use of control flow, we write the RNN loop by hand, rather than using a pre-built RNN model.
Note how we write the model code in Eager style, with regular if and while statements. Then, we annotate the functions with @autograph.convert to have them automatically compiled to run in graph mode.
We use Keras to define the model, and we will train it using Estimators.
End of explanation
"""
def model_fn(features, labels, mode, params):
"""Estimator model function."""
chars = features['chars']
sequence_length = features['sequence_length']
inputs = (chars, sequence_length)
# Create the model. Simply using the AutoGraph-ed class just works!
colorbot = RnnColorbot()
colorbot.build(None)
if mode == tf.estimator.ModeKeys.TRAIN:
predictions = colorbot(inputs, training=True)
loss = loss_fn(labels, predictions)
learning_rate = params['learning_rate']
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
global_step = tf.train.get_global_step()
train_op = optimizer.minimize(loss, global_step=global_step)
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
predictions = colorbot(inputs)
loss = loss_fn(labels, predictions)
return tf.estimator.EstimatorSpec(mode, loss=loss)
elif mode == tf.estimator.ModeKeys.PREDICT:
predictions = colorbot(inputs)
predictions = tf.minimum(predictions, 1.0)
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
"""
Explanation: We will now create the model function for the custom Estimator.
In the model function, we simply use the model class we defined above - that's it!
End of explanation
"""
def input_fn(data_dir, data_url, params, training=True):
"""An input function for training"""
batch_size = params['batch_size']
# load_dataset defined above
dataset = load_dataset(data_dir, data_url, batch_size, training=training)
# Package the pipeline end in a format suitable for the estimator.
labels, chars, sequence_length = dataset.make_one_shot_iterator().get_next()
features = {
'chars': chars,
'sequence_length': sequence_length
}
return features, labels
"""
Explanation: We'll create an input function that will feed our training and eval data.
End of explanation
"""
params = {
'batch_size': 64,
'learning_rate': 0.01,
}
train_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/train.csv"
test_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/test.csv"
data_dir = "tmp/rnn/data"
regressor = tf.estimator.Estimator(
model_fn=model_fn,
params=params)
regressor.train(
input_fn=lambda: input_fn(data_dir, train_url, params),
steps=100)
eval_results = regressor.evaluate(
input_fn=lambda: input_fn(data_dir, test_url, params, training=False),
steps=2
)
print('Eval loss at step %d: %s' % (eval_results['global_step'], eval_results['loss']))
"""
Explanation: We now have everything in place to build our custom estimator and use it for training and eval!
End of explanation
"""
def predict_input_fn(color_name):
"""An input function for prediction."""
_, chars, sequence_length = parse(color_name)
# We create a batch of a single element.
features = {
'chars': tf.expand_dims(chars, 0),
'sequence_length': tf.expand_dims(sequence_length, 0)
}
return features, None
def draw_prediction(color_name, pred):
pred = pred * 255
pred = pred.astype(np.uint8)
plt.axis('off')
plt.imshow(pred)
plt.title(color_name)
plt.show()
def predict_with_estimator(color_name, regressor):
predictions = regressor.predict(
input_fn=lambda:predict_input_fn(color_name))
pred = next(predictions)
predictions.close()
pred = np.minimum(pred, 1.0)
pred = np.expand_dims(np.expand_dims(pred, 0), 0)
draw_prediction(color_name, pred)
tb = widgets.TabBar(["RNN Colorbot"])
while True:
with tb.output_to(0):
try:
color_name = six.moves.input("Give me a color name (or press 'enter' to exit): ")
except (EOFError, KeyboardInterrupt):
break
if not color_name:
break
with tb.output_to(0):
tb.clear_tab()
predict_with_estimator(color_name, regressor)
"""
Explanation: And here's the same estimator used for inference.
End of explanation
"""
|
aitatanit/metatlas
|
4notebooks/old/examplenotebooks/Specify information about the experiment methods samples and files.ipynb
|
bsd-3-clause
|
myExperiment = metatlas_objects.Experiment(name = 'QExactive_Hilic_Pos_Actinobacteria_Phylogeny')
"""
Explanation: <h1>Create an experiment</h1>
End of explanation
"""
myPath = '/global/homes/b/bpb/ExoMetabolomic_Example_Data/'
myPath = '/project/projectdirs/metatlas/data_for_metatlas_2/20150324_LPSilva_BHedlund_chloroflexi_POS_rerun/'
myFiles = glob.glob('%s*.mzML'%myPath)
myFiles.sort()
groupID = []
for f in myFiles:
groupID.append('')
i = 0
while i < len(myFiles):
a,b = os.path.split(myFiles[i])
j = raw_input('enter group id for %s [number, "x" to go back]:'%b)
if j == 'x':
i = i - 1
else:
groupID[i] = j
i = i + 1
print groupID
uGroupID = sorted(set(groupID))
print uGroupID
"""
Explanation: <h1>Get a list of mzML files that you uploaded and assign them to a group</h1>
End of explanation
"""
uGroupName = []
for u in uGroupID:
j = raw_input('enter group name for Group #%s: '%u)
uGroupName.append(j)
"""
Explanation: <h1>Specify the descriptive names for each group</h1>
End of explanation
"""
fsList = []
for i,g in enumerate(groupID):
for j,u in enumerate(uGroupID):
if g == u:
fs = metatlas_objects.FileSpec(polarity = 1,
group = uGroupName[j],
inclusion_order = i)
fsList.append(fs)
myExperiment.load_files([myFiles[i]],fs)
myExperiment.save()
print myExperiment.finfos[0].hdf_file
print myExperiment.finfos[0].group
print myExperiment.finfos[0].polarity
"""
Explanation: <h1>Steps in the file description and conversion process</h1>
<ul>
<li>upload mzml files</li>
<li>glob to get list of mzml files</li>
<li>for a homogeneous set of mzML files, make a single FileSpec object with metatlas_objects.FileSpec(polarity=..., group=..., inclusion_order=...)</li>
<li>Call an experiment, e = metatlas_objects.Experiment(name = 'Test_20150722')</li>
<li>e.load_files(mzmlfiles,sp)</li>
<li>repeat this process for each homogeneous set of files</li>
<li>Alternatively, you can specify your own FileSpec object for each file</li>
</ul>
End of explanation
"""
# myH5Files = []
# for f in myFiles:
# metatlas.mzml_to_hdf('%s'%(f))
# myH5Files.append(f.replace('.mzML','.h5'))
# print f
print len(myExperiment.finfos)
"""
Explanation: <h1>Convert All Your Files Manually</h1>
<h3>This is typically not performed because the "load_files" command above has already taken care of it</h3>
End of explanation
"""
|
NathanYee/ThinkBayes2
|
code/chap09.ipynb
|
gpl-2.0
|
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalBinomialPmf
import thinkplot
"""
Explanation: Think Bayes: Chapter 9
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head()
"""
Explanation: Improving Reading Ability
From DASL(http://lib.stat.cmu.edu/DASL/Stories/ImprovingReadingAbility.html)
An educator conducted an experiment to test whether new directed reading activities in the classroom will help elementary school pupils improve some aspects of their reading ability. She arranged for a third grade class of 21 students to follow these activities for an 8-week period. A control classroom of 23 third graders followed the same curriculum without the activities. At the end of the 8 weeks, all students took a Degree of Reading Power (DRP) test, which measures the aspects of reading ability that the treatment is designed to improve.
Summary statistics on the two groups of children show that the average score of the treatment class was almost ten points higher than the average of the control class. A two-sample t-test is appropriate for testing whether this difference is statistically significant. The t-statistic is 2.31, which is significant at the .05 level.
I'll use Pandas to load the data into a DataFrame.
End of explanation
"""
grouped = df.groupby('Treatment')
for name, group in grouped:
print(name, group.Response.mean())
"""
Explanation: And use groupby to compute the means for the two groups.
End of explanation
"""
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: sequence of test scores
hypo: mu, sigma
"""
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
"""
Explanation: The Normal class provides a Likelihood function that computes the likelihood of a sample from a normal distribution.
End of explanation
"""
mus = np.linspace(20, 80, 101)
sigmas = np.linspace(5, 30, 101)
"""
Explanation: The prior distributions for mu and sigma are uniform.
End of explanation
"""
from itertools import product
control = Normal(product(mus, sigmas))
data = df[df.Treatment=='Control'].Response
control.Update(data)
"""
Explanation: I use itertools.product to enumerate all pairs of mu and sigma.
End of explanation
"""
thinkplot.Contour(control, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
"""
Explanation: After the update, we can plot the probability of each mu-sigma pair as a contour plot.
End of explanation
"""
pmf_mu0 = control.Marginal(0)
thinkplot.Pdf(pmf_mu0)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
"""
Explanation: And then we can extract the marginal distribution of mu
End of explanation
"""
pmf_sigma0 = control.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
"""
Explanation: And the marginal distribution of sigma
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# It looks like there is a high probability that the mean of
# the treatment group is higher, and the most likely size of
# the effect is 9-10 points.
# It looks like the variance of the treated group is substantially
# smaller, which suggests that the treatment might be helping
# low scorers more than high scorers.
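# A sketch of one possible approach to the exercise described in the next cell
# (not the book's official solution), assuming the other group is labelled
# 'Treated' in the data file.
treated = Normal(product(mus, sigmas))
treated.Update(df[df.Treatment=='Treated'].Response)
pmf_mu1 = treated.Marginal(0)
pmf_sigma1 = treated.Marginal(1)
# Probability that the treated mean (or sigma) exceeds the control value,
# computed by enumerating pairs of values from the two marginal Pmfs.
p_mu = sum(p1 * p0
           for m1, p1 in pmf_mu1.Items()
           for m0, p0 in pmf_mu0.Items()
           if m1 > m0)
p_sigma = sum(p1 * p0
              for s1, p1 in pmf_sigma1.Items()
              for s0, p0 in pmf_sigma0.Items()
              if s1 > s0)
print('P(mu_treated > mu_control) =', p_mu)
print('P(sigma_treated > sigma_control) =', p_sigma)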
"""
Explanation: Exercise: Run this analysis again for the treated group. What is the distribution of the difference between the groups? What is the probability that the average "reading power" for the treatment group is higher? What is the probability that the variance of the treatment group is higher?
End of explanation
"""
class Paintball(Suite, Joint):
"""Represents hypotheses about the location of an opponent."""
def __init__(self, alphas, betas, locations):
"""Makes a joint suite of parameters alpha and beta.
Enumerates all pairs of alpha and beta.
Stores locations for use in Likelihood.
alphas: possible values for alpha
betas: possible values for beta
locations: possible locations along the wall
"""
self.locations = locations
pairs = [(alpha, beta)
for alpha in alphas
for beta in betas]
Suite.__init__(self, pairs)
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
hypo: pair of alpha, beta
data: location of a hit
Returns: float likelihood
"""
alpha, beta = hypo
x = data
pmf = MakeLocationPmf(alpha, beta, self.locations)
like = pmf.Prob(x)
return like
def MakeLocationPmf(alpha, beta, locations):
"""Computes the Pmf of the locations, given alpha and beta.
Given that the shooter is at coordinates (alpha, beta),
the probability of hitting any spot is inversely proportionate
to the strafe speed.
alpha: x position
beta: y position
locations: x locations where the pmf is evaluated
Returns: Pmf object
"""
pmf = Pmf()
for x in locations:
prob = 1.0 / StrafingSpeed(alpha, beta, x)
pmf.Set(x, prob)
pmf.Normalize()
return pmf
def StrafingSpeed(alpha, beta, x):
"""Computes strafing speed, given location of shooter and impact.
alpha: x location of shooter
beta: y location of shooter
x: location of impact
Returns: derivative of x with respect to theta
"""
theta = math.atan2(x - alpha, beta)
speed = beta / math.cos(theta)**2
return speed
"""
Explanation: Paintball
Suppose you are playing paintball in an indoor arena 30 feet
wide and 50 feet long. You are standing near one of the 30 foot
walls, and you suspect that one of your opponents has taken cover
nearby. Along the wall, you see several paint spatters, all the same
color, that you think your opponent fired recently.
The spatters are at 15, 16, 18, and 21 feet, measured from the
lower-left corner of the room. Based on these data, where do you
think your opponent is hiding?
Here's the Suite that does the update. It uses MakeLocationPmf,
defined below.
End of explanation
"""
alphas = range(0, 31)
betas = range(1, 51)
locations = range(0, 31)
suite = Paintball(alphas, betas, locations)
suite.UpdateSet([15, 16, 18, 21])
"""
Explanation: The prior probabilities for alpha and beta are uniform.
End of explanation
"""
locations = range(0, 31)
alpha = 10
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
pmf = MakeLocationPmf(alpha, beta, locations)
pmf.label = 'beta = %d' % beta
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
"""
Explanation: Before looking at the posterior, here is the Pmf of hit location for a shooter at alpha = 10, for a few values of beta. The farther the shooter is from the wall, the more spread out the hit locations are along it.
End of explanation
"""
marginal_alpha = suite.Marginal(0, label='alpha')
marginal_beta = suite.Marginal(1, label='beta')
print('alpha CI', marginal_alpha.CredibleInterval(50))
print('beta CI', marginal_beta.CredibleInterval(50))
thinkplot.PrePlot(num=2)
thinkplot.Cdf(Cdf(marginal_alpha))
thinkplot.Cdf(Cdf(marginal_beta))
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
"""
Explanation: Here are the marginal posterior distributions for alpha and beta.
End of explanation
"""
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
cond = suite.Conditional(0, 1, beta)
cond.label = 'beta = %d' % beta
thinkplot.Pdf(cond)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
"""
Explanation: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
End of explanation
"""
thinkplot.Contour(suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
"""
Explanation: Another way to visualize the posterior distribution: a pseudocolor plot of probability as a function of alpha and beta.
End of explanation
"""
d = dict((pair, 0) for pair in suite.Values())
percentages = [75, 50, 25]
for p in percentages:
interval = suite.MaxLikeInterval(p)
for pair in interval:
d[pair] += 1
thinkplot.Contour(d, contour=False, pcolor=True)
thinkplot.Text(17, 4, '25', color='white')
thinkplot.Text(17, 15, '50', color='white')
thinkplot.Text(17, 30, '75')
thinkplot.Config(xlabel='alpha',
ylabel='beta',
legend=False)
"""
Explanation: Here's another visualization that shows posterior credible regions.
End of explanation
"""
def shared_bugs(p1, p2, bugs):
k1 = np.random.random(bugs) < p1
k2 = np.random.random(bugs) < p2
return np.sum(k1 & k2)
p1 = .20
p2 = .15
bugs = 100
bug_pmf = Pmf()
for trial in range(1000):
bug_pmf[shared_bugs(p1, p2, bugs)] += 1
bug_pmf.Normalize()
bug_pmf.Print()
thinkplot.Hist(bug_pmf)
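# Quick check of the Lincoln Index point estimate (k1 * k2 / c) quoted in the
# next cell.
k1, k2, c = 20, 15, 3
print('Lincoln Index estimate of total bugs:', k1 * k2 / c)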
"""
Explanation: Exercise: From John D. Cook
"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There’s no way to know with one tester. But if you have two testers, you can get a good idea, even if you don’t know how skilled the testers are.
Suppose two testers independently search for bugs. Let k1 be the number of errors the first tester finds and k2 the number of errors the second tester finds. Let c be the number of errors both testers find. The Lincoln Index estimates the total number of errors as k1 k2 / c [I changed his notation to be consistent with mine]."
So if the first tester finds 20 bugs, the second finds 15, and they find 3 in common, we estimate that there are about 100 bugs. What is the Bayesian estimate of the number of errors based on this data?
End of explanation
"""
from scipy import special
class bugFinder(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: (k1, k2, c)
hypo: (n, p1, p2)
"""
n = hypo[0]
p1 = hypo[1]
p2 = hypo[2]
k1 = data[0]
k2 = data[1]
c = data[2]
like1 = EvalBinomialPmf(k1, n, p1)
like2 = EvalBinomialPmf(k2, n, p2)
return like1 * like2
p1 = np.linspace(0, 1, 40)
p2 = np.linspace(0, 1, 40)
n = np.linspace(32, 300, 40)
hypos = []
for p1_ in p1:
for p2_ in p2:
for n_ in n:
hypos.append((n_, p1_, p2_))
bug_finder_suite = bugFinder(hypos)
bug_finder_suite.Update([20, 15, 3])
thinkplot.Contour(bug_finder_suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
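# To read off an estimate of the number of bugs from the suite above, one way is
# to look at the marginal posterior of n (the first element of each hypothesis
# tuple). A sketch:
pmf_n = bug_finder_suite.Marginal(0)
thinkplot.Pdf(pmf_n)
print('Posterior mean of n:', pmf_n.Mean())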
"""
Explanation: Now the Bayesian version: we define a Suite over hypotheses (n, p1, p2) and update it with the observed counts (k1, k2, c).
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
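# A sketch of one possible approach to part 1 of the GPS exercise described in
# the next cell (not an official solution): treat the true position (x, y) as the
# hypothesis and a GPS reading as data with independent Gaussian errors of
# sigma = 30 m, reusing the Suite/Joint machinery from above. Here east is +x and
# north is +y, an arbitrary sign convention.
class Gps(Suite, Joint):
    def Likelihood(self, data, hypo):
        mx, my = data   # measured easting, northing
        x, y = hypo     # hypothetical true position
        return norm.pdf(mx, x, 30) * norm.pdf(my, y, 30)
coords = np.linspace(-100, 100, 101)
gps = Gps(product(coords, coords))
gps.Update((-15, 51))   # 51 m north and 15 m west of the reference point
thinkplot.Contour(gps, pcolor=True)
thinkplot.Config(xlabel='x (m)', ylabel='y (m)')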
"""
Explanation: Exercise: The GPS problem. According to Wikipedia

GPS included a (currently disabled) feature called Selective Availability (SA) that adds intentional, time varying errors of up to 100 meters (328 ft) to the publicly available navigation signals. This was intended to deny an enemy the use of civilian GPS receivers for precision weapon guidance.
[...]
Before it was turned off on May 2, 2000, typical SA errors were about 50 m (164 ft) horizontally and about 100 m (328 ft) vertically.[10] Because SA affects every GPS receiver in a given area almost equally, a fixed station with an accurately known position can measure the SA error values and transmit them to the local GPS receivers so they may correct their position fixes. This is called Differential GPS or DGPS. DGPS also corrects for several other important sources of GPS errors, particularly ionospheric delay, so it continues to be widely used even though SA has been turned off. The ineffectiveness of SA in the face of widely available DGPS was a common argument for turning off SA, and this was finally done by order of President Clinton in 2000.
Suppose it is 1 May 2000, and you are standing in a field that is 200m square. You are holding a GPS unit that indicates that your location is 51m north and 15m west of a known reference point in the middle of the field.
However, you know that each of these coordinates has been perturbed by a "feature" that adds random errors with mean 0 and standard deviation 30m.
1) After taking one measurement, what should you believe about your position?
Note: Since the intentional errors are independent, you could solve this problem independently for X and Y. But we'll treat it as a two-dimensional problem, partly for practice and partly to see how we could extend the solution to handle dependent errors.
You can start with the code in gps.py.
2) Suppose that after one second the GPS updates your position and reports coordinates (48, 90). What should you believe now?
3) Suppose you take 8 more measurements and get:
(11.903060613102866, 19.79168669735705)
(77.10743601503178, 39.87062906535289)
(80.16596823095534, -12.797927542984425)
(67.38157493119053, 83.52841028148538)
(89.43965206875271, 20.52141889230797)
(58.794021026248245, 30.23054016065644)
(2.5844401241265302, 51.012041625783766)
(45.58108994142448, 3.5718287379754585)
At this point, how certain are you about your location?
End of explanation
"""
import pandas as pd
df = pd.read_csv('flea_beetles.csv', delimiter='\t')
df.head()
# Solution goes here
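# A first step toward the suggestions in the exercise below (a sketch, not a full
# solution), assuming the columns are named Width, Angle and Species as in the
# variable list.
grouped = df.groupby('Species')
print(grouped.Width.mean())
print(grouped.Width.std())
print(grouped.Angle.mean())
print(grouped.Angle.std())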
"""
Explanation: Exercise: The Flea Beetle problem from DASL
Datafile Name: Flea Beetles
Datafile Subjects: Biology
Story Names: Flea Beetles
Reference: Lubischew, A.A. (1962) On the use of discriminant functions in taxonomy. Biometrics, 18, 455-477. Also found in: Hand, D.J., et al. (1994) A Handbook of Small Data Sets, London: Chapman & Hall, 254-255.
Authorization: Contact Authors
Description: Data were collected on the genus of flea beetle Chaetocnema, which contains three species: concinna (Con), heikertingeri (Hei), and heptapotamica (Hep). Measurements were made on the width and angle of the aedeagus of each beetle. The goal of the original study was to form a classification rule to distinguish the three species.
Number of cases: 74
Variable Names:
Width: The maximal width of the aedeagus in the forepart (in microns)
Angle: The front angle of the aedeagus (1 unit = 7.5 degrees)
Species: Species of flea beetle from the genus Chaetocnema
Suggestions:
Plot CDFs for the width and angle data, broken down by species, to get a visual sense of whether the normal distribution is a good model.
Use the data to estimate the mean and standard deviation for each variable, broken down by species.
Given a joint posterior distribution for mu and sigma, what is the likelihood of a given datum?
Write a function that takes a measured width and angle and returns a posterior PMF of species.
Use the function to classify each of the specimens in the table and see how many you get right.
End of explanation
"""
|
agile-geoscience/striplog
|
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
|
apache-2.0
|
data = """Comp Formation,Depth
A,100
B,200
C,250
D,400
E,600"""
"""
Explanation: Extract curves into striplogs
Sometimes you'd like to summarize or otherwise extract curve data (e.g. wireline log data) into a striplog (e.g. one that represents formations).
We'll start by making some fake CSV text — we'll make 5 formations called A, B, C, D and E:
End of explanation
"""
from striplog import Striplog
s = Striplog.from_csv(text=data, stop=650)
"""
Explanation: If you have a CSV file, you can do:
s = Striplog.from_csv(filename=filename)
But we have text, so we do something slightly different, passing the text argument instead. We also pass a stop argument to tell Striplog to make the last unit (E) 50 m thick. (If you don't do this, it will be 1 m thick).
End of explanation
"""
s[0]
"""
Explanation: Each element of the striplog is an Interval object, which has a top, base and one or more Components, which represent whatever is in the interval (maybe a rock type, or in this case a formation). There is also a data field, which we will use later.
End of explanation
"""
s.plot(aspect=3)
"""
Explanation: We can plot the striplog. By default, it will use a random legend for the colours:
End of explanation
"""
s.plot(style='tops', field='formation', aspect=1)
"""
Explanation: Or we can plot in the 'tops' style:
End of explanation
"""
from welly import Curve
import numpy as np
depth = np.linspace(0, 699, 700)
data = np.sin(depth/10)
curve = Curve(data=data, index=depth)
"""
Explanation: Random curve data
Make some fake data:
End of explanation
"""
import matplotlib.pyplot as plt
fig, axs = plt.subplots(ncols=2, sharey=True)
axs[0] = s.plot(ax=axs[0])
axs[1] = curve.plot(ax=axs[1])
"""
Explanation: Plot it:
End of explanation
"""
s = s.extract(curve.values, basis=depth, name='GR')
"""
Explanation: Extract data from the curve into the striplog
End of explanation
"""
s[1]
"""
Explanation: Now we have the GR data from each unit stored in that unit:
End of explanation
"""
plt.plot(s[1].data['GR'])
"""
Explanation: So we could plot a segment of curve, say:
End of explanation
"""
s = s.extract(curve, basis=depth, name='GRmean', function=np.nanmean)
s[1]
"""
Explanation: Extract and reduce data
We don't have to store all the data points. We can optionally pass a function to produce anything we like, and store the result of that:
End of explanation
"""
s[1].data['foo'] = 'bar'
s[1]
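# A runnable version of the custom reducing function (trim_mean) discussed in the
# next cell, using the `depth` basis defined earlier in this notebook.
def trim_mean(a):
    # Compute a trimmed mean, dropping the min and max values.
    return (np.nansum(a) - np.nanmin(a) - np.nanmax(a)) / a.size
s = s.extract(curve, basis=depth, name='GRtrim', function=trim_mean)
s[1]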
"""
Explanation: Other helpful reducing functions:
np.nanmedian — median average (ignoring nans)
np.product — product
np.nansum — sum (ignoring nans)
np.nanmin — minimum (ignoring nans)
np.nanmax — maximum (ignoring nans)
scipy.stats.mstats.mode — mode average
scipy.stats.mstats.hmean — harmonic mean
scipy.stats.mstats.gmean — geometric mean
Or you can write your own, for example:
def trim_mean(a):
"""Compute trimmed mean, trimming min and max"""
return (np.nansum(a) - np.nanmin(a) - np.nanmax(a)) / a.size
Then do:
s.extract(curve, basis=depth, name='GRtrim', function=trim_mean)
The function doesn't have to return a single number like this, it could return anything you like, including a dictionary.
We can also add bits to the data dictionary manually:
End of explanation
"""
|
ubcgif/gpgTutorials
|
notebooks/mag/MagneticDipoleApplet.ipynb
|
mit
|
from geoscilabs.mag.MagDipoleApp import MagneticDipoleApp
"""
Explanation: This is the <a href="https://jupyter.org/">Jupyter Notebook</a>, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it.
To use the notebook:
- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)
- You can alter variables and re-run cells
- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook)
End of explanation
"""
mag = MagneticDipoleApp()
mag.interact_plot_model_dipole()
"""
Explanation: Magnetic Dipole Applet
Purpose
The objective is to learn about the magnetic field observed at the ground's surface, caused by a small buried dipolar magnet. In geophysics, this simulates the observed anomaly over a buried susceptible sphere that is magnetized by the Earth's magnetic field.
What is shown
<b>The colour map</b> shows the strength of the chosen parameter (Bt, Bx, By, Bz, or Bg) as a function of position.
Imagine doing a two-dimensional survey over a susceptible sphere that has been magnetized by the Earth's magnetic field, specified by inclination and declination. The "measurement" location is the centre of each coloured box. This is a simple (but easily programmable) alternative to generating a smooth contour map.
The anomaly depends upon magnetic latitude, direction of the inducing (Earth's) field, the depth of the buried dipole, and the magnetic moment of the buried dipole.
Important Notes:
<b>Inclination (I)</b> and <b>declination (D)</b> describe the orientation of the Earth's ambient field at the centre of the survey area. Positive inclination implies you are in the northern hemisphere, and positive declination implies that magnetic north is to the east of geographic north.
The <b>"length"</b> adjuster changes the size of the square survey area. The default of 72 means the survey square is 72 metres on a side.
The <b>"data spacing"</b> adjuster changes the distance between measurements. For example, "data spacing = 2" means measurements are acquired over the survey square on a 2-metre grid, so each coloured box is 2 m square.
The <b>"depth"</b> adjuster changes the depth (in metres) to the centre of the buried dipole.
The <b>"magnetic moment (M)"</b> adjuster changes the strength of the induced field. Units are Am2. This is related to the strength of the inducing field, the susceptibility of the buried sphere, and the volume of susceptible material.
<b>Bt, Bg, Bx, By, Bz</b> are Total field, X-component (positive northwards), Y-component (positive eastwards), and Z-component (positive down) of the anomaly field respectively.
Checking the <b>fixed scale</b> button fixes the colour scale so that the end points of the colour scale are minimum and maximum values for the current data set.
You can generate a <b>profile</b> along either "East" or "North" direction
Check <b>half width</b> to see the half width of the anomaly. The anomaly width is noted at the bottom of the graph.
Measurements are taken 1m above the surface.
For gradient data (<b>Bg</b>), measurements are taken at 1 m and 2 m above the surface.
Note that the magnetic moment (M) for a monopole is equal to the charge (Q).
End of explanation
"""
mag.interact_plot_model_two_monopole()
"""
Explanation: Two monopoles (pseudo dipole)
Different from the previous app, here we focus on a situation where we have two monopoles with negative and positive signs. Their horizontal locations (X, Y) are the same, but their depths are different. By default the depth of the negative pole, <b>depth${-Q}$</b>, is 0 m and that of the positive pole, <b>depth${+Q}$</b>, is 10 m.
End of explanation
"""
|
crocha700/pyspec
|
examples/example_2d_spectra.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
%matplotlib inline
import seawater as sw
from pyspec import spectrum as spec
"""
Explanation: pyspec example notebook: 2D spectrum
This notebook showcases basic usage of pyspec for computing a 2D spectrum and its associated isotropic spectrum. Other features, such as bin averaging in log space and confidence limit estimation, are also shown.
End of explanation
"""
fni = "data/synthetic_uv.npz"
uv_synthetic = np.load(fni)
up = uv_synthetic['up']
# We may also want to calculate the wavenumber spectrum of a 3d-array along two dimensions, and
# then average along the third dimension. Here we showcase that pyspec capability by repeating the
# up array...
up2 = np.tile(up,(10,1,1)).T
up2.shape
"""
Explanation: Load random data with $\kappa^{-3}$ spectrum
End of explanation
"""
spec2d10 = spec.TWODimensional_spec(up2,1.,1.)
spec2d = spec.TWODimensional_spec(up,1.,1.)
fig = plt.figure(figsize=(9,7))
ax = fig.add_subplot(111)
cf = ax.contourf(spec2d.kk1,spec2d.kk2,spec2d.spec.mean(axis=-1),np.logspace(-6,6,10),norm=LogNorm(vmin=1.e-6,vmax=1e6))
cb = plt.colorbar(cf)
ax.set_xlabel(r'$k_x$')
ax.set_ylabel(r'$k_y$')
cb.set_label(r'log$_{10}$ E')
fig = plt.figure(figsize=(9,7))
ax = fig.add_subplot(111)
cf = ax.contourf(spec2d.kk1,spec2d.kk2,spec2d10.spec.mean(axis=-1),np.logspace(-6,6,10),norm=LogNorm(vmin=1.e-6,vmax=1e6))
cb = plt.colorbar(cf)
ax.set_xlabel(r'$k_x$')
ax.set_ylabel(r'$k_y$')
cb.set_label(r'log$_{10}$ E')
"""
Explanation: Compute and plot the 2D spectrum using $dx = dy = 1$
End of explanation
"""
spec2d.ndim
k3 = np.array([.5e-2,.5])
E3 = 1/k3**3/1e5
fig = plt.figure(figsize=(9,7))
ax = fig.add_subplot(111)
plt.loglog(spec2d.ki,spec2d10.ispec.mean(axis=-1))
plt.loglog(k3,E3,'k--')
plt.text(1.e-2,50,r'$\kappa^{-3}$',fontsize=25)
ax.set_xlabel(r"Wavenumber")
ax.set_ylabel(r"Spectral density")
"""
Explanation: Calculating the isotropic spectrum
The class TWODimensional_spec has the attributes ispec for the isotropic spectrum and ki for the isotropic wavenumber. The isotropic spectrum is computed by interpolating the 2D spectrum from Cartesian to polar coordinates and integrating in the azimuthal direction; the integration is not very accurate at low wavenumbers due to the paucity of information. An important point is that we neglect the corners ($\kappa > \max(k_x, k_y)$), since in this square domain they preferentially select certain directions. Hence, we just need to plot it.
End of explanation
"""
ki, Ei = spec.avg_per_decade(spec2d.ki,spec2d.ispec,nbins = 10)
fig = plt.figure(figsize=(9,7))
ax = fig.add_subplot(111)
plt.loglog(spec2d.ki,spec2d.ispec,label='raw')
plt.loglog(ki,Ei,label='binned')
ax.set_xlabel(r"Wavenumber")
ax.set_ylabel(r"Spectral density")
plt.legend(loc=3)
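# A sketch (illustrative only) of fitting the spectral slope by least squares in
# log-log space, using the binned spectrum so that each decade carries similar
# weight, as motivated in the explanation below.
good = np.isfinite(Ei) & (ki > 0) & (Ei > 0)
slope, intercept = np.polyfit(np.log10(ki[good]), np.log10(Ei[good]), 1)
print('Fitted spectral slope: %.2f' % slope)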
"""
Explanation: Averaging with 10 bins per decade
Because we generally plot and analyze spectra in $\log_{10}\times\log_{10}$ space, it is sometimes useful to bin the spectrum so that it is uniformly spaced in log space. This helps avoid the bias from having many more data points at high wavenumbers when least-squares fitting slopes to the spectrum in log space. The module spec has a built-in function that does this spectral averaging. Here we use 10 bins per decade.
End of explanation
"""
sn = 5*np.ones(Ei.size) # number of spectral realizations
sn[10:16] = 20
sn[16:] = 100
El,Eu = spec.spec_error(Ei, sn, ci=0.95) # calculate lower and upper limit of confidence limit
fig = plt.figure(figsize=(9,7))
ax = fig.add_subplot(111)
ax.fill_between(ki,El,Eu, color='r', alpha=0.25)
plt.loglog(ki,Ei,color='r')
ax.set_xlabel(r"Wavenumber")
ax.set_ylabel(r"Spectral density")
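# For intuition only: the explanation below notes that spec_error assumes
# chi-squared distributed spectral estimates. The multiplicative confidence
# factors for an estimate with dof = 2 * sn degrees of freedom can be sketched as
# follows; this is the standard formula, not necessarily pyspec's internal
# implementation.
from scipy import stats
def chi2_conf_factors(sn, ci=0.95):
    dof = 2 * np.asarray(sn, dtype=float)
    alpha = 1 - ci
    lower = dof / stats.chi2.ppf(1 - alpha / 2, dof)
    upper = dof / stats.chi2.ppf(alpha / 2, dof)
    return lower, upper
lo_factor, up_factor = chi2_conf_factors(sn)
# Approximate limits for comparison with El and Eu: Ei * lo_factor, Ei * up_factor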
"""
Explanation: Adding error bars
pyspec has a built-in function to calculate confidence limits for the 1D spectrum. The function spec_error calculates these confidence limits assuming that the spectral estimates are $\chi^2$-distributed. Suppose we have estimated the spectrum Ei with different amounts of averaging at different wavenumbers, so that the number of spectral realizations varies with wavenumber. To illustrate how to use the function, we pick some arbitrary numbers.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/messy-consortium/cmip6/models/emac-2-53-vol/seaice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-vol', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-VOL
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
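# Purely illustrative (a hypothetical entry, not the documented EMAC setting):
# a response would be recorded by replacing the TODO above with a call such as
# DOC.set_value("TEOS-10"), i.e. one of the valid choices listed in this cell.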
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution is used and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology: what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for the thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
wcmckee/wcmckee.com
|
posts/redtube.ipynb
|
mit
|
import requests
import json
import random
import getpass
#import couchdb
import pickle
import getpass
#!flask/bin/python
#from flask import Flask, jsonify
myusr = getpass.getuser()
print(myusr)
#couch = couchdb.Server()
with open('/home/{}/prn.pickle'.format(myusr), 'rb') as handle:
prnlis = pickle.load(handle)
#db = couch.create('redtube')
#db = couch['redtube']
"""
Explanation: RedTube json Python
End of explanation
"""
payload = {'output' : 'json', 'data' : 'redtube.Videos.searchVideos', 'page' : 1}
getprn = requests.get('http://api.redtube.com/', params = payload)
daprn = getprn.json()
levid = len(daprn['videos'])
porndick = dict()
#for lev in range(0, levid):
# print(daprn['videos'][lev]['video'])
# prntit = (daprn['videos'][lev]['video']['title'])
# prnnow = prntit.replace(' ', '-')
# prnlow = prnnow.lower()
# print(prnlow)
# try:
# somelis = list()
# for dapr in daprn['videos'][lev]['video']['tags']:
# print(dapr['tag_name'])
# somelis.append(dapr['tag_name'])
# porndick.update({daprn['videos'][lev]['video']['video_id'] : {'tags' : ", ".join(str(x) for x in somelis)}})
#db.save(porndick)
#try:
# db = couch.create(prnlow)
#except PreconditionFailed:
# db = couch[prnlow]
#db.save({daprn['videos'][lev]['video']['video_id'] : {'tags' : ", ".join(str(x) for x in somelis)}})
# except KeyError:
# continue
#for i in db:
# print(i)
#db.save(porndick)
#for i in db:
# print(db[i])
#print(pornd['tags'])
#loaPrn = json.loads(getPrn.text)
#print loaUrl
"""
Explanation: Requests and json are the two main modules used for this. Random can also be handy
End of explanation
"""
lenvid = len(daprn[u'videos'])
lenvid
#aldic = dict()
with open('/home/{}/prn3.pickle'.format(myusr), 'rb') as handles:
aldic = pickle.load(handles)
import shutil
for napn in range(0, lenvid):
print(daprn[u'videos'][napn]['video']['url'])
print(daprn[u'videos'][napn]['video']['title'])
try:
letae = len(daprn[u'videos'][napn]['video']['tags'])
tagna = (daprn[u'videos'][napn]['video']['tags'])
reqbru = requests.get('http://api.giphy.com/v1/gifs/translate?s={}&api_key=dc6zaTOxFJmzC'.format(tagna))
brujsn = reqbru.json()
print(brujsn['data']['images']['fixed_width']['url'])
gurl = (brujsn['data']['images']['fixed_width']['url'])
gslug = (brujsn['data']['slug'])
#fislg = gslug.repl
try:
somelis = list()
            for dapr in daprn['videos'][napn]['video']['tags']:  # use the current loop index (napn), not the undefined lev
print(dapr['tag_name'])
somelis.append(dapr['tag_name'])
            porndick.update({daprn['videos'][napn]['video']['video_id'] : {'tags' : ", ".join(str(x) for x in somelis)}})
except KeyError:
continue
aldic.update({gslug : gurl})
#print(gurl)
'''
with open('/home/pi/redtube/posts/{}.meta'.format(gslug), 'w') as blmet:
blmet.write('.. title: ' + glug + ' \n' + '.. slug: ' + nameofblogpost + ' \n' + '.. date: ' + str(nowtime) + ' \n' + '.. tags: ' + tagblog + '\n' + '.. link:\n.. description:\n.. type: text')
response = requests.get(gurl, stream=True)#
response
with open('/home/pi/redtube/galleries/{}.gif'.format(gslug), 'wb') as out_file:
shutil.copyfileobj(response.raw, out_file)
del response
tan = tagna.replace(' ', '-')
tanq = tan.lower()
print(tanq)
'''
except KeyError:
continue
with open('/home/{}/prn.pickle'.format(myusr), 'wb') as handle:
pickle.dump(porndick, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('/home/{}/prn3.pickle'.format(myusr), 'wb') as handle:
pickle.dump(aldic, handle, protocol=pickle.HIGHEST_PROTOCOL)
#db.save(aldic)
"""
Explanation: Convert it into readable text that you can work with
End of explanation
"""
|
jsharpna/DavisSML
|
lectures/lecture6/lecture6.ipynb
|
mit
|
import pandas as pd
import numpy as np
import matplotlib as mpl
import plotnine as p9
import matplotlib.pyplot as plt
import itertools
import warnings
warnings.simplefilter("ignore")
from sklearn import neighbors, preprocessing, impute, metrics, model_selection, linear_model, svm, feature_selection
from matplotlib.pyplot import rcParams
rcParams['figure.figsize'] = 6,6
def train_bank_to_xy(bank):
"""standardize and impute training"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
scaler = preprocessing.StandardScaler().fit(X)
imputer = impute.SimpleImputer(fill_value=0).fit(X)
trans_prep = lambda Z: imputer.transform(scaler.transform(Z))
X = trans_prep(X)
y = (y == 'yes')*1
return (X, y), trans_prep
def test_bank_to_xy(bank, trans_prep):
"""standardize and impute test"""
bank_sel = bank[['age','balance','duration','y']].values
X,y = bank_sel[:,:-1], bank_sel[:,-1]
X = trans_prep(X)
y = (y == 'yes')*1
return (X, y)
bank = pd.read_csv('../../data/bank.csv',sep=';',na_values=['unknown',999,'nonexistent'])
bank.info()
bank_tr, bank_te = model_selection.train_test_split(bank,test_size=.33)
p9.ggplot(bank_tr, p9.aes(x = 'age',fill = 'y')) + p9.geom_density(alpha=.2)
(X_tr, y_tr), trans_prep = train_bank_to_xy(bank_tr)
X_te, y_te = test_bank_to_xy(bank_te, trans_prep)
def plot_conf_score(y_te,score,tau):
y_classes = (1,0)
cf_inds = ["Pred {}".format(c) for c in y_classes]
cf_cols = ["True {}".format(c) for c in y_classes]
    y_pred = score > tau  # use the score passed in, not the global score_dur
return pd.DataFrame(metrics.confusion_matrix(y_pred,y_te,labels=y_classes),index=cf_inds,columns=cf_cols)
"""
Explanation: Classification 1: Generative methods
StatML: Lecture 6
Prof. James Sharpnack
Some content and images are from "The Elements of Statistical Learning" by Hastie, Tibshirani, Friedman
Reading ESL Chapter 4
Bayes rule in classification
Recall from homework that Bayes rule is
$$
g(x) = \left\{ \begin{array}{ll} 1, &\mathbb{P}\{Y = 1 | X = x \} > \mathbb{P}\{Y = 0 | X = x \} \\
0, &{\rm otherwise}\end{array}\right.
$$
Another way to write this event is (for $f_X(x) > 0$)
$$
f_{Y,X}(1, x) = \mathbb{P}\{Y = 1 | X = x \} f_X(x) > \mathbb{P}\{Y = 0 | X = x \} f_X(x) = f_{Y,X} (0, x)
$$
Let $\pi = \mathbb{P}\{ Y = 1\}$ then this is also
$$
\pi f_{X|Y}(x | 1) > (1 - \pi) f_{X|Y} (x|0)
$$
which is
$$
\frac{f_{X|Y}(x | 1)}{f_{X|Y} (x|0)} > \tau = \frac{1-\pi}{\pi}
$$
Bayes rule in classification
$$
\frac{f_{X|Y}(x | 1)}{f_{X|Y} (x|0)} > \tau = \frac{1-\pi}{\pi}
$$
the Bayes rule is performing a likelihood ratio test
Generative methods
A generative method does the following
1. treats $Y=1$ and $Y=0$ as different datasets and tries to estimate the densities $\hat f_{X | Y}$.
2. then plug these in to the formula for the Bayes rule
Naive Bayes methods assume that the components of $X$ are independent of one another, but do non-parametric density estimation for the densities $\hat f_{X_j|Y}$
Parametric methods fit a parametric density to $X|Y$
Density estimation
Parametric maximum likelihood estimation
Nonparametric: Kernel density estimation (KDE), nearest neighbor methods, etc.
A reasonable heuristic for estimating a density $\hat f_X$, based on data $x_1,\ldots,x_n$, is
1. Let $N(x,\epsilon)$ be the number of data points within $\epsilon$ of $x$
2. $\hat f(x) = N(x,\epsilon) / (n \, \textrm{Vol}(B(\epsilon)))$, i.e. divide the count by $n$ times the volume of the ball of radius $\epsilon$
$$\mathbb E \left( \frac{N(x,\epsilon)}{n} \right)= \mathbb{P}\{X \in B(x,\epsilon)\} \approx f_X(x) \textrm{Vol}(B(\epsilon))$$
Kernel density estimation
Let the Boxcar kernel function be
$$
k(\|x_0-x_1\|) = \frac{\mathbf{1}\{ \| x_0 - x_1 \| \le 1 \}}{{\rm Vol}(B(1))}
$$
then the number of pts within $\epsilon$ is
$$
N(x,\epsilon) = {\rm Vol}(B(1)) \sum_i k\left( \frac{\| x - x_i \|}{\epsilon} \right)
$$
and the density estimate is
$$
\hat f(x) = \frac 1n \sum_i \frac{{\rm Vol}(B(1))}{{\rm Vol}(B(\epsilon))} \cdot k\left( \frac{\| x - x_i \|}{\epsilon} \right)
$$
this is equal to
$$
\hat f(x) = \frac 1n \sum_i \frac{1}{\epsilon^p} \cdot k\left( \frac{\| x - x_i \|}{\epsilon} \right)
$$
Kernel density estimation
General kernel density estimate is based on a kernel such that
$$
\int k(\|x-x_0\|) dx = 1.
$$
Then KDE is
$$
\hat f(x') = \frac 1n \sum_i \frac{1}{\epsilon^p} \cdot k\left( \frac{\| x' - x_i \|}{\epsilon} \right)
$$
where $p$ is the dimensionality of the X space.
$\epsilon$ is a bandwidth parameter.
from wikipedia
Naive Bayes
For each $y = 0,1$ let $x_1,\ldots,x_{n_y}$ be the predictor data with $Y = y$
- For each dimension j
- Let $\hat f_{y,j}$ be the KDE of $x_{1,j},\ldots,x_{n_y,j}$
- Let $\hat f_y = \prod_j \hat f_{y,j}$
Let $\pi$ be the proportion of $Y = 1$ then let $\tau = (1 - \pi) / \pi$.
Predict $\hat y = 1$ for a new $x'$ if
$$
\frac{\hat f_{1}(x')}{\hat f_{0} (x')} > \tau
$$
and $\hat y=0$ otherwise.
from mathworks.org
Exercise 6.1
Let $x_0,x_1 \in \mathbb R^p$ and
$$k(\|x_0 - x_1\|) = \frac{1}{(2\pi)^{k/2}} \exp \left(- \frac 12 \|x_0 - x_1\|_2^2 \right).$$
How do we know that this is a valid kernel for multivariate density estimation?
Suppose that you used this kernel to obtain a multivariate density estimate, $\hat f: \mathbb R^p \rightarrow \mathbb R$, and also used the subroutine in Naive Bayes to estimate $\hat f_N(x') = \prod_j \hat f_j(x_j')$. Will these return the same results? Think about the boxcar kernel with bandwidth of 1, what are the main differences between these methods?
STOP
Answer to 6.1
This is a Gaussian pdf with mean $x_1$ and variance $I$ so it integrates to 1.
They are not the same because
$$
\frac 1n \sum_i \exp\left(-\frac 12 \sum_j (x_{ij} - x_j')^2\right) \ne \prod_j \left( \frac 1n \sum_i \exp(-\frac 12 (x_{ij} - x_j')^2)\right)
$$
For the boxcar kernel in p dimensions, $k(\| x' - x_i\|) \ne 0$ if $\| x' - x_i \| \le 1$, while $k(|x_j' - x_{ij}|) \ne 0$ if $|x_j' - x_{ij}| \le 1$. So $\hat f_N(x') \ne 0$ as long as, for every $j$, some data point satisfies $|x_j' - x_{ij}| \le 1$ (not necessarily the same point $i$ for every $j$).
Gaussian Generative Models
Fit parametric model for each class using likelihood based approach.
Assume a Gaussian distribution
$$
X | Y = k \sim \mathcal N(\mu_k, \Sigma_k)
$$
for mean and variance parameters $\mu_k, \Sigma_k$.
End of explanation
"""
score_dur = X_te[:,2]
p9.ggplot(bank_tr[['duration','y']].dropna(axis=0)) + p9.aes(x = 'duration',fill = 'y')\
+ p9.geom_density(alpha=.5)
y_te
plot_conf_score(y_te,score_dur,1.)
plot_conf_score(y_te,score_dur,2.)
## Fit and find NNs
nn = neighbors.NearestNeighbors(n_neighbors=10,metric="l2")
nn.fit(X_tr)
dists, NNs = nn.kneighbors(X_te)
NNs[1], y_tr[NNs[1]].mean(), y_te[1]
score_nn = np.array([(y_tr[knns] == 1).mean() for knns in NNs])
plot_conf_score(y_te,score_nn,.2)
nn = neighbors.KNeighborsClassifier(n_neighbors=10)
nn.fit(X_tr, y_tr)
score_nn = nn.predict_proba(X_te)[:,1]
plot_conf_score(y_te,score_nn,.2)
def print_top_k(score_dur,y_te,k_top):
ordering = np.argsort(score_dur)[::-1]
print("k: score, y")
for k, (yv,s) in enumerate(zip(y_te[ordering],score_dur[ordering])):
print("{}: {}, {}".format(k,s,yv))
if k >= k_top - 1:
break
print_top_k(score_dur,y_te,10)
"""
Explanation: Evaluating a classifier
Most classifiers are "soft" because they can output a score, higher means more likely to be $Y=1$
- Logistic regression: output probability
- SVM: distance from margin
- kNN: percent of neighbors with $Y=1$
- LDA/QDA/Naive bayes: estimated likelihood ratio
If we order from largest to smallest then this gives us the points to predict as 1 first.
Choosing a cut-off, so that everything above that value is predicted 1 and everything below is predicted 0, lets us see the different types of errors
Confusion matrix and classification metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>True 1</td><td>True 0</td></tr>
<tr><td>Pred 1</td><td>True Pos</td><td>False Pos</td></tr>
<tr><td>Pred 0</td><td>False Neg</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
End of explanation
"""
plt.style.use('ggplot')
fpr_dur, tpr_dur, threshs = metrics.roc_curve(y_te,score_dur)
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur)
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title("ROC for 'duration'")
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(fpr_dur,tpr_dur,label='duration')
plt.plot(fpr_nn,tpr_nn,label='knn')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC")
fpr_nn, tpr_nn, threshs = metrics.roc_curve(y_te,score_nn)
plot_temp()
def plot_temp():
plt.figure(figsize=(6,6))
plt.plot(rec_dur,prec_dur,label='duration')
plt.plot(rec_nn,prec_nn,label='knn')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve")
prec_dur, rec_dur, threshs = metrics.precision_recall_curve(y_te,score_dur)
prec_nn, rec_nn, threshs = metrics.precision_recall_curve(y_te,score_nn)
plot_temp()
"""
Explanation: Confusion matrix and classification metrics
<table style='font-family:"Courier New", Courier, monospace; font-size:120%'>
<tr><td></td><td>True 1</td><td>True 0</td></tr>
<tr><td>Pred 1</td><td>True Pos</td><td>False Pos</td></tr>
<tr><td>Pred 0</td><td>False Neg</td><td>True Neg</td></tr>
</table>
$$
\textrm{FPR} = \frac{FP}{FP+TN}
$$
$$
\textrm{TPR, Recall} = \frac{TP}{TP + FN}
$$
$$
\textrm{Precision} = \frac{TP}{TP + FP}
$$
End of explanation
"""
from sklearn import discriminant_analysis
## Init previous predictors list
preds = [("Duration",score_dur), ("NN", score_nn)]
## Fit and predict with LDA
lda = discriminant_analysis.LinearDiscriminantAnalysis()
lda.fit(X_tr,y_tr)
score_pred = lda.predict_log_proba(X_te)[:,1]
preds += [("LDA",score_pred)]
## Fit and predict with QDA
qda = discriminant_analysis.QuadraticDiscriminantAnalysis()
qda.fit(X_tr,y_tr)
score_pred = qda.predict_log_proba(X_te)[:,1]
preds += [("QDA",score_pred)]
def plot_pr_models(X_te, y_te, preds):
plt.figure(figsize=(6,6))
for name, score_preds in preds:
prec, rec, threshs = metrics.precision_recall_curve(y_te,score_preds)
plt.plot(rec,prec,label=name)
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve")
plot_pr_models(X_te, y_te, preds)
"""
Explanation: Comments
"Good" ROC should be in top left
"Good" PR should be large for all recall values
PR is better for large class imbalance
ROC treats each type of error equally
Exercise 6.2
Apply LDA and QDA to the above dataset and compare the PR curves to the previous two methods. To calculate the "score" you can use the predict_log_proba method.
End of explanation
"""
|
NuGrid/NuPyCEE
|
DOC/Capabilities/Including_radioactive_isotopes.ipynb
|
bsd-3-clause
|
# Import python modules
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Import the NuPyCEE codes
from NuPyCEE import sygma
from NuPyCEE import omega
"""
Explanation: Including Radioactive Isotopes in NuPyCEE
Prepared by: Benoit Côté
This notebook describes the radioactive isotope implementation in NuPyCEE and shows how to run SYGMA and OMEGA with radioactive yields.
End of explanation
"""
# Number of timesteps in the simulation.
# See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb
special_timesteps = -1
nb_dt = 100
tend = 2.0e6
dt = tend / float(nb_dt)
# No star formation.
no_sf = True
# Dummy neutron star merger yields to activate the radioactive option.
nsmerger_table_radio = 'yield_tables/extra_table_radio_dummy.txt'
# Add 1 Msun of radioactive Al-26 in the gas.
# The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file
# Index 0, 1, 2 --> Al-26, K-40, U-238
ism_ini_radio = [1.0, 0.0, 0.0]
"""
Explanation: 1. Input Parameters
The inputs that need to be provided to activate the radioactive option are:
the list of selected radioactive isotopes,
the radioactive yield tables.
The list of isotopes is declared in the yield_tables/decay_info.txt file and can be modified prior any simulation. The radioactive yields are found (or need to be added) in the yield_tables/ folder. Each stable yield table can have their associated radioactive yield table:
Massive and AGB stars
Stable isotopes: table
Radioactive isotopes: table_radio
Type Ia supernovae
Stable isotopes: sn1a_table
Radioactive isotopes: sn1a_table_radio
Neutron star mergers
Stable isotopes: nsmerger_table
Radioactive isotopes: nsmerger_table_radio
Etc..
Each enrichment source can be activated independently by providing its input radioactive yield table. The radioactive yield table file format needs to be identical to its stable counterpart.
Warning: Radioactive isotopes will decay into stable isotopes. When using radioactive yields, please make sure that the stable yields do not include the decayed isotopes already.
2. Single Decay Channel (Default Option)
If the radioactive isotopes you selected have only one decay channel, you can use the default decay option, which uses the following exponential law,
$N_r(t)=N_r(t_0)\,\mathrm{exp}\left[\frac{-(t-t_0)}{\tau}\right],$
$\tau=\frac{T_{1/2}}{\mathrm{ln}(2)},$
where $t_0$ is the reference time where the number of radioactive isotopes was equal to $N_0$. $T_{1/2}$ is the half-life of the isotope, which needs to be specified in yield_tables/decay_info.txt. The decayed product will be added to the corresponding stable isotope, as defined in yield_tables/decay_info.txt.
Example with Al-26
Below, a SYGMA simulation is run with no star formation to better isolate the decay process. Here we choose Al-26 as an example, which decays into Mg-26.
End of explanation
"""
# Run SYGMA (or in this case, the decay process)
s = sygma.sygma(iniZ=0.02, no_sf=no_sf, ism_ini_radio=ism_ini_radio,\
special_timesteps=special_timesteps, tend=tend, dt=dt,\
decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio)
# Get the Al-26 (radioactive) and Mg-26 (stable) indexes in the gas arrays
i_Al_26 = s.radio_iso.index('Al-26')
i_Mg_26 = s.history.isotopes.index('Mg-26')
# Extract the evolution of these isotopes as a function of time
Al_26 = np.zeros(s.nb_timesteps+1)
Mg_26 = np.zeros(s.nb_timesteps+1)
for i_t in range(s.nb_timesteps+1):
Al_26[i_t] = s.ymgal_radio[i_t][i_Al_26]
Mg_26[i_t] = s.ymgal[i_t][i_Mg_26]
"""
Explanation: Run SYGMA
End of explanation
"""
# Plot the evolution of Al-26 and Mg-26
%matplotlib nbagg
plt.figure(figsize=(8,4.5))
plt.plot( np.array(s.history.age)/1e6, Al_26, '--b', label='Al-26' )
plt.plot( np.array(s.history.age)/1e6, Mg_26, '-r', label='Mg-26' )
plt.plot([0,2.0], [0.5,0.5], ':k')
plt.plot([0.717,0.717], [0,1], ':k')
# Labels and fontsizes
plt.xlabel('Time [Myr]', fontsize=16)
plt.ylabel('Mass of isotope [M$_\odot$]', fontsize=16)
plt.legend(fontsize=14, loc='center left', bbox_to_anchor=(1, 0.5))
plt.subplots_adjust(top=0.96)
plt.subplots_adjust(bottom=0.15)
plt.subplots_adjust(right=0.75)
matplotlib.rcParams.update({'font.size': 14.0})
"""
Explanation: Plot results
End of explanation
"""
# Add 1 Msun of radioactive K-40 in the gas.
# The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file
# Index 0, 1, 2 --> Al-26, K-40, U-238
ism_ini_radio = [0.0, 1.0, 0.0]
# Number of timesteps in the simulation.
# See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb
special_timesteps = -1
nb_dt = 100
tend = 5.0e9
dt = tend / float(nb_dt)
# Run SYGMA (or in this case, the decay process)
# with the decay module. As in the previous example, star formation is
# switched off (no_sf) so that only the decay is followed.
s = sygma.sygma(iniZ=0.0, no_sf=no_sf, ism_ini_radio=ism_ini_radio,\
special_timesteps=special_timesteps, tend=tend, dt=dt,\
decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio,\
use_decay_module=True, radio_refinement=1)
# Get the K-40 (radioactive) and Ca-40 and Ar-40 (stable) indexes in the gas arrays
i_K_40 = s.radio_iso.index('K-40')
i_Ca_40 = s.history.isotopes.index('Ca-40')
i_Ar_40 = s.history.isotopes.index('Ar-40')
# Extract the evolution of these isotopes as a function of time
K_40 = np.zeros(s.nb_timesteps+1)
Ca_40 = np.zeros(s.nb_timesteps+1)
Ar_40 = np.zeros(s.nb_timesteps+1)
for i_t in range(s.nb_timesteps+1):
K_40[i_t] = s.ymgal_radio[i_t][i_K_40]
Ca_40[i_t] = s.ymgal[i_t][i_Ca_40]
Ar_40[i_t] = s.ymgal[i_t][i_Ar_40]
# Plot the evolution of K-40, Ca-40 and Ar-40
%matplotlib nbagg
plt.figure(figsize=(8,4.5))
plt.plot( np.array(s.history.age)/1e6, K_40, '--b', label='K-40' )
plt.plot( np.array(s.history.age)/1e6, Ca_40, '-r', label='Ca-40' )
plt.plot( np.array(s.history.age)/1e6, Ar_40, '-g', label='Ar-40' )
# Labels and fontsizes
plt.xlabel('Time [Myr]', fontsize=16)
plt.ylabel('Mass of isotope [M$_\odot$]', fontsize=16)
plt.legend(fontsize=14, loc='center left', bbox_to_anchor=(1, 0.5))
plt.subplots_adjust(top=0.96)
plt.subplots_adjust(bottom=0.15)
plt.subplots_adjust(right=0.75)
matplotlib.rcParams.update({'font.size': 14.0})
"""
Explanation: 3. Multiple Decay Channels
If the radioactive isotopes you selected have more than one decay channel, you need to use the provided decay module. This option can be activated by adding use_decay_module=True in the list of parameters when creating an instance of SYGMA and OMEGA. When using the decay module, the yield_tables/decay_file.txt file still needs to be provided as an input to define which radioactive isotopes are selected for the calculation.
Example with K-40
Below we still run a SYGMA simulation with no star formation to better isolate the decay process. A fraction of K-40 decays into Ca-40, and another fraction decays into Ar-40.
Run SYGMA
End of explanation
"""
# Add 1 Msun of radioactive U-238 in the gas.
# The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file
# Index 0, 1, 2 --> Al-26, K-40, U-238
ism_ini_radio = [0.0, 0.0, 1.0]
# Number of timesteps in the simulation.
# See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb
special_timesteps = -1
nb_dt = 100
tend = 5.0e9
dt = tend / float(nb_dt)
# Run SYGMA (or in this case, the decay process)
# with the decay module. As in the previous examples, star formation is
# switched off (no_sf) so that only the decay is followed.
s = sygma.sygma(iniZ=0.0, no_sf=no_sf, ism_ini_radio=ism_ini_radio,\
special_timesteps=special_timesteps, tend=tend, dt=dt,\
decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio,\
use_decay_module=True, radio_refinement=1)
"""
Explanation: Example with U-238
End of explanation
"""
print(s.radio_iso)
"""
Explanation: In the case of U-238, many isotopes result from the multiple decay channels. Those new radioactive isotopes are added automatically to the list of isotopes in NuPyCEE.
End of explanation
"""
|
vishalsrangras/env-setup
|
env-test/test.ipynb
|
mit
|
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
%matplotlib inline
img = mpimg.imread('test.jpg')
plt.imshow(img)
"""
Explanation: Run all the cells below to make sure everything is working and ready to go. All cells should run without error.
Test Matplotlib and Plotting
End of explanation
"""
import cv2
# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='Greys_r')
"""
Explanation: Test OpenCV
End of explanation
"""
import tensorflow as tf
with tf.Session() as sess:
a = tf.constant(1)
b = tf.constant(2)
c = a + b
# Should be 3
print("1 + 2 = {}".format(sess.run(c)))
"""
Explanation: Test TensorFlow
End of explanation
"""
# Import everything needed to edit/save/watch video clips
import imageio
imageio.plugins.ffmpeg.download()
from moviepy.editor import VideoFileClip
from IPython.display import HTML
"""
Explanation: Test Moviepy
End of explanation
"""
new_clip_output = 'test_output.mp4'
test_clip = VideoFileClip("test.mp4")
new_clip = test_clip.fl_image(lambda x: cv2.cvtColor(x, cv2.COLOR_RGB2YUV)) #NOTE: this function expects color images!!
%time new_clip.write_videofile(new_clip_output, audio=False)
HTML("""
<video width="640" height="300" controls>
<source src="{0}" type="video/mp4">
</video>
""".format(new_clip_output))
"""
Explanation: Create a new video with moviepy by processing each frame to YUV color space.
End of explanation
"""
|
kkozarev/mwacme
|
notebooks/test_synchrotron.ipynb
|
gpl-2.0
|
from matplotlib import pyplot as plt
%matplotlib inline
import math
import scipy.integrate as integrate
import numpy as np
import scipy.special as special
"""
Explanation: Development of Synchrotron model and fitting for MWA data
By Kamen Kozarev
End of explanation
"""
#Asymptotic synchrotron values
x=np.arange(1000)/250.
f=np.zeros(1000)
f[0:200]=4*math.pi/(math.sqrt(3)*special.gamma(1./3.))*pow(x[0:200]/2.,1./3.)
f[200:]=math.sqrt(math.pi/2.)*np.exp(-x[200:])*pow(x[200:],1./2.) # start at 200 so no sample is left unassigned
plt.plot(x,f)
"""
Explanation: Formulae for asymptotic synchrotron spectrum values
End of explanation
"""
#The F(x) function
def fnu(values):
result=[]
for x in values:
integral = integrate.quad(lambda x: special.kv(5./3.,x), x, np.inf)
result.append(x * integral[0])
return np.array(result)
def fnu_single(x):
integral = integrate.quad(lambda x: special.kv(5./3.,x), x, np.inf)
result= x * integral[0]
return np.array(result)
"""
Explanation: Algorithm for the F(x) function, which approximates the synchrotron spectral shape.
End of explanation
"""
#Define the constants and arrays
nu_c=0.5
nfreq=100
x=np.arange(nfreq)/(nfreq/4.)
Ptot=np.zeros(nfreq)
p=3.
#Pitch angle
a=45*math.pi/180.
FF=fnu(x)
plt.plot(x,FF)
"""
Explanation: A generic model for testing the algorithm
End of explanation
"""
#start and end frequency, MHz
nu_start=1.e6
nu_end=300.e6
#Critical frequency, MHz
nu_c=120.e6
#electron mass, g
me=9.10938e-28
#electron charge Statcoulomb
q=4.80326e-10
#speed of light [cm/s]
c=2.99792458e10
#Bmag [Gauss]
bmag=10.
#Pitch angle
a=45.*math.pi/180.
#Frequency array, MHz
nus=np.linspace(nu_start,nu_end,num=nfreq)
factors=math.sqrt(3)*q**3*bmag*math.sin(a)/(2.*math.pi*me*c**2)
#Array to hold total power
Ptot=[]
for nu in nus:
ff=fnu_single(nu/nu_c)
ff.shape
Ptot.append(factors*ff)
Ptot=np.array(Ptot)
plt.plot(nus/1.e6,Ptot)
plt.xlim(1,300)
plt.title("Electron Synchrotron spectrum")
plt.xscale("log")
plt.xlabel("f [MHz]")
plt.yscale("log")
plt.ylabel("Power")
"""
Explanation: A more realistic model, using proper frequencies.
End of explanation
"""
#CONSTANTS:
#electron mass, g
me=9.10938e-28
#electron charge Statcoulomb
q=4.80326e-10
#speed of light [cm/s]
c=2.99792458e10
#PARAMETERS:
#Constant of proportionality
const=10.
#Power law index of electron spectrum
p=3.5
#Pitch angle
a=80.*math.pi/180.
#Bmag [Gauss]
bmag=10.
#Critical frequency, MHz
nu_c=100.e6
#start and end frequency, MHz
nu_start=80.e6
nu_end=300.e6
#Frequency array, MHz
nus=np.linspace(nu_start,nu_end,num=nfreq)#/(nfreq/4.)
#Array to hold total power
Ppow=[]
Ssyn=[]
jfactor1=math.sqrt(3)*q**3*const*bmag*math.sin(a)/(2.*math.pi*me*c**2*(p+1))
jfactor2=special.gamma(p/4. + 19./12.)*special.gamma(p/4. - 1./12.)
#afactor1=math.sqrt(3)*q**3/(8*math.pi*me) * const*(bmag*math.sin(a))**(0.5*p+1)
#afactor2=special.gamma((3.*p+2.)/12.)*special.gamma((3*p+22.)/12.)
for nu in nus:
jfactor3=pow(((me*c*(nu/nu_c)*2*math.pi)/(3*q*bmag*math.sin(a))),(0.5*(1-p)))
Pemit=jfactor1*jfactor2*jfactor3
#Abs=afactor1*afactor2*pow(nu,-0.5*(p+4.))
#Ppow.append(Pemit/(Abs*4*math.pi))
Ppow.append(jfactor3*pow(nu/nu_c,5./2))
#Ssyn.append(pow(nu/nu_c,5./2)*(1-math.exp(-pow(nu/nu_c,-0.5*(p+4)))))
Ssyn.append(pow(nu/nu_c,5./2)*(1-math.exp(-pow(nu/nu_c,-0.5*(p+3)))))
Ppow=np.array(Ppow)
Ssyn=np.array(Ssyn)
plt.plot(nus/1.e6,Ssyn)
#plt.plot(nus/1.e6,Ppow)
#plt.xlim(30,300)
plt.title("Electron Power Law Synchrotron spectrum")
plt.xscale("log")
plt.xlabel("f [MHz]")
plt.yscale("log")
plt.ylabel("Power")
"""
Explanation: The Synchrotron spectrum for a power-law electron distribution, including self-absorption
End of explanation
"""
|
EvanBianco/Practical_Programming_for_Geoscientists
|
Part1b_Intro_to_scientific_computing.ipynb
|
apache-2.0
|
layers = [0.23, 0.34, 0.45, 0.25, 0.23, 0.35]
uppers = layers[:-1]
lowers = layers[1:]
rcs = []
for pair in zip(lowers, uppers):
    rc = (pair[0] - pair[1]) / (pair[0] + pair[1])  # lower minus upper, consistent with the vectorized version below
rcs.append(rc)
rcs
"""
Explanation: Lists
Before coming into the Notebook, spend some time in an interactive session learning about sequences (strings, lists), and doing basic indexing, slicing, append(), in, etc.
Then you can come in here...
End of explanation
"""
# Exercise
def compute_rc(layers):
"""
Computes reflection coefficients given
a list of layer impedances.
"""
uppers = layers[:-1]
lowers = layers[1:]
rcs = []
for pair in zip(lowers, uppers):
        rc = (pair[0] - pair[1]) / (pair[0] + pair[1])  # lower minus upper, consistent with the vectorized version
rcs.append(rc)
return rcs
compute_rc(layers)
"""
Explanation: Functions
Definition, inputs, side-effects, returning, scope, docstrings
End of explanation
"""
import numpy as np # Just like importing file
biglog = np.random.random(10000000)
%timeit compute_rc(biglog)
"""
Explanation: Put in a file and import into a new notebook
Numpy
Before continuing, do some basic NumPy array stuff in the interpreter.
Let's make a really big 'log' from random numbers:
End of explanation
"""
# Exercise
def compute_rc_vector(layers):
uppers = layers[:-1]
lowers = layers[1:]
return (lowers - uppers) / (uppers + lowers)
%timeit compute_rc_vector(biglog)
"""
Explanation: Note that the log has to be fairly big for the benchmarking to work properly, because otherwise the CPU caches the computation and this skews the results.
Now we can re-write our function using arrays instead of lists.
End of explanation
"""
from numba import jit
@jit
def compute_rc_numba(layers):
uppers = layers[:-1]
lowers = layers[1:]
return (lowers - uppers) / (uppers + lowers)
%timeit compute_rc_numba(biglog)
"""
Explanation: 60 times faster on my machine!
Aside: more performance with numba
End of explanation
"""
def compute_rc_slow(layers):
uppers = layers[:-1]
lowers = layers[1:]
rcs = np.zeros_like(uppers)
for i in range(rcs.size):
rcs[i] = (lowers[i] - uppers[i]) / (uppers[i] + lowers[i])
return rcs
%timeit compute_rc_slow(biglog)
@jit
def compute_rc_faster(layers):
uppers = layers[:-1]
lowers = layers[1:]
rcs = np.zeros_like(uppers)
for i in range(rcs.size):
rcs[i] = (lowers[i] - uppers[i]) / (uppers[i] + lowers[i])
return rcs
%timeit compute_rc_faster(biglog)
"""
Explanation: OK, we'll make a fake example.
End of explanation
"""
@jit
def compute_rc_hopeful(layers):
"""
Computes reflection coefficients given
a list of layer impedances.
"""
uppers = layers[:-1]
lowers = layers[1:]
rcs = []
for pair in zip(lowers, uppers):
rc = (pair[1] - pair[0]) / (pair[1] + pair[0])
rcs.append(rc)
return rcs
%timeit compute_rc_hopeful(biglog)
"""
Explanation: However, you can't speed up our original list-based function this way.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Plotting basics
End of explanation
"""
plt.plot(biglog[:500])
fig = plt.figure(figsize=(15,2))
ax = fig.add_subplot(111)
ax.plot(biglog[:500])
ax.set_title("big log")
plt.show()
"""
Explanation: Note we can only plot part of biglog because it contains too many points for matplotlib (and for our screen!). If we really wanted to plot all of it, we'd have to find a way to upscale it; one simple approach is sketched in the next cell.
End of explanation
"""
class Layers(object):
def __init__(self, layers, label=None):
# Just make sure we end up with an array
self.layers = np.array(layers)
self.label = label or "My log"
self.length = self.layers.size # But storing len in an attribute is unexpected...
def __len__(self): # ...better to do this.
return len(self.layers)
def rcs(self):
uppers = self.layers[:-1]
lowers = self.layers[1:]
return (lowers-uppers) / (uppers+lowers)
def plot(self, lw=0.5, color='#6699ff'):
fig = plt.figure(figsize=(2,6))
ax = fig.add_subplot(111)
ax.barh(range(len(self.layers)), self.layers, color=color, lw=lw, align='edge', height=1.0, alpha=1.0, zorder=10)
ax.grid(zorder=2)
ax.set_ylabel('Layers')
ax.set_title(self.label)
ax.set_xlim([-0.5,1.0])
ax.set_xlabel('Measurement (units)')
ax.invert_yaxis()
#ax.set_xticks(ax.get_xticks()[::2]) # take out every second tick
ax.spines['right'].set_visible(False) # hide the spine on the right
ax.yaxis.set_ticks_position('left') # Only show ticks on the left and bottom spines
plt.show()
l = Layers(layers, label='Well # 1')
l.rcs()
len(l)
l.plot()
rel_interval = np.cumsum(l.rcs(), dtype=float)
len(rel_interval)
relative_layers = np.insert(rel_interval, 0, 0)
relative_layers
relative = Layers(relative_layers, "relative")
relative.layers
relative.plot()
"""
Explanation: Object oriented basics
The point is that we often want to store data along with relevant functions (methods) in one 'thing' — an object.
Build this up, piece by piece.
Start with __init__() which is required anyway. Only define self.layers.
Then add rcs(), then plot(), and finally __len__(), once you discover that len(l) doesn't work, because this object doesn't have that property.
End of explanation
"""
url = "http://en.wikipedia.org/wiki/Jurassic"
"""
Explanation: Web scraping basics
End of explanation
"""
import requests
r = requests.get(url)
r.text[:500]
"""
Explanation: Use View Source in your browser to figure out where the age range is on the page, and what it looks like.
Try to find the same string here.
End of explanation
"""
import re
s = re.search(r'<i>(.+?million years ago)</i>', r.text)
text = s.group(1)
"""
Explanation: Using a regular expression:
End of explanation
"""
def get_age(period):
url = "http://en.wikipedia.org/wiki/" + period
r = requests.get(url)
start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', r.text).groups()
return float(start), float(end)
period = "Jurassic"
get_age(period)
def duration(period):
t0, t1 = get_age(period)
duration = t0 - t1
response = "According to Wikipedia, the {0} period was {1:.2f} Ma long.".format(period, duration)
return response
duration('Cretaceous')
"""
Explanation: Exercise: Make a function to get the start and end ages of any geologic period, taking the name of the period as an argument.
End of explanation
"""
l = [0.001, 1, 3, 51, 41 , 601]
sorted(l)
"""
Explanation: Functional programming basics
In Python, functions are first class objects — you can pass them around like other objects.
Sometimes it's convenient to think in terms of map() and reduce(). This pattern is fundamental in data analytics and some packages, such as pandas.
A couple of very common functions, sorted() and filter(), can take a function as one of their arguments. It can be confusing the first time you see it.
With no functional argument, sorted does what you'd expect:
End of explanation
"""
def strlen(n):
return len(str(n))
sorted(l, key=strlen)
"""
Explanation: What if we want to sort based on the number of characters? Don't ask why, we just do. Then we write a function that returns a key which, when sorted, will give the ordering we want.
End of explanation
"""
sorted(l, key=lambda n: len(str(n)))
"""
Explanation: We could rewrite that tiny function as a lambda, which is basically a little unnamed function:
End of explanation
"""
def sq(n):
return n**2
# In Python 3, map produces an iterator, not a list.
# So we must cast to list to inspect its contents.
list(map(sq, l))
"""
Explanation: When would you make a named function vs a lambda? It all depends on whether you want to use it again or not.
map and lambda
You can think of map as 'apply this function to all of these items'.
End of explanation
"""
list(map(lambda n: n**2, l))
"""
Explanation: We can get around defining that tiny function sq() with a lambda, which you can think of as a temporary, 'throwaway' function:
End of explanation
"""
[n**2 for n in l]
"""
Explanation: In practice, we'd often write this as a list comprehension. Then we can skip the creation of the funciton or lambda entirely:
End of explanation
"""
def runsum(a, b):
return a + b
# For some reason reduce is not in the main namespace like map
from functools import reduce
reduce(runsum, l)
def runmult(a, b):
return a * b
reduce(runmult, l)
"""
Explanation: One of the advantages of map is that it is 'lazy', so if you map a function to a giant list, you don't get a giant list back, you get an iterator. A list-comp would give you a giant list, possibly jamming the memory on your box.
reduce
reduce takes a sequence and applies some function to it recursively. You could think of it like a running sum, say, but for any function, not just summing.
End of explanation
"""
def power(a, b):
return a**b
def cuber(a):
return power(a, 3)
cuber(2)
"""
Explanation: partial for making curry
'Preapplying' an argument to a function is sometimes useful. For example, we might have a general function for raising a number a to the power b, but then want another function which raises numbers to the 3rd power.
I could do this by simply calling the first function from the second:
End of explanation
"""
from functools import partial
cuber = partial(power, b=3)
cuber(2)
"""
Explanation: But some people might find it more intuitive to do it this way:
End of explanation
"""
|
char-lie/python_presentations
|
numpy/arrays.ipynb
|
mit
|
from numpy import array
arr = array([1, 2, 3])
print(arr)
"""
Explanation: Arrays
NumPy deals with arrays just perfectly, because of
- advanced overload of __getitem__ operator for indexing, which is handy;
- overload of other operators for comfortable shortcuts and intuitive interface;
- methods and functions implemented in C language, which is fast;
- rich library of functions and methods, which allows you to do almost whatever you want.
Creating arrays
It's not so easy to create them, but it's worth it
End of explanation
"""
ten = array(range(10))
matrix = array([[1,2], [3, 4]])
nested_matrix = array([matrix, matrix])
strange_array = array([[1], 2])
print('Range demo:', ten)
print('Matrix demo:', matrix)
print('Array of NumPy arrays:', nested_matrix)
print('Something strange:', strange_array)
"""
Explanation: You just have to provide the array constructor of the numpy module with an iterable type.
More examples
End of explanation
"""
int_array = array([1., 2.5, -0.7], dtype='int')
print('You have {0} array of type {0.dtype}'.format(int_array))
"""
Explanation: Types
NumPy can be fast, because it allows you to create arrays with elements of C language types
Shorthands
Here you can see intuitive names of types to use
| Data type | Description |
|------------|-------------|
| bool_ | Boolean (True or False) stored as a byte |
| int_ | Default integer type (same as C long; normally either int64 or int32) |
| intc | Identical to C int (normally int32 or int64) |
| intp | Integer used for indexing (same as C ssize_t; normally either int32 or int64) |
| float_ | Shorthand for float64. |
| complex_ | Shorthand for complex128. |
Note Underscore suffix is not mandatory
Integers
If you need to use integers of specific sizes
- use its size in bits as a suffix;
- add u prefix to denote unsigned value.
| Data type | Description |
|------------|-------------|
| int8 | Byte (-128 to 127) |
| int16 | Integer (-32768 to 32767) |
| int32 | Integer (-2147483648 to 2147483647) |
| int64 | Integer (-9223372036854775808 to 9223372036854775807) |
| uint8 | Unsigned integer (0 to 255) |
| uint16 | Unsigned integer (0 to 65535) |
| uint32 | Unsigned integer (0 to 4294967295) |
| uint64 | Unsigned integer (0 to 18446744073709551615) |
Floating points and complex
There is IEEE-754 standard for floating point arithmetics, which describes format of half (16 bits), single (32 bits), double (64 bits), quadruple (128 bits) and octuple (256 bits) numbers
Standard C has single precision float, double precision double and additional long double which is at least as accurate as regular double
| Data type | Description |
|------------|-------------|
| float16 | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa |
| float32 | Single precision float: sign bit, 8 bits exponent, 23 bits mantissa |
| float64 | Double precision float: sign bit, 11 bits exponent, 52 bits mantissa |
| complex64 | Complex number, represented by two 32-bit floats (real and imaginary components) |
| complex128 | Complex number, represented by two 64-bit floats (real and imaginary components) |
Specify data type
NumPy array has a string dtype property to store and specify data type
End of explanation
"""
array([[0], 1], dtype='int')
"""
Explanation: Note that the typecast was made automatically
NumPy will not allow you to create a malformed array with a specific type
End of explanation
"""
arrays = [
array([1, 2, 3]),
array(((1, 2), (3, 4.))),
array([[0], 1]),
array('Hello world')
]
for a in arrays:
print('{0.dtype}: {0}'.format(a))
"""
Explanation: NumPy assigned data type automatically, if it was not specified
End of explanation
"""
LENGTH = 4
a, b = array(range(LENGTH)), array(range(LENGTH, LENGTH*2))
print('Arighmetic')
print('{} + {} = {}'.format(a, b, a + b))
print('{} * {} = {}'.format(a, b, a * b))
print('{} ** {} = {}'.format(a, b, a ** b))
print('{} / {} = {}'.format(a, b, a / b))
print('Binary')
print('{} ^ {} = {}'.format(a, b, a ^ b))
print('{} | {} = {}'.format(a, b, a | b))
print('{} & {} = {}'.format(a, b, a & b))
"""
Explanation: Interesting thing: we explored new types!
The object type is used when we cannot say for sure that we have an n-dimensional array of numbers
Operations on arrays
You can simply apply elementwise operations
End of explanation
"""
arr = array(range(10))
indices_list = [
[1, 5, 8],
range(1, 6, 2),
array([8, 2, 0, -1])
]
for indices in indices_list:
print('Indexed by {:<14}: {}'.format(str(indices), arr[indices]))
"""
Explanation: Indexing
Indexing of NumPy arrays is very flexible
Just look at the entities that can be used as an index
- boolean arrays;
- integer arrays;
- numbers;
- Ellipsis;
- tuples of them;
- etc.
Integers array
You can get values from an array with an iterable (but not a tuple) of indices
End of explanation
"""
arr = array(range(5))
print('Items more than 2:', arr > 2)
print(arr[arr>2])
"""
Explanation: Boolean array
Boolean arrays can be the result of a comparison, and the syntax for using them is very handy
End of explanation
"""
a, b = array(range(0, 5)), array(range(5, 10))
print(a[b>7])
"""
Explanation: This can be read as "Give me the numbers which are greater than two"
What you actually asked for:
- an elementwise comparison of the array with a scalar
- an array of the comparison results
- the elements of the array which correspond to True
This means that you can use one array to get values from another
End of explanation
"""
matrix = array([range(3*i, 3*(i+1)) for i in range(3)])
print('We have a matrix of shape', matrix.shape)
print('Regular Python indexing ', matrix[0][2])
print('Implicit tuple declaration', matrix[0, 2])
print('Explicit tuple declaration', matrix[(0, 2)])
"""
Explanation: This gives you the elements of array a whose corresponding elements in b are greater than 7
Tuple
Tuples are used to access n-dimensional array elements
End of explanation
"""
print('All elements of the first column', matrix[:, 0])
print('Get elements of the second column', matrix[:, 1])
print('Pick first and last column', matrix[:, 0:3:2])
print('Get only first row', matrix[0, :])
print('You could do this easier but nevermind', matrix[0])
print('Get first two elements of the third column', matrix[0:2, 2])
"""
Explanation: It was noted above that we can index with a "tuple of them"
This means that the tuple can contain not only numbers but also arrays and slices
End of explanation
"""
a = array(range(5))
print(a)
print(a[:])
print(a[...])
"""
Explanation: Ellipsis
Ellipsis is the type of the ellipsis constant, written as ... (three dots)
It's handy when implementing indexing for your own iterable types and you want to skip some entries
The following example usage of ellipsis is useless, because it behaves just like fetching all elements
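For instance, here is a toy sketch of a container that recognizes Ellipsis in its own __getitem__ (illustrative only, not from the original tutorial):
class Bag:
    def __init__(self, data):
        self.data = list(data)
    def __getitem__(self, key):
        if key is Ellipsis:      # bag[...] means "give me everything"
            return self.data[:]
        return self.data[key]
bag = Bag(range(5))
print(bag[...])   # [0, 1, 2, 3, 4]
print(bag[2])     # 2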
End of explanation
"""
array3d = array([[range(3*(i+j), 3*(i+j+1)) for i in range(3)] for j in range(3)])
print('Here is array of shape {0.shape}: {0}'.format(array3d))
"""
Explanation: Though it's useful for n-dimensional arrays when you want to skip multiple dimensions
End of explanation
"""
print('Item is a matrix of shape {0.shape}: {0}'.format(array3d[0]))
print('Item of the matrix is an array of shape {0.shape}: {0}'.format(array3d[0][0]))
print('Don`t forget about tuples: {0}'.format(array3d[0, 0]))
"""
Explanation: An element of this array is a matrix, whose elements are in turn arrays
End of explanation
"""
array3d[:, :, -1]
"""
Explanation: If you want only the last element of each row in this huge thing, you can do the following
End of explanation
"""
array3d[..., -1]
"""
Explanation: You can also avoid these slices and use an ellipsis
End of explanation
"""
print('First matrix with all elements', array3d[0, ...])
print('First elements of all rows of the second matrix', array3d[1, ..., 0])
"""
Explanation: An ellipsis can be placed in the middle or at the end
It means that you fetch all elements along the unspecified dimensions
End of explanation
"""
|
bxin/cwfs
|
examples/AuxTel2001.ipynb
|
gpl-3.0
|
from lsst.cwfs.instrument import Instrument
from lsst.cwfs.algorithm import Algorithm
from lsst.cwfs.image import Image, readFile, aperture2image, showProjection
import lsst.cwfs.plots as plots
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Tiago provided a pair of images from AuxTel.
Let's look at how those images work with our cwfs code
load the modules
End of explanation
"""
fieldXY = [0,0]
I1 = Image(readFile('../tests/testImages/AuxTel2001/1579925613-16Pup_intra-0-1.fits'), fieldXY, Image.INTRA)
I2 = Image(readFile('../tests/testImages/AuxTel2001/1579925662-16Pup_extra-0-1.fits'), fieldXY, Image.EXTRA)
I1p = Image(readFile('../tests/testImages/AuxTel2001/1579925833-16Pup_intra-0-1.fits'), fieldXY, Image.INTRA)
I2p = Image(readFile('../tests/testImages/AuxTel2001/1579925882-16Pup_extra-0-1.fits'), fieldXY, Image.EXTRA)
plots.plotImage(I1.image,'intra')
plots.plotImage(I2.image,'extra')
plots.plotImage(I1p.image,'intra')
plots.plotImage(I2p.image,'extra')
"""
Explanation: Define the image objects. Input arguments: file name, field coordinates in deg, image type
The colorbar() below may produce a warning message if your matplotlib version is older than 1.5.0
( https://github.com/matplotlib/matplotlib/issues/5209 )
It would probably be better to do background subtraction before feeding the images to cwfs. It seems that when the background is low (in our case, ~300 vs. ~40000 for the signal) we are still fine.
Another thing to note is that we don't want the image stamps to be much larger than the donuts themselves, otherwise things become very slow. Below we re-cut the image stamps.
End of explanation
"""
inst=Instrument('AuxTel',I1.sizeinPix)
print("Expected image diameter in pixels = %.0f"%(inst.offset/inst.fno/inst.pixelSize))
I1.image = I1.image[300-80:300+80,400-80:400+80]
I1.sizeinPix = I1.image.shape[0]
I2.image = I2.image[300-80:300+80,400-80:400+80]
I2.sizeinPix = I2.image.shape[0]
I1p.image = I1p.image[350-80:350+80,400-80:400+80]
I1p.sizeinPix = I1p.image.shape[0]
I2p.image = I2p.image[350-80:350+80,400-80:400+80]
I2p.sizeinPix = I2p.image.shape[0]
inst=Instrument('AuxTel',I1.sizeinPix)
plots.plotImage(I1.image,'intra')
plots.plotImage(I2.image,'extra')
plots.plotImage(I1p.image,'intra')
plots.plotImage(I2p.image,'extra')
"""
Explanation: Define the instrument. Input arguments: instrument name, size of image stamps
End of explanation
"""
algo=Algorithm('exp',inst,0)
algop=Algorithm('exp',inst,0)
"""
Explanation: Define the algorithm being used. Input arguments: baseline algorithm, instrument, debug level
End of explanation
"""
algo.runIt(inst,I1,I2,'paraxial')
algop.runIt(inst,I1p,I2p,'paraxial')
"""
Explanation: Run it
End of explanation
"""
print(algo.zer4UpNm)
print(algop.zer4UpNm)
"""
Explanation: Print the Zernikes Zn (n>=4)
End of explanation
"""
plt.plot(range(4,23), algo.zer4UpNm,'b.-',label = '1')
plt.plot(range(4,23), algop.zer4UpNm,'r.-',label = '2')
plt.legend()
plt.grid()
# the Zernikes do look a bit different,
# but then,
# the images above (especially the intra-focal images) do look kind of different?
plots.plotImage(I1.image0,'original intra',mask=I1.cMask)
plots.plotImage(I2.image0,'original extra', mask=I2.cMask)
"""
Explanation: plot the Zernikes Zn (n>=4)
End of explanation
"""
nanMask = np.ones(I1.image.shape)
nanMask[I1.pMask==0] = np.nan
fig, ax = plt.subplots(1,2, figsize=[10,4])
img = ax[0].imshow(algo.Wconverge*nanMask, origin='lower')
ax[0].set_title('Final WF = estimated + residual')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(algo.West*nanMask, origin='lower')
ax[1].set_title('residual wavefront')
fig.colorbar(img, ax=ax[1])
fig, ax = plt.subplots(1,2, figsize=[10,4])
img = ax[0].imshow(I1.image, origin='lower')
ax[0].set_title('Intra residual image')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(I2.image, origin='lower')
ax[1].set_title('Extra residual image')
fig.colorbar(img, ax=ax[1])
"""
Explanation: Patrick asked the question: can we show the results of the fit in intensity space, and also the residual?
Great question. The short answer is no.
The long answer: the current approach implemented is the so-called inversion approach, i.e., to inversely solve the Transport of Intensity Equation with boundary conditions. It is not a forward fit. If you think of the unperturbed image as I0, and the real image as I, we iteratively map I back toward I0 using the estimated wavefront. Upon convergence, our "residual images" should have intensity distributions that are nearly uniform. We always have an estimated wavefront, and a residual wavefront. The residual wavefront is obtained from the two residual images.
However, using tools available in the cwfs package, we can easily make a forward prediction of the images using the wavefront solution. This basically takes the slope of the wavefront at each pupil position and raytraces to the image plane. We will demonstrate this below.
End of explanation
"""
oversample = 10
projSamples = I1.image0.shape[0]*oversample
luty, lutx = np.mgrid[
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]
lutx = lutx / (projSamples / 2 / inst.sensorFactor)
luty = luty / (projSamples / 2 / inst.sensorFactor)
"""
Explanation: Now we do the forward raytrace using our wavefront solutions
The code is simply borrowed from existing cwfs code.
We first set up the pupil grid. Oversample means how many rays to trace from each grid point on the pupil.
End of explanation
"""
lutxp, lutyp, J = aperture2image(I1, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')
show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)
I1fit = Image(show_lutxyp, fieldXY, Image.INTRA)
I1fit.downResolution(oversample, I1.image0.shape[0], I1.image0.shape[1])
"""
Explanation: We now trace the rays to the image plane. lutxp and lutyp are the image coordinates for each (oversampled) ray. showProjection() makes the intensity image. Then, to downsample the image back to the original resolution, we use the function downResolution(), which is defined for the image class.
End of explanation
"""
luty, lutx = np.mgrid[
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]
lutx = lutx / (projSamples / 2 / inst.sensorFactor)
luty = luty / (projSamples / 2 / inst.sensorFactor)
lutxp, lutyp, J = aperture2image(I2, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')
show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)
I2fit = Image(show_lutxyp, fieldXY, Image.EXTRA)
I2fit.downResolution(oversample, I2.image0.shape[0], I2.image0.shape[1])
#The atmosphere used here is just a random Gaussian smearing. We do not care much about the size at this point
from scipy.ndimage import gaussian_filter
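# Note (an assumption, not from the original): the expression below appears to turn ~0.6 arcsec of
# atmospheric seeing into a Gaussian sigma in pixels (arcsec -> radians, scaled by what looks like an
# effective focal length, divided by what looks like a pixel size); treat the numbers as placeholders.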
atmSigma = .6/3600/180*3.14159*21.6/1.44e-5
I1fit.image[np.isnan(I1fit.image)]=0
a = gaussian_filter(I1fit.image, sigma=atmSigma)
fig, ax = plt.subplots(1,3, figsize=[15,4])
img = ax[0].imshow(I1fit.image, origin='lower')
ax[0].set_title('Forward prediction (no atm) Intra')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(a, origin='lower')
ax[1].set_title('Forward prediction (w atm) Intra')
fig.colorbar(img, ax=ax[1])
img = ax[2].imshow(I1.image0, origin='lower')
ax[2].set_title('Real Image, Intra')
fig.colorbar(img, ax=ax[2])
I2fit.image[np.isnan(I2fit.image)]=0
b = gaussian_filter(I2fit.image, sigma=atmSigma)
fig, ax = plt.subplots(1,3, figsize=[15,4])
img = ax[0].imshow(I2fit.image, origin='lower')
ax[0].set_title('Forward prediction (no atm) Extra')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(b, origin='lower')
ax[1].set_title('Forward prediction (w atm) Extra')
fig.colorbar(img, ax=ax[1])
img = ax[2].imshow(I2.image0, origin='lower')
ax[2].set_title('Real Image, Extra')
fig.colorbar(img, ax=ax[2])
"""
Explanation: Now do the same thing for extra focal image
End of explanation
"""
|
smorton2/think-stats
|
code/chap10ex.ipynb
|
gpl-3.0
|
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import random
import thinkstats2
import thinkplot
"""
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
ages = live.agepreg
weights = live.totalwgt_lb
"""
Explanation: Least squares
One more time, let's load up the NSFG data.
End of explanation
"""
from thinkstats2 import Mean, MeanVar, Var, Std, Cov
def LeastSquares(xs, ys):
meanx, varx = MeanVar(xs)
meany = Mean(ys)
slope = Cov(xs, ys, meanx, meany) / varx
inter = meany - slope * meanx
return inter, slope
"""
Explanation: The following function computes the intercept and slope of the least squares fit.
End of explanation
"""
inter, slope = LeastSquares(ages, weights)
inter, slope
"""
Explanation: Here's the least squares fit to birth weight as a function of mother's age.
End of explanation
"""
inter + slope * 25
"""
Explanation: The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.
End of explanation
"""
slope * 10
"""
Explanation: And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).
End of explanation
"""
def FitLine(xs, inter, slope):
fit_xs = np.sort(xs)
fit_ys = inter + slope * fit_xs
return fit_xs, fit_ys
"""
Explanation: The following function evaluates the fitted line at the given xs.
End of explanation
"""
fit_xs, fit_ys = FitLine(ages, inter, slope)
"""
Explanation: And here's an example.
End of explanation
"""
thinkplot.Scatter(ages, weights, color='blue', alpha=0.1, s=10)
thinkplot.Plot(fit_xs, fit_ys, color='white', linewidth=3)
thinkplot.Plot(fit_xs, fit_ys, color='red', linewidth=2)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Birth weight (lbs)',
axis=[10, 45, 0, 15],
legend=False)
"""
Explanation: Here's a scatterplot of the data with the fitted line.
End of explanation
"""
def Residuals(xs, ys, inter, slope):
xs = np.asarray(xs)
ys = np.asarray(ys)
res = ys - (inter + slope * xs)
return res
"""
Explanation: Residuals
The following function computes the residuals.
End of explanation
"""
live['residual'] = Residuals(ages, weights, inter, slope)
"""
Explanation: Now we can add the residuals as a column in the DataFrame.
End of explanation
"""
bins = np.arange(10, 48, 3)
indices = np.digitize(live.agepreg, bins)
groups = live.groupby(indices)
age_means = [group.agepreg.mean() for _, group in groups][1:-1]
age_means
"""
Explanation: To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.
First I'll make the groups and compute the average age in each group.
End of explanation
"""
cdfs = [thinkstats2.Cdf(group.residual) for _, group in groups][1:-1]
"""
Explanation: Next I'll compute the CDF of the residuals in each group.
End of explanation
"""
def PlotPercentiles(age_means, cdfs):
thinkplot.PrePlot(3)
for percent in [75, 50, 25]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(age_means, weight_percentiles, label=label)
"""
Explanation: The following function plots percentiles of the residuals against the average age in each group.
End of explanation
"""
PlotPercentiles(age_means, cdfs)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
"""
Explanation: The following figure shows the 25th, 50th, and 75th percentiles.
Curvature in the residuals suggests a non-linear relationship.
End of explanation
"""
def SampleRows(df, nrows, replace=False):
"""Choose a sample of rows from a DataFrame.
df: DataFrame
nrows: number of rows
replace: whether to sample with replacement
returns: DataFrame
"""
indices = np.random.choice(df.index, nrows, replace=replace)
sample = df.loc[indices]
return sample
def ResampleRows(df):
"""Resamples rows from a DataFrame.
df: DataFrame
returns: DataFrame
"""
return SampleRows(df, len(df), replace=True)
"""
Explanation: Sampling distribution
To estimate the sampling distribution of inter and slope, I'll use resampling.
End of explanation
"""
def SamplingDistributions(live, iters=101):
t = []
for _ in range(iters):
sample = ResampleRows(live)
ages = sample.agepreg
weights = sample.totalwgt_lb
estimates = LeastSquares(ages, weights)
t.append(estimates)
inters, slopes = zip(*t)
return inters, slopes
"""
Explanation: The following function resamples the given dataframe and returns lists of estimates for inter and slope.
End of explanation
"""
inters, slopes = SamplingDistributions(live, iters=1001)
"""
Explanation: Here's an example.
End of explanation
"""
def Summarize(estimates, actual=None):
mean = Mean(estimates)
stderr = Std(estimates, mu=actual)
cdf = thinkstats2.Cdf(estimates)
ci = cdf.ConfidenceInterval(90)
print('mean, SE, CI', mean, stderr, ci)
"""
Explanation: The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.
End of explanation
"""
Summarize(inters)
"""
Explanation: Here's the summary for inter.
End of explanation
"""
Summarize(slopes)
"""
Explanation: And for slope.
End of explanation
"""
# Solution goes here
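# One possible sketch (not the official solution); it reuses ResampleRows and Summarize defined above
estimates = [ResampleRows(live).totalwgt_lb.mean() for _ in range(1001)]
Summarize(estimates)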
"""
Explanation: Exercise: Use ResampleRows and generate a list of estimates for the mean birth weight. Use Summarize to compute the SE and CI for these estimates.
End of explanation
"""
for slope, inter in zip(slopes, inters):
fxs, fys = FitLine(age_means, inter, slope)
thinkplot.Plot(fxs, fys, color='gray', alpha=0.01)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
"""
Explanation: Visualizing uncertainty
To show the uncertainty of the estimated slope and intercept, we can generate a fitted line for each resampled estimate and plot them on top of each other.
End of explanation
"""
def PlotConfidenceIntervals(xs, inters, slopes, percent=90, **options):
fys_seq = []
for inter, slope in zip(inters, slopes):
fxs, fys = FitLine(xs, inter, slope)
fys_seq.append(fys)
p = (100 - percent) / 2
percents = p, 100 - p
low, high = thinkstats2.PercentileRows(fys_seq, percents)
thinkplot.FillBetween(fxs, low, high, **options)
"""
Explanation: Or we can make a neater (and more efficient) plot by computing fitted lines and finding percentiles of the fits for each value of the dependent variable.
End of explanation
"""
PlotConfidenceIntervals(age_means, inters, slopes, percent=90,
color='gray', alpha=0.3, label='90% CI')
PlotConfidenceIntervals(age_means, inters, slopes, percent=50,
color='gray', alpha=0.5, label='50% CI')
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
"""
Explanation: This example shows the confidence interval for the fitted values at each mother's age.
End of explanation
"""
def CoefDetermination(ys, res):
return 1 - Var(res) / Var(ys)
"""
Explanation: Coefficient of determination
The coefficient compares the variance of the residuals to the variance of the dependent variable.
End of explanation
"""
inter, slope = LeastSquares(ages, weights)
res = Residuals(ages, weights, inter, slope)
r2 = CoefDetermination(weights, res)
r2
"""
Explanation: For birth weight and mother's age $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.
End of explanation
"""
print('rho', thinkstats2.Corr(ages, weights))
print('R', np.sqrt(r2))
"""
Explanation: We can confirm that $R^2 = \rho^2$:
End of explanation
"""
print('Std(ys)', Std(weights))
print('Std(res)', Std(res))
"""
Explanation: To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable, as a measure of RMSE if you try to guess birth weight with and without taking into account mother's age.
End of explanation
"""
var_ys = 15**2
rho = 0.72
r2 = rho**2
var_res = (1 - r2) * var_ys
std_res = np.sqrt(var_res)
std_res
"""
Explanation: As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.
End of explanation
"""
class SlopeTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
ages, weights = data
_, slope = thinkstats2.LeastSquares(ages, weights)
return slope
def MakeModel(self):
_, weights = self.data
self.ybar = weights.mean()
self.res = weights - self.ybar
def RunModel(self):
ages, _ = self.data
weights = self.ybar + np.random.permutation(self.res)
return ages, weights
"""
Explanation: Hypothesis testing with slopes
Here's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.
End of explanation
"""
ht = SlopeTest((ages, weights))
pvalue = ht.PValue()
pvalue
"""
Explanation: And it is.
End of explanation
"""
ht.actual, ht.MaxTestStat()
"""
Explanation: Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.
End of explanation
"""
sampling_cdf = thinkstats2.Cdf(slopes)
"""
Explanation: We can also use resampling to estimate the sampling distribution of the slope.
End of explanation
"""
thinkplot.PrePlot(2)
thinkplot.Plot([0, 0], [0, 1], color='0.8')
ht.PlotCdf(label='null hypothesis')
thinkplot.Cdf(sampling_cdf, label='sampling distribution')
thinkplot.Config(xlabel='slope (lbs / year)',
ylabel='CDF',
xlim=[-0.03, 0.03],
legend=True, loc='upper left')
"""
Explanation: The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.
To compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.
End of explanation
"""
pvalue = sampling_cdf[0]
pvalue
"""
Explanation: Here's how to get a p-value from the sampling distribution.
End of explanation
"""
def ResampleRowsWeighted(df, column='finalwgt'):
weights = df[column]
cdf = thinkstats2.Cdf(dict(weights))
indices = cdf.Sample(len(weights))
sample = df.loc[indices]
return sample
"""
Explanation: Resampling with weights
Resampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.
The following function resamples rows with probabilities proportional to weights.
End of explanation
"""
iters = 100
estimates = [ResampleRowsWeighted(live).totalwgt_lb.mean()
for _ in range(iters)]
Summarize(estimates)
"""
Explanation: We can use it to estimate the mean birthweight and compute SE and CI.
End of explanation
"""
estimates = [thinkstats2.ResampleRows(live).totalwgt_lb.mean()
for _ in range(iters)]
Summarize(estimates)
"""
Explanation: And here's what the same calculation looks like if we ignore the weights.
End of explanation
"""
import brfss
df = brfss.ReadBrfss(nrows=None)
df = df.dropna(subset=['htm3', 'wtkg2'])
heights, weights = df.htm3, df.wtkg2
log_weights = np.log10(weights)
"""
Explanation: The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.
Exercises
Exercise: Using the data from the BRFSS, compute the linear least squares fit for log(weight) versus height. How would you best present the estimated parameters for a model like this where one of the variables is log-transformed? If you were trying to guess someone’s weight, how much would it help to know their height?
Like the NSFG, the BRFSS oversamples some groups and provides a sampling weight for each respondent. In the BRFSS data, the variable name for these weights is totalwt. Use resampling, with and without weights, to estimate the mean height of respondents in the BRFSS, the standard error of the mean, and a 90% confidence interval. How much does correct weighting affect the estimates?
Read the BRFSS data and extract heights and log weights.
End of explanation
"""
# Solution goes here
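# A possible sketch (assumes heights and log_weights from the cell above)
inter, slope = thinkstats2.LeastSquares(heights, log_weights)
inter, slope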
"""
Explanation: Estimate intercept and slope.
End of explanation
"""
# Solution goes here
"""
Explanation: Make a scatter plot of the data and show the fitted line.
End of explanation
"""
# Solution goes here
"""
Explanation: Make the same plot but apply the inverse transform to show weights on a linear (not log) scale.
End of explanation
"""
# Solution goes here
"""
Explanation: Plot percentiles of the residuals.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute correlation.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute coefficient of determination.
End of explanation
"""
# Solution goes here
"""
Explanation: Confirm that $R^2 = \rho^2$.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute Std(ys), which is the RMSE of predictions that don't use height.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute Std(res), the RMSE of predictions that do use height.
End of explanation
"""
# Solution goes here
"""
Explanation: How much does height information reduce RMSE?
End of explanation
"""
# Solution goes here
"""
Explanation: Use resampling to compute sampling distributions for inter and slope.
End of explanation
"""
# Solution goes here
"""
Explanation: Plot the sampling distribution of slope.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute the p-value of the slope.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute the 90% confidence interval of slope.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute the mean of the sampling distribution.
End of explanation
"""
# Solution goes here
"""
Explanation: Compute the standard deviation of the sampling distribution, which is the standard error.
End of explanation
"""
# Solution goes here
"""
Explanation: Resample rows without weights, compute mean height, and summarize results.
End of explanation
"""
# Solution goes here
"""
Explanation: Resample rows with weights. Note that the weight column in this dataset is called finalwt.
End of explanation
"""
|
topix-hackademy/pandas-for-dummies
|
01_SERIES/CSV-Reader.ipynb
|
mit
|
import pandas as pd
asd = pd.read_csv("data/input.csv")
print(type(asd))
asd.head()
# This is a DataFrame because we have multiple columns!
"""
Explanation: Read Data From CSV
Method:
read_csv
End of explanation
"""
data = pd.read_csv("data/input.csv", usecols=["name"], squeeze=True)
print(type(data))
data.head()
data.index
"""
Explanation: To create a Series we need to specify the column to use (with usecols) and set the parameter squeeze to True.
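Note that newer pandas releases (2.0+) removed the squeeze argument from read_csv; there you can read the DataFrame and squeeze it instead, e.g. pd.read_csv("data/input.csv", usecols=["name"]).squeeze("columns").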
End of explanation
"""
data = pd.read_csv("data/input_with_one_column.csv", squeeze=True)
print(type(data))
# HEAD
print(data.head(2), "\n")
# TAIL
print(data.tail())
"""
Explanation: If the input file has only 1 column we don't need to provide the usecols argument.
End of explanation
"""
list(data)
dict(data)
max(data)
min(data)
dir(data)
type(data)
sorted(data)
data = pd.read_csv("data/input_with_two_column.csv", index_col="name", squeeze=True)
data.head()
data[["Alex", "asd"]]
data["Alex":"Vale"]
"""
Explanation: On a Series we can perform classic Python operations using built-in functions!
End of explanation
"""
|
ucsd-ccbb/Oncolist
|
notebooks/Oncolist Server API Examples.ipynb
|
mit
|
import os
import sys
sys.path.append(os.getcwd().replace("notebooks", "cfncluster"))
## S3 input and output address.
s3_input_files_address = "s3://path/to/input folder"
s3_output_files_address = "s3://path/to/output folder"
## CFNCluster name
your_cluster_name = "cluster_name"
## The private key pair for accessing cluster.
private_key = "/path/to/private_key.pem"
## Whether to delete the cfncluster after the job is done.
delete_cfncluster = False
"""
Explanation: <h1 align="center">Oncolist Server API Examples</h1>
<h3 align="center">Author: Guorong Xu</h3>
<h3 align="center">2016-09-19</h3>
This notebook is an example that shows you how to calculate correlations, annotate gene clusters and generate JSON files on AWS.
<font color='red'>Notice: Please open the notebook under /notebooks/BasicCFNClusterSetup.ipynb to install the CFNCluster package on your Jupyter notebook server before running this notebook.</font>
1. Configure AWS key pair, data location on S3 and the project information
End of explanation
"""
import CFNClusterManager, ConnectionManager
## Create a new cluster
master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name)
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
username="ec2-user",
private_key_file=private_key)
"""
Explanation: <font color='blue'> Notice: </font>
The file name of the expression file should follow this rule if you want correct annotations in the output JSON file:
"GSE number_Author name_Disease name_Number of Arrays_Institute name.txt".
For example: GSE65216_Maire_Breast_Tumor_159_Arrays_Paris.txt
2. Create CFNCluster
Notice: The CFNCluster package can be only installed on Linux box which supports pip installation.
End of explanation
"""
import PipelineManager
## You can call this function to check the disease names included in the annotation.
PipelineManager.check_disease_name()
## Define the disease name from the below list of disease names.
disease_name = "BreastCancer"
"""
Explanation: After you have verified the project information, you can execute the pipeline. When the job is done, you will see the log information returned from the cluster.
Checking the disease names
End of explanation
"""
import PipelineManager
## define operation
## calculate: calculate correlation
## oslom_cluster: cluster the gene modules
## print_oslom_cluster_json: print JSON files
## all: run all operations
operation = "all"
## run the pipeline
PipelineManager.run_analysis(ssh_client, disease_name, operation, s3_input_files_address, s3_output_files_address)
"""
Explanation: Run the pipeline with the specific operation.
End of explanation
"""
import PipelineManager
PipelineManager.check_processing_status(ssh_client)
"""
Explanation: To check the processing status
End of explanation
"""
import CFNClusterManager
if delete_cfncluster == True:
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name)
"""
Explanation: To delete the cluster, you just need to set the cluster name and call the below function.
End of explanation
"""
|
snowicecat/umich-eecs445-f16
|
lecture16_pgms_latent_vars_cond_independence/lecture16_pgms_latent_vars_cond_independence.ipynb
|
mit
|
from __future__ import division
# scientific
%matplotlib inline
from matplotlib import pyplot as plt;
import numpy as np;
import sklearn as skl;
import sklearn.datasets;
import sklearn.cluster;
# ipython
import IPython;
# python
import os;
#####################################################
# image processing
import PIL;
# trim and scale images
def trim(im, percent=100):
print("trim:", percent);
bg = PIL.Image.new(im.mode, im.size, im.getpixel((0,0)))
diff = PIL.ImageChops.difference(im, bg)
diff = PIL.ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
x = im.crop(bbox)
return x.resize(((x.size[0]*percent)//100, (x.size[1]*percent)//100), PIL.Image.ANTIALIAS);
#####################################################
# daft (rendering PGMs)
import daft;
# set to FALSE to load PGMs from static images
RENDER_PGMS = False;
# decorator for pgm rendering
def pgm_render(pgm_func):
def render_func(path, percent=100, render=None, *args, **kwargs):
print("render_func:", percent);
# render
render = render if (render is not None) else RENDER_PGMS;
if render:
print("rendering");
# render
pgm = pgm_func(*args, **kwargs);
pgm.render();
pgm.figure.savefig(path, dpi=300);
# trim
img = trim(PIL.Image.open(path), percent);
img.save(path, 'PNG');
else:
print("not rendering");
# error
if not os.path.isfile(path):
raise FileNotFoundError("Error: Graphical model image %s not found. You may need to set RENDER_PGMS=True." % path);
# display
return IPython.display.Image(filename=path);#trim(PIL.Image.open(path), percent);
return render_func;
######################################################
"""
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
$$
End of explanation
"""
X, y = skl.datasets.make_blobs(1000, cluster_std=[1.0, 2.5, 0.5], random_state=170)
plt.scatter(X[:,0], X[:,1])
"""
Explanation: EECS 445: Machine Learning
Lecture 16: Latent Variables, d-Separation, Gaussian Mixture Models
Instructor: Jacob Abernethy
Date: November 9, 2016
References
[MLAPP] Murphy, Kevin. Machine Learning: A Probabilistic Perspective. 2012.
[PRML] Bishop, Christopher. Pattern Recognition and Machine Learning. 2006.
[Koller & Friedman 2009] Koller, Daphne and Nir Friedman. Probabilistic Graphical Models. 2009.
Book Chapter: The language of directed acyclic graphical models
These notes are really nice
Outline
Review of Exponential Families
MLE
Brief discussion of conjugate priors and MAP estimation
Probabilistic Graphical Models
Review of Conditional Indep. Assumptions
Intro to Hidden Markov Models
Latent Variable Models in general
d-separation in Bayesian Networks
Mixture Models
Gaussian Mixture Model
Relationship to Clustering
Exponential Family Distributions
DEF: $p(x | \theta)$ has exponential family form if:
$$
\begin{align}
p(x | \theta)
&= \frac{1}{Z(\theta)} h(x) \exp\left[ \eta(\theta)^T \phi(x) \right] \
&= h(x) \exp\left[ \eta(\theta)^T \phi(x) - A(\theta) \right]
\end{align}
$$
$Z(\theta)$ is the partition function for normalization
$A(\theta) = \log Z(\theta)$ is the log partition function
$\phi(x) \in \R^d$ is a vector of sufficient statistics
$\eta(\theta)$ maps $\theta$ to a set of natural parameters
$h(x)$ is a scaling constant, usually $h(x)=1$
Exponential Family: MLE
To find the maximum, recall $\nabla_\theta A(\theta) = E_\theta[\phi(x)]$, so
\begin{align}
\nabla_\theta \log p(\D | \theta) & =
\nabla_\theta(\theta^T \phi(\D) - N A(\theta)) \
& = \phi(\D) - N E_\theta[\phi(X)] = 0
\end{align}
Which gives
$$E_\theta[\phi(X)] = \frac{\phi(\D)}{N} = \frac{1}{N} \sum_{k=1}^N \phi(x_k)$$
Obtaining the maximum likelihood is simply solving the calculus problem $\nabla_\theta A(\theta) = \frac{1}{N} \sum_{k=1}^N \phi(x_k)$ which is often easy
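As a quick worked case (a standard example, not from these slides): for a Bernoulli variable,
$$
p(x|\theta) = \theta^x (1-\theta)^{1-x} = \exp\left[ x \log\tfrac{\theta}{1-\theta} + \log(1-\theta) \right]
$$
so $\phi(x) = x$, $\eta = \log\frac{\theta}{1-\theta}$ and $A(\eta) = \log(1 + e^\eta)$; setting $\nabla_\eta A(\eta) = \frac{e^\eta}{1+e^\eta} = \theta$ equal to $\frac{1}{N}\sum_{k=1}^N x_k$ recovers the familiar MLE $\hat\theta = \bar{x}$.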
Bayes for Exponential Family
Exact Bayesian analysis is considerably simplified if the prior is conjugate to the likelihood.
- Simply, this means that prior $p(\theta)$ has the same form as the posterior $p(\theta|\mathcal{D})$.
This requires likelihood to have finite sufficient statistics
* Exponential family to the rescue!
Note: We will release some notes on conjugate priors + exponential families. It's hard to learn from slides and needs a bit more description.
Likelihood for exponential family
Likelihood:
$$ p(\mathcal{D}|\theta) \propto g(\theta)^N \exp[\eta(\theta)^T s_N]\
s_N = \sum_{i=1}^{N}\phi(x_i)$$
In terms of canonical parameters:
$$ p(\mathcal{D}|\eta) \propto \exp[N\eta^T \bar{s} -N A(\eta)] \
\bar s = \frac{1}{N}s_N $$
Conjugate prior for exponential family
The prior and posterior for an exponential family involve two parameters, $\tau$ and $\nu$, initially set to $\tau_0, \nu_0$
$$ p(\theta| \nu_0, \tau_0) \propto g(\theta)^{\nu_0} \exp[\eta(\theta)^T \tau_0] $$
Denote $\tau_0 = \nu_0 \bar{\tau}_0$ to separate out the size of the prior pseudo-data, $\nu_0$, from the mean of the sufficient statistics on this pseudo-data, $\bar{\tau}_0$. Hence,
$$ p(\theta| \nu_0, \bar \tau_0) \propto \exp[\nu_0\eta(\theta)^T \bar \tau_0 - \nu_0 A(\eta)] $$
Think of $\tau_0$ as a "guess" of the future sufficient statistics, and $\nu_0$ as the strength of this guess
Prior: Example
$$
\begin{align}
p(\theta| \nu_0, \tau_0)
&\propto (1-\theta)^{\nu_0} \exp[\tau_0\log(\frac{\theta}{1-\theta})] \
&= \theta^{\tau_0}(1-\theta)^{\nu_0 - \tau_0}
\end{align}
$$
Define $\alpha = \tau_0 +1 $ and $\beta = \nu_0 - \tau_0 +1$ to see that this is a beta distribution.
Posterior
Posterior:
$$ p(\theta|\mathcal{D}) = p(\theta|\nu_N, \tau_N) = p(\theta| \nu_0 +N, \tau_0 +s_N) $$
Note that we obtain hyper-parameters by adding. Hence,
$$ \begin{align}
p(\eta|\mathcal{D})
&\propto \exp[\eta^T (\nu_0 \bar\tau_0 + N \bar s) - (\nu_0 + N) A(\eta) ] \
&= p(\eta|\nu_0 + N, \frac{\nu_0 \bar\tau_0 + N \bar s}{\nu_0 + N})
\end{align}$$
where $\bar s = \frac 1 N \sum_{i=1}^{N}\phi(x_i)$.
posterior hyper-parameters are a convex combination of the prior mean hyper-parameters and the average of the sufficient statistics.
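Concretely (the standard Beta-Bernoulli case, stated here as a familiar fact rather than taken from these slides): a $\mathrm{Beta}(\alpha, \beta)$ prior combined with $N$ Bernoulli observations containing $s$ successes gives a $\mathrm{Beta}(\alpha + s, \beta + N - s)$ posterior, whose mean $\frac{\alpha + s}{\alpha + \beta + N}$ sits between the prior mean and the empirical frequency $s/N$.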
Back to Graphical Models!
Recall: Bayesian Networks: Definition
A Bayesian Network $\mathcal{G}$ is a directed acyclic graph whose nodes represent random variables $X_1, \dots, X_n$.
- Let $\Parents_\G(X_k)$ denote the parents of $X_k$ in $\G$
- Let $\NonDesc_\G(X_k)$ denote the variables in $\G$ who are not descendants of $X_k$.
Examples will come shortly...
Bayesian Networks: Local Independencies
Every Bayesian Network $\G$ encodes a set $\I_\ell(\G)$ of local independence assumptions:
For each variable $X_k$, we have $(X_k \perp \NonDesc_\G(X_k) \mid \Parents_\G(X_k))$
Every node $X_k$ is conditionally independent of its nondescendants given its parents.
Example: Naive Bayes
The graphical model for Naive Bayes is shown below:
- $\Parents_\G(X_k) = { C }$, $\NonDesc_\G(X_k) = { X_j }_{j\neq k} \cup { C }$
- Therefore $X_j \perp X_k \mid C$ for any $j \neq k$
<img src="../lecture15_exp_families_bayesian_networks/images/naive-bayes.png">
Factorization Theorem: Statement
Theorem: (Koller & Friedman 3.1) If $\G$ is an I-map for $P$, then $P$ factorizes as follows:
$$
P(X_1, \dots, X_N) = \prod_{k=1}^N P(X_k \mid \Parents_\G(X_k))
$$
Example: Fully Connected Graph
A fully connected graph makes no independence assumptions.
$$
P(A,B,C) = P(A) P(B|A) P(C|A,B)
$$
<img src="../lecture15_exp_families_bayesian_networks/images/fully-connected-b.png">
Important PGM Example: Markov Chain
State at time $t$ depends only on state at time $t-1$.
$$
P(X_0, X_1, \dots, X_N) = P(X_0) \prod_{t=1}^N P(X_t \mid X_{t-1})
$$
<img src="../lecture15_exp_families_bayesian_networks/images/markov-chain.png">
Compact Representation of PGM: Plate Notation
We can represent (conditionally) iid variables using plate notation.
A box around a variable (or set of variables), with a $K$, means that we have access to $K$ iid samples of this variable.
<img src="../lecture15_exp_families_bayesian_networks/images/plate-example.png">
Unobserved Variables: Hidden Markov Model
Noisy observations $X_k$ generated from hidden Markov chain $Y_k$. (More on this soon)
<img src="../lecture15_exp_families_bayesian_networks/images/hmm.png">
This Brings us to: Latent Variable Models
Uses material from [MLAPP] §10.1-10.4, §11.1-11.2
Latent Variable Models
In general, the goal of probabilistic modeling is to
Use what we know to make inferences about what we don't know.
Graphical models provide a natural framework for this problem.
- Assume unobserved variables are correlated due to the influence of unobserved latent variables.
- Latent variables encode beliefs about the generative process.
In a graphical model, we will often shade in the observed variables to distinguish them from hidden variables.
Example: Gaussian Mixture Models
This dataset is hard to explain with a single distribution.
- Underlying density is complicated overall...
- But it's clearly three Gaussians!
End of explanation
"""
@pgm_render
def pgm_gmm():
pgm = daft.PGM([4,4], origin=[-2,-1], node_unit=0.8, grid_unit=2.0);
# nodes
pgm.add_node(daft.Node("pi", r"$\pi$", 0, 1));
pgm.add_node(daft.Node("z", r"$Z_j$", 0.7, 1));
pgm.add_node(daft.Node("x", r"$X_j$", 1.3, 1, observed=True));
pgm.add_node(daft.Node("mu", r"$\mu$", 0.7, 0.3));
pgm.add_node(daft.Node("sigma", r"$\Sigma$", 1.3, 0.3));
# edges
pgm.add_edge("pi", "z", head_length=0.08);
pgm.add_edge("z", "x", head_length=0.08);
pgm.add_edge("mu", "x", head_length=0.08);
pgm.add_edge("sigma", "x", head_length=0.08);
pgm.add_plate(daft.Plate([0.4,0.8,1.3,0.5], label=r"$\qquad\qquad\qquad\;\; N$",
shift=-0.1))
return pgm;
%%capture
pgm_gmm("images/pgm/pgm-gmm.png")
"""
Explanation: Example: Mixture Models
Instead, introduce a latent cluster label $z_j \in [K]$ for each datapoint $x_j$,
$$
\begin{align}
z_j &\sim \mathrm{Cat}(\pi_1, \dots, \pi_K)
& \forall\, j=1,\dots,N \
x_j \mid z_j &\sim \mathcal{N}(\mu_{z_j}, \Sigma_{z_j})
& \forall\, j=1,\dots,N \
\end{align}
$$
This allows us to explain a complicated density as a mixture of simpler densities:
$$
P(x | \mu, \Sigma) = \sum_{k=1}^K \pi_k \mathcal{N}(x | \mu_k, \Sigma_k)
$$
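A minimal sampling sketch of this generative process (illustrative only; it reuses the np and plt imports above, and the particular pi, mus, Sigmas values are made up):
pi = np.array([0.5, 0.3, 0.2])
mus = np.array([[0., 0.], [5., 5.], [-5., 5.]])
Sigmas = [np.eye(2), 0.5 * np.eye(2), 2.0 * np.eye(2)]
N = 500
z = np.random.choice(len(pi), size=N, p=pi)                                   # z_j ~ Cat(pi)
x = np.array([np.random.multivariate_normal(mus[k], Sigmas[k]) for k in z])   # x_j | z_j ~ N(mu_{z_j}, Sigma_{z_j})
plt.scatter(x[:, 0], x[:, 1], c=z, s=10);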
Example: Mixture Models
End of explanation
"""
@pgm_render
def pgm_hmm():
pgm = daft.PGM([7, 7], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("Y1", r"$Y_1$", 1, 3.5))
pgm.add_node(daft.Node("Y2", r"$Y_2$", 2, 3.5))
pgm.add_node(daft.Node("Y3", r"$\dots$", 3, 3.5, plot_params={'ec':'none'}))
pgm.add_node(daft.Node("Y4", r"$Y_N$", 4, 3.5))
pgm.add_node(daft.Node("x1", r"$X_1$", 1, 2.5, observed=True))
pgm.add_node(daft.Node("x2", r"$X_2$", 2, 2.5, observed=True))
pgm.add_node(daft.Node("x3", r"$\dots$", 3, 2.5, plot_params={'ec':'none'}))
pgm.add_node(daft.Node("x4", r"$X_N$", 4, 2.5, observed=True))
# Add in the edges.
pgm.add_edge("Y1", "Y2", head_length=0.08)
pgm.add_edge("Y2", "Y3", head_length=0.08)
pgm.add_edge("Y3", "Y4", head_length=0.08)
pgm.add_edge("Y1", "x1", head_length=0.08)
pgm.add_edge("Y2", "x2", head_length=0.08)
pgm.add_edge("Y4", "x4", head_length=0.08)
return pgm;
%%capture
pgm_hmm("images/pgm/hmm.png");
"""
Explanation: Example: Hidden Markov Models
Noisy observations $X_k$ generated from hidden Markov chain $Y_k$.
$$
P(\vec{X}, \vec{Y}) = P(Y_1) P(X_1 \mid Y_1) \prod_{k=2}^N \left(P(Y_k \mid Y_{k-1}) P(X_k \mid Y_k)\right)
$$
End of explanation
"""
@pgm_render
def pgm_unsupervised():
pgm = daft.PGM([6, 6], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("d1", r"$Z_1$", 2, 3.5))
pgm.add_node(daft.Node("di", r"$Z_2$", 3, 3.5))
pgm.add_node(daft.Node("dn", r"$Z_3$", 4, 3.5))
pgm.add_node(daft.Node("f1", r"$X_1$", 1, 2.50, observed=True))
pgm.add_node(daft.Node("fi-1", r"$X_2$", 2, 2.5, observed=True))
pgm.add_node(daft.Node("fi", r"$X_3$", 3, 2.5, observed=True))
pgm.add_node(daft.Node("fi+1", r"$X_4$", 4, 2.5, observed=True))
pgm.add_node(daft.Node("fm", r"$X_N$", 5, 2.5, observed=True))
# Add in the edges.
pgm.add_edge("d1", "f1", head_length=0.08)
pgm.add_edge("d1", "fi-1", head_length=0.08)
pgm.add_edge("d1", "fi", head_length=0.08)
pgm.add_edge("d1", "fi+1", head_length=0.08)
pgm.add_edge("d1", "fm", head_length=0.08)
pgm.add_edge("di", "f1", head_length=0.08)
pgm.add_edge("di", "fi-1", head_length=0.08)
pgm.add_edge("di", "fi", head_length=0.08)
pgm.add_edge("di", "fi+1", head_length=0.08)
pgm.add_edge("di", "fm", head_length=0.08)
pgm.add_edge("dn", "f1", head_length=0.08)
pgm.add_edge("dn", "fi-1", head_length=0.08)
pgm.add_edge("dn", "fi", head_length=0.08)
pgm.add_edge("dn", "fi+1", head_length=0.08)
pgm.add_edge("dn", "fm", head_length=0.08)
return pgm
%%capture
pgm_unsupervised("images/pgm/unsupervised.png");
"""
Explanation: Example: Unsupervised Learning
Latent variables are fundamental to unsupervised and deep learning.
- Serve as a bottleneck
- Compute a compressed representation of data
End of explanation
"""
@pgm_render
def pgm_question1():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("c", "a", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm;
%%capture
pgm_question1("images/pgm/question1.png")
"""
Explanation: Other Latent Variable Models
Many other models in machine learning involve latent variables:
Neural Networks / Multilayer Perceptrons
Restricted Boltzmann Machines
Deep Belief Networks
Probabilistic PCA
Latent Variable Models: Complexity
Latent variable models exhibit emergent complexity.
- Although each conditional distribution is simple,
- The joint distribution is capable of modeling complex interactions.
However, latent variables make learning difficult.
- Inference is challenging in models with latent variables.
- They can introduce new dependencies between observed variables.
Break time
<img src="images/boxing_cat.gif"/>
Bayesian Networks
Part II: Inference, Learning, and d-Separation
Uses material from [Koller & Friedman 2009] Chapter 3, [MLAPP] Chapter 10, and [PRML] §8.2.1
Bayesian Networks: Terminology
Typically, our models will have
- Observed variables $X$
- Hidden variables $Z$
- Parameters $\theta$
Occasionally, we will distinguish between inference and learning.
Bayesian Networks: Inference
Inference: Estimate hidden variables $Z$ from observed variables $X$.
$$
P(Z | X,\theta) = \frac{P(X,Z | \theta)}{P(X|\theta)}
$$
Denominator $P(X|\theta)$ is sometimes called the probability of the evidence.
Occasionally we care only about a subset of the hidden variables, and marginalize out the rest.
Bayesian Networks: Learning
Learning: Estimate parameters $\theta$ from observed data $X$.
$$
P(\theta \mid X) = \sum_{z \in Z} P(\theta, z \mid X) = \sum_{z \in Z} P(\theta \mid z, X) P(z \mid X)
$$
To Bayesians, parameters are hidden variables, so inference and learning are equivalent.
Bayesian Networks: Probability Queries
In general, it is useful to compute $P(A|B)$ for arbitrary collections $A$ and $B$ of variables.
- Both inference and learning take this form.
To accomplish this, we must understand the independence structure of any given graphical model.
Review: Local Independencies
Every Bayesian Network $\G$ encodes a set $\I_\ell(\G)$ of local independence assumptions:
For each variable $X_k$, we have $(X_k \perp \NonDesc_\G(X_k) \mid \Parents_\G(X_k))$
Every node $X_k$ is conditionally independent of its nondescendants given its parents.
For arbitrary sets of variables, when does $(A \perp B \mid C)$ hold?
Review: I-Maps
If $P$ satisfies the independence assertions made by $\G$, we say that
- $\G$ is an I-Map for $P$
- or that $P$ satisfies $\G$.
Any distribution satisfying $\G$ shares common structure.
- We will exploit this structure in our algorithms
- This is what makes graphical models so powerful!
Review: Factorization Theorem
Last time, we proved that for any $P$ satisfying $\G$,
$$
P(X_1, \dots, X_N) = \prod_{k=1}^N P(X_k \mid \Parents_\G(X_k))
$$
If we understand independence structure, we can factorize arbitrary conditional distributions:
$$
P(A_1, \dots, A_n \mid B_1, \dots, B_m) = \;?
$$
Question 1: Is $(A \perp B)$?
End of explanation
"""
@pgm_render
def pgm_question2():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5,
observed=True))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("c", "a", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm
%%capture
pgm_question2("images/pgm/question2.png")
"""
Explanation: Answer 1: No!
No! $A$ and $B$ are not marginally independent.
- Note $C$ is not shaded, so we don't observe it.
In general,
$$
P(A,B) = \sum_{c \in C} P(A,B,c) = \sum_{c \in C} P(A|c)P(B|c)P(c) \neq P(A)P(B)
$$
Question 2: Is $(A \perp B \mid C)$?
End of explanation
"""
@pgm_render
def pgm_question3():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm
%%capture
pgm_question3("images/pgm/question3.png")
"""
Explanation: Answer 2: Yes!
Yes! $(A \perp B | C)$ follows from the local independence properties of Bayesian networks.
Every variable is conditionally independent of its nondescendants given its parents.
Observing $C$ blocks the path of influence from $A$ to $B$. Or, using factorization theorem:
$$
\begin{align}
P(A,B|C) & = \frac{P(A,B,C)}{P(C)} \
& = \frac{P(C)P(A|C)P(B|C)}{P(C)} \
& = P(A|C)P(B|C)
\end{align}
$$
Question 3: Is $(A \perp B)$?
End of explanation
"""
@pgm_render
def pgm_question4():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5, observed=True))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm
%%capture
pgm_question4("images/pgm/question4.png")
"""
Explanation: Answer 3: No!
Again, $C$ is not given, so $A$ and $B$ are dependent.
Question 4: Is $(A \perp B \mid C)$?
End of explanation
"""
@pgm_render
def pgm_question5():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("b", "c", head_length=0.08)
return pgm
%%capture
pgm_question5("images/pgm/question5.png")
"""
Explanation: Answer 4: Yes!
Again, observing $C$ blocks influence from $A$ to $B$.
Every variable is conditionally independent of its nondescendants given its parents.
Question 5: Is $(A \perp B)$?
End of explanation
"""
@pgm_render
def pgm_question6():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5, observed=True))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("b", "c", head_length=0.08)
return pgm
%%capture
pgm_question6("images/pgm/question6.png")
"""
Explanation: Answer 5: Yes!
Using the factorization rule,
$$
P(A,B,C) = P(A)P(B)P(C\mid A,B)
$$
Therefore, marginalizing out $C$,
$$
\begin{align}
P(A,B) & = \sum_{c \in C} P(A,B,c) \
& = \sum_{c \in C} P(A)P(B) P(c \mid A,B) \
& = P(A)P(B) \sum_{c \in C} P(c \mid A,B) = P(A)P(B)
\end{align}
$$
Question 6: Is $(A \perp B \mid C)$?
End of explanation
"""
@pgm_render
def pgm_bfg_1():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("G", r"$G$", 2, 3.5))
pgm.add_node(daft.Node("B", r"$B$", 1.3, 2.5))
pgm.add_node(daft.Node("F", r"$F$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("B", "G", head_length=0.08)
pgm.add_edge("F", "G", head_length=0.08)
return pgm;
%%capture
pgm_bfg_1("images/pgm/bfg-1.png")
"""
Explanation: Answer: No!
$A$ can influence $B$ via $C$.
$$
P(A,B | C) = \frac{P(A,B,C)}{P(C)} = \frac{P(A)P(B)P(C|A,B)}{P(C)}
$$
This does not factorize in general to $P(A|C)P(B|C)$.
Example: Battery, Fuel, and Gauge
Consider three binary random variables
- Battery $B$ is either charged $(B=1)$ or dead, $(B=0)$
- Fuel tank $F$ is either full $(F=1)$ or empty, $(F=0)$
- Fuel gauge $G$ either indicates full $(G=1)$ or empty, $(G=0)$
Assume $(B \perp F)$ with priors
- $P(B = 1) = 0.9$
- $P(F = 1) = 0.9$
Example: Battery, Fuel, and Gauge
Given the state of the fuel tank and the battery, the fuel gauge reads full with probabilities:
- $p(G = 1 \mid B = 1, F = 1) = 0.8$
- $p(G = 1 \mid B = 1, F = 0) = 0.2$
- $p(G = 1 \mid B = 0, F = 1) = 0.2$
- $p(G = 1 \mid B = 0, F = 0) = 0.1$
Example: Battery, Fuel, and Gauge
Without any observations, the probability of an empty fuel tank is
$$
P(F=0) = 1 - P(F = 1) = 0.1
$$
End of explanation
"""
@pgm_render
def pgm_bfg_2():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("G", r"$G$", 2, 3.5, offset=(0, 20), observed=True))
pgm.add_node(daft.Node("B", r"$B$", 1.3, 2.5, offset=(0, 20)))
pgm.add_node(daft.Node("F", r"$F$", 2.7, 2.5, offset=(0, 20)))
# Add in the edges.
pgm.add_edge("B", "G", head_length=0.08)
pgm.add_edge("F", "G", head_length=0.08)
return pgm;
%%capture
pgm_bfg_2("images/pgm/bfg-2.png");
"""
Explanation: Example: Empty Gauge
Now, suppose the gauge reads $G=0$. We have
$$
P(G=0) = \sum \limits_{B \in {0, 1}} \sum \limits_{F \in {0, 1}}
P(G = 0 \mid B, F) P(B) P(F) = 0.315
$$
Verify this!
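The arithmetic can be checked with a few lines of plain Python (a quick sketch; the dictionaries simply encode the tables above, and it also checks the conditional probabilities on the next two slides):
pB = {1: 0.9, 0: 0.1}                                          # P(B)
pF = {1: 0.9, 0: 0.1}                                          # P(F)
pG1 = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.2, (0, 0): 0.1}     # P(G=1 | B, F)
pG0 = sum((1 - pG1[b, f]) * pB[b] * pF[f] for b in (0, 1) for f in (0, 1))
print(pG0)                                                     # 0.315
pG0_F0 = sum((1 - pG1[b, 0]) * pB[b] for b in (0, 1))
print(pG0_F0)                                                  # 0.81
print(pG0_F0 * pF[0] / pG0)                                    # P(F=0 | G=0) ~ 0.257
den = sum((1 - pG1[0, f]) * pF[f] for f in (0, 1))
print((1 - pG1[0, 0]) * pF[0] / den)                           # P(F=0 | G=0, B=0) ~ 0.111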
Example: Empty Gauge
End of explanation
"""
@pgm_render
def pgm_bfg_3():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("G", r"$G$", 2, 3.5, offset=(0, 20), observed=True))
pgm.add_node(daft.Node("B", r"$B$", 1.3, 2.5, offset=(0, 20), observed=True))
pgm.add_node(daft.Node("F", r"$F$", 2.7, 2.5, offset=(0, 20)))
# Add in the edges.
pgm.add_edge("B", "G", head_length=0.08)
pgm.add_edge("F", "G", head_length=0.08)
return pgm;
%%capture
pgm_bfg_3("images/pgm/bfg-3.png")
"""
Explanation: Example: Empty Gauge
Now, we also have
$$
p(G = 0 \mid F = 0)
= \sum \limits_{B \in {0, 1}} p(G = 0 \mid B, F = 0) p(B) = 0.81
$$
Applying Bayes' Rule,
$$
\begin{align}
p(F = 0 \mid G = 0)
&= \frac{p(G = 0 \mid F = 0) p(F = 0)}{p(G = 0)} \
&\approx 0.257 > p(F = 0) = 0.10
\end{align}
$$
Observing an empty gauge makes it more likely that the tank is empty!
Example: Empty Gauge, Dead Battery
Now, suppose we also observe a dead battery $B =0$. Then,
$$
\begin{align}
p(F = 0 \mid G = 0, B = 0)
&= \frac{p(G = 0 \mid B = 0, F = 0) p(F = 0)}{\sum_{F \in {0, 1}} p(G = 0 \mid B = 0, F) p(F)} \
&\approx 0.111
\end{align}
$$
Example: Empty Gauge, Dead Battery
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/ml_fairness_explainability/explainable_ai/labs/xai_image_vertex.ipynb
|
apache-2.0
|
# Install needed deps
!pip install opencv-python
"""
Explanation: AI Explanations: Deploying an Explainable Image Model with Vertex AI
Overview
This lab shows how to train a classification model on image data and deploy it to Vertex AI to serve predictions with explanations (feature attributions). In this lab you will:
* Explore the dataset
* Build and train a custom image classification model with Vertex AI
* Deploy the model to an endpoint
* Serve predictions with explanations
* Visualize feature attributions from Integrated Gradients
End of explanation
"""
import base64
import os
import random
from datetime import datetime
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from google.cloud import aiplatform
from matplotlib import pyplot as plt
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
GCS_PATTERN = "gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec"
DATA_PATH = f"gs://{BUCKET}/flowers/data"
OUTDIR = f"gs://{BUCKET}/flowers/model_{TIMESTAMP}"
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["DATA_PATH"] = DATA_PATH
os.environ["OUTDIR"] = OUTDIR
os.environ["TIMESTAMP"] = TIMESTAMP
print(f"Project: {PROJECT}")
"""
Explanation: Restart Kernel
Setup
Import libraries
Import the libraries for this tutorial.
End of explanation
"""
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist.
End of explanation
"""
TRAINING_DATA_PATH = DATA_PATH + "/training"
EVAL_DATA_PATH = DATA_PATH + "/validation"
VALIDATION_SPLIT = 0.2
# Split data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
# Copy training files to GCS
for file in training_filenames:
!gsutil -m cp $file $TRAINING_DATA_PATH/
# Copy eval files to GCS
for file in validation_filenames:
!gsutil -m cp $file $EVAL_DATA_PATH/
"""
Explanation: Explore the Dataset
The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets. This section shows how to shuffle, split, and copy the files to your GCS bucket.
Load, split, and copy the dataset to your GCS bucket
End of explanation
"""
!gsutil ls -l $TRAINING_DATA_PATH
!gsutil ls -l $EVAL_DATA_PATH
"""
Explanation: Run the following commands. You should see a number of .tfrec files in your GCS bucket at both gs://{BUCKET}/flowers/data/training and gs://{BUCKET}/flowers/data/validation
End of explanation
"""
IMAGE_SIZE = [192, 192]
BATCH_SIZE = 32
# Do not change, maps to the labels in the data
CLASSES = [
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips",
]
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature(
[], tf.string
), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example["image"], channels=3)
image = (
tf.cast(image, tf.float32) / 255.0
) # convert image to floats in [0, 1] range
image = tf.reshape(image, [*IMAGE_SIZE, 3])
one_hot_class = tf.sparse.to_dense(example["one_hot_class"])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
# Load tfrecords into tf.data.Dataset
def load_dataset(gcs_pattern):
    filenames = tf.io.gfile.glob(gcs_pattern + "/*")
ds = tf.data.TFRecordDataset(filenames).map(read_tfrecord)
return ds
# Converts N examples in dataset to numpy arrays
def dataset_to_numpy(dataset, N):
numpy_images = []
numpy_labels = []
for images, labels in dataset.take(N):
numpy_images.append(images.numpy())
numpy_labels.append(labels.numpy())
return numpy_images, numpy_labels
def display_one_image(image, title, subplot):
plt.subplot(subplot)
plt.axis("off")
plt.imshow(image)
plt.title(title, fontsize=16)
return subplot + 1
def display_9_images_from_dataset(dataset):
subplot = 331
plt.figure(figsize=(13, 13))
images, labels = dataset_to_numpy(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_image(image, title, subplot)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
# Display 9 examples from the dataset
ds = load_dataset(gcs_pattern=TRAINING_DATA_PATH)
display_9_images_from_dataset(ds)
"""
Explanation: Create ingest functions and visualize some of the examples
Define and execute helper functions to plot the images and corresponding labels.
End of explanation
"""
%%bash
mkdir -p flowers/trainer
touch flowers/trainer/__init__.py
"""
Explanation: Build training pipeline
In this section you will build an application with keras to train an image classification model on Vertex AI Custom Training.
Create a directory for the training application and an __init__.py file (this is required for a Python package, but the file can be empty).
End of explanation
"""
%%writefile flowers/trainer/train.py
import datetime
import fire
import os
import tensorflow as tf
import tensorflow_hub as hub
IMAGE_SIZE = [192, 192]
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature(
[], tf.string
), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example["image"], channels=3)
image = (
tf.cast(image, tf.float32) / 255.0
) # convert image to floats in [0, 1] range
image = tf.reshape(
image, [*IMAGE_SIZE, 3]
)
one_hot_class = tf.sparse.to_dense(example["one_hot_class"])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
def load_dataset(gcs_pattern, batch_size=32, training=True):
    filenames = tf.io.gfile.glob(gcs_pattern)
ds = tf.data.TFRecordDataset(filenames).map(
read_tfrecord).batch(batch_size)
if training:
return ds.repeat()
else:
return ds
def build_model():
# MobileNet model for feature extraction
mobilenet_v2 = 'https://tfhub.dev/google/imagenet/'\
'mobilenet_v2_100_192/feature_vector/5'
feature_extractor_layer = hub.KerasLayer(
mobilenet_v2,
input_shape=[*IMAGE_SIZE, 3],
trainable=False
)
# Instantiate model
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(5, activation="softmax")
])
model.compile(optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"])
return model
def train_and_evaluate(train_data_path,
eval_data_path,
output_dir,
batch_size,
num_epochs,
train_examples):
model = build_model()
train_ds = load_dataset(gcs_pattern=train_data_path,
batch_size=batch_size)
eval_ds = load_dataset(gcs_pattern=eval_data_path,
training=False)
num_batches = batch_size * num_epochs
steps_per_epoch = train_examples // num_batches
history = model.fit(
train_ds,
validation_data=eval_ds,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
)
tf.saved_model.save(
obj=model, export_dir=output_dir
) # with default serving function
print("Exported trained model to {}".format(output_dir))
if __name__ == "__main__":
fire.Fire(train_and_evaluate)
"""
Explanation: Create training application in train.py
This code contains the training logic. Here you build an application that ingests data from GCS and trains an image classification model using MobileNet as a feature extractor, sending its output feature vector through a tf.keras.layers.Dense layer with 5 units and softmax activation (because there are 5 possible labels). The application also uses the fire library, which allows the arguments of train_and_evaluate to be passed via the command line.
End of explanation
"""
%%bash
OUTDIR_LOCAL=local_test_training
rm -rf ${OUTDIR_LOCAL}
export PYTHONPATH=${PYTHONPATH}:${PWD}/flowers
python3 -m trainer.train \
--train_data_path=gs://${BUCKET}/flowers/data/training/*.tfrec \
--eval_data_path=gs://${BUCKET}/flowers/data/validation/*.tfrec \
--output_dir=${OUTDIR_LOCAL} \
--batch_size=1 \
--num_epochs=1 \
--train_examples=10
"""
Explanation: Test training application locally
It's always a good idea to test out a training application locally (with only a few training steps) to make sure the code runs as expected.
End of explanation
"""
%%writefile flowers/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='flowers_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
install_requires=['fire==0.4.0'],
description='Flowers image classifier training application.'
)
%%bash
cd flowers
python ./setup.py sdist --formats=gztar
cd ..
"""
Explanation: Package code as source distribution
Now that you have validated your model training code, you need to package it as a source distribution in order to submit a custom training job to Vertex AI.
End of explanation
"""
%%bash
gsutil cp flowers/dist/flowers_trainer-0.1.tar.gz gs://${BUCKET}/flowers/
"""
Explanation: Store the package in GCS
End of explanation
"""
%%bash
JOB_NAME=flowers_${TIMESTAMP}
PYTHON_PACKAGE_URI=gs://${BUCKET}/flowers/flowers_trainer-0.1.tar.gz
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.train
echo > ./config.yaml \
"workerPoolSpecs:
machineSpec:
machineType: n1-standard-8
replicaCount: 1
pythonPackageSpec:
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris: $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
args:
- --train_data_path=gs://${BUCKET}/flowers/data/training/*.tfrec
- --eval_data_path=gs://${BUCKET}/flowers/data/validation/*.tfrec
- --output_dir=$OUTDIR
- --num_epochs=15
- --train_examples=15000
- --batch_size=32
"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--config=config.yaml
"""
Explanation: To submit the job to the cloud, use gcloud ai custom-jobs create and specify some additional parameters for the Vertex AI Training Service:
- display-name: A unique identifier for the cloud job. We usually append the system time to ensure uniqueness.
- region: Cloud region to train in. See here for supported Vertex AI Training Service regions.
You might have seen gcloud ai custom-jobs create executed with the worker pool spec and pass-through Python arguments specified directly in the command call. Here we use a YAML config file instead, which makes it easier to transition to hyperparameter tuning later.
Through the args: entry we add the pass-through arguments for the trainer.train module (train.py).
End of explanation
"""
local_model = tf.keras.models.load_model(OUTDIR)
local_model.summary()
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=IMAGE_SIZE)  # resize to the model's input shape
    rescale = resized / 255.0  # rescale pixel values, as described in the explanation
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
# the function that sends data through the model itself and returns
# the output probabilities
m_call = tf.function(local_model.call).get_concrete_function(
[
tf.TensorSpec(
shape=[None, 192, 192, 3], dtype=tf.float32, name=CONCRETE_INPUT
)
]
)
tf.saved_model.save(
local_model,
OUTDIR,
signatures={
"serving_default": serving_fn,
# Required for XAI
"xai_preprocess": preprocess_fn,
"xai_model": m_call,
},
)
"""
Explanation: NOTE Model training will take 5 minutes or so. You have to wait for training to finish before moving forward.
Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
- io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).
- image.convert_image_dtype - Changes integer pixel values to float 32.
- image.resize - Resizes the image to match the input shape for the model.
- resized / 255.0 - Rescales (normalization) the pixel data between 0 and 1.
At this point, the data can be passed to the model (m_call).
XAI Signatures
When the serving function is saved back with the underlying model (tf.saved_model.save), you specify the input layer of the serving function as the signature serving_default.
For XAI image models, you need to save two additional signatures from the serving function:
xai_preprocess: The preprocessing function in the serving function.
xai_model: The concrete function for calling the model.
Load the model into memory. NOTE This directory will not exist if your model has not finished training. Please wait for training to complete before moving forward
End of explanation
"""
loaded = tf.saved_model.load(OUTDIR)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(
loaded.signatures["serving_default"].structured_outputs.keys()
)[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
parameters = aiplatform.explain.ExplanationParameters(
{"integrated_gradients_attribution": {"step_count": 50}}
)
"""
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
"""
MODEL_NAME = "flower_classifier_v1"
INPUT_METADATA = {"input_tensor_name": CONCRETE_INPUT, "modality": "image"}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aiplatform.explain.ExplanationMetadata.InputMetadata(
INPUT_METADATA
)
output_metadata = aiplatform.explain.ExplanationMetadata.OutputMetadata(
OUTPUT_METADATA
)
metadata = aiplatform.explain.ExplanationMetadata(
inputs={"image": input_metadata}, outputs={"class": output_metadata}
)
"""
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
"""
aiplatform.init(project=PROJECT, staging_bucket=BUCKET)
model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=OUTDIR,
serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest",
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
"""
Explanation: NOTE This can take a few minutes to run.
End of explanation
"""
endpoint = model.deploy(
deployed_model_display_name=MODEL_NAME,
traffic_split={"0": 100},
machine_type="n1-standard-4",
min_replica_count=1,
max_replica_count=1,
)
"""
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
machine_type: The type of machine to use for serving predictions.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
NOTE This can take a few minutes.
End of explanation
"""
eval_ds = load_dataset(EVAL_DATA_PATH)
x_test, y_test = dataset_to_numpy(eval_ds, 5)
# Single image from eval dataset, denormalized to [0, 255] and cast to uint8
test_image = (x_test[0] * 255.0).astype(np.uint8)
# Write image out as jpg
cv2.imwrite("tmp.jpg", test_image)
"""
Explanation: Prepare the request content
You are going to send the flower image as compressed JPG image, instead of the raw uncompressed bytes:
cv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image.
Denormalize the image data from the [0,1) range back to [0,255). This is needed because load_dataset scales pixel values to [0,1), while JPG files expect pixel values in the range [0,255).
Convert the 32-bit floating point values to 8-bit unsigned integers.
tf.io.read_file: Read the compressed JPG images back into memory as raw bytes.
base64.b64encode: Encode the raw bytes into a base 64 encoded string.
End of explanation
"""
# Read image and base64 encode
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
instances_list = [{serving_input: {"b64": b64str}}]
response = endpoint.explain(instances=instances_list)  # prediction with explanation from the endpoint
print(response)
"""
Explanation: Read the JPG image and encode it with base64 to send to the model endpoint. Send the encoded image to the endpoint with endpoint.explain. Then you can parse the response for the prediction and explanation. Full documentation on endpoint.explain can be found here.
End of explanation
"""
import io
from io import BytesIO
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
CLASSES = [
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips",
]
# Parse prediction
for prediction in response.predictions:
label_index = np.argmax(prediction)
class_name = CLASSES[label_index]
confidence_score = prediction[label_index]
print(
"Predicted class: "
+ class_name
+ "\n"
+ "Confidence score: "
+ str(confidence_score)
)
image = base64.b64decode(b64str)
image = BytesIO(image)
img = mpimg.imread(image, format="JPG")
# Parse explanation
for explanation in response.explanations:
attributions = dict(explanation.attributions[0].feature_attributions)
xai_label_index = explanation.attributions[0].output_index[0]
xai_class_name = CLASSES[xai_label_index]
xai_b64str = attributions["image"]["b64_jpeg"]
xai_image = base64.b64decode(xai_b64str)
xai_image = io.BytesIO(xai_image)
xai_img = mpimg.imread(xai_image, format="JPG")
# Plot image, feature attribution mask, and overlayed image
fig = plt.figure(figsize=(13, 18))
fig.add_subplot(1, 3, 1)
plt.title("Input Image")
plt.imshow(img)
fig.add_subplot(1, 3, 2)
plt.title("Feature Attribution Mask")
plt.imshow(xai_img)
fig.add_subplot(1, 3, 3)
plt.title("Overlayed Attribution Mask")
plt.imshow(img)
plt.imshow(xai_img, alpha=0.6)
plt.show()
"""
Explanation: Visualize feature attributions from Integrated Gradients.
Query the response to get predictions and feature attributions. Use Matplotlib to visualize.
End of explanation
"""
|
ljubisap/ml-dojo-part-I
|
Do it yourself.ipynb
|
apache-2.0
|
# TODO Create one string, int, float and boolean variable and print them out
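# One possible solution sketch for the exercise above (names are arbitrary):
my_string = "hello"
my_int = 42
my_float = 3.14
my_bool = True
print(my_string)
print(my_int)
print(my_float)
print(my_bool)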
"""
Explanation: Do it yourself...
Python basics
End of explanation
"""
# TODO Check what above given functions will produce from following variables:
a = 'Some test string...'
b = 'WE ARE LEARNING...'
c = 123
# TODO Concatenate all variables a, b and c into one and print it out
# TODO String formatting, useful for logging and debugging
print("The %s who %s %s!" % ("Knights", "say", "Ni"))
string1 = ' embedded string '
string2 = ' This is one string {}'.format(string1)
string2
import random
# TODO print the biggest number from the three given bellow num1, num2 and num3
num1 = random.randint(1, 100)
num2 = random.randint(1, 100)
num3 = random.randint(1, 100)
# TODO if number1 is bigger, print "number1 is bigger"
# if number2 is bigger, print "number2 is bigger"
# if they are equal, print "Numbers are equal, you had 1% chance to get this..."
number1 = random.randint(1, 100)
number2 = random.randint(1, 100)
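# One possible solution sketch for the comparison exercise above:
if number1 > number2:
    print("number1 is bigger")
elif number2 > number1:
    print("number2 is bigger")
else:
    print("Numbers are equal, you had 1% chance to get this...")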
# TODO if you are German and n is greater than m
# print upper case lc variable otherwise
# print lower case up variable
n = random.randint(1, 100)
m = random.randint(1, 100)
german = True  # TODO set to True or False
lc = 'lower case string'
up = 'UPPER CASE STRING'
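# One possible solution sketch for the exercise above:
if german and n > m:
    print(lc.upper())
else:
    print(up.lower())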
# TODO remove the false Beatle from the list
beatles = ["john","paul","george","ringo","stuart"]
# TODO print out all the beatles with the loop
# TODO make John and Ringo switch their places in the list
# TODO As a reminder, the Beatles are John Lennon, Paul McCartney, George Harrison and Ringo Starr;
# attach the proper last name to every Beatle in the list (see the sketch below)
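# One possible solution sketch for the list exercises above:
beatles.remove("stuart")                              # drop the false Beatle
for beatle in beatles:                                # print every Beatle
    print(beatle)
beatles[0], beatles[3] = beatles[3], beatles[0]       # swap John and Ringo
last_names = {"john": "lennon", "paul": "mccartney",
              "george": "harrison", "ringo": "starr"}
beatles = [first + " " + last_names[first] for first in beatles]
print(beatles)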
# Now just execute this...
%run man.py
"""
Explanation: The most basic Python string functions are:
1. len() - returns the length of a string
usage example:
```python
len(string)
```
2. lower() - creates a lower-case copy of the string
usage example:
```python
string.lower()
```
3. upper() - creates an upper-case copy of the string
usage example:
```python
string.upper()
```
4. str() - creates a string / explicit string conversion
usage example:
```python
str(string)
```
End of explanation
"""
# TODO import proper Python libraries to examine and investigate Titanic data set
import matplotlib.pyplot as plt
# TODO load Titanic training set. File name is titanic_train.csv
import pandas as pd
df = pd.read_csv('titanic_train.csv')
"""
Explanation: Data - Titanic data set
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
* Survived: Outcome of survival (0 = No; 1 = Yes)
* Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
* Name: Name of passenger
* Sex: Sex of the passenger
* Age: Age of the passenger (Some entries contain NaN)
* SibSp: Number of siblings and spouses of the passenger aboard
* Parch: Number of parents and children of the passenger aboard
* Ticket: Ticket number of the passenger
* Fare: Fare paid by the passenger
* Cabin: Cabin number of the passenger (Some entries contain NaN)
* Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
More about this data set can be found on Kaggle website.
End of explanation
"""
# TODO Check does any column in Titanic data set contains NaN values
# TODO Plot Pclass and Fare data distribution
# TODO Count how many passengers are over 40 years old
# TODO Count how many men among the passengers are over 40 years old
# TODO Count how many of the men over 40 years old survived
# TODO Plot data distribution
# TODO If children are considered to be under the age of 16, how many children were on the Titanic
# TODO How many men named Edward were among the passengers
# TODO experiment yourself a bit ;-)
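# A possible sketch for some of the TODOs above, using the column names described
# in the dataset overview (Survived, Pclass, Fare, Sex, Age, Name):
print(df.isnull().sum())                              # NaN counts per column
df[['Pclass', 'Fare']].hist()                         # quick distributions
over_40 = df[df['Age'] > 40]
print(len(over_40))                                   # passengers over 40
men_over_40 = over_40[over_40['Sex'] == 'male']
print(len(men_over_40))                               # men over 40
print(men_over_40['Survived'].sum())                  # of those, how many survived
print((df['Age'] < 16).sum())                         # children under 16
print(len(df[(df['Sex'] == 'male') & (df['Name'].str.contains('Edward'))]))  # men named Edward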
"""
Explanation: Data investigation
End of explanation
"""
|
adityaka/misc_scripts
|
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_11/Final/.ipynb_checkpoints/Resampling-checkpoint.ipynb
|
bsd-3-clause
|
import numpy as np
import pandas as pd

# min: minutes
my_index = pd.date_range('9/1/2016', periods=9, freq='min')
my_index
"""
Explanation: Resampling
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html
For arguments to 'freq' parameter, please see Offset Aliases
create a date range to use as an index
End of explanation
"""
my_series = pd.Series(np.arange(9), index=my_index)
my_series
"""
Explanation: create a time series that includes a simple pattern
End of explanation
"""
my_series.resample('3min').sum()
"""
Explanation: Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin
End of explanation
"""
my_series.resample('3min', label='right').sum()
"""
Explanation: Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left
Notice the difference in the time indices; the sum in each bin is the same
End of explanation
"""
my_series.resample('3min', label='right', closed='right').sum()
"""
Explanation: Downsample the series into 3 minute bins as above, but close the right side of the bin interval
"count backwards" from end of time series
End of explanation
"""
#select first 5 rows
my_series.resample('30S').asfreq()[0:5]
"""
Explanation: Upsample the series into 30 second bins
asfreq()
End of explanation
"""
def custom_arithmetic(array_like):
temp = 3 * np.sum(array_like) + 5
return temp
"""
Explanation: define a custom function to use with resampling
End of explanation
"""
my_series.resample('3min').apply(custom_arithmetic)
"""
Explanation: apply custom resampling function
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/text_classification/solutions/LSTM_IMDB_Sentiment_Example.ipynb
|
apache-2.0
|
# keras.datasets.imdb is broken in TensorFlow 1.13 and 1.14 due to numpy 1.16.3
!pip install numpy==1.16.2
# All the imports!
import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing import sequence
from numpy import array
# Suppress deprecation warnings
import logging
logging.getLogger('tensorflow').disabled = True
# Fetch "IMDB Movie Review" data, constraining our reviews to
# the 10000 most commonly used words
vocab_size = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
# Map for readable classnames
class_names = ["Negative", "Positive"]
"""
Explanation: LSTM Recurrent Neural Network
Learning Objectives
Create map for converting IMDB dataset to readable reviews.
Create and build LSTM Recurrent Neural Network.
Visualise the Model and train the LSTM.
Evaluate model with test data and view results.
What is this?
This Jupyter Notebook contains Python code for building a LSTM Recurrent Neural Network that gives 87-88% accuracy on the IMDB Movie Review Sentiment Analysis Dataset.
More information is given on this blogpost.
Introduction
Long Short Term Memory networks – usually just called “LSTMs” – are a special kind of RNN, capable of learning long-term dependencies. They were introduced by Hochreiter & Schmidhuber (1997), and were refined and popularized by many people in following work. They work tremendously well on a large variety of problems, and are now widely used.
LSTMs are explicitly designed to avoid the long-term dependency problem. Remembering information for long periods of time is practically their default behavior, not something they struggle to learn!
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Setting up
When running this for the first time you may get a warning telling you to restart the Runtime. You can ignore this, but feel free to select "Kernel->Restart Kernel" from the overhead menu if you encounter problems.
End of explanation
"""
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: Note: Please ignore any incompatibility errors or warnings as it does not impact the notebook's functionality.
This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
End of explanation
"""
# Get the word index from the dataset
word_index = tf.keras.datasets.imdb.get_word_index()
# Ensure that "special" words are mapped into human readable terms
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNKNOWN>"] = 2
word_index["<UNUSED>"] = 3
# Perform reverse word lookup and make it callable
# TODO
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
"""
Explanation: Create map for converting IMDB dataset to readable reviews
Reviews in the IMDB dataset have been encoded as a sequence of integers. Luckily the dataset also
contains an index for converting the reviews back into human readable form.
End of explanation
"""
# Concatenate test and training datasets
allreviews = np.concatenate((x_train, x_test), axis=0)
# Review lengths across test and training whole datasets
print("Maximum review length: {}".format(len(max((allreviews), key=len))))
print("Minimum review length: {}".format(len(min((allreviews), key=len))))
result = [len(x) for x in allreviews]
print("Mean review length: {}".format(np.mean(result)))
# Print a review and its class as stored in the dataset. Replace the number
# to select a different review.
print("")
print("Machine readable Review")
print(" Review Text: " + str(x_train[60]))
print(" Review Sentiment: " + str(y_train[60]))
# Print a review and its class in human readable format. Replace the number
# to select a different review.
print("")
print("Human Readable Review")
print(" Review Text: " + decode_review(x_train[60]))
print(" Review Sentiment: " + class_names[y_train[60]])
"""
Explanation: Data Insight
Here we take a closer look at our data. How many words do our reviews contain?
And what do our reviews look like in machine and human readable form?
End of explanation
"""
# The length of reviews
review_length = 500
# Padding / truncated our reviews
x_train = sequence.pad_sequences(x_train, maxlen = review_length)
x_test = sequence.pad_sequences(x_test, maxlen = review_length)
# Check the size of our datasets. Review data for both test and training should
# contain 25000 reviews of 500 integers. Class data should contain 25000 values,
# one for each review. Class values are 0 or 1, indicating a negative
# or positive review.
print("Shape Training Review Data: " + str(x_train.shape))
print("Shape Training Class Data: " + str(y_train.shape))
print("Shape Test Review Data: " + str(x_test.shape))
print("Shape Test Class Data: " + str(y_test.shape))
# Note padding is added to start of review, not the end
print("")
print("Human Readable Review Text (post padding): " + decode_review(x_train[60]))
"""
Explanation: Pre-processing Data
We need to make sure that our reviews are of a uniform length. This is for the LSTM's parameters.
Some reviews will need to be truncated, while others need to be padded.
End of explanation
"""
# We begin by defining an empty stack. We'll use this for building our
# network, later by layer.
model = tf.keras.models.Sequential()
# The Embedding Layer provides a spatial mapping (or Word Embedding) of all the
# individual words in our training set. Words close to one another share context
# and or meaning. This spatial mapping is learning during the training process.
model.add(
tf.keras.layers.Embedding(
input_dim = vocab_size, # The size of our vocabulary
output_dim = 32, # Dimensions to which each words shall be mapped
input_length = review_length # Length of input sequences
)
)
# Dropout layers fight overfitting and forces the model to learn multiple
# representations of the same data by randomly disabling neurons in the
# learning phase.
# TODO
model.add(
tf.keras.layers.Dropout(
rate=0.25 # Randomly disable 25% of neurons
)
)
# We are using a fast version of LSTM which is optimised for GPUs. This layer
# looks at the sequence of words in the review, along with their word embeddings
# and uses both of these to determine the sentiment of a given review.
# TODO
model.add(
tf.keras.layers.LSTM(
units=32 # 32 LSTM units in this layer
)
)
# Add a second dropout layer with the same aim as the first.
# TODO
model.add(
tf.keras.layers.Dropout(
rate=0.25 # Randomly disable 25% of neurons
)
)
# All LSTM units are connected to a single node in the dense layer. A sigmoid
# activation function determines the output from this node - a value
# between 0 and 1. Closer to 0 indicates a negative review. Closer to 1
# indicates a positive review.
model.add(
tf.keras.layers.Dense(
units=1, # Single unit
activation='sigmoid' # Sigmoid activation function (output from 0 to 1)
)
)
# Compile the model
model.compile(
loss=tf.keras.losses.binary_crossentropy, # loss function
optimizer=tf.keras.optimizers.Adam(), # optimiser function
metrics=['accuracy']) # reporting metric
# Display a summary of the models structure
model.summary()
"""
Explanation: Create and build LSTM Recurrent Neural Network
End of explanation
"""
tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=False)
"""
Explanation: Visualise the Model
End of explanation
"""
# Train the LSTM on the training data
history = model.fit(
# Training data : features (review) and classes (positive or negative)
x_train, y_train,
# Number of samples to work through before updating the
# internal model parameters via back propagation. The
# higher the batch, the more memory you need.
batch_size=256,
# An epoch is an iteration over the entire training data.
epochs=3,
# The model will set apart this fraction of the training
# data, will not train on it, and will evaluate the loss
# and any model metrics on this data at the end of
# each epoch.
validation_split=0.2,
verbose=1
)
"""
Explanation: Train the LSTM
End of explanation
"""
# Get Model Predictions for test data
# TODO
from sklearn.metrics import classification_report
# The model has a single sigmoid output unit, so threshold at 0.5 to get class labels
predicted_classes = (model.predict(x_test) > 0.5).astype("int32").flatten()
print(classification_report(y_test, predicted_classes, target_names=class_names))
"""
Explanation: Evaluate model with test data and view results
End of explanation
"""
predicted_classes_reshaped = np.reshape(predicted_classes, 25000)
incorrect = np.nonzero(predicted_classes_reshaped!=y_test)[0]
# We select the first 20 incorrectly classified reviews
for j, incorrect in enumerate(incorrect[0:20]):
predicted = class_names[predicted_classes_reshaped[incorrect]]
actual = class_names[y_test[incorrect]]
human_readable_review = decode_review(x_test[incorrect])
print("Incorrectly classified Test Review ["+ str(j+1) +"]")
print("Test Review #" + str(incorrect) + ": Predicted ["+ predicted + "] Actual ["+ actual + "]")
print("Test Review Text: " + human_readable_review.replace("<PAD> ", ""))
print("")
"""
Explanation: View some incorrect predictions
Let's have a look at some of the incorrectly classified reviews. For readability we remove the padding.
End of explanation
"""
# Write your own review
review = "this was a terrible film with too much sex and violence i walked out halfway through"
#review = "this is the best film i have ever seen it is great and fantastic and i loved it"
#review = "this was an awful film that i will never see again"
# Encode review (replace word with integers)
tmp = []
for word in review.split(" "):
tmp.append(word_index[word])
# Ensure review is 500 words long (by padding or truncating)
tmp_padded = sequence.pad_sequences([tmp], maxlen=review_length)
# Run your processed review against the trained model
rawprediction = model.predict(array([tmp_padded][0]))[0][0]
prediction = int(round(rawprediction))
# Test the model and print the result
print("Review: " + review)
print("Raw Prediction: " + str(rawprediction))
print("Predicted Class: " + class_names[prediction])
"""
Explanation: Run your own text against the trained model
This is a fun way to test out the limits of the trained model. To avoid getting errors - type in lower case only and do not use punctuation!
You'll see the raw prediction from the model - basically a value between 0 and 1.
End of explanation
"""
|
musketeer191/job_analytics
|
.ipynb_checkpoints/Skill_Analysis-checkpoint.ipynb
|
gpl-3.0
|
import numpy as np
import pandas as pd
import sklearn.feature_extraction.text as text_manip
import matplotlib.pyplot as plt
import gc
from sklearn.decomposition import NMF, LatentDirichletAllocation
from time import time
from scipy.sparse import *
from my_util import *
"""
Explanation: Preparations
Import libraries:
End of explanation
"""
def skill_length_hist(skill_df):
min_n_word = np.min(skill_df['n_word'])
max_n_word = np.max(skill_df['n_word'])
n, bins, patches = plt.hist(skill_df['n_word'], bins= range(min_n_word, max_n_word+1), facecolor='blue',
log=True, align='left', rwidth=.5)
plt.xlabel('No. of words in skill (skill length)')
plt.ylabel('No. of skills (log scale)')
plt.title('Distribution of skill length')
plt.xticks(range(min_n_word, max_n_word+1))
plt.grid(True)
# plt.savefig(REPORT_DIR + 'skill_length.pdf')
plt.show()
plt.close()
# end
def freq(skills=None, docs=None, max_n_word=1):
t0 = time()
print('Counting occurrence of skills with length <= %d ...' %max_n_word)
count_vectorizer = text_manip.CountVectorizer(vocabulary=skills, ngram_range=(1, max_n_word))
doc_term_mat = count_vectorizer.fit_transform(docs)
print('Done after %.1fs' %(time() - t0))
# Sum over all documents to obtain total occurrence of each skill token
token_counts = np.asarray(doc_term_mat.sum(axis=0)).ravel()
df = pd.DataFrame({'skill': skills})
df['occurrence'] = token_counts
return df, doc_term_mat
def skillsPresentInJD(df):
occur_skills = df.query('occurrence > 0')
no_occur_skills = df.query('occurrence == 0')
return occur_skills, no_occur_skills
def n_match_skills(df):
occur_skills = df.query('occurrence > 0')
return occur_skills.shape[0]
def get_top_words(n_top_words, word_dist, feature_names):
norm_word_dist = np.divide(word_dist, sum(word_dist))
sorting_idx = word_dist.argsort()
top_words = [feature_names[i] for i in sorting_idx[:-n_top_words - 1:-1]]
probs = [norm_word_dist[i] for i in sorting_idx[:-n_top_words - 1:-1]]
return pd.DataFrame({'top_words': top_words, 'word_probs': probs})
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
REPORT_DIR = HOME_DIR + 'reports/skill_cluster/'
"""
Explanation: Define needed helpers:
End of explanation
"""
jd_df = pd.read_csv(DATA_DIR + 'jd_df.csv')
n_jd = jd_df.shape[0]
jd_df['clean_text'] = [' '.join(words_in_doc(d)) for d in jd_df['text']]
print('Some sample JD records:')
jd_df.head(3)
"""
Explanation: Load clean job descriptions (w/o html tags):
End of explanation
"""
jd_docs = jd_df['clean_text'].apply(str.lower)
"""
Explanation: Get the text of JD records for further analysis. We need to use lower cases for JDs so that we can match with lowercased skills later.
End of explanation
"""
linkedin_skill_df = pd.read_csv(DATA_DIR + 'LinkedInSkillsList_10.csv')
linkedin_skills = linkedin_skill_df['skill']
onet_skill_df = pd.read_csv(DATA_DIR + 'onet_skills_list_all.csv')
onet_skills = onet_skill_df['skill'].apply(str.lower)
"""
Explanation: Load skill lists obtained from LinkedIn & ONET:
End of explanation
"""
skills = linkedin_skills.append(onet_skills)
skills = list(set(skills))
pd.DataFrame({'skill': skills}).to_csv(DATA_DIR + 'all_skills.csv')
"""
Explanation: Join two skill lists & remove duplicated skills:
End of explanation
"""
skill_df = pd.DataFrame({'skill': skills})
skill_df['n_word'] = skill_df['skill'].apply(n_word)
quantile(skill_df['n_word'])
"""
Explanation: Average no. of words in skills
End of explanation
"""
skill_length_hist(skill_df)
"""
Explanation: Distribution of skill length (no. of words in skill):
End of explanation
"""
stats = pd.DataFrame({'# JDs': n_jd,
'# LinkedIn skills': len(linkedin_skills), '# ONET skills': len(onet_skills),
'Total no. of unique skills': len(skills),
'min skill length': min(skill_df['n_word']), 'max skill length': max(skill_df['n_word'])},
index=[0])
stats.to_csv(DATA_DIR + 'stats.csv')
stats
"""
Explanation: Based on the quartile summary and the distribution, we can try the following options:
+ including only 1-gram, 2-gram, 3-gram skills in our vocabulary (as 75% of skills have no more than 3 words)
+ including up to 7-gram skills in our vocabulary (as skills with more than 7 words only occuppy a small portion)
Statistics of data
End of explanation
"""
unigram_skills, doc_unigram_freq = freq(skills, jd_docs, max_n_word=1)
trigram_skills, doc_trigram_freq = freq(skills, jd_docs, max_n_word=3)
# sevengram_skills, doc_7gram_freq = freq(skills, jd_docs, max_n_word=7)
unigram_match, trigram_match = n_match_skills(df=unigram_skills), n_match_skills(df=trigram_skills)
# sevengram_match = n_match_skills(df=sevengram_skills)
pd.DataFrame({'# matching skills': [unigram_match, trigram_match]},
index=['Tokens_1', 'Tokens_3'])
"""
Explanation: 1. Skill occurrence
Each skill is considered as a token. We count occurrence of each skill in documents and return the counts in a term-document matrix.
Tokens_1 = {skills with length = 1} (uni-gram skills)
Tokens_3 = {skills with length <= 3}
Tokens_7 = {skills with length <= 7}
End of explanation
"""
t0 = time()
print('Counting occurrence of uni-gram skills...')
uni_gram_vectorizer = text_manip.CountVectorizer(vocabulary=skills)
doc_unigram_freq = uni_gram_vectorizer.fit_transform(jd_docs)
print('Done after %.1fs' %(time() - t0))
"""
Explanation: As the difference between Tokens_3 and Tokens_7 is negligible, we can just use the former for analysis. This means we only include 1-, 2- and 3-gram skills in our subsequent analysis.
2. Skill occurrence per document
Uni-gram skills
End of explanation
"""
## For each doc, "its no. of unique uni-grams = no. of non-zero counts" in its row in doc-term mat
def n_non_zero(r, sp_mat):
return len(sp_mat.getrow(r).nonzero()[1])
n_unigram = [n_non_zero(r, doc_unigram_freq) for r in range(n_jd)]
# sum(n_unigram) == len(doc_unigram_freq.nonzero()[0]) # sanity check
jd_df['n_unigram'] = n_unigram
print quantile(n_unigram)
# pull up some JDs to check
tmp = jd_df.query('n_unigram == 2')
print tmp.shape
tmp[['job_id', 'clean_text', 'n_unigram']].head(10)
"""
Explanation: No. of unique uni-grams per document
End of explanation
"""
fig = plt.figure(figsize=(10, 6))
ns, bins, patches = plt.hist(n_unigram, bins=np.unique(n_unigram), rwidth=.5)
plt.xlabel('No. of unique uni-gram skills in job description')
plt.ylabel('No. of job descriptions')
plt.title('Distribution of uni-gram skills in job descriptions')
plt.grid(True)
plt.show()
fig.savefig(REPORT_DIR + 'unigram_in_jd.pdf')
plt.close(fig)
print('Ratio of JDs with no uni-gram skills: %.3f' %round(n_unigram.count(0)/float(n_jd), 3))
"""
Explanation: Based on the quartile summary, $50$% of documents contain $\leq 14$ uni-gram skills and $75$% of them contain $\leq 22$ uni-gram skills (which looks reasonable).
End of explanation
"""
jd_df['length'] = [n_words_in_doc(d) for d in jd_df['text']]
# print sum(jd_df['length'] == 0)
jd_df['unigram_freq'] = doc_unigram_freq.sum(axis=1).A1
clean_jd_df = jd_df.query('length > 0')
del clean_jd_df['text']
clean_jd_df['unigram_ratio'] = np.divide(clean_jd_df['unigram_freq'], clean_jd_df['length']*1.)
clean_jd_df = clean_jd_df.sort_values(by='unigram_ratio', ascending=False)
print quantile(clean_jd_df['unigram_ratio'], dec=2)
# clean_jd_df.to_csv(DATA_DIR + 'jd_df.csv')
plt.hist(clean_jd_df['unigram_ratio'])
plt.xlabel('Ratio of unigram skill tokens in job description')
plt.ylabel('No. of job descriptions')
plt.title('Distribution of ratio of unigram skills')
plt.savefig(REPORT_DIR + 'unigram_ratio.pdf')
plt.show()
plt.close()
"""
Explanation: Ratio of unigram skill tokens in JDs
End of explanation
"""
jd_df.columns
sub_df = jd_df.query('n_unigram >= 5')
sub_df = sub_df.sort_values(by='length', ascending=False)
print sub_df.shape
# sub_df.head(10)
"""
Explanation: Decision:
Based on the distribution, in half of the JDs uni-gram skills account for $\leq 20$% of the words (and in 75% of them, for $\leq 26$%). Thus, we may first want to try the sub-dataset of JDs in which uni-gram skills occupy at least $20$% of the words.
End of explanation
"""
unigram_df, doc_unigram_freq = freq(skills, max_n_word=1, docs=sub_df['clean_text'])
doc_unigram_freq.shape
occur_unigrams = unigram_df.query('occurrence > 0')['skill']
unigram_df, doc_unigram_freq = freq(skills=occur_unigrams, docs=sub_df['clean_text'])
doc_unigram_freq.shape
n_unigram_skill = doc_unigram_freq.shape[1]
n_document = [n_non_zero(r, doc_unigram_freq.transpose()) for r in range(n_unigram_skill)]
tmp_df = pd.DataFrame({'skill' : occur_unigrams, 'n_doc' : n_document})
tmp_df = tmp_df.sort_values(by='n_doc', ascending=False)
sum(tmp_df['n_doc'] == 1)
quantile(n_document)
plt.hist(n_document, bins=np.unique(n_document), log=True)
# plt.xlabel('No. of documents containing the skill')
# plt.ylabel
plt.show()
"""
Explanation: First, we try using only uni-gram skills as features.
End of explanation
"""
n_ins = sub_df.shape[0]
train_idx, test_idx = mkPartition(n_instances=n_ins)
X_train, X_test = doc_unigram_freq[train_idx, :], doc_unigram_freq[test_idx, :]
"""
Explanation: Skill Clustering by NMF & LDA
Split into training and test sets
End of explanation
"""
# ks, n_top_words = range(2, 11), 10
ks, n_top_words = range(5, 25, 5), 10
RES_DIR = REPORT_DIR + 'r4/'
"""
Explanation: Set global arguments:
no. of topics: k in {5, 10, 15, 20}
no. of top words to be printed out in result
directory to save results
End of explanation
"""
rnmf = {k: NMF(n_components=k, random_state=0) for k in ks}
print( "Fitting NMF using %d uni-gram skills on %d job descriptions..." % (len(occur_unigrams), n_ins) ) # (random initialization)
print('No. of topics, Error, Running time')
rnmf_error = []
for k in ks:
t0 = time()
rnmf[k].fit(X_train)
elapsed = time() - t0
err = rnmf[k].reconstruction_err_
print('%d, %0.1f, %0.1fs' %(k, err, elapsed))
rnmf_error.append(err)
# end
"""
Explanation: A. Using NMF
Trainning NMF using random initialization
End of explanation
"""
from numpy import linalg as la
def cal_test_err(models):
test_error = []
print('No. of topics, Test error, Running time')
for k in ks:
t0 = time()
H = models[k].components_
W_test = models[k].fit_transform(X_test, H=H)
err = la.norm(X_test - np.matmul(W_test, H))
# sp_W_test = csr_matrix(W_test)
# sp_H = csc_matrix(H)
# print(sp_W_test.shape)
# print(sp_H.shape)
# err = la.norm(X_test - sp_W_test * sp_H)
test_error.append(err)
print('%d, %0.1f, %0.1fs' %(k, err, time() - t0))
return test_error
print('Calculating test errors of random NMF to choose best no. of topics...')
rnmf_test_error = cal_test_err(models=rnmf)
best_k = ks[np.argmin(rnmf_test_error)]
print('The best no. of topics is %d' %best_k)
rnmf_best = rnmf[best_k]
rnmf_word_dists = pd.DataFrame(rnmf_best.components_).apply(normalize, axis=1)
# rnmf_word_dists.to_csv(RES_DIR + 'rnmf_word_dists.csv', index=False)
"""
Explanation: Evaluating NMF on test data
First, we choose the best no. of topics $k^*$ for random NMF as the one that minimizes the error of predicting test data. For that, we compute the error for different $k$'s by the following function.
End of explanation
"""
# count_vectorizer = text_manip.CountVectorizer(vocabulary= occur_skills['skill'], ngram_range=(1, 3))
# count_vectorizer.fit_transform(jd_docs)
# nmf_features = count_vectorizer.get_feature_names()
# nmf_top_words = top_words_df(rnmf_best, n_top_words, nmf_features)
# nmf_top_words.to_csv(REPORT_DIR + 'nmf_top_words.csv', index=False)
# pd.DataFrame(nmf_features).to_csv(REPORT_DIR + 'nmf_features.csv')
"""
Explanation: We now validate NMF results in the following ways:
+ if learnt topics make sense
+ if topic it predicts for each JD in test set makes sense
Learnt topics
We manually label each topic based on its top 10 words.
End of explanation
"""
# H = rnmf_best.components_
# W_test = pd.DataFrame(rnmf_best.fit_transform(X_test, H=H))
# W_test.to_csv(REPORT_DIR + 'nmf_doc_topic_distr.csv', index=False)
"""
Explanation: Topic prediction on test JDs
End of explanation
"""
scores = []
lda = {k: LatentDirichletAllocation(n_topics=k, max_iter=5, learning_method='online', learning_offset=50.,
random_state=0) # verbose=1
for k in ks}
print("Fitting LDA using %d uni-gram skills on %d job descriptions..." % (len(occur_unigrams), n_ins))
print('No. of topics, Log-likelihood, Running time')
for k in ks:
t0 = time()
lda[k].fit(X_train)
s = lda[k].score(X_train)
print('%d, %0.1f, %0.1fs' %(k, s, time() - t0))
scores.append(s)
# end
"""
Explanation: B. Using LDA
Trainning
End of explanation
"""
perp = [lda[k].perplexity(X_test) for k in ks]
perp_df = pd.DataFrame({'No. of topics': ks, 'Perplexity': perp})
perp_df.to_csv(RES_DIR + 'perplexity.csv', index=False)
"""
Explanation: Perplexity of LDA on test set
End of explanation
"""
best_k = ks[np.argmin(perp)]
print('Best no. of topics: %d' %best_k)
"""
Explanation: Choose the best no. of topics as the one minimizing perplexity.
End of explanation
"""
# lda_best = lda[best_k]
# lda_word_dists = pd.DataFrame(lda_best.components_).apply(normalize, axis=1)
# lda_word_dists.to_csv(RES_DIR + 'lda_word_dists.csv', index=False)
# lda_features = count_vectorizer.get_feature_names()
# lda_topics = top_words_df(lda_best, n_top_words, lda_features)
# lda_topics.to_csv(REPORT_DIR + 'lda_topics.csv', index=False)
# pd.DataFrame(lda_features).to_csv(RES_DIR + 'lda_features.csv')
"""
Explanation: Save the best LDA model:
End of explanation
"""
# doc_topic_distr = lda_best.transform(X_test)
# pd.DataFrame(doc_topic_distr).to_csv(RES_DIR + 'lda_doc_topic_distr.csv', index=False)
"""
Explanation: Using the best LDA to perform topic prediction on test JDs
End of explanation
"""
# Put all model metrics on training & test datasets into 2 data frames
model_list = ['LDA', 'randomNMF']
train_metric = pd.DataFrame({'No. of topics': ks, 'LDA': np.divide(scores, 10**6), 'randomNMF': rnmf_error})
test_metric = pd.DataFrame({'No. of topics': ks, 'LDA': perp, 'randomNMF': rnmf_test_error, })
ks
"""
Explanation: C. Model Comparison
End of explanation
"""
fig = plt.figure(figsize=(10, 6))
for i, model in enumerate(model_list):
plt.subplot(2, 2, i+1)
plt.subplots_adjust(wspace=.5, hspace=.5)
# train metric
plt.title(model)
plt.plot(ks, train_metric[model], '--')
plt.xlabel('No. of topics')
if model == 'LDA':
plt.ylabel(r'Log likelihood ($\times 10^6$)')
else:
plt.ylabel(r'$\| X_{train} - W_{train} H \|_2$')
plt.grid(True)
# test metric
# plt.subplot(2, 2, i+3)
# plt.title(model)
# plt.plot(ks, test_metric[model], 'r')
# plt.xlabel('No. of topics')
# if model == 'LDA':
# plt.ylabel(r'Perplexity')
# else:
# plt.ylabel(r'$\| X_{test} - W_{test} H \|_2$')
# plt.grid(True)
# end
plt.show()
# fig.savefig(RES_DIR + 'new_lda_vs_nmf.pdf')
plt.close(fig)
"""
Explanation: Performance of models for different number of topics
End of explanation
"""
t0 = time()
print('Counting occurrence of multi-gram skills...')
multi_gram_vectorizer = text_manip.CountVectorizer(vocabulary=skills, ngram_range=(2, 3))
doc_multigram_freq = multi_gram_vectorizer.fit_transform(jd_docs)
print('Done after %.1fs' %(time() - t0))
"""
Explanation: Multi-gram skills
End of explanation
"""
n_multigram = [n_non_zero(r, doc_multigram_freq) for r in range(n_jd)]
multigram_df = pd.DataFrame({'jd_id': jd_df['job_id'], 'n_multigram': n_multigram})
quantile(n_multigram)
plt.hist(n_multigram, bins=np.unique(n_multigram))
plt.xlabel('No. of multi-grams in job description')
plt.ylabel('No. of job descriptions')
plt.title('Distribution of multi-grams in job descriptions')
plt.grid(True)
plt.show()
plt.savefig(REPORT_DIR + 'multigram_in_jd.pdf')
plt.close()
"""
Explanation: Distribution of multigrams in documents
End of explanation
"""
unigram_skills, bigram_skills = occur_skills.query('n_word == 1')['skill'], occur_skills.query('n_word == 2')['skill']
## for each unigram, find no. of bigrams containing it (aka super-bigrams)
def super_bigrams(unigram='business', bigrams=bigram_skills):
idx = [s.find(unigram) for s in bigrams]
df = pd.DataFrame({'bigram': bigrams, 'idx_of_given_unigram': idx})
return df.query('idx_of_given_unigram > -1') # those bigrams not containing the unigram give -1 indices
def n_super_bigrams(unigram='business', bigrams=bigram_skills):
idx = [s.find(unigram) for s in bigrams]
return len(bigrams) - idx.count(-1)
# super_bigrams(unigram='business')
n_super_bigrams = [n_super_bigrams(ug) for ug in unigram_skills]
overlap_df = pd.DataFrame({'unigram': unigram_skills, 'n_super_bigrams': n_super_bigrams})
quantile(n_super_bigrams)
plt.hist(n_super_bigrams)
plt.show()
plt.close()
overlapped_unigrams = overlap_df.query('n_super_bigrams > 0')
overlapped_unigrams = overlapped_unigrams.sort_values(by='n_super_bigrams', ascending=False)
# overlapped_unigrams.head(10)
n_overlap = overlapped_unigrams.shape[0]
n_unigram_skills = overlap_df.shape[0]
print n_overlap
print n_unigram_skills
n_overlap*1./n_unigram_skills
"""
Explanation: Overlapping between uni-gram & multi-gram skills
End of explanation
"""
unigram_df = unigram_df.sort_values(by='occurrence', ascending=False)
top10 = unigram_df[['skill', 'occurrence']].head(10)
# top10.reset_index(inplace=True, drop=True)
top10
"""
Explanation: 10 most common skills:
End of explanation
"""
bottom10 = unigram_df[['skill', 'occurrence']].tail(10)
bottom10.reset_index(inplace=True, drop=True)
print(bottom10)
"""
Explanation: 10 least common skills:
End of explanation
"""
quantile(occur_skills.query('n_word == 1')['occurrence'])
"""
Explanation: Frequency of uni-gram skills in JDs
End of explanation
"""
quantile(occur_skills.query('n_word == 2')['occurrence'])
"""
Explanation: Frequency of bi-gram skills in JDs
End of explanation
"""
quantile(occur_skills.query('n_word == 3')['occurrence'])
"""
Explanation: Frequency of tri-gram skills in JDs
End of explanation
"""
n, bins, patches = plt.hist(x=occur_skills['occurrence']/10**3, bins=50, facecolor='blue', alpha=0.75, log=True)
plt.title('Histogram of skill frequency')
plt.xlabel(r'Frequency ($\times 10^3$)') # in thousand
plt.ylabel('No. of skills (log scale)')
plt.ylim(1, 10**4)
plt.grid(True)
plt.savefig(REPORT_DIR + 'skill_occur.pdf')
plt.show()
plt.close()
# bi_gram_skills.sort_values(by='occurrence', inplace=True, ascending=False)
# print('10 most common bi-gram skills')
# print(bi_gram_skills.head(10))
"""
Explanation: Distribution of skill frequency
End of explanation
"""
occur_skills, no_occur_skills = skillsPresentInJD(df=trigram_skills)
occur_skills = occur_skills.sort_values(by='occurrence', ascending=False) # inplace=True,
"""
Explanation: Skills which actually occur in JDs from JobBanks
Filter out all skills that never occur in JDs to reduce size of doc-skill feature matrix.
End of explanation
"""
# occur_skills['skill'].head(3)
trigram_skills, doc_trigram_freq = freq(occur_skills['skill'], max_n_word=3)
print('No. of skills in the new doc-skill matrix after re-building: %d' %doc_trigram_freq.shape[1] )
trigram_skills.to_csv(REPORT_DIR + 'trigram_skills.csv')
pd.DataFrame(doc_trigram_freq.data).to_csv(REPORT_DIR + 'doc_trigram_freq.csv')
# unigram_skills.to_csv(REPORT_DIR + 'unigram_skills.csv')
# pd.DataFrame(doc_unigram_freq.data).to_csv(REPORT_DIR + 'doc_unigram_freq.csv')
# sevengram_skills.to_csv(REPORT_DIR + 'sevengram_skills.csv')
"""
Explanation: Re-build the doc-skill matrix where each skill occurs at least once.
End of explanation
"""
|
jtwhite79/pyemu
|
examples/Freyberg/.ipynb_checkpoints/verify_unc_results-checkpoint.ipynb
|
bsd-3-clause
|
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
"""
Explanation: verify pyEMU results with the Freyberg problem
End of explanation
"""
la = pyemu.Schur("freyberg.jcb",verbose=False)
la.drop_prior_information()
jco_ord = la.jco.get(la.pst.obs_names,la.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
la.pst.write(ord_base+".pst")
"""
Explanation: instantiate a pyemu Schur object and drop the prior information. Then reorder the Jacobian and save it as binary. This is needed because the PEST utilities require strict ordering between the control file and the Jacobian
End of explanation
"""
pv_names = []
predictions = ["travel_time", "sw_gw_0","sw_gw_1","sw_gw_2"]
for pred in predictions:
pv = jco_ord.extract(pred).T
pv_name = pred + ".vec"
pv.to_ascii(pv_name)
pv_names.append(pv_name)
"""
Explanation: extract and save the forecast sensitivity vectors
End of explanation
"""
prior_uncfile = "pest.unc"
la.parcov.to_uncfile(prior_uncfile,covmat_file=None)
"""
Explanation: save the prior parameter covariance matrix as an uncertainty file
End of explanation
"""
post_mat = "post.cov"
post_unc = "post.unc"
args = [ord_base + ".pst","1.0",prior_uncfile,
post_mat,post_unc,"1"]
pd7_in = "predunc7.in"
f = open(pd7_in,'w')
f.write('\n'.join(args)+'\n')
f.close()
out = "pd7.out"
pd7 = os.path.join("exe","i64predunc7.exe")
os.system(pd7 + " <" + pd7_in + " >"+out)
for line in open(out).readlines():
print line,
"""
Explanation: PREDUNC7
write a response file to feed stdin to predunc7
End of explanation
"""
post_pd7 = pyemu.Cov()
post_pd7.from_ascii(post_mat)
la_ord = pyemu.Schur(jco=ord_base+".jco",predictions=predictions)
post_pyemu = la_ord.posterior_parameter
#post_pyemu = post_pyemu.get(post_pd7.row_names)
"""
Explanation: load the posterior matrix written by predunc7
End of explanation
"""
delta = (post_pd7 - post_pyemu).x
(post_pd7 - post_pyemu).to_ascii("delta.cov")
print delta.sum()
print delta.max(),delta.min()
"""
Explanation: The cumulative difference between the two posterior matrices:
End of explanation
"""
args = [ord_base + ".pst", "1.0", prior_uncfile, None, "1"]
pd1_in = "predunc1.in"
pd1 = os.path.join("exe", "i64predunc1.exe")
pd1_results = {}
for pv_name in pv_names:
args[3] = pv_name
f = open(pd1_in, 'w')
f.write('\n'.join(args) + '\n')
f.close()
out = "predunc1" + pv_name + ".out"
os.system(pd1 + " <" + pd1_in + ">" + out)
f = open(out,'r')
for line in f:
if "pre-cal " in line.lower():
pre_cal = float(line.strip().split()[-2])
elif "post-cal " in line.lower():
post_cal = float(line.strip().split()[-2])
f.close()
pd1_results[pv_name.split('.')[0].lower()] = [pre_cal, post_cal]
"""
Explanation: PREDUNC1
write a response file to feed stdin. Then run predunc1 for each forecast
End of explanation
"""
pyemu_results = {}
for pname in la_ord.prior_prediction.keys():
pyemu_results[pname] = [np.sqrt(la_ord.prior_prediction[pname]),
np.sqrt(la_ord.posterior_prediction[pname])]
"""
Explanation: organize the pyemu results into a structure for comparison
End of explanation
"""
f = open("predunc1_textable.dat",'w')
for pname in pd1_results.keys():
print pname
f.write(pname+"&{0:6.5f}&{1:6.5}&{2:6.5f}&{3:6.5f}\\\n"\
.format(pd1_results[pname][0],pyemu_results[pname][0],
pd1_results[pname][1],pyemu_results[pname][1]))
print "prior",pname,pd1_results[pname][0],pyemu_results[pname][0]
print "post",pname,pd1_results[pname][1],pyemu_results[pname][1]
f.close()
"""
Explanation: compare the results:
End of explanation
"""
f = open("pred_list.dat",'w')
out_files = []
for pv in pv_names:
out_name = pv+".predvar1b.out"
out_files.append(out_name)
f.write(pv+" "+out_name+"\n")
f.close()
args = [ord_base+".pst","1.0","pest.unc","pred_list.dat"]
for i in range(36):
args.append(str(i))
args.append('')
args.append("n")
args.append("n")
args.append("y")
args.append("n")
args.append("n")
f = open("predvar1b.in", 'w')
f.write('\n'.join(args) + '\n')
f.close()
os.system("exe\\predvar1b.exe <predvar1b.in")
pv1b_results = {}
for out_file in out_files:
pred_name = out_file.split('.')[0]
f = open(out_file,'r')
    for _ in range(3):
f.readline()
arr = np.loadtxt(f)
pv1b_results[pred_name] = arr
"""
Explanation: PREDVAR1b
write the necessary input files, run predvar1b, and load its output for each forecast (a quick peek at the result arrays follows)
End of explanation
"""
omitted_parameters = [pname for pname in la.pst.parameter_data.parnme if pname.startswith("wf")]
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
omitted_parameters=omitted_parameters,
verbose=False)
df = la_ord_errvar.get_errvar_dataframe(np.arange(36))
df
"""
Explanation: now for pyemu
End of explanation
"""
fig = plt.figure(figsize=(6,6))
max_idx = 15
idx = np.arange(max_idx)
for ipred,pred in enumerate(predictions):
arr = pv1b_results[pred][:max_idx,:]
first = df[("first", pred)][:max_idx]
second = df[("second", pred)][:max_idx]
third = df[("third", pred)][:max_idx]
ax = plt.subplot(len(predictions),1,ipred+1)
#ax.plot(arr[:,1],color='b',dashes=(6,6),lw=4,alpha=0.5)
#ax.plot(first,color='b')
#ax.plot(arr[:,2],color='g',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(second,color='g')
#ax.plot(arr[:,3],color='r',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(third,color='r')
ax.scatter(idx,arr[:,1],marker='x',s=40,color='g',
label="PREDVAR1B - first term")
ax.scatter(idx,arr[:,2],marker='x',s=40,color='b',
label="PREDVAR1B - second term")
ax.scatter(idx,arr[:,3],marker='x',s=40,color='r',
label="PREVAR1B - third term")
ax.scatter(idx,first,marker='o',facecolor='none',
s=50,color='g',label='pyEMU - first term')
ax.scatter(idx,second,marker='o',facecolor='none',
s=50,color='b',label="pyEMU - second term")
ax.scatter(idx,third,marker='o',facecolor='none',
s=50,color='r',label="pyEMU - third term")
ax.set_ylabel("forecast variance")
ax.set_title("forecast: " + pred)
if ipred == len(predictions) -1:
ax.legend(loc="lower center",bbox_to_anchor=(0.5,-0.75),
scatterpoints=1,ncol=2)
ax.set_xlabel("singular values")
#break
plt.savefig("predvar1b_ver.eps")
"""
Explanation: generate some plots to verify
End of explanation
"""
cmd_args = [os.path.join("exe","i64identpar.exe"),ord_base,"5",
"null","null","ident.out","/s"]
cmd_line = ' '.join(cmd_args)+'\n'
print(cmd_line)
print(os.getcwd())
os.system(cmd_line)
identpar_df = pd.read_csv("ident.out",delim_whitespace=True)
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
verbose=False)
df = la_ord_errvar.get_identifiability_dataframe(5)
df
"""
Explanation: Identifiability: run IDENTPAR and compute the pyemu identifiability for comparison (a quick numeric check follows)
End of explanation
"""
fig = plt.figure()
ax = plt.subplot(111)
axt = plt.twinx()
ax.plot(identpar_df["identifiability"])
ax.plot(df["ident"])
ax.set_xlim(-10,600)
diff = identpar_df["identifiability"].values - df["ident"].values
#print(diff)
axt.plot(diff)
axt.set_ylim(-1,1)
ax.set_xlabel("parmaeter")
ax.set_ylabel("identifiability")
axt.set_ylabel("difference")
"""
Explanation: cheap plot to verify
End of explanation
"""
repo_name: ES-DOC/esdoc-jupyterhub
path: notebooks/mohc/cmip6/models/ukesm1-0-ll/atmos.ipynb
license: gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-LL
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
morganics/bayesianpy
|
examples/notebook/iris_gaussian_mixture_model.ipynb
|
apache-2.0
|
%matplotlib notebook
import pandas as pd
import sys
sys.path.append("../../../bayesianpy")
import bayesianpy
from bayesianpy.network import Builder as builder
import logging
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
# Using the latent variable to cluster data points. Based upon the Iris dataset which has 3 distinct clusters
# (not all of which are linearly separable). Using a joint probability distribution, first based upon the class
# variable 'iris_class' and subsequently the cluster variable as a tail variable. Custom query currently only supports
# a single discrete tail variable and multiple continuous head variables.
jd = bayesianpy.visual.JointDistribution()
def plot(head_variables, results):
fig = plt.figure(figsize=(10, 10))
n = len(head_variables)-1
total = n*(n+1)//2  # number of variable pairs (integer, so add_subplot gets integer grid dimensions)
k = 1
for i, hv in enumerate(head_variables):
for j in range(i + 1, len(head_variables)):
ax = fig.add_subplot(total//2, 2, k)
jd.plot_distribution_with_covariance(ax, iris,
(head_variables[i], head_variables[j]), results)
k+=1
plt.show()
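
# A standalone sketch (not part of the original example) of how an ellipse can be
# derived from a 2x2 covariance matrix via its eigendecomposition; the values in the
# commented usage example are made up for illustration and are not taken from the
# trained network.
def covariance_ellipse(mean, cov, n_std=2.0, **kwargs):
    # Eigenvalues give the variances along the principal axes of the Gaussian,
    # eigenvectors give the orientation of those axes.
    vals, vecs = np.linalg.eigh(np.asarray(cov))
    order = vals.argsort()[::-1]
    vals, vecs = vals[order], vecs[:, order]
    angle = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0]))
    width, height = 2.0 * n_std * np.sqrt(vals)
    return Ellipse(xy=mean, width=width, height=height, angle=angle, **kwargs)

# Usage example (made-up numbers):
# ax = plt.figure().add_subplot(111)
# ax.add_patch(covariance_ellipse([0, 0], [[1.0, 0.6], [0.6, 1.0]], alpha=0.3))
# ax.set_xlim(-3, 3); ax.set_ylim(-3, 3)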
"""
Explanation: Visualising clusters on the Iris dataset
It can sometimes be a bit difficult to understand what's going on without a good plot, so I wanted to start trying to visualise structures vs cluster definition on the Iris dataset. This will involve building the network structure and then querying the joint distribution of petal width, petal length, sepal width and sepal length given a discrete variable; the joint distribution will be a Gaussian mixture model.
The Python SDK is currently a bit limited for 'custom' queries, so currently only supports a Gaussian mixture query, with multiple continuous variables as the head variables, and a single discrete variable for the tail, e.g. P(petallength, sepalwidth, sepallength, petalwidth | irisclass). Bayes Server obviously supports a lot more.
First off, imports plus some code for plotting ellipses based upon a covariance matrix.
End of explanation
"""
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
bayesianpy.jni.attach(logger)
db_folder = bayesianpy.utils.get_path_to_parent_dir("")
iris = pd.read_csv(os.path.join(db_folder, "data/iris.csv"), index_col=False)
"""
Explanation: Next, just a bit of setup code to load the data and setup the Jpype instance.
End of explanation
"""
network = bayesianpy.network.create_network()
petal_length = builder.create_continuous_variable(network, "petal_length")
petal_width = builder.create_continuous_variable(network, "petal_width")
sepal_length = builder.create_continuous_variable(network, "sepal_length")
sepal_width = builder.create_continuous_variable(network, "sepal_width")
nodes = [petal_length, petal_width, sepal_length, sepal_width]
class_variable = builder.create_discrete_variable(network, iris, 'iris_class', iris['iris_class'].unique())
for i, node in enumerate(nodes):
builder.create_link(network, class_variable, node)
plt.figure()
layout = bayesianpy.visual.NetworkLayout(network)
graph = layout.build_graph()
pos = layout.fruchterman_reingold_layout(graph)
layout.visualise(graph, pos)
"""
Explanation: Naive Bayes
The next step is creating the model by hand. The Python SDK also supports multivariate nodes, but for clarity each node is individually stored here. This is not a fully connected model, as the continuous nodes are only connected through the cluster.
End of explanation
"""
with bayesianpy.data.DataSet(iris, db_folder, logger) as dataset:
model = bayesianpy.model.NetworkModel(network, logger)
model.train(dataset)
"""
Explanation: The network still needs training, so kick that off.
End of explanation
"""
with bayesianpy.data.DataSet(iris, db_folder, logger) as dataset:
head_variables = ['sepal_length','sepal_width','petal_length','petal_width']
query_type_class = bayesianpy.model.QueryConditionalJointProbability(
head_variables=head_variables,
tail_variables=['iris_class'])
engine = bayesianpy.model.InferenceEngine(network).create_engine()
# pass in an inference engine so that multiple queries can be performed, or evidence can be set.
query = bayesianpy.model.Query(network, engine, logger)
results_class = query.execute([query_type_class], aslist=False)
plot(head_variables, results_class)
"""
Explanation: Now it gets interesting, as we can query the priors that the network has set up. The conditional joint probability query returns covariance matrices in the same format as numpy.cov.
End of explanation
"""
network = bayesianpy.network.create_network()
petal_length = builder.create_continuous_variable(network, "petal_length")
petal_width = builder.create_continuous_variable(network, "petal_width")
sepal_length = builder.create_continuous_variable(network, "sepal_length")
sepal_width = builder.create_continuous_variable(network, "sepal_width")
nodes = [petal_length, petal_width, sepal_length, sepal_width]
class_variable = builder.create_discrete_variable(network, iris, 'iris_class', iris['iris_class'].unique())
for i, node in enumerate(nodes):
builder.create_link(network, class_variable, node)
for j in range(i+1, len(nodes)):
builder.create_link(network, node, nodes[j])
plt.figure()
layout = bayesianpy.visual.NetworkLayout(network)
graph = layout.build_graph()
pos = layout.fruchterman_reingold_layout(graph)
layout.visualise(graph, pos)
with bayesianpy.data.DataSet(iris, db_folder, logger) as dataset:
model = bayesianpy.model.NetworkModel(network, logger)
model.train(dataset)
head_variables = ['sepal_length','sepal_width','petal_length','petal_width']
query_type_class = bayesianpy.model.QueryConditionalJointProbability(
head_variables=head_variables,
tail_variables=['iris_class'])
engine = bayesianpy.model.InferenceEngine(network).create_engine()
# pass in an inference engine so that multiple queries can be performed, or evidence can be set.
query = bayesianpy.model.Query(network, engine, logger)
results_class = query.execute([query_type_class], aslist=False)
plot(head_variables, results_class)
"""
Explanation: Performance doesn't seem too bad with the naive Bayes model; however, the ellipses show zero off-diagonal covariance, because each continuous variable is conditionally independent of the others given iris_class. To improve performance, the continuous variables can be fully connected.
Fully observed mixture model
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
stable/_downloads/fcc5782db3e2930fc79f31bc745495ed/60_ctf_bst_auditory.ipynb
|
bsd-3-clause
|
# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD-3-Clause
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
"""
Explanation: Working with CTF data: the Brainstorm auditory dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see :footcite:TadelEtAl2011 and the
associated brainstorm site.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7 and 1.7 seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker_.
End of explanation
"""
use_precomputed = True
"""
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change use_precomputed to
False. With use_precomputed = False running time of this script can
be several minutes even on a fast computer.
End of explanation
"""
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', subject, 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', subject, 'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', subject, 'S01_Noise_20131218_01.ds')
"""
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
"""
raw = read_raw_ctf(raw_fname1)
n_times_run1 = raw.n_times
# Here we ignore that these have different device<->head transforms
mne.io.concatenate_raws(
[raw, read_raw_ctf(raw_fname2)], on_mismatch='ignore')
raw_erm = read_raw_ctf(erm_fname)
"""
Explanation: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded into memory.
End of explanation
"""
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick(['meg', 'stim', 'misc', 'eog', 'ecg']).load_data()
"""
Explanation: The data array consists of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz). In addition:
- 1 stim channel for marking presentation times for the stimuli
- 1 audio channel for the sent signal
- 1 response channel for recording the button presses
- 1 ECG bipolar
- 2 EOG bipolar (vertical and horizontal)
- 12 head tracking channels
- 20 unused channels
Notice also that the digitized electrode positions (stored in a .pos file)
were automatically loaded and added to the ~mne.io.Raw object.
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
"""
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.set_annotations(annotations)
del onsets, durations, descriptions
"""
Explanation: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
"""
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
baseline=(None, None),
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
"""
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
"""
raw.plot(block=True)
"""
Explanation: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
"""
if not use_precomputed:
raw.plot_psd(tmax=np.inf, picks='meg')
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks='meg')
"""
Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
End of explanation
"""
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
"""
Explanation: We also lowpass filter the data at 100 Hz to remove the hf components.
End of explanation
"""
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
"""
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
"""
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
"""
Explanation: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
End of explanation
"""
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
"""
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=['meg', 'eog'],
baseline=(None, 0), reject=reject, preload=False,
proj=True)
"""
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
"""
epochs.drop_bad()
# avoid warning about concatenating with annotations
epochs.set_annotations(None)
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs
"""
Explanation: We only use first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer same as in the original
epochs collection. Investigation of the event timings reveals that first
epoch from the second run corresponds to index 182.
End of explanation
"""
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
"""
Explanation: The averages for each conditions are computed.
End of explanation
"""
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
"""
Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all
line artifacts (and high frequency information). Normally this would be done
to raw data (with :func:mne.io.Raw.filter), but to reduce memory
consumption of this tutorial, we do it at evoked stage. (At the raw stage,
you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
"""
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
"""
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies for a certain time span by painting an area with the left mouse button (click and hold).
End of explanation
"""
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
"""
Explanation: Show activations as topography figures.
End of explanation
"""
evoked_difference = combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
"""
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
"""
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
"""
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
"""
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
"""
Explanation: The transformation is read from a file:
End of explanation
"""
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
"""
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
bem-model, :func:mne.bem.make_watershed_bem.
End of explanation
"""
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
"""
Explanation: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
"""
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
"""
Explanation: Deviant condition.
End of explanation
"""
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
"""
Explanation: Difference.
End of explanation
"""
|
ericfourrier/auto-clean
|
examples/other_notebooks/Tidy Data.ipynb
|
mit
|
import pandas as pd
import numpy as np
# tuberculosis (TB) dataset
path_tb = '/Users/ericfourrier/Documents/ProjetR/tidy-data/data/tb.csv'
df_tb = pd.read_csv(path_tb)
df_tb.head(20)
"""
Explanation: Tidy Data
This notebook is designed to explore Hadley Wickham's article about tidy data using pandas.
The datasets are available on GitHub: https://github.com/hadley/tidy-data/blob/master/data/
Import Packages
End of explanation
"""
# clean column names
df_tb = df_tb.rename(columns={'iso2':'country'}) # rename iso2 in country
df_tb = df_tb.drop(['new_sp'],axis = 1)
df_tb.columns = [c.replace('new_sp_','') for c in df_tb.columns] # remove new_sp_
df_tb.head()
df_tb_wide = pd.melt(df_tb,id_vars = ['country','year'])
df_tb_wide = df_tb_wide.rename(columns={'variable':'column','value':'cases'})
df_tb_wide
"""
Explanation: Original TB dataset. Corresponding to each ‘m’ column for males, there is also an ‘f’ column
for females, f1524, f2534 and so on. These are not shown to conserve space. Note the mixture of 0s
and missing values. This is due to the data collection process and the distinction is important for
this dataset.
End of explanation
"""
# mapping from age codes in the column names to age ranges:
ages = {"04" : "0-4", "514" : "5-14", "014" : "0-14",
"1524" : "15-24","2534" : "25-34", "3544" : "35-44",
"4554" : "45-54", "5564" : "55-64", "65": "65+", "u" : np.nan}
# Create genre and age from the mixed type column
df_tb_wide['age']=df_tb_wide['column'].str[1:]
df_tb_wide['genre']=df_tb_wide['column'].str[0]
df_tb_wide = df_tb_wide.drop('column', axis=1)
# change category
df_tb_wide['age'] = df_tb_wide['age'].map(lambda x: ages[x])
# clean dataset
df_tb_wide
"""
Explanation: Create sex and age columns from variable 'column'
End of explanation
"""
|
jonathf/chaospy
|
docs/user_guide/advanced_topics/gaussian_mixture_model.ipynb
|
mit
|
import chaospy
means = ([0, 1], [1, 1], [1, 0])
covariances = ([[1.0, -0.9], [-0.9, 1.0]],
[[1.0, 0.9], [ 0.9, 1.0]],
[[0.1, 0.0], [ 0.0, 0.1]])
distribution = chaospy.GaussianMixture(means, covariances)
distribution
import numpy
from matplotlib import pyplot
pyplot.rc("figure", figsize=[15, 6], dpi=75)
xloc, yloc = numpy.mgrid[-2:3:100j, -1:3:100j]
density = distribution.pdf([xloc, yloc])
pyplot.contourf(xloc, yloc, density)
pyplot.show()
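# Note (sketch): assuming chaospy's default of equal component weights when none are
# given, the density evaluated above is the plain average
#     p(x) = (1/K) * sum_k N(x; mu_k, Sigma_k)
# over the K = 3 Gaussian components defined by `means` and `covariances`.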
"""
Explanation: Gaussian mixture model
Gaussian mixture model (GMM) is a probabilistic model created by averaging multiple Gaussian density functions.
It is not uncommon to think of these models as a clustering technique, because once a model is fitted it can be used to backtrack which individual density each sample was created from.
However, chaospy, which first and foremost deals with forward problems, treats GMM as a very flexible class of distributions.
On the most basic level constructing GMM in chaospy can be done from a sequence of means and covariances:
End of explanation
"""
from sklearn import datasets, mixture
model = mixture.GaussianMixture(3, random_state=1234)
model.fit(datasets.load_iris().data)
means = model.means_[:, :2]
covariances = model.covariances_[:, :2, :2]
print(means.round(4))
print(covariances.round(4))
distribution = chaospy.GaussianMixture(means, covariances)
xloc, yloc = numpy.mgrid[4:8:100j, 1.5:4.5:100j]
density = distribution.pdf([xloc, yloc])
pyplot.contourf(xloc, yloc, density)
pyplot.show()
"""
Explanation: Model fitting
chaospy supports Gaussian mixture model representation, but does not provide an automatic method for constructing them from data.
However, this is something for example scikit-learn supports.
It is possible to use scikit-learn to fit a model, and use the generated parameters in the chaospy implementation.
For example, let us consider the Iris example from scikit-learn's documentation ("full" implementation in 2-dimensional representation):
End of explanation
"""
pseudo_samples = distribution.sample(500, rule="additive_recursion")
pyplot.scatter(*pseudo_samples)
pyplot.show()
"""
Explanation: Like scikit-learn, chaospy also support higher dimensions, but that would make the visualization harder.
Low discrepancy sequences
chaospy support low-discrepancy sequences through inverse mapping.
This support extends to mixture models, making the following possible:
End of explanation
"""
expansion = chaospy.generate_expansion(1, distribution, rule="cholesky")
expansion.round(4)
"""
Explanation: Chaos expansion
To use the point collocation method, the user needs access to samples from the input distribution and polynomials orthogonal with respect to that distribution.
The former is shown above, while the latter can be generated as follows:
End of explanation
"""
|
BBN-Q/Auspex
|
doc/examples/Example-Calibrations.ipynb
|
apache-2.0
|
from QGL import *
from auspex.qubit import *
"""
Explanation: Example Q6: Calibrations
This example notebook shows how to use the pulse calibration framework.
© Raytheon BBN Technologies 2019
End of explanation
"""
cl = ChannelLibrary("my_config")
pl = PipelineManager()
"""
Explanation: We use a pre-existing database containing a channel library and pipeline we have established.
End of explanation
"""
spec_an = cl.new_spectrum_analzyer("SpecAn", "ASRL/dev/ttyACM0::INSTR", cl["spec_an_LO"])
cal = MixerCalibration(q2, spec_an, mixer="measure")
cal.calibrate()
"""
Explanation: Calibrating Mixers
The APS2 requires mixers to upconvert to qubit and cavity frequencies. We must tune the offset of these mixers and the amplitude factors of the quadrature channels to ensure the best possible results. We repeat the definition of the spectrum analyzer here, assuming that the LO driving this instrument is present in the channel library as spec_an_LO.
End of explanation
"""
cal = RabiAmpCalibration(q2)
cal.calibrate()
cal = RamseyCalibration(q2)
cal.calibrate()
"""
Explanation: If the plot server and client are open, then the data will be plotted along with fits from the calibration procedure. The calibration procedure automatically knows which digitizer and AWG units are needed in the process. The relevant instrument parameters are updated but not committed to the database. Therefore they may be rolled back if undesirable results are found.
Pulse Calibrations
A simple set of calibrations is performed as follows.
End of explanation
"""
cals = [RabiAmpCalibration, RamseyCalibration, Pi2Calibration, PiCalibration]
[cal(q2).calibrate() for cal in cals]
"""
Explanation: Of course this is somewhat repetitive and can be sped up:
End of explanation
"""
cal = QubitTuneup(q2, f_start=5.2e9, f_stop=5.8e9, coarse_step=50e6, fine_step=0.5e6, averages=250, amp=0.1)
cal.calibrate()
"""
Explanation: Automatic Tuneup
While we develop algorithms for fully automated tuneup, some segments of the analysis are (primitively) automated as seen below:
End of explanation
"""
|
Tatiana-Krivosheev/ipython-notebooks-physics
|
PHYS2211.Measurement.ipynb
|
cc0-1.0
|
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import sympy
%matplotlib inline
"""
Explanation: PHYS 2211 - Introductory Physics Laboratory I
Measurement and Error Propagation
Name: Tatiana Krivosheev
Partners: Oleg Krivosheev
Annex A
End of explanation
"""
class ListTable(list):
""" Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
IPython Notebook. """
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
# plain text
plt.title('alpha > beta')
# math text
plt.title(r'$\alpha > \beta$')
from sympy import symbols, init_printing
init_printing(use_latex=True)
delta = symbols('delta')
delta**2/3
from sympy import symbols, init_printing
init_printing(use_latex=True)
delta = symbols('delta')
table = ListTable()
table.append(['measuring device', 'l', 'delta l', 'w', 'delta w', 'h', 'delta h'])
table.append([' ', '(cm)', '(cm)', '(cm)','(cm)', '(cm)', '(cm)'])
lr=4.9
wr=2.5
hr=1.2
lc=4.90
wc=2.54
hc=1.27
deltar=0.1
deltac=0.01
table.append(['ruler',lr, deltar, wr, deltar, hr, deltar])
table.append(['vernier caliper', lc, deltac, wc, deltac, hc, deltac])
table
# another math text example: r'$s(t) = \mathcal{A}\/\sin(2 \omega t)$'
table = ListTable()
table.append(['l', 'delta l', 'w', 'delta w', 'h', 'delta h'])
table.append(['(cm)', '(cm)', '(cm)','(cm)', '(cm)', '(cm)'])
lr=4.9
wr=2.5
hr=1.2
lc=4.90
wc=2.54
hc=1.27
deltar=0.1
deltac=0.01
table.append([lr, deltar, wr, deltar, hr, deltar])
table.append([lc, deltac, wc, deltac, hc, deltac])
table
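# Sketch (not part of the original annex): propagating the caliper uncertainty into the
# block volume V = l*w*h with the linear (worst-case) rule
#     delta_V / V = delta_l/l + delta_w/w + delta_h/h
V = lc * wc * hc
delta_V = V * (deltac/lc + deltac/wc + deltac/hc)
print('V = {0:.2f} cm^3 +/- {1:.2f} cm^3'.format(V, delta_V))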
# code below demonstrates the linearity test: a degree-1 polynomial fit of measured voltage vs. length
import numpy as np
x = [7,10,15,20,25,30,35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95]
y= [0.228,0.298,0.441,0.568,0.697,0.826,0.956, 1.084, 1.211, 1.339,1.468, 1.599, 1.728, 1.851, 1.982, 2.115, 2.244, 2.375, 2.502]
plt.scatter(x, y)
plt.title('Linearity test')
plt.xlabel('Length (cm)')
plt.ylabel('Voltage (V)')
fit = np.polyfit(x,y,1)
fit_fn = np.poly1d(fit)
plt.plot(x,y, 'yo', x, fit_fn(x), '--k')
m,b = np.polyfit(x, y, 1)
print ('m={0}'.format(m))
print ('b={0}'.format(b))
plt.show()
"""
Explanation: Annex A - Data and Calculations
1. Rectangular Block
End of explanation
"""
Rk = 3.5 # kOhms
table = ListTable()
table.append(['Ru', 'Ru, acc', 'L1', 'L2', 'Ru, wheatstone', 'Disc'])
table.append(['(kOhms)', '(kOhms)', '(cm)', '(cm)', '(kOhms)', ' % '])
x = [0.470,0.680,1.000, 1.500]
y= [0.512,0.712,1.131,1.590]
z= [88.65, 84.50, 76.90, 69.80]
for i in range(0,len(x)):
xx = x[i]
yy = y[i]
zz = z[i]
Rw = (100.0 - zz)/zz*Rk
Disc = (Rw-yy)/yy*100.0
table.append([xx, yy, zz, 100.0-zz,Rw, Disc])
table
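# Note: the table above applies the slide-wire Wheatstone balance relation Ru/Rk = L2/L1,
# i.e. Rw = Rk * (100 - L1) / L1 with L2 = 100 - L1, where L1 and L2 are the two wire
# segment lengths in cm on a 100 cm wire.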
x = [0.470,0.680,1.000, 1.500]
y= [0.512,0.712,1.131,1.590]
z= [88.65, 84.50, 76.90, 69.80]
for i in range(0,len(x)):
xx = x[i]
yy = y[i]
zz = z[i]
Rw = (100.0 - zz)/zz*Rk
Disc = (Rw-yy)/yy*100.0
plt.scatter(yy, Disc)
plt.title('Discrepancy vs Resistance')
plt.xlabel('Resistance (kOhms)')
plt.ylabel('Discrepancy (%)')
plt.show()
"""
Explanation: 2. Wheatstone bridge measurements
End of explanation
"""
|
darienmt/intro-to-tensorflow
|
LeNet-Lab.ipynb
|
mit
|
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./datasets/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
"""
Explanation: LeNet Lab
Source: Yann LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
"""
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
"""
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
"""
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
"""
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
"""
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
"""
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
"""
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
"""
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
"""
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
"""
Explanation: TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the logits from the final (third) fully connected layer. (A quick dimension sanity check is sketched below.)
End of explanation
"""
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
"""
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
"""
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './models/lenet')
print("Model saved")
"""
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./models'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/kubeflow_pipelines/cicd/labs/kfp_cicd_vertex.ipynb
|
apache-2.0
|
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
REGION = "us-central1"
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
"""
Explanation: CI/CD for a Kubeflow pipeline on Vertex AI
Learning Objectives:
1. Learn how to create a custom Cloud Build builder to pilot Vertex AI Pipelines
1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP
1. Learn how to set up a Cloud Build GitHub trigger that starts a new run of the Kubeflow Pipeline
In this lab you will walk through authoring of a Cloud Build CI/CD workflow that automatically builds, deploys, and runs a Kubeflow pipeline on Vertex AI. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Configuring environment settings
End of explanation
"""
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
"""
Explanation: Let us make sure that the artifact store exists:
End of explanation
"""
%%writefile kfp-cli_vertex/Dockerfile
# TODO
"""
Explanation: Creating the KFP CLI builder for Vertex AI
Exercise
In the cell below, write a Dockerfile that
* Uses gcr.io/deeplearning-platform-release/base-cpu as the base image
* Installs the Python packages kfp version 1.6.6 and google-cloud-aiplatform version 1.3.0
* Starts /bin/bash as the entrypoint
(One possible solution is sketched below for reference.)
End of explanation
"""
KFP_CLI_IMAGE_NAME = "kfp-cli-vertex"
KFP_CLI_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest"
KFP_CLI_IMAGE_URI
"""
Explanation: Build the image and push it to your project's Container Registry.
End of explanation
"""
!gcloud builds # COMPLETE THE COMMAND
"""
Explanation: Exercise
In the cell below, use gcloud builds to build the kfp-cli-vertex Docker image and push it to the project gcr.io registry.
End of explanation
"""
%%writefile cloudbuild_vertex.yaml
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
# file except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
steps:
# Build the trainer image
- name: # TODO
args: # TODO
dir: # TODO
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
dsl-compile-v2 # TODO
env:
- 'PIPELINE_ROOT=gs://$PROJECT_ID-kfp-artifact-store/pipeline'
- 'PROJECT_ID=$PROJECT_ID'
- 'REGION=$_REGION'
- 'SERVING_CONTAINER_IMAGE_URI=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest'
- 'TRAINING_CONTAINER_IMAGE_URI=gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest'
- 'TRAINING_FILE_PATH=gs://$PROJECT_ID-kfp-artifact-store/data/training/dataset.csv'
- 'VALIDATION_FILE_PATH=gs://$PROJECT_ID-kfp-artifact-store/data/validation/dataset.csv'
dir: pipeline_vertex
# Run the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
python kfp-cli_vertex/run_pipeline.py # TODO
# Push the images to Container Registry
# TODO: List the images to be pushed to the project Docker registry
images: # TODO
# This is required since the pipeline run overflows the default timeout
timeout: 10800s
"""
Explanation: Understanding the Cloud Build workflow.
Exercise
In the cell below, you'll complete the cloudbuild_vertex.yaml file describing the CI/CD workflow and prescribing how environment-specific settings are abstracted using Cloud Build variables.
The CI/CD workflow automates the steps you walked through manually during lab-02_vertex:
1. Builds the trainer image
1. Compiles the pipeline
1. Uploads and runs the pipeline in the Vertex AI Pipeline environment
1. Pushes the trainer image to your project's Container Registry
The Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates the KFP CLI. (A hedged sketch of one possible shape for the first build step follows below.)
End of explanation
"""
SUBSTITUTIONS = f"_REGION={REGION},_PIPELINE_FOLDER=./"
SUBSTITUTIONS
!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
"""
Explanation: Manually triggering CI/CD runs
You can manually trigger Cloud Build runs using the gcloud builds submit command.
End of explanation
"""
|
hanhanwu/Hanhan_Data_Science_Practice
|
AI_Experiments/LSTM_changing_batch_size.ipynb
|
mit
|
from tensorflow import set_random_seed
set_random_seed(410)
from keras.layers import Dense
from keras.layers import LSTM
from keras.models import Sequential
import pandas as pd
# Generate data
## create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
print(sequence)
## create X/y pairs
df = pd.DataFrame(sequence)
df = pd.concat([df, df.shift(1)], axis=1) # add second column which is the first column shift 1 period
df.dropna(inplace=True)
df
## convert to LSTM friendly format
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1) # LSTM needs (number of records, timesteps, number of features)
print(X.shape, y.shape)
"""
Explanation: LSTM Changing Batch Size for Training and Testing
When using the built-in fit/predict methods of Keras, the batch size limits the number of samples shown to the network before a weight update is performed. Specifically, the batch size used when fitting your model also controls how many predictions you must make at a time.
This becomes a problem when the number of predictions is lower than the batch size. For example, you may get the best results with a large batch size, but be required to make predictions one observation at a time on something like a time series or sequence problem.
So, it is better to be able to use different batch sizes for training and testing.
Reference: https://machinelearningmastery.com/use-different-batch-sizes-training-predicting-python-keras/
End of explanation
"""
# Solution 3
## Model 1 - 3 batches for training
### configure network
n_batch = 3
n_epoch = 1000
n_neurons = 10
### design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
### fit network
for i in range(n_epoch):
model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
model.reset_states() # with stateful=True, manually reset the internal state after each epoch; Keras does not reset it between batches or fit() calls
## Model 2 - 1 batch for prediction
### re-define the batch size
n_batch = 1
### re-define model
new_model = Sequential()
new_model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
new_model.add(Dense(1))
### copy weights
old_weights = model.get_weights()
new_model.set_weights(old_weights)
### compile model
new_model.compile(loss='mean_squared_error', optimizer='adam')
# Use original data to check the prediction (NOT suggested)
for i in range(len(X)):
testX, testy = X[i], y[i]
testX = testX.reshape(1, 1, 1)
yhat = new_model.predict(testX, batch_size=n_batch)
print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))
# Try a new set of data but has the same values as original data
length = 5
sequence = [i/10.0 for i in range(length)]
print(sequence)
## create X/y pairs
df = pd.DataFrame(sequence)
df = pd.concat([df, df.shift(1)], axis=1) # add second column which is the first column shift 1 period
df.dropna(inplace=True)
df
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1) # LSTM needs (number of records, timesteps, number of features)
print(X.shape, y.shape)
# predict
for i in range(len(X)):
testX, testy = X[i], y[i]
testX = testX.reshape(1, 1, 1)
yhat = new_model.predict(testX, batch_size=n_batch)
print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))
# Try a new set of data but has the different values as original data
length = 20
sequence = [i/10.0 for i in range(10, length)]
print(sequence)
## create X/y pairs
df = pd.DataFrame(sequence)
df = pd.concat([df, df.shift(1)], axis=1) # add second column which is the first column shift 1 period
df.dropna(inplace=True)
df
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1) # LSTM needs (number of records, timesteps, number of features)
print(X.shape, y.shape)
# predict
for i in range(len(X)):
testX, testy = X[i], y[i]
testX = testX.reshape(1, 1, 1)
yhat = new_model.predict(testX, batch_size=n_batch)
print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))
"""
Explanation: 3 Solutions
If we just want to predict 1 step, there are 3 solutions:
Solution 1 - batch size = 1
Both training and testing are using batch_size=1.
This can have the effect of faster learning, but also adds instability to the learning process as the weights widely vary with each batch.
Solution 2 - batch size = n
Make all the predictions at once in a batch.
But later you need to use all predictions made at once, or only keep the first prediction and discard the rest.
Solution 3 - different batch size for training & testing
The better solution
is mainly to copy the weights from the fitted network into a new network created with the prediction batch size and the pre-trained weights (brief sketches of Solutions 1 and 2 are included below for comparison).
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/guide/function.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
"""
Explanation: Better performance with tf.function
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/guide/function" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/function.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/function.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/function.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">Download notebook</a></td>
</table>
In TensorFlow 2, eager execution is turned on by default. This user interface is flexible and intuitive (running one-off operations is much easier and faster), but it can come at the expense of performance and deployability.
You can use tf.function to turn your programs into graphs. It is a transformation tool that creates Python-independent dataflow graphs out of your Python code. It helps you create performant and portable models, and it is required if you want to use SavedModel.
This guide explains how tf.function works under the hood so you can form a conceptual understanding and use it effectively.
The main takeaways and recommendations are:
Debug in eager mode first, then decorate with @tf.function.
Don't rely on Python side effects like object mutation or list appends.
tf.function works best with TensorFlow ops; NumPy and Python calls are converted to constants.
Setup
End of explanation
"""
import traceback
import contextlib
# Some helper code to demonstrate the kinds of errors you might encounter.
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}:'.format(error_class))
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
"""
Explanation: Define a helper function to demonstrate the kinds of errors you might encounter:
End of explanation
"""
@tf.function
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2])) # [[2., 2.], [2., 2.]]
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 1.0)
tape.gradient(result, v)
"""
Explanation: Basics
Usage
A Function you define works just like a core TensorFlow operation: you can execute it eagerly, you can compute gradients, and so on.
End of explanation
"""
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
"""
Explanation: You can nest Functions inside other Functions.
End of explanation
"""
import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)
@tf.function
def conv_fn(image):
return conv_layer(image)
image = tf.zeros([1, 200, 200, 100])
# warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")
"""
Explanation: Functions can be faster than eager code, especially for graphs with many small ops. But for graphs that contain a few expensive ops (like convolutions), you may not see much of a speedup.
End of explanation
"""
# Functions are polymorphic
@tf.function
def double(a):
print("Tracing with", a)
return a + a
print(double(tf.constant(1)))
print()
print(double(tf.constant(1.1)))
print()
print(double(tf.constant("a")))
print()
"""
Explanation: Tracing
Python's dynamic typing means that you can call functions with a variety of argument types, and Python may behave differently in each scenario.
However, building a TensorFlow graph requires static dtypes and shape dimensions. tf.function bridges this gap by wrapping a Python function to create a Function object. Based on the given inputs, a Function selects the appropriate graph for them, retracing the Python function as necessary. Once you understand why and when tracing happens, it is much easier to use tf.function effectively!
You can call a Function with arguments of different types to see this polymorphic behavior in action.
End of explanation
"""
# This doesn't print 'Tracing with ...'
print(double(tf.constant("b")))
"""
Explanation: Note that if you repeatedly call a Function with the same argument types, TensorFlow reuses the previously traced graph, since the later calls would generate the same graph.
End of explanation
"""
print(double.pretty_printed_concrete_signatures())
"""
Explanation: (The following change is available in the TensorFlow nightly builds and will be included in TensorFlow 2.3.)
You can use pretty_printed_concrete_signatures() to see all of the available traces:
End of explanation
"""
print("Obtaining concrete trace")
double_strings = double.get_concrete_function(tf.constant("a"))
print("Executing traced function")
print(double_strings(tf.constant("a")))
print(double_strings(a=tf.constant("b")))
# You can also call get_concrete_function on an InputSpec
double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))
print(double_strings_from_inputspec(tf.constant("c")))
"""
Explanation: So far, you have seen that tf.function creates a cached, dynamic dispatch layer on top of TensorFlow's graph tracing logic. To be more specific about the terminology:
A tf.Graph is the raw, language-agnostic, portable representation of a computation.
A ConcreteFunction is an eagerly-executing wrapper around a tf.Graph.
A Function manages a cache of ConcreteFunctions and picks the right one for your inputs.
tf.function wraps a Python function and returns a Function object.
Obtaining concrete functions
Every time a function is traced, a new concrete function is created. You can obtain a concrete function directly by using get_concrete_function.
End of explanation
"""
print(double_strings)
"""
Explanation: (The following change is available in the TensorFlow nightly builds and will be included in TensorFlow 2.3.)
Printing a ConcreteFunction displays a summary of its input arguments (with types) and its output type.
End of explanation
"""
print(double_strings.structured_input_signature)
print(double_strings.structured_outputs)
"""
Explanation: You can also directly retrieve a concrete function's signature.
End of explanation
"""
with assert_raises(tf.errors.InvalidArgumentError):
double_strings(tf.constant(1))
"""
Explanation: Using a concrete trace with incompatible types throws an error.
End of explanation
"""
@tf.function
def pow(a, b):
return a ** b
square = pow.get_concrete_function(a=tf.TensorSpec(None, tf.float32), b=2)
print(square)
assert square(tf.constant(10.0)) == 100
with assert_raises(TypeError):
square(tf.constant(10.0), b=3)
"""
Explanation: You may notice that Python arguments are given special treatment in a concrete function's input signature. Prior to TensorFlow 2.3, Python arguments were simply removed from the concrete function's signature. Starting with TensorFlow 2.3, Python arguments remain in the signature, but are constrained to take the value set during tracing.
End of explanation
"""
graph = double_strings.graph
for node in graph.as_graph_def().node:
print(f'{node.input} -> {node.name}')
"""
Explanation: Obtaining graphs
Each concrete function is a callable wrapper around a tf.Graph. Although retrieving the actual tf.Graph object is not something you will normally need to do, you can obtain it easily from any concrete function.
End of explanation
"""
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def next_collatz(x):
print("Tracing with", x)
return tf.where(x % 2 == 0, x // 2, 3 * x + 1)
print(next_collatz(tf.constant([1, 2])))
# We specified a 1-D tensor in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([[1, 2], [3, 4]]))
# We specified an int32 dtype in the input signature, so this should fail.
with assert_raises(ValueError):
next_collatz(tf.constant([1.0, 2.0]))
"""
Explanation: Debugging
In general, debugging code is easier in eager mode than inside tf.function. You should make sure that your code executes error-free in eager mode before decorating it with tf.function. To assist in the debugging process, you can call tf.config.run_functions_eagerly(True) to globally disable and re-enable tf.function.
When tracking down issues that only appear within tf.function, here are some tips:
Plain old Python print calls only execute during tracing, which helps you track down when your function gets (re)traced.
tf.print calls execute every time, and can help you track down intermediate values produced during execution.
tf.debugging.enable_check_numerics is an easy way to track down where NaNs and Inf are created.
pdb can help you understand what is going on during tracing. (Caveat: AutoGraph automatically transforms the Python source code when you debug with pdb.)
Tracing semantics
Cache key rules
A Function determines whether to reuse a traced concrete function by computing a cache key from the input args and kwargs.
The key generated for a tf.Tensor argument is its shape and dtype.
Starting in TensorFlow 2.3, the key generated for a tf.Variable argument is its id().
The key generated for a Python primitive is its value. The key generated for nested dicts, lists, tuples, namedtuples, and attrs is the flattened tuple. (As a result of this flattening, calling a concrete function with a nesting structure different from the one used during tracing will result in a TypeError.)
For all other Python types, the keys are based on the object id(), so that methods are traced independently for each instance of a class.
Controlling retracing
Retracing ensures that TensorFlow generates correct graphs for each set of inputs. However, tracing is an expensive operation! If your Function retraces a new graph for every call, you will find that your code executes more slowly than if you had not used tf.function.
To control the tracing behavior, you can use the following techniques:
Specify input_signature in tf.function to limit tracing.
End of explanation
"""
@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))
def g(x):
print('Tracing with', x)
return x
# No retrace!
print(g(tf.constant([1, 2, 3])))
print(g(tf.constant([1, 2, 3, 4, 5])))
"""
Explanation: Specify a [None] dimension in tf.TensorSpec to allow for flexibility in trace reuse.
Since TensorFlow matches tensors based on their shape, using a None dimension as a wildcard allows Functions to reuse traces for variably-sized input. Variably-sized input can occur if you have sequences of different lengths, or images of different sizes, for each batch (see the Transformer and Deep Dream tutorials for examples).
End of explanation
"""
def train_one_step():
pass
@tf.function
def train(num_steps):
print("Tracing with num_steps = ", num_steps)
tf.print("Executing with num_steps = ", num_steps)
for _ in tf.range(num_steps):
train_one_step()
print("Retracing occurs for different Python arguments.")
train(num_steps=10)
train(num_steps=20)
print()
print("Traces are reused for Tensor arguments.")
train(num_steps=tf.constant(10))
train(num_steps=tf.constant(20))
"""
Explanation: Cast Python arguments to tensors to reduce retracing.
Often, Python arguments are used to control hyperparameters and graph construction, for example num_layers=10, training=True, or nonlinearity='relu'. So if the Python argument changes, it makes sense that you would have to retrace the graph.
However, it is possible that a Python argument is not being used to control graph construction. In these cases, a change in the Python value can trigger needless retracing. Take, for example, this training loop, which AutoGraph will dynamically unroll. Despite the multiple traces, the generated graph is actually identical, so retracing is unnecessary.
End of explanation
"""
def f():
print('Tracing!')
tf.print('Executing')
tf.function(f)()
tf.function(f)()
"""
Explanation: If you need to force retracing, create a new Function. Separate Function objects are guaranteed not to share traces.
End of explanation
"""
@tf.function
def f(x):
print("Traced with", x)
tf.print("Executed with", x)
f(1)
f(1)
f(2)
"""
Explanation: Python side effects
Python side effects like printing, appending to lists, and mutating globals only happen the first time you call a Function with a given set of inputs. Afterwards, the traced tf.Graph is re-executed without executing the Python code.
The general rule of thumb is to only use Python side effects to debug your traces. Otherwise, TensorFlow ops like tf.Variable.assign, tf.print, and tf.summary are the best way to ensure that your code is traced and executed by the TensorFlow runtime on every call.
End of explanation
"""
external_var = tf.Variable(0)
@tf.function
def buggy_consume_next(iterator):
external_var.assign_add(next(iterator))
tf.print("Value of external_var:", external_var)
iterator = iter([0, 1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
"""
Explanation: Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, many unexpected things can happen inside a tf.function due to its tracing behavior:
To give one example, advancing iterator state is a Python side effect and therefore only happens during tracing.
End of explanation
"""
external_list = []
def side_effect(x):
print('Python side effect')
external_list.append(x)
@tf.function
def f(x):
tf.py_function(side_effect, inp=[x], Tout=[])
f(1)
f(1)
f(1)
# The list append happens all three times!
assert len(external_list) == 3
# The list contains tf.constant(1), not 1, because py_function casts everything to tensors.
assert external_list[0].numpy() == 1
"""
Explanation: Some iteration constructs are supported through AutoGraph. See the section on AutoGraph transformations for an overview.
If you would like to execute Python code on every invocation of a Function, tf.py_function is an exit hatch. The drawbacks of tf.py_function are that it is not portable or particularly performant, and it does not work well in distributed (multi-GPU, TPU) setups. Also, since tf.py_function has to be wired into the graph, it casts all inputs/outputs to tensors.
APIs like tf.gather, tf.stack, and tf.TensorArray can help you implement common looping patterns in native TensorFlow.
End of explanation
"""
@tf.function
def f(x):
v = tf.Variable(1.0)
v.assign_add(x)
return v
with assert_raises(ValueError):
f(1.0)
"""
Explanation: Variables
You may run into an error when creating a new tf.Variable inside a function. This error guards against divergent behavior on repeated calls: in eager mode, a function creates a new variable on every call, but in a Function this may not happen because of trace reuse.
End of explanation
"""
class Count(tf.Module):
def __init__(self):
self.count = None
@tf.function
def __call__(self):
if self.count is None:
self.count = tf.Variable(0)
return self.count.assign_add(1)
c = Count()
print(c())
print(c())
"""
Explanation: You can also create variables inside a Function, as long as those variables are only created the first time the function is executed.
End of explanation
"""
external_var = tf.Variable(3)
@tf.function
def f(x):
return x * external_var
traced_f = f.get_concrete_function(4)
print("Calling concrete function...")
print(traced_f(4))
del external_var
print()
print("Calling concrete function after garbage collecting its closed Variable...")
with assert_raises(tf.errors.FailedPreconditionError):
traced_f(4)
"""
Explanation: Another error you may encounter is a garbage-collected variable. Unlike normal Python functions, concrete functions only keep a weak reference to the variables they close over, so you must retain a reference to any variables.
End of explanation
"""
# Simple loop
@tf.function
def f(x):
while tf.reduce_sum(x) > 1:
tf.print(x)
x = tf.tanh(x)
return x
f(tf.random.uniform([5]))
"""
Explanation: AutoGraph transformations
AutoGraph is a library that is on by default in tf.function. It transforms a subset of Python eager code into graph-compatible TensorFlow ops. This includes control flow such as if, for, and while.
TensorFlow ops like tf.cond and tf.while_loop continue to work, but control flow is often easier to write and understand when written in Python.
End of explanation
"""
print(tf.autograph.to_code(f.python_function))
"""
Explanation: If you're curious, you can inspect the code that AutoGraph generates.
End of explanation
"""
@tf.function
def fizzbuzz(n):
for i in tf.range(1, n + 1):
print('Tracing for loop')
if i % 15 == 0:
print('Tracing fizzbuzz branch')
tf.print('fizzbuzz')
elif i % 3 == 0:
print('Tracing fizz branch')
tf.print('fizz')
elif i % 5 == 0:
print('Tracing buzz branch')
tf.print('buzz')
else:
print('Tracing default branch')
tf.print(i)
fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
"""
Explanation: Conditionals
AutoGraph will convert some if <condition> statements into the equivalent tf.cond calls. This substitution is made if <condition> is a Tensor; otherwise, the if statement is executed as a Python conditional.
A Python conditional executes during tracing, so exactly one branch of the conditional is added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.
tf.cond traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; see AutoGraph tracing effects for more information.
End of explanation
"""
def measure_graph_size(f, *args):
g = f.get_concrete_function(*args).graph
print("{}({}) contains {} nodes in its graph".format(
f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))
@tf.function
def train(dataset):
loss = tf.constant(0)
for x, y in dataset:
loss += tf.abs(y - x) # Some dummy computation.
return loss
small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10
measure_graph_size(train, small_data)
measure_graph_size(train, big_data)
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
lambda: big_data, (tf.int32, tf.int32)))
"""
Explanation: See the reference documentation for additional restrictions on AutoGraph-converted if statements.
Loops
AutoGraph will convert some for and while statements into the equivalent TensorFlow looping ops, like tf.while_loop. If not converted, the for or while loop is executed as a Python loop.
This substitution is made in the following situations:
for x in y: if y is a Tensor, convert to tf.while_loop. In the special case where y is a tf.data.Dataset, a combination of tf.data.Dataset ops is generated.
while <condition>: if <condition> is a Tensor, convert to tf.while_loop.
A Python loop executes during tracing, adding additional ops to the tf.Graph for every iteration of the loop.
A TensorFlow loop traces the body of the loop and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated tf.Graph.
See the reference documentation for additional restrictions on AutoGraph-converted for and while statements.
Looping over Python data
A common pitfall is to loop over Python/Numpy data within a tf.function. This loop executes during the tracing process, adding a copy of your model to the tf.Graph for each iteration of the loop.
If you want to wrap the entire training loop in tf.function, the safest way to do this is to wrap your data as a tf.data.Dataset so that AutoGraph will dynamically unroll the training loop.
End of explanation
"""
batch_size = 2
seq_len = 3
feature_size = 4
def rnn_step(inp, state):
return inp + state
@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
max_seq_len = input_data.shape[0]
states = tf.TensorArray(tf.float32, size=max_seq_len)
state = initial_state
for i in tf.range(max_seq_len):
state = rnn_step(input_data[i], state)
states = states.write(i, state)
return tf.transpose(states.stack(), [1, 0, 2])
dynamic_rnn(rnn_step,
tf.random.uniform([batch_size, seq_len, feature_size]),
tf.zeros([batch_size, feature_size]))
"""
Explanation: When wrapping Python/Numpy data in a Dataset, be mindful of tf.data.Dataset.from_generator versus tf.data.Dataset.from_tensors. The former keeps the data in Python and fetches it via tf.py_function, which can have performance implications, whereas the latter bundles a copy of the data as one large tf.constant() node in the graph, which can have memory implications.
Reading data from files via TFRecordDataset/CsvDataset/etc. is the most effective way to consume data, as TensorFlow itself can then manage the asynchronous loading and prefetching of the data without having to involve Python. To learn more, see the tf.data guide.
Accumulating values in a loop
A common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use tf.TensorArray to accumulate results from a dynamically unrolled loop.
End of explanation
"""
|
enakai00/jupyter_ml4se_commentary
|
Solutions/06-pandas DataFrame-02-solution.ipynb
|
apache-2.0
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas import Series, DataFrame
"""
Explanation: Extracting data from a DataFrame
End of explanation
"""
from numpy.random import normal
def create_dataset(num):
data_x = np.linspace(0,1,num)
data_y = np.sin(2*np.pi*data_x) + normal(loc=0, scale=0.3, size=num)
return DataFrame({'x': data_x, 'y': data_y})
data = create_dataset(10)
data
square_error = 0.0
for i, line in data.iterrows():
square_error += (np.sin(2*np.pi*line.x) - line.y) ** 2
rmse = np.sqrt(square_error / len(data))
rmse
"""
Explanation: Exercises
(1) Using the function create_dataset(), create a DataFrame data consisting of num=10 data points. Then, using the iterrows method, compute the root mean squared error √(sum((sin(2πx) - y)**2) / num) between the y value of each data point (x, y) and the value of the function sin(2πx).
End of explanation
"""
x = create_dataset(10).x
x2 = x**2
x2.name = 'x2'
x3 = x**3
x3.name = 'x3'
x4 = x**4
x4.name = 'x4'
"""
Explanation: (2) Extract just the column 'x' from the DataFrame of (1) as a Series object and store it in the variable x. Then create a Series object whose elements are x**2 (each element squared) and store it in the variable x2. Likewise, store Series objects whose elements are x**3 and x**4 in the variables x3 and x4. Set the name property of each Series object to 'x2', 'x3', and 'x4', respectively.
End of explanation
"""
dataset = pd.concat([x,x2, x3, x4], axis=1)
dataset
"""
Explanation: (3) Concatenate x, x2, x3, and x4 created in (2) to build a DataFrame dataset whose columns are x, x2, x3, and x4.
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone
|
notebooks/prod/n00_datasets_generation.ipynb
|
mit
|
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
"""
Explanation: In this notebook the datasets for the predictor will be generated.
End of explanation
"""
# Input values
GOOD_DATA_RATIO = 0.99 # The ratio of non-missing values for a symbol to be considered good
SAMPLES_GOOD_DATA_RATIO = 0.9 # The ratio of non-missing values for an interval to be considered good
train_val_time = -1 # In real time days (-1 is for the full interval)
''' Step days will be fixed. That means that the datasets with longer base periods will have samples
that are more correlated. '''
step_days = 7 # market days
base_days = [7, 14, 28, 56, 112] # In market days
ahead_days = [1, 7, 14, 28, 56] # market days
datasets_params_list_df = pd.DataFrame([(x,y) for x in base_days for y in ahead_days],
columns=['base_days', 'ahead_days'])
datasets_params_list_df['train_val_time'] = train_val_time
datasets_params_list_df['step_days'] = step_days
datasets_params_list_df['GOOD_DATA_RATIO'] = GOOD_DATA_RATIO
datasets_params_list_df['SAMPLES_GOOD_DATA_RATIO'] = SAMPLES_GOOD_DATA_RATIO
datasets_params_list_df
"""
Explanation: Let's first define the list of parameters to use in each dataset.
End of explanation
"""
def generate_one_set(params):
# print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values))
tic = time()
train_val_time = int(params['train_val_time'])
base_days = int(params['base_days'])
step_days = int(params['step_days'])
ahead_days = int(params['ahead_days'])
print('Generating: base{}_ahead{}'.format(base_days, ahead_days))
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Getting the data
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
today = data_df.index[-1] # Real date
print(pid + ') data_df loaded')
# Drop symbols with many missing points
data_df = pp.drop_irrelevant_symbols(data_df, params['GOOD_DATA_RATIO'])
print(pid + ') Irrelevant symbols dropped.')
# Generate the intervals for the predictor
x, y = fe.generate_train_intervals(data_df,
train_val_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
print(pid + ') Intervals generated')
# Drop "bad" samples and fill missing data
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, params['SAMPLES_GOOD_DATA_RATIO'])
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
print(pid + ') Irrelevant samples dropped and missing data filled.')
# Pickle that
x.to_pickle('../../data/x_{}.pkl'.format(pid))
y.to_pickle('../../data/y_{}.pkl'.format(pid))
toc = time()
print('%s) %i intervals generated in: %i seconds.' % (pid, x.shape[0], (toc-tic)))
return pid, x, y
for ind in range(datasets_params_list_df.shape[0]):
pid, x, y = generate_one_set(datasets_params_list_df.iloc[ind,:])
datasets_params_list_df['x_filename'] = datasets_params_list_df.apply(lambda x:
'x_base{}_ahead{}.pkl'.format(int(x['base_days']),
int(x['ahead_days'])), axis=1)
datasets_params_list_df['y_filename'] = datasets_params_list_df.apply(lambda x:
'y_base{}_ahead{}.pkl'.format(int(x['base_days']),
int(x['ahead_days'])), axis=1)
datasets_params_list_df
datasets_params_list_df.to_pickle('../../data/datasets_params_list_df.pkl')
"""
Explanation: Now, let's define the function to generate each dataset.
Note: The way to treat the missing data was carefully thought out. Missing data is never filled across samples. Some symbols are discarded before the intervals are generated, and some intervals are later discarded. Only after that, and only within training sample intervals, is the missing data filled. That is done first forward and then backwards, to preserve causality as much as possible.
End of explanation
"""
|
kjihee/lab_study_group
|
2018/CodingInterview/Lecture_note/lecture_1.ipynb
|
mit
|
def find_overlap(string):
# Convert characters to ASCII codes
convert_ord = [ord(i) for i in string]
# ASCII codes are numbers in the range 0~255, e.g. "A": 65
if len(set(convert_ord)) > 255:
return False
hash = [False] * 256
for i in convert_ord:
if hash[i] is True:
return False
else:
hash[i] = True
return True
input_list = ['A','B','D','F']
find_overlap(input_list)
"""
Explanation: Lecture 1.1
An algorithm that checks whether all characters in a string are unique
["가", "나", "다", "라"] -> return True
["가", "가", "나", "다"] -> return False
Process
Type conversion (String to ASCII)
Create a hash (initialized as a list of False values whose length covers the ASCII code range 0~255)
While reading the elements of the input list, set an element's hash entry to True the first time it appears.
If the hash entry for an element is already True, the character is duplicated, so return False.
End of explanation
"""
input_list = [1,2,3,4]
def reverse_str_1(input_list):
input_list.reverse()
return input_list
def reverse_str_2(input_list):
return input_list[::-1]
"""
Explanation: Lecture 1.2
An algorithm that reverses a string
Method
Use Python's reverse() function
Use list slicing (indexing)
End of explanation
"""
def permutation(str_1,str_2):
if ''.join(sorted(list(str_1))).lower().strip() == ''.join(sorted(list(str_2))).lower().strip():
return True
else:
return False
permutation("abed","abde")
"""
Explanation: Lecture 1.3
An algorithm that determines whether two strings are permutations of each other (case-insensitive, duplicates not considered)
["I", "A", "E"], ["A", "E", "I"] -> return True
["A", "A"], ["B", "A"] -> return False
Process
Lowercase the strings.
Sort the strings.
Remove whitespace using strip.
Compare with the equality operator.
Tip
Python's sort() places whitespace characters before letters (see the one-line illustration below).
End of explanation
"""
def change_str(input_str):
result = input_str.replace(" ","%20")
return result
change_str("열공 하세요")
"""
Explanation: Lecture 1.4
Replace any spaces in a string with the characters "%20"
Method
Python's replace()
End of explanation
"""
def zip_str(input_str):
buffer = None
list_str = list(input_str)
result = []
count = 1
for i in range(len(list_str)):
if i == 0:
result.append(list_str[i])
buffer = list_str[i]
else:
if buffer != list_str[i]:
result.append(str(count))
result.append(list_str[i])
count = 1
else:
count += 1
result.append(str(count))
result = "".join(result)
return result
zip_str("aaabbccda")
"""
Explanation: Lecture 1.5
String compression using the repeat count of identical characters
aabbccda -> a2b2c2d1a1
Process
Convert the String to a List
Read the list elements sequentially (a temporary buffer variable is needed to compare against the next element)
Handle the first element of the list
If the buffer equals the current list element, increment count by 1
If the buffer differs from the current list element, append the count and the next element to the result list, then reset count
End of explanation
"""
|
brclark-usgs/flopy
|
examples/Notebooks/flopy3_array_outputformat_options.ipynb
|
bsd-3-clause
|
%matplotlib inline
import sys
import os
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
#Set name of MODFLOW exe
# assumes executable is in users path statement
version = 'mf2005'
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
mfexe = exe_name
#Set the paths
loadpth = os.path.join('..', 'data', 'freyberg')
modelpth = os.path.join('data')
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
ml = flopy.modflow.Modflow.load('freyberg.nam', model_ws=loadpth,
exe_name=exe_name, version=version)
ml.model_ws = modelpth
ml.write_input()
success, buff = ml.run_model()
if not success:
print ('Something bad happened.')
files = ['freyberg.hds', 'freyberg.cbc']
for f in files:
if os.path.isfile(os.path.join(modelpth, f)):
msg = 'Output file located: {}'.format(f)
print (msg)
else:
errmsg = 'Error. Output file cannot be found: {}'.format(f)
print (errmsg)
"""
Explanation: FloPy
A quick demo of how to control the ASCII format of numeric arrays written by FloPy
load and run the Freyberg model
End of explanation
"""
print(ml.lpf.hk[0].format)
"""
Explanation: Each Util2d instance now has a .format attribute, which is an ArrayFormat instance:
End of explanation
"""
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
"""
Explanation: The ArrayFormat class exposes each of the attributes seen in the ArrayFormat.__str__() call. ArrayFormat also exposes .fortran, .py and .numpy attributes, which are the respective format descriptors:
End of explanation
"""
ml.dis.botm[0].format.fortran = "(6f10.4)"
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
ml.write_input()
success, buff = ml.run_model()
"""
Explanation: (re)-setting .format
We can reset the format using a standard fortran type format descriptor
End of explanation
"""
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
"""
Explanation: Let's load the model we just wrote and check that the desired botm[0].format was used:
End of explanation
"""
ml.dis.botm[0].format.width = 9
ml.dis.botm[0].format.decimal = 1
print(ml1.dis.botm[0].format)
"""
Explanation: We can also reset individual format components (we can also generate some warnings):
End of explanation
"""
ml.dis.botm[0].format.free = True
print(ml1.dis.botm[0].format)
ml.write_input()
success, buff = ml.run_model()
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
"""
Explanation: We can also select free format. Note that setting to free format resets the format attributes to the default, max precision:
End of explanation
"""
|
weikang9009/pysal
|
notebooks/explore/segregation/decomposition_wrapper_example.ipynb
|
bsd-3-clause
|
import pandas as pd
import pickle
import numpy as np
import matplotlib.pyplot as plt
from pysal.explore import segregation
from pysal.explore.segregation.decomposition import DecomposeSegregation
"""
Explanation: Decomposition framework of the PySAL segregation module
This is a notebook that explains a step-by-step procedure to perform decomposition on comparative segregation measures.
First, let's import all the needed libraries.
End of explanation
"""
#filepath = '~/LTDB_Std_2010_fullcount.csv'
"""
Explanation: In this example, we are going to use census data for which the user must download their own copy, following guidelines similar to those explained in https://github.com/spatialucr/geosnap/tree/master/geosnap/data where you should download the full type file of 2010. The zipped file download will have a name that looks like LTDB_Std_All_fullcount.zip. After extracting the zipped content, the filepath of the data should look like this:
End of explanation
"""
df = pd.read_csv(filepath, encoding = "ISO-8859-1", sep = ",")
"""
Explanation: Then, we read the data:
End of explanation
"""
# This file can be download here: https://drive.google.com/open?id=1gWF0OCn6xuR_WrEj7Ot2jY6KI2t6taIm
with open('data/tracts_US.pkl', 'rb') as input:
map_gpd = pickle.load(input)
map_gpd['INTGEOID10'] = pd.to_numeric(map_gpd["GEOID10"])
gdf_pre = map_gpd.merge(df, left_on = 'INTGEOID10', right_on = 'tractid')
gdf = gdf_pre[['GEOID10', 'geometry', 'pop10', 'nhblk10']]
"""
Explanation: We are going to work with the variable of the nonhispanic black people (nhblk10) and the total population of each unit (pop10). So, let's read the map of all census tracts of US and select some specific columns for the analysis:
End of explanation
"""
# You can download this file here: https://drive.google.com/open?id=10HUUJSy9dkZS6m4vCVZ-8GiwH0EXqIau
with open('data/tract_metro_corresp.pkl', 'rb') as input:
tract_metro_corresp = pickle.load(input).drop_duplicates()
"""
Explanation: In this notebook, we use the Metropolitan Statistical Area (MSA) of US (we're also using the word 'cities' here to refer them). So, let's read the correspondence table that relates the tract id with the corresponding Metropolitan area...
End of explanation
"""
merged_gdf = gdf.merge(tract_metro_corresp, left_on = 'GEOID10', right_on = 'geoid10')
"""
Explanation: ..and merge them with the previous data.
End of explanation
"""
merged_gdf['compo'] = np.where(merged_gdf['pop10'] == 0, 0, merged_gdf['nhblk10'] / merged_gdf['pop10'])
merged_gdf.head()
"""
Explanation: We now build the composition variable (compo) which is the division of the frequency of the chosen group and total population. Let's inspect the first rows of the data.
End of explanation
"""
la_2010 = merged_gdf.loc[(merged_gdf.name == "Los Angeles-Long Beach-Anaheim, CA")]
la_2010.plot(column = 'compo', figsize = (10, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
"""
Explanation: Now, we choose two different metropolitan areas to compare their degree of segregation.
Map of the composition of the Metropolitan area of Los Angeles
End of explanation
"""
ny_2010 = merged_gdf.loc[(merged_gdf.name == 'New York-Newark-Jersey City, NY-NJ-PA')]
ny_2010.plot(column = 'compo', figsize = (20, 10), cmap = 'OrRd', legend = True)
plt.axis('off')
"""
Explanation: Map of the composition of the Metropolitan area of New York
End of explanation
"""
from pysal.explore.segregation.aspatial import GiniSeg
G_la = GiniSeg(la_2010, 'nhblk10', 'pop10')
G_ny = GiniSeg(ny_2010, 'nhblk10', 'pop10')
G_la.statistic - G_ny.statistic
"""
Explanation: We first compare the Gini index of both cities. Let's import the GiniSeg class from segregation, fit both indices and check the difference in the point estimates.
End of explanation
"""
help(DecomposeSegregation)
"""
Explanation: Let's decompose this difference according to Rey, S. et al "Comparative Spatial Segregation Analytics". Forthcoming. You can check the options available in this decomposition below:
End of explanation
"""
DS_composition = DecomposeSegregation(G_la, G_ny)
DS_composition.c_s
DS_composition.c_a
"""
Explanation: Composition Approach (default)
The difference of -0.10653 fitted previously can be decomposed into two components: the spatial component and the attribute component. Let's estimate both, respectively.
End of explanation
"""
DS_composition.plot(plot_type = 'cdfs')
"""
Explanation: So, the first thing to notice is that the attribute component, i.e., the part driven by a difference in the population structure (in this case, the composition), plays a more important role in the difference, since it has a higher absolute value.
The difference in the composition can be inspected in the plotting method with the type cdfs:
End of explanation
"""
DS_composition.plot(plot_type = 'maps')
"""
Explanation: If your data is a GeoDataFrame, it is also possible to visualize the counterfactual compositions with the argument plot_type = 'maps'
The first and second contexts are Los Angeles and New York, respectively.
End of explanation
"""
DS_share = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'share')
DS_share.plot(plot_type = 'cdfs')
"""
Explanation: Note that in all plotting methods, the title presents each component of the decomposition performed.
Share Approach
The share approach takes into consideration the share of each group in each city. Since this approach uses both the focus group share and the complementary group share to build the "counterfactual" total population of each unit, it is of interest to inspect all four of these cdf's.
ps.: The share is the population frequency of each group in each unit over the total population of that respective group.
End of explanation
"""
DS_share.plot(plot_type = 'maps')
"""
Explanation: We can see that the curves of the two contexts are closer to each other, which represents a drop in the importance of the population structure (attribute component) to -0.062. However, the attribute component still outweighs the spatial component (-0.045) in terms of importance, given their absolute magnitudes.
End of explanation
"""
DS_dual = DecomposeSegregation(G_la, G_ny, counterfactual_approach = 'dual_composition')
DS_dual.plot(plot_type = 'cdfs')
"""
Explanation: We can see that the counterfactual maps of the composition (outside of the main diagonal), in this case, are slightly different from the previous approach.
Dual Composition Approach
The dual_composition approach is similar to the composition approach. However, it also uses the counterfactual composition of the cdf of the complementary group.
End of explanation
"""
DS_dual.plot(plot_type = 'maps')
"""
Explanation: It is possible to see that the component values are very similar with slight changes from the composition approach.
End of explanation
"""
from pysal.explore.segregation.spatial import RelativeConcentration
RCO_la = RelativeConcentration(la_2010, 'nhblk10', 'pop10')
RCO_ny = RelativeConcentration(ny_2010, 'nhblk10', 'pop10')
RCO_la.statistic - RCO_ny.statistic
RCO_DS_composition = DecomposeSegregation(RCO_la, RCO_ny)
RCO_DS_composition.c_s
RCO_DS_composition.c_a
"""
Explanation: The counterfactual distributions are virtually the same as (but not equal to) the ones from the composition approach.
Inspecting a different index: Relative Concentration
End of explanation
"""
|
crowd-course/datascience
|
Error Analysis and Classification Measures.ipynb
|
mit
|
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
from sklearn.cross_validation import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
%matplotlib inline
churn_df = pd.read_csv('../data/churn.csv')
col_names = churn_df.columns.tolist()
print "Column names:"
print col_names
to_show = col_names[:6] + col_names[-6:]
print "\nSample data:"
churn_df[to_show].head(6)
"""
Explanation: CUSTOMER CHURN
Credits: yhat blog
"Churn Rate" is a business term describing the rate at which customers leave or cease paying for a product or service. The need to retain customers growing interest among companies to develop better churn-detection techniques, leading many to look to data mining and machine learning.
"Predicting churn is particularly important for businesses w/ subscription models such as cell phone, cable, or merchant credit card processing plans. But modeling churn has wide reaching applications in many domains. For example, casinos have used predictive models to predict ideal room conditions for keeping patrons at the blackjack table and when to reward unlucky gamblers with front row seats to Celine Dion. Similarly, airlines may offer first class upgrades to complaining customers. The list goes on".
THE DATASET
The data set we'll be using is a longstanding telecom customer data set. The data is straightforward. Each row represents a subscribing telephone customer. Each column contains customer attributes such as phone number, call minutes used during different times of day, charges incurred for services, lifetime account duration, and whether or not the customer is still a customer.
End of explanation
"""
# Isolate target data
churn_result = churn_df['Churn?']
y = np.where(churn_result == 'True.',1,0)
# We don't need these columns
to_drop = ['State','Area Code','Phone','Churn?']
churn_feat_space = churn_df.drop(to_drop,axis=1)
# 'yes'/'no' has to be converted to boolean values
# NumPy converts these from boolean to 1. and 0. later
yes_no_cols = ["Int'l Plan","VMail Plan"]
churn_feat_space[yes_no_cols] = churn_feat_space[yes_no_cols] == 'yes'
# Pull out features for future use
features = churn_feat_space.columns
print features
X = churn_feat_space.as_matrix().astype(np.float)
# This is important
scaler = StandardScaler()
X = scaler.fit_transform(X)
print "Feature space holds %d observations and %d features" % X.shape
print "Unique target labels:", np.unique(y)
"""
Explanation: We'll be keeping the statistical model pretty simple for this example so the feature space is almost unchanged from what you see above. The following code simply drops irrelevant columns and converts strings to boolean values (since models don't handle "yes" and "no" very well). The rest of the numeric columns are left untouched.
End of explanation
"""
from sklearn.cross_validation import KFold
def run_cv(X,y,clf_class,**kwargs):
# Construct a kfolds object
kf = KFold(len(y),n_folds=3,shuffle=True)
y_pred = y.copy()
# Iterate through folds
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
# Initialize a classifier with key word arguments
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
y_pred[test_index] = clf.predict(X_test)
return y_pred
"""
Explanation: Many predictors care about the relative size of different features even though those scales might be arbitrary. For instance: the number of points a basketball team scores per game will naturally be a couple orders of magnitude larger than their win percentage. But this doesn't mean that the latter is 100 times less significant. StandardScaler fixes this by normalizing each feature to a range of around -1.0 to 1.0, thereby preventing models from misbehaving.
feature space X
set of target values y
EVALUATING MODEL PERFORMANCE
Express, test, cycle. A machine learning pipeline should be anything but static. There are always new features to design, new data to use, new classifiers to consider each with unique parameters to tune. And for every change it's critical to be able to ask, "Is the new version better than the last?" So how do I do that?
CROSS VALIDATION
As a good start, CROSS VALIDATION will be used throughout this example. Cross validation attempts to avoid OVERFITTING (training on and predicting the same datapoint) while still producing a prediction for each observation in the dataset. This is accomplished by systematically hiding different subsets of the data while training a set of models. After training, each model predicts on the subset that had been hidden from it, emulating multiple train-test splits. When done correctly, every observation will have a 'fair' corresponding prediction.
End of explanation
"""
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.linear_model import LogisticRegression as LR
from sklearn.ensemble import GradientBoostingClassifier as GBC
from sklearn.metrics import average_precision_score
def accuracy(y_true,y_pred):
# NumPy interpretes True and False as 1. and 0.
return np.mean(y_true == y_pred)
print "Logistic Regression:"
print "%.3f" % accuracy(y, run_cv(X,y,LR))
print "Gradient Boosting Classifier"
print "%.3f" % accuracy(y, run_cv(X,y,GBC))
print "Support vector machines:"
print "%.3f" % accuracy(y, run_cv(X,y,SVC))
print "Random forest:"
print "%.3f" % accuracy(y, run_cv(X,y,RF))
print "K-nearest-neighbors:"
print "%.3f" % accuracy(y, run_cv(X,y,KNN))
"""
Explanation: Algorithms compared:
support vector machines
random forest
k-nearest-neighbors.
Afterwards, we pass each to cross validation and determine how often the classifier predicts the correct class.
End of explanation
"""
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
def draw_confusion_matrices(confusion_matricies,class_names):
class_names = class_names.tolist()
for cm in confusion_matrices:
classifier, cm = cm[0], cm[1]
print(cm)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title('Confusion matrix for %s' % classifier)
fig.colorbar(cax)
ax.set_xticklabels([''] + class_names)
ax.set_yticklabels([''] + class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
y = np.array(y)
class_names = np.unique(y)
confusion_matrices = [
( "Support Vector Machines", confusion_matrix(y,run_cv(X,y,SVC)) ),
( "Random Forest", confusion_matrix(y,run_cv(X,y,RF)) ),
( "K-Nearest-Neighbors", confusion_matrix(y,run_cv(X,y,KNN)) ),
( "Gradient Boosting Classifier", confusion_matrix(y,run_cv(X,y,GBC)) ),
( "Logisitic Regression", confusion_matrix(y,run_cv(X,y,LR)) )
]
# Pyplot code not included to reduce clutter
# from churn_display import draw_confusion_matrices
%matplotlib inline
draw_confusion_matrices(confusion_matrices,class_names)
"""
Explanation: Random forest seems to be the winner, but... ?
Precision and recall
Measurements aren't golden formulas which always spit out high numbers for good models and low numbers for bad ones. Inherently they convey something sentiment about a model's performance, and it's the job of the human designer to determine each number's validity.
The problem with accuracy is that outcomes aren't necessarily equal.
If my classifier predicted a customer would churn and they didn't, that's not the best but it's forgivable. However, if my classifier predicted a customer would return, I didn't act, and then they churned... that's really bad.
We'll be using another built-in scikit-learn function to construct a confusion matrix.
A CONFUSION MATRIX is a way of visualizing predictions made by a classifier and is just a table showing the distribution of predictions for a specific class.
The x-axis indicates the true class of each observation (whether a customer churned or not)
The y-axis corresponds to the class predicted by the model (whether my classifier said a customer would churn or not).
Confusion matrix and confusion tables:
The columns represent the actual class and the rows represent the predicted class.
Let's evaluate performance:
| | Condition true | Condition false |
|------|----------------|----------------|
| Prediction true | True positive | False positive |
| Prediction false | False negative | True negative |
Sensitivity, recall, or true positive rate quantifies the model's ability to predict our positive class.
$$TPR = \frac{ TP}{TP + FN}$$
Specificity, or true negative rate, quantifies the model's ability to predict our negative class.
$$TNR = \frac{ TN}{FP + TN}$$
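To make these definitions concrete, here is a tiny self-contained sketch (hypothetical labels, not our churn data) showing how recall and precision fall out of the four cells of such a table, and that they agree with the scikit-learn helpers imported above:
python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# sklearn lays the matrix out as [[TN, FP], [FN, TP]]: rows are the true class,
# columns the predicted class (transposed relative to the table above).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Recall (TPR) = TP / (TP + FN) = %.3f" % (tp / float(tp + fn)))
print("Precision    = TP / (TP + FP) = %.3f" % (tp / float(tp + fp)))
print("sklearn agrees: recall=%.3f precision=%.3f" % (recall_score(y_true, y_pred), precision_score(y_true, y_pred)))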
End of explanation
"""
from sklearn.metrics import roc_curve, auc
from scipy import interp
def plot_roc(X, y, clf_class, **kwargs):
kf = KFold(len(y), n_folds=5, shuffle=True)
y_prob = np.zeros((len(y),2))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train_index, test_index) in enumerate(kf):
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
# Predict probabilities, not classes
y_prob[test_index] = clf.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y[test_index], y_prob[test_index, 1])
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))
mean_tpr /= len(kf)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
print "Support vector machines:"
plot_roc(X,y,SVC,probability=True)
print "Random forests:"
plot_roc(X,y,RF,n_estimators=18)
print "K-nearest-neighbors:"
plot_roc(X,y,KNN)
print "Gradient Boosting Classifier:"
plot_roc(X,y,GBC)
"""
Explanation: When an individual churns, how often does my classifier predict that correctly?
Precision and Recall
This measurement is called "RECALL" and a quick look at these diagrams can demonstrate that random forest is clearly best by this criterion. Out of all the churn cases (outcome "1") random forest correctly retrieved 330 out of 482. This translates to a churn "recall" of about 68% (330/482 ≈ 2/3), far better than support vector machines (≈50%) or k-nearest-neighbors (≈35%).
Another question of importance is "PRECISION", or: when a classifier predicts an individual will churn, how often does that individual actually churn? Random forest again outperforms the other two at about 93% precision (330 out of 356), with support vector machines a little behind at about 87% (235 out of 269). K-nearest-neighbors lags at about 80%.
While precision and recall, just like accuracy, still rank random forest above SVC and KNN, this won't always be true.
When different measurements do return a different pecking order, understanding the values and tradeoffs of each rating should affect how you proceed.
ROC Plots & AUC
Another important evaluation tool to consider is the ROC plot.
Simply put, the area under the curve (AUC) of a receiver operating characteristic (ROC) curve is a way to reduce ROC performance to a single value representing expected performance.
To explain with a little more detail, a ROC curve plots the true positive rate (sensitivity) against the false positive rate (1 − specificity) for a binary classifier as its discrimination threshold is varied.
Since a random classifier traces the diagonal of the unit square, it has an AUC of 0.5. Minimally, classifiers should perform better than this, and the extent to which they score higher than one another (meaning the area under the ROC curve is larger) indicates better expected performance.
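As a quick illustration of that single-number summary, the sketch below reuses out-of-fold churn probabilities from the run_prob_cv helper defined later in this notebook (so it assumes that cell has been run) and feeds them to roc_auc_score:
python
from sklearn.metrics import roc_auc_score

# Out-of-fold probability of churn for each observation, then a single AUC value.
rf_prob = run_prob_cv(X, y, RF, n_estimators=18)[:, 1]
print("Random forest AUC: %.3f" % roc_auc_score(y, rf_prob))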
End of explanation
"""
train_index,test_index = train_test_split(churn_df.index)
forest = RF()
forest_fit = forest.fit(X[train_index], y[train_index])
forest_predictions = forest_fit.predict(X[test_index])
importances = forest_fit.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1][:10]  # top 10 features, largest first
# Print the feature ranking
print("Feature ranking:")
for f in range(10):
    print("%d. %s (%f)" % (f + 1, features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
#import pylab as pl
plt.figure()
plt.title("Feature importances")
plt.bar(range(10), importances[indices], yerr=std[indices], color="r", align="center")
plt.xticks(range(10), indices)
plt.xlim([-1, 10])
plt.show()
"""
Explanation: Feature Influence on Customer Behavior
Now that we understand the accuracy of each individual model for our particular dataset, let's dive a little deeper to get a better understanding of what features or behaviours are causing our customers to churn.
We will be using a RandomForestClassifer to build an ensemble of decision trees to predict whether a customer will churn or not churn.
One of the first steps in building a decision tree is calculating the information gain associated with splitting on a particular feature.
Let's look at the Top 10 features in our dataset that contribute to customer churn:
End of explanation
"""
def run_prob_cv(X, y, clf_class, roc=False, **kwargs):
kf = KFold(len(y), n_folds=5, shuffle=True)
y_prob = np.zeros((len(y),2))
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
# Predict probabilities, not classes
y_prob[test_index] = clf.predict_proba(X_test)
return y_prob
"""
Explanation: Thinking in terms of Probabilities
Decision making often favors probability over simple classifications. There's plainly more information in statements like "there's a 20% chance of rain tomorrow" and "about 55% of test takers pass the California bar exam" than just saying "it shouldn't rain tomorrow" or "you'll probably pass."
Probability predictions for churn also allow us to gauge a customer's expected value, and their expected loss.
Who do you want to reach out to first: the client with an 80% churn risk who pays 20,000 annually, or the client who's worth 100,000 a year with a 40% risk? How much should you spend on each client?
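A quick back-of-the-envelope sketch of that trade-off, using the hypothetical numbers from the question above and the simplifying assumption that expected loss is roughly churn probability times annual value:
python
# Hypothetical clients from the question above.
clients = {"client A": (0.80, 20000),    # 80% churn risk, pays 20,000 a year
           "client B": (0.40, 100000)}   # 40% churn risk, pays 100,000 a year

for name, (p_churn, annual_value) in sorted(clients.items()):
    # Expected annual loss if we do nothing about this client.
    print("%s: expected loss = %.0f" % (name, p_churn * annual_value))
# Client B's expected loss (40,000) dwarfs client A's (16,000), so a pure
# expected-value view says to reach out to B first despite the lower risk.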
End of explanation
"""
import warnings
warnings.filterwarnings('ignore')
# Use 10 estimators so predictions are all multiples of 0.1
pred_prob = run_prob_cv(X, y, RF, n_estimators=10)
pred_churn = pred_prob[:,1]
is_churn = y == 1
# Number of times a predicted probability is assigned to an observation
counts = pd.value_counts(pred_churn)
counts[:]
from collections import defaultdict
true_prob = defaultdict(float)
# calculate true probabilities
for prob in counts.index:
true_prob[prob] = np.mean(is_churn[pred_churn == prob])
true_prob = pd.Series(true_prob)
# pandas-fu
counts = pd.concat([counts,true_prob], axis=1).reset_index()
counts.columns = ['pred_prob', 'count', 'true_prob']
counts
"""
Explanation: How good is a good predictor?
Determining how good a predictor is when it gives probabilities rather than classes is a bit more difficult. If I predict there's a 20% likelihood of rain tomorrow I don't get to live out all the possible outcomes of the universe. It either rains or it doesn't.
What helps is that the predictors aren't making one prediction, they're making 3000+. So every time I predict an event to occur 20% of the time, I can see how often those events actually happen. Here we'll use pandas to compare the predictions made by random forest against the actual outcomes.
End of explanation
"""
from churn_measurements import calibration, discrimination
from sklearn.metrics import roc_curve, auc
from scipy import interp
from __future__ import division
from operator import idiv
def print_measurements(pred_prob):
churn_prob, is_churn = pred_prob[:,1], y == 1
print " %-20s %.4f" % ("Calibration Error", calibration(churn_prob, is_churn))
print " %-20s %.4f" % ("Discrimination", discrimination(churn_prob,is_churn))
print "Note -- Lower calibration is better, higher discrimination is better"
print "Support vector machines:"
print_measurements(run_prob_cv(X,y,SVC,probability=True))
print "Random forests:"
print_measurements(run_prob_cv(X,y,RF,n_estimators=18))
print "K-nearest-neighbors:"
print_measurements(run_prob_cv(X,y,KNN))
print "Gradient Boosting Classifier:"
print_measurements(run_prob_cv(X,y,GBC))
print "Random Forest:"
print_measurements(run_prob_cv(X,y,RF))
"""
Explanation: We can see that random forest predicted that 75 individuals would have a 0.9 probability of churn and in actuality that group had a ~0.97 rate.
Calibration and Discrimination
Using the DataFrame above we can draw a pretty simple graph to help visualize probability measurements.
The x axis represents the churn probabilities which random forest assigned to a group of individuals.
The y axis is the actual rate of churn within that group, and each point is scaled relative to the size of the group.
Calibration is a relatively simple measurement and can be summed up as follows: events predicted to happen 60% of the time should happen 60% of the time. For all individuals I predict to have a churn risk of between 30 and 40%, the true churn rate for that group should be about 35%. For the graph above, think of it as: how close are my predictions to the red line?
DISCRIMINATION measures how far my predictions are from the green line. Why is that important?
Well, if we assign a churn probability of 15% to every individual we'll have near-perfect calibration due to averaging, but we'll be lacking any real insight. Discrimination gives a model a better score if it's able to isolate groups which sit further from the base rate.
Approach sources:
https://www.google.com/search?q=Measures+of+Discrimination+Skill+in+Probabilistic+Judgment&oq=Measures+of+Discrimination+Skill+in+Probabilistic+Judgment and https://github.com/EricChiang/churn/blob/master/churn_measurements.py.
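The calibration and discrimination helpers imported in the measurement cell above come from the churn_measurements module linked just above. The sketch below is a rough reconstruction of the kind of computation they perform, based only on the definitions in this section; the real module may differ in details such as weighting or normalisation:
python
import numpy as np
import pandas as pd

def calibration_sketch(pred_prob, is_churn):
    # Group observations by their predicted probability and compare each
    # group's prediction with its observed churn rate (lower is better).
    df = pd.DataFrame({"pred": pred_prob, "churn": is_churn.astype(float)})
    grouped = df.groupby("pred")["churn"]
    counts, observed = grouped.count(), grouped.mean()
    gaps = observed.values - observed.index.values
    return np.sum(counts.values * gaps ** 2) / float(len(df))

def discrimination_sketch(pred_prob, is_churn):
    # Reward predictions whose groups sit far from the overall base rate
    # (higher is better): a constant prediction scores zero here.
    base_rate = float(np.mean(is_churn))
    df = pd.DataFrame({"pred": pred_prob, "churn": is_churn.astype(float)})
    grouped = df.groupby("pred")["churn"]
    counts, observed = grouped.count(), grouped.mean()
    return np.sum(counts.values * (observed.values - base_rate) ** 2) / float(len(df))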
End of explanation
"""
|
ddyy345/trajAPI
|
test/testAPI_propane.ipynb
|
mit
|
import itertools
import string
import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from msibi import MSIBI, State, Pair, mie
import mdtraj as md
"""
Explanation: testAPI_propane
Created by Davy Yue 2017-06-22
Imports
End of explanation
"""
t = md.load('traj_unwrapped.dcd', top='start_aa.hoomdxml')
"""
Explanation: PROPANE - edition (code to be expanded) ==============================
Where the real magic happens
End of explanation
"""
cg_idx = 0
start_idx = 0
n_propane = 1024 #passed later
propane_map = {0: [0, 1, 2]}
system_mapping = {}
for n in range(n_propane):
for bead, atoms in propane_map.items():
system_mapping[cg_idx] = [x + start_idx for x in atoms]
start_idx += len(atoms)
cg_idx += 1
# print(system_mapping)
"""
Explanation: Mapping and application
Keys are CG bead indices and values are lists of atom indices corresponding to each CG bead,
e.g., {prop0: [0, 1, 2], prop1: [3, 4, 5], prop2: [6, 7, 8], …}.
Construct the mapping for the entire system:
End of explanation
"""
from mdtraj.core import element
list(t.top.atoms)[0].element = element.carbon
list(t.top.atoms)[0].element.mass
for atom in t.top.atoms:
atom.element = element.carbon
cg_xyz = np.empty((t.n_frames, len(system_mapping), 3))
for cg_bead, aa_indices in system_mapping.items():
cg_xyz[:, cg_bead, :] = md.compute_center_of_mass(t.atom_slice(aa_indices))
# print(cg_xyz)
"""
Explanation: With the mapping for the whole system in hand, apply it to the all-atom trajectory
End of explanation
"""
cg_top = md.Topology()
for cg_bead in system_mapping.keys():
cg_top.add_atom('carbon', element.virtual_site, cg_top.add_residue('A', cg_top.add_chain()))
cg_traj = md.Trajectory(cg_xyz, cg_top, time=None, unitcell_lengths=t.unitcell_lengths, unitcell_angles=t.unitcell_angles)
cg_traj.save_dcd('cg_traj.dcd')
# print(cg_traj)
# print(cg_top)
# print(cg_xyz)
"""
Explanation: Traj & Obj
Create new Trajectory object & CG Topology object
Save resultant trajectory file
End of explanation
"""
pairs = cg_traj.top.select_pairs(selection1='name "carbon"', selection2='name "carbon"')
# mdtraj.compute_rdf(traj, pairs=None, r_range=None, bin_width=0.005, n_bins=None, periodic=True, opt=True)
r, g_r = md.compute_rdf(cg_traj, pairs=pairs, r_range=(0, 1.2), bin_width=0.005)
np.savetxt('rdfs_aa.txt', np.transpose([r, g_r]))
fig, ax = plt.subplots()
ax.plot(r, g_r)
ax.set_xlabel("r")
ax.set_ylabel("g(r)")
"""
Explanation: Calculate RDF and save
End of explanation
"""
|
dedx/STAR2015
|
notebooks/CountingStarsWithNumPy.ipynb
|
mit
|
import scipy.ndimage as ndi
import requests
from StringIO import StringIO
#Pick an image from the list above and fetch it with requests.get
#The default picture here is of M45 - the Pleiades Star Cluster.
response = requests.get("http://imgsrc.hubblesite.org/hu/db/images/hs-2004-20-a-large_web.jpg")
pic = ndi.imread(StringIO(response.content))
"""
Explanation: Counting Stars with NumPy
This example introduces some of the image processing capabilities available with NumPy and the SciPy ndimage package. More extensive documentation and tutorials can be found through the SciPy Lectures series.
Image I/O
Here is a list of beautiful star field images taken by the Hubble Space Telescope:
http://imgsrc.hubblesite.org/hu/db/images/hs-2004-20-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1993-13-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1995-32-c-full_jpg.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1993-13-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-2002-10-c-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1999-30-b-full_jpg.jpg
We can use the SciPy ndimage library to read image data into NumPy arrays. If we want to fetch a file off the web, we also need some help from the requests and StringIO libraries:
End of explanation
"""
%pylab inline
import matplotlib.pyplot as plt
plt.imshow(pic);
"""
Explanation: Image Visualization
We can plot the image using matplotlib:
End of explanation
"""
print pic.shape
"""
Explanation: Image Inspection
We can examine the image properties:
End of explanation
"""
#Color array [R,G,B] of very first pixel
print pic[0,0]
"""
Explanation: Array indices are (row, column): pic.shape above is (height, width, 3), so pic[0,0] is the top-left pixel.
Colors are represented by RGB triples. Black is (0,0,0), White is (255, 255, 255) or (0xFF, 0xFF, 0xFF) in hexadecimal. Think of it as a color cube with the three axes representing the different possible colors. The furthest away from the origin (black) is white.
End of explanation
"""
#find value of max pixel with aggregates
print pic.sum(axis=2).max() #numbering from 0, axis 2 is the color depth
"""
Explanation: We could write code to find the brightest pixel in the image, where "brightest" means highest value of R+G+B. For the 256 color scale, the greatest possible value is 3 * 255, or 765. One way to do this would be to write a set of nested loops over the pixel dimensions, calculating the sum R+G+B for each pixel, but that would be rather tedious and slow.
We could process the information faster if we take advantage of the speedy NumPy slicing, aggregates, and ufuncs. Remember that any time we can eliminate interpreted Python loops we save a lot of processing time.
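For example, extending the aggregate above, a fully vectorized way to get both the brightest total value and its location (a short sketch reusing the pic array loaded earlier, with no explicit pixel loop) is:
python
import numpy as np

# Total luminance per pixel: collapse the color axis, leaving a rows x cols array.
luminance = pic.sum(axis=2)

print("Maximum R+G+B value: %d" % luminance.max())
# argmax gives a flat index; unravel_index turns it back into (row, column).
row, col = np.unravel_index(luminance.argmax(), luminance.shape)
print("Brightest pixel at row %d, column %d with color %s" % (row, col, pic[row, col]))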
End of explanation
"""
def monochrome(pic_array, threshold):
"""replace the RGB values in the loaded image with either
black or white depending on whether its total
luminance is above or below some threshold
passed in by the user"""
mask = (pic_array.sum(axis=2) >= threshold) #could also be done in one step
pic_array[mask] = 0 #BLACK - broadcasting at work here
pic_array[~mask] = 255 #WHITE - broadcasting at work here
return
#Get another copy to convert to B/W
bwpic = ndi.imread(StringIO(response.content))
#This threshold is a scalar, not an RGB triple
#We're looking for pixels whose total color value is 600 or greater
monochrome(bwpic,200+200+200)
plt.imshow(bwpic);
"""
Explanation: Image Feature Extraction
Now that we know how to read in the image as a NumPy array, let's count the stars above some threshold brightness. Start by converting the image to B/W, so that which pixels belong to stars and which don't is unambiguous. We'll use black for stars and white for background, since it's easier to see black-on-white than the reverse.
End of explanation
"""
a = np.array([[0,0,1,1,0,0],
[0,0,0,1,0,0],
[1,1,0,0,1,0],
[0,0,0,1,0,0]])
"""
Explanation: The way to count the features (stars) in the image is to identify "blobs" of connected or adjacent black pixels.
A traditional implementation of this algorithm using plain Python loops is presented in the Multimedia Programming lesson from Software Carpentry. This was covered in the notebook Counting Stars.
Let's see how to implement such an algorithm much more efficiently using numpy and scipy.ndimage.
The scipy.ndimage.label function will use a structuring element (cross-shaped by default) to search for features. As an example, consider the simple array:
End of explanation
"""
labeled_array, num_features = ndi.label(a)
print(num_features)
print(labeled_array)
"""
Explanation: There are four unique features here, if we only count those that have neighbors along a cross-shaped structuring element.
End of explanation
"""
s = [[1,1,1],
[1,1,1],
[1,1,1]]
#Note, that scipy.ndimage.generate_binary_structure(2,2) would also do the same thing.
print s
"""
Explanation: If we wish to consider elements connected on the diagonal, as well as the cross structure, we define a new structuring element:
End of explanation
"""
labeled_array, num_features = ndi.label(a, structure=s)
print(num_features)
"""
Explanation: Label the image using the new structuring element:
End of explanation
"""
print(labeled_array)
"""
Explanation: Note that features 1, 3, and 4 from above are now considered a single feature
End of explanation
"""
labeled_array, num_stars = ndi.label(~bwpic) #Count and label the complement
print num_stars
plt.imshow(labeled_array);
"""
Explanation: Let's use ndi.label to count up the stars in our B/W starfield image.
End of explanation
"""
locations = ndi.find_objects(labeled_array)
print locations[9]
label_indices = [(labeled_array[:,:,0] == i).nonzero() for i in xrange(1, num_stars+1)]
print label_indices[9]
"""
Explanation: Label returns an array the same shape as the input where each "unique feature has a unique value", so if you want the indices of the features you use a list comprehension to extract the exact feature indices. Something like:
label_indices = [(labeled_array[:,:,0] == i).nonzero() for i in xrange(1, num_stars+1)]
or use the ndi.find_objects method to obtain a tuple of feature locations as slices to obtain the general location of the star but not necessarily the correct shape.
End of explanation
"""
star_sizes = [(label_indices[i-1][0]).size for i in xrange(1, num_stars+1)]
print len(star_sizes)
biggest_star = int(np.argmax(star_sizes))
print biggest_star
print star_sizes[biggest_star]
bwpic[label_indices[biggest_star][0],label_indices[biggest_star][1],:] = (255,0,0)
plt.imshow(bwpic);
"""
Explanation: Let's change the color of the largest star in the field to red. To find the largest star, look at the lengths of the arrays stored in label_indices.
End of explanation
"""
|
BrownDwarf/ApJdataFrames
|
notebooks/Luhman2009.ipynb
|
mit
|
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
"""
Explanation: ApJdataFrames 009: Luhman2009
Title: An Infrared/X-Ray Survey for New Members of the Taurus Star-Forming Region
Authors: Kevin L Luhman, E. E. Mamajek, P R Allen, and Kelle L Cruz
Data is from this paper:
http://iopscience.iop.org/0004-637X/703/1/399/article#apj319072t2
End of explanation
"""
tbl2 = pd.read_csv("http://iopscience.iop.org/0004-637X/703/1/399/suppdata/apj319072t2_ascii.txt",
nrows=43, sep='\t', skiprows=2, na_values=[" sdotsdotsdot"])
tbl2.drop("Unnamed: 10",axis=1, inplace=True)
"""
Explanation: Table 2- Members of Taurus in Spectroscopic Sample
End of explanation
"""
new_names = ['2MASS', 'Other_Names', 'Spectral_Type', 'T_eff', 'A_J','L_bol','Membership',
'EW_Halpha', 'Basis of Selection', 'Night']
old_names = tbl2.columns.values
tbl2.rename(columns=dict(zip(old_names, new_names)), inplace=True)
tbl2.head()
sns.set_context("notebook", font_scale=1.5)
plt.plot(tbl2.T_eff, tbl2.L_bol, '.')
plt.ylabel(r"$L/L_{sun}$")
plt.xlabel(r"$T_{eff} (K)$")
plt.yscale("log")
plt.title("Luhman et al. 2009 Taurus Members")
plt.xlim(5000,2000)
"""
Explanation: Clean the column names.
End of explanation
"""
! mkdir ../data/Luhman2009
tbl2.to_csv("../data/Luhman2009/tbl2.csv", sep="\t")
"""
Explanation: Save the data tables locally.
End of explanation
"""
|
xsolo/machine-learning
|
face_detect/MLPClassifier.ipynb
|
mit
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle

data = pd.read_csv('fer2013/fer2013.csv')
data = shuffle(data)
X = data['pixels']
y = data['emotion']
X = pd.Series([np.array(x.split()).astype(int) for x in X])
# convert one column as list of ints into dataframe where each item in array is a column
X = pd.DataFrame(np.matrix(X.tolist()))
df = pd.DataFrame(y)
df.loc[:,'f'] = pd.Series(-1, index=df.index)
df.groupby('emotion').count()
# This function plots the given sample set of images as a grid with labels
# if labels are available.
def plot_sample(S,w=48,h=48,labels=None):
m = len(S);
# Compute number of items to display
display_rows = int(np.floor(np.sqrt(m)));
display_cols = int(np.ceil(m / display_rows));
fig = plt.figure()
S = S.as_matrix()
for i in range(0,m):
arr = S[i,:]
arr = arr.reshape((w,h))
ax = fig.add_subplot(display_rows,display_cols , i+1)
ax.imshow(arr, aspect='auto', cmap=plt.get_cmap('gray'))
if labels is not None:
ax.text(0,0, '{}'.format(labels[i]), bbox={'facecolor':'white', 'alpha':0.8,'pad':2})
ax.axis('off')
plt.show()
print ('0=Angry', '1=Disgust', '2=Fear', '3=Happy', '4=Sad', '5=Surprise', '6=Neutral')
samples = X.sample(16)
plot_sample(samples,48,48,y[samples.index].as_matrix())
"""
Explanation: First, let's load the data set from fer2013/fer2013.csv: the FER2013 facial expression data set of 48x48 grayscale face images, each labelled with one of seven emotions (the labels printed below). The CSV is expected to sit in the fer2013/ directory next to this notebook; it was originally distributed through Kaggle's facial expression recognition challenge, and a number of mirrors of the file exist.
End of explanation
"""
from sklearn.neural_network import MLPClassifier
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize
# CALC AUC_ROC, binarizing each label
y_b = pd.DataFrame(label_binarize(y, classes=[0,1,2,3,4,5,6]))
n_classes = y_b.shape[1]
# since the data we have is one big array, we want to split it into training
# and testing sets, the split is 70% goes to training and 30% of data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y_b, test_size=0.3)
neural_network =(100,)
clfs ={}
for a in [1,0.1,1e-2,1e-3,1e-4,1e-5]:
    # for this exercise we are using MLPClassifier with the lbfgs optimizer (the family of quasi-Newton methods). In my simple
    # experiments it produces good quality outcomes
clf = MLPClassifier( alpha=a, hidden_layer_sizes=neural_network, random_state=1)
clf.fit(X_train, y_train)
# So after the classifier is trained, lets see what it predicts on the test data
prediction = clf.predict(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test.as_matrix()[:,i], prediction[:,i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.as_matrix().ravel(), prediction.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
print ("ROC_AUC (micro) score is {:.04f} with alpha {}".format(roc_auc["micro"], a))
clfs[a] = clf
samples = X_test.sample(16)
p = clfs.get(0.001).predict(samples)
plot_sample(samples,48,48,[x.argmax(axis=0) for x in p])
p=y_test.loc[samples.index].as_matrix()
plot_sample(samples,48,48,[x.argmax(axis=0) for x in p])
"""
Explanation: Now, let's use a neural network with 1 hidden layer of 100 neurons (neural_network = (100,) above). The input layer has X_train.shape[1] features, one per pixel of the 48x48 images (2304 in total), excluding the extra bias unit.
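A quick way to sanity-check the architecture after fitting is to look at the shapes of the learned weight matrices (a sketch, assuming clf is any of the fitted classifiers from the loop above, for example clfs[0.001]):
python
# coefs_ holds one weight matrix per pair of consecutive layers, so its shapes
# spell out the architecture: (n_features, 100) then (100, n_classes).
for i, W in enumerate(clf.coefs_):
    print("weights between layer %d and layer %d: %s" % (i, i + 1, str(W.shape)))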
End of explanation
"""
|
CNS-OIST/STEPS_Example
|
user_manual/source/API_2/Interface_Tutorial_2_IP3.ipynb
|
gpl-2.0
|
import steps.interface
from steps.model import *
from steps.geom import *
from steps.rng import *
from steps.sim import *
from steps.saving import *
r = ReactionManager()
mdl = Model()
with mdl:
Ca, IP3, R, RIP3, Ropen, RCa, R2Ca, R3Ca, R4Ca = Species.Create()
surfsys = SurfaceSystem.Create()
with surfsys:
# IP3 and activating Ca binding
R.s + IP3.o <r['r1']> RIP3.s
RIP3.s + Ca.o <r['r2']> Ropen.s
r['r1'].K = 1000e6, 25800
r['r2'].K = 8000e6, 2000
# Inactivating Ca binding
R.s + Ca.o <r['r3']> RCa.s
RCa.s + Ca.o <r['r4']> R2Ca.s
R2Ca.s + Ca.o <r['r5']> R3Ca.s
R3Ca.s + Ca.o <r['r6']> R4Ca.s
r['r3'].K = 8.889e6, 5
r['r4'].K = 20e6, 10
r['r5'].K = 40e6, 15
r['r6'].K = 60e6, 20
# Ca ions passing through open IP3R channel
Ca.i + Ropen.s >r[1]> Ropen.s + Ca.o
r[1].K = 2e8
"""
Explanation: Surface-Volume Reactions
<div class="admonition note">
**Topics**: Surface reactions, chained reactions, advanced ResultSelector usage, data saving to file.
</div>
In the previous chapter we declared reactions taking place inside a volume. In this chapter we consider another
type of kinetic reaction, associated with the steps.model.SurfaceSystem container, which defines a
reaction taking place on a surface (or patch) connecting two compartments
(arbitrarily naming one of them the “inner” compartment, and the other one
the “outer” compartment). Reactants and products can therefore be freely moving
around in a volume or embedded in a surface.
Therefore, it is necessary to firstly specify the location of the reactant
and product species.
Note: Surface reactions are designed to represent reactions where one
reactant is embedded in a membrane, but in fact if all reactants and
products belong to the same compartment and none appear on a patch
it will behave exactly like the equivalent volume reaction.
To become familiar with these reactions we will build a simplified version of
the inositol 1,4,5-trisphosphate (IP $_{3}$) model
(described in Doi T, et al,Inositol
1,4,5-Triphosphate-Dependent $Ca^{\text{2+}}$ Threshold Dynamics
Detect Spike Timing in Cerebellar Purkinje Cells, J Neurosci 2005, 25(4):950-961) in STEPS.
In the IP3 receptor model, reactions (i.e. receptor binding of calcium and IP3 molecules) take place on the membrane separating the Endoplasmic Reticulum (ER) and the cytosol. Therefore, we will need to declare surface reactions.
In the figure below we can see a schematic diagram of the states
and transitions in the model. IP3 receptors are embedded in the ER membrane, each “binding” reaction is described
by a second order surface reaction and each “unbinding” reaction by a first order surface reaction.
We will go through the Python code to build this model in STEPS,
but providing only brief descriptions of operations we are familiar
with from the previous chapter.
Model declaration
Surface reactions
STEPS surface reactions can deal with three types of reactions, classified by the locations of the reactants:
Volume-Surface reactions. In this case molecules within a volume interact with molecules embedded in a surface and result in products that may reside within a volume or on a surface. The units for the reaction parameters in this case are the same as for ordinary volume reactions, namely: a first order reaction parameter has units $s^{-1}$; a second order reaction parameter has units $\left(M.s\right)^{-1}$; a third order reaction parameter has units $\left(M^{2}.s\right)^{-1}$; and so on.
Surface-Surface reactions. In this case the reactants are all embedded in a surface. Quite clearly, the dimensions of the reaction are different from a volume reaction and the reaction parameter is assumed to be two-dimensional. This is an important point because the reaction parameter will be treated differently from a volume-volume or volume-surface interaction. A further complication is that parameters for ordinary volume reactions are based on the litre, where there is no convenient equivalent 2D concentration unit. Surface-surface reaction parameters are based on units of area of square meters. A first order surface-surface reaction parameter is therefore required in units of $s^{-1}$; a second-order surface-surface reaction parameter has units $\left(mol.m^{-2}\right)^{-1}.s^{-1}$; a third-order surface-surface reaction parameter has units $\left(mol.m^{-2}\right)^{-2}.s^{-1}$; and so on. Zero-order surface reactions are not supported because of the ambiguity of interpreting the reaction parameter.
Volume-Volume reactions.
It is possible for a surface reaction to contain reactant species that are all in a volume, in which case the reaction behaves similarly to an ordinary volume reaction, though products may belong to connected volumes or surfaces.
As mentioned previously, to declare surface reactions we have to include some information about the location of the reaction:
which compartment are the reactants to be found in, and are any molecules embedded in a surface and which of the two compartments that the surface connects are the products injected into? We supply this information to STEPS by labelling our compartments that a patch connects, arbitrarily choosing the labels 'inner' and 'outer'. When the surface reaction's parent surface system object is added to a certain patch, the compartment labelling in the surface reaction stoichiometry will match the compartment labelling in the patch definition. We will come to creating a patch later in this chapter.
So, at this stage we must chose which compartment we will label 'outer'
and which we will label 'inner' and make sure to maintain this labelling
throughout our definitions, and also in our geometry description.
We chose to label the cytosol as the 'outer' compartment and the ER
as the 'inner' compartment, so should be very careful that this ties in correctly to our description when
we create our steps.geom.Patch object to represent a surface to connect the two compartments.
When writing a surface reaction, we can specify the location of each reactant in two different ways. The most verbose consist in using functions In, Out and Surf to refer respectively to the inner compartment, the outer compartment, and the patch surface:
python
Surf(R) + Out(IP3) <r[1]> Surf(RIP3)
The same reaction can be declared in a slightly more concise way:
python
R.s + IP3.o <r[1]> RIP3.s
Adding .s, .i, or .o after a reactant is a shorthand way to specify its location. This notation aims at imitating subscripts like in: $\mathrm{R_s + IP3_o \leftrightarrow RIP3_s}$
Note: Reactant species cannot belong to different compartments,
so attempting to create a surface reaction with both Spec1.i and Spec2.o will
result in an error.*
Surface reactions declaration
Let us now declare all our reactions. As in the previous chapter, we first import the required packages, create the Model, a ReactionManager, and declare Species:
End of explanation
"""
geom = Geometry()
with geom:
# Create the cytosol and Endoplasmic Reticulum compartments
cyt, ER = Compartment.Create()
cyt.Vol = 1.6572e-19
ER.Vol = 1.968e-20
# ER is the 'inner' compartment, cyt is the 'outer' compartment
memb = Patch.Create(ER, cyt, surfsys)
memb.Area = 0.4143e-12
"""
Explanation: Note that each possible state of an IP3 receptor is declared as a distinct species.
We then declare a SurfaceSystem instead of a VolumeSystem and the reactions are declared using the context manager with keyword, as previously seen.
All reactions are bidirectional (using <r[...]>) except for the last one (using >r[...]>), which represents calcium leaving the ER through the open IP3R channel.
By using the .K property, we set all reaction constants' default values (see Doi T, et al,Inositol
1,4,5-Triphosphate-Dependent $Ca^{\text{2+}}$ Threshold Dynamics
Detect Spike Timing in Cerebellar Purkinje Cells, J Neurosci 2005, 25(4):950-961). Since these are volume-surface interactions, we must make
sure to supply our values in Molar units as discussed previously in this chapter.
Note that, when several reactants bind sequentially (to a receptor like here for example), it is possible to chain the reactions using parentheses:
python
(R.s + IP3.o <r[1]> RIP3.s) + Ca.o <r[2]> Ropen.s
Reactions that are in parentheses are equivalent to their right hand side. The above line thus first declares the reaction in the parentheses R.s + IP3.o <r[1]> RIP3.s and replaces it by its right hand side; the line then reads RIP3.s + Ca.o <r[2]> Ropen.s and this reaction is declared. The whole process is thus equivalent to declaring the reactions on two separate lines.
<div class="admonition warning">
**Warning**: Chained reactions can be a bit hard to read so their use should probably be restricted to sequential bindings.
</div>
Geometry specification
The next step is to create the geometry for the model. We will choose well-mixed
geometry, as in the chapter on well-mixed models, but we now have two compartments which
are connected by a surface 'patch'. We create two steps.geom.Compartment objects
to represent the Endoplasmic Reticulum (which we intend to label the 'inner'
compartment) and the cytosol ('outer' compartment), and a steps.geom.Patch
object to represent the ER membrane between the ER and cytosol.
End of explanation
"""
print(memb.innerComp)
print(memb.outerComp)
"""
Explanation: First we create the two well-mixed compartments at once with cyt, ER = Compartment.Create(). Since no parameters were given to Create(), we explicitly set the volume of each compartment by using the .Vol property.
Since we only defined a SurfaceSystem in the model, we do not need to associate our compartment with a VolumeSystem.
We then create the Patch using the automatic naming syntax. This time, the Create method needs to receive at least the inner compartment.
It is vital that care is taken in the order of the compartment objects to the constructor, so that the required labelling from our surface reaction
definitions is maintained. Note: A Patch must have an inner compartment by convention,
but does not require an outer compartment. This is an easy way to remember the order to the constructor; since an inner compartment is always required it must come first to the constructor, and the optional outer compartment comes after. Obviously any surface reaction
rules that contain reactants or products in the outer compartment cannot be
added to a Patch that doesn't have an outer compartment.
We can also specify the area of the patch during creation, like so:
python
patchName = Patch.Create(innerComp, outerComp, surfSys, area)
In our example, we set the area of a patch after creation using the .Area property.
We can check the labelling is as desired after object construction if we like with properties steps.geom.Patch.innerComp and steps.geom.Patch.outerComp.:
End of explanation
"""
rng = RNG('mt19937', 512, 7233)
sim = Simulation('Wmdirect', mdl, geom, rng)
"""
Explanation: Simulation declaration and data saving
We first create the random number generator and the Simulation. Like in the previous chapter, we will use the well-mixed 'Wmdirect' solver:
End of explanation
"""
rs = ResultSelector(sim)
Rstates = rs.memb.MATCH('R.*').Count
Reacs = rs.memb.MATCH('r[1-6]')['fwd'].Extent + rs.memb.MATCH('r[1-6]')['bkw'].Extent
"""
Explanation: We then specify which data should be saved, this time we will declare two result selectors:
End of explanation
"""
print(Rstates.labels)
"""
Explanation: We have two ResultSelectors, Rstates and Reacs. Rstates makes use of the MATCH(...) function that only selects objects whose name matches the regular expression given as a parameter. In our case, it will match all objects inside memb whose name starts with 'R', i.e. all receptors. It is equivalent to the longer:
python
Rstates = rs.memb.LIST(R, RIP3, Ropen, RCa, R2Ca, R3Ca, R4Ca).Count
In our specific case, it happens to be equivalent to rs.memb.ALL(Species).Count because we did not define any other species on the ER membrane. The MATCH version is however preferable since it will still work if we decide to add other species.
Combining ResultSelectors
<img src="images/resultselector_syntax.png"/>
The second ResultSelector, Reacs will save the extent of each reaction from 'r1' to 'r6', taking into account both forward and backward subreactions. To understand its declaration, we first need to see how reactions interact with ResultSelectors.
The following ResultSelector will save both forward and backward extent of reaction 'r1':
python
rs.memb.r1.Extent
Since it saves two values for each run and for each timestep (i.e. the third dimension of the associated data structure is 2, cf. previous tutorial), and for the sake of simplicity, we will say that it has length 2. To save specifically the forward extent, we write:
python
rs.memb.r1['fwd'].Extent
and this ResultSelector has length 1. To save the total extent (the sum of forward and backward), one could write:
python
rs.SUM(rs.memb.r1.Extent)
the rs.SUM(...) function takes a ResultSelector as an argument and returns a ResultSelector whose length is 1 and corresponds to the sum of all the values defined by the argument. In our case, it would be equivalent to:
python
rs.memb.r1['fwd'].Extent + rs.memb.r1['bkw'].Extent
This + notation is also valid; standard arithmetic operators (+, -, *, /, and **) can all be used with ResultSelectors and behave in the same way as they do in numpy. The above example sums two ResultSelectors of length 1 and thus results in a ResultSelector of length one as well. If the length of the operands is higher than one, like in:
python
rs.memb.LIST(r1, r2)['fwd'].Extent + rs.memb.LIST(r1, r2)['bkw'].Extent
the resulting ResultSelector has length 2, like the operands, and is equivalent to:
python
rs.memb.r1['fwd'].Extent + rs.memb.r1['bkw'].Extent << rs.memb.r2['fwd'].Extent + rs.memb.r2['bkw'].Extent
In our main example, Reacs has length 6 and saves the total extent of each reaction whose name matches the regular expression 'r[1-6]' (the name has to start with character 'r' and then a number between 1 and 6).
Editing labels
As we saw in the previous tutorial, labels are automatically generated when using ResultSelectors and can be accessed with e.g. Rstates.labels. With the same notation, it is also possible to provide custom labels by simply using:
python
selector.labels = ['label 1', 'label2', ...]
The length of the list needs to match the length of the ResultSelector. Here, we will modify the automatically generated labels of Rstates which currently look like this:
End of explanation
"""
Rstates.labels = [l.split('.')[1] for l in Rstates.labels]
print(Rstates.labels)
"""
Explanation: Instead, we would like to only keep the species name:
End of explanation
"""
Rstates.toFile('Rstates.dat')
Reacs.toFile('Reacs.dat')
sim.toSave(Rstates, Reacs, dt=0.001)
"""
Explanation: Saving data to files
Finally, in order to save the data to files, we simply need to call the toFile method on the ResultSelector objects and provide it with the file path. The data from all runs will be saved to the same file in binary format. We then need to remember to associate both ResultSelectors to the simulation and specify how frequently they should be saved:
End of explanation
"""
NITER = 100
ENDT = 0.201
for i in range (0, NITER):
sim.newRun()
sim.cyt.Ca.Conc = 3.30657e-8
sim.cyt.IP3.Count = 6
sim.ER.Ca.Conc = 150e-6
sim.ER.Ca.Clamped = True
sim.memb.R.Count = 160
sim.run(ENDT)
"""
Explanation: If the results needed to be saved at different intervals, we could of course write:
python
sim.toSave(Rstates, dt=0.001)
sim.toSave(Reacs, dt=0.005)
In addition, if we wanted to save the data at specified timepoints, we could use the timePoints argument:
python
sim.toSave(Reacs, timePoints=[0.0, 0.01, 0.03, 0.15])
Running the simulation
Having defined and added all ResultSelectors, we can then run the simulation:
End of explanation
"""
%reset -f
"""
Explanation: Let us then assume that the previous code was executed on a computing machine and the following code is executed on a distinct machine. To emulate this, we can reset the jupyter kernel with:
End of explanation
"""
import steps.interface
from steps.saving import *
from matplotlib import pyplot as plt
import numpy as np
ldRstates = ResultSelector.FromFile('Rstates.dat')
ldReacs = ResultSelector.FromFile('Reacs.dat')
"""
Explanation: Loading saved data
Now we do not have access to the variables we declared previously and thus cannot plot data using Rstates.data like we did before. Instead, we first need to load the files to which we saved the data:
End of explanation
"""
plt.figure(figsize=(10, 7))
RopenInd = ldRstates.labels.index('Ropen')
RopenData = ldRstates.data[:, :, RopenInd]
time = ldRstates.time[0]
mean = np.mean(RopenData, axis=0)
std = np.std(RopenData, axis=0)
plt.plot(time, mean, linewidth=2, label='Average')
plt.fill_between(time, mean - std, mean + std, alpha=0.2, label='Std. Dev.')
for t, d in zip(ldRstates.time, RopenData):
plt.plot(t, d, color='grey', linewidth=0.1, zorder=-1)
plt.ylim(0)
plt.margins(0, 0.05)
plt.xlabel('Time [s]')
plt.ylabel('Number of open IP3R')
plt.legend()
plt.show()
"""
Explanation: The ldRstates and ldReacs objects now behave like the Rstates and Reacs objects in the simulation script, we can thus use them transparently to plot the data. We will first plot the time course of the number of receptors in the open state:
End of explanation
"""
plt.figure(figsize=(10, 7))
time = ldRstates.time[0]
mean = np.mean(ldRstates.data, axis=0)
std = np.std(ldRstates.data, axis=0)
plt.plot(time, mean, linewidth=2)
for m, s in zip(mean.T, std.T):
plt.fill_between(time, m - s, m + s, alpha=0.2)
plt.legend(ldRstates.labels)
plt.xlabel('Time [s]')
plt.ylabel('Number of receptors')
plt.ylim(0)
plt.margins(0, 0.05)
plt.show()
"""
Explanation: We first need to extract the data for Ropen from ldRstates. We do so by using ldRstates.labels; we are interested in memb.Ropen.Count so we simply call the python index method that returns the index of the Ropen data in the whole selector.
We then get the corresponding data with ldRstates.data[:, :, RopenInd], the two first dimensions (runs and time) are untouched and we only take the data relative to Ropen.
In order to display the trace corresponding to each run, we iterate on the data with:
python
for t, d in zip(ldRstates.time, RopenData):
plt.plot(t, d, color='grey', linewidth=0.1, zorder=-1)
We would then like to look at the time course of all receptor states:
End of explanation
"""
plt.figure(figsize=(10, 7))
time = ldReacs.time[0]
dt = time[1] - time[0]
meanDeriv = np.mean(np.gradient(ldReacs.data, dt, axis=1), axis=0)
plt.stackplot(time, meanDeriv.T)
plt.legend([f'd{l} / dt' for l in ldReacs.labels])
plt.margins(0, 0.05)
plt.xlabel('Time [s]')
plt.ylabel('Total reaction rate [1/s]')
plt.show()
"""
Explanation: Since fill_between cannot take all data at once, like plot can, we needed to iterate over the different receptor states in mean and std. To do so, we used:
python
for m, s in zip(mean.T, std.T):
plt.fill_between(time, m - s, m + s, alpha=0.2)
both mean and std have dimension (nbT, nbR) with nbT the number of saved time points and nbR the number of receptor states. Since the first dimension corresponds to time, if we were to directly iterate over mean and std, we would iterate over time instead of iterating over receptor states. We thus first transpose the matrices with mean.T and std.T.
We then turn to plotting data from ldReacs:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nims-kma/cmip6/models/sandbox-3/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50 (Equator) - 100 km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
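# --- Illustrative example (not part of the generated template) ---
# INTEGER properties are set without quotes; the count below is hypothetical
# (e.g. a 1442 x 1021 horizontal grid):
# DOC.set_value(1472282)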
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
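# --- Illustrative example (not part of the generated template) ---
# BOOLEAN properties take an unquoted True/False; the property description
# notes the default is False for a fixed-resolution grid:
# DOC.set_value(False)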
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
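# --- Illustrative example (not part of the generated template) ---
# FLOAT properties are set without quotes; the 1.0 m value below is hypothetical:
# DOC.set_value(1.0)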
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
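# --- Illustrative example (not part of the generated template) ---
# ENUM values must match one of the "Valid Choices" listed above exactly.
# For a 1.N property, repeating the DOC.set_value call for each applicable
# choice is assumed from the "PROPERTY VALUE(S)" wording; example choices:
# DOC.set_value("Salt")
# DOC.set_value("Volume of ocean")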
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
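# --- Illustrative example (not part of the generated template) ---
# Pick exactly one of the valid choices listed above; the choice shown is
# purely illustrative:
# DOC.set_value("Z*-coordinate")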
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
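# --- Illustrative example (not part of the generated template) ---
# Optional multi-valued ENUM; any subset of the listed choices may apply,
# e.g. (values chosen arbitrarily for illustration):
# DOC.set_value("CFC 11")
# DOC.set_value("CFC 12")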
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
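# --- Illustrative example (not part of the generated template) ---
# One of the valid choices above would be recorded here; hypothetical example:
# DOC.set_value("Linear implicit")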
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
locationtech/geowave
|
examples/data/notebooks/jupyter/geowave-gdelt.ipynb
|
apache-2.0
|
#!pip install --user --upgrade pixiedust
import pixiedust
import geowave_pyspark
pixiedust.enableJobMonitor()
"""
Explanation: Import pixiedust
Start by importing pixiedust. If all bootstrap and install steps were run correctly,
you should see output below indicating that the pixiedust database was opened successfully with no errors.
Depending on the version of pixiedust that gets installed, it may ask you to update.
If so, run this first cell.
End of explanation
"""
# Print Spark info and create sql_context
print('Spark Version: {0}'.format(sc.version))
print('Python Version: {0}'.format(sc.pythonVer))
print('Application Name: {0}'.format(sc.appName))
print('Application ID: {0}'.format(sc.applicationId))
print('Spark Master: {0}'.format( sc.master))
"""
Explanation: Creating the SQLContext and inspecting pyspark Context
Pixiedust imports pyspark, and the SparkContext + SparkSession should already be available through the "sc" and "spark" variables, respectively.
End of explanation
"""
%%bash
cd /mnt/tmp
wget s3.amazonaws.com/geowave/latest/scripts/emr/quickstart/geowave-env.sh
source /mnt/tmp/geowave-env.sh
mkdir gdelt
cd gdelt
wget http://data.gdeltproject.org/events/md5sums
for file in `cat md5sums | cut -d' ' -f3 | grep "^${TIME_REGEX}"` ; \
do wget http://data.gdeltproject.org/events/$file ; done
md5sum -c md5sums 2>&1 | grep "^${TIME_REGEX}"
"""
Explanation: Download GDELT Data
Download the data necessary to perform Kmeans
End of explanation
"""
%%bash
# We have to source here again because bash runs in a separate sub process each cell.
source /mnt/tmp/geowave-env.sh
# clear old potential runs
geowave store clear gdelt
geowave store rm gdelt
geowave store clear kmeans_gdelt
geowave store rm kmeans_gdelt
# configure geowave connection params for hbase stores "gdelt" and "kmeans"
geowave store add gdelt --gwNamespace geowave.gdelt -t hbase --zookeeper $HOSTNAME:2181
geowave store add kmeans_gdelt --gwNamespace geowave.kmeans -t hbase --zookeeper $HOSTNAME:2181
# configure a spatial index
geowave index add gdelt gdeltspatial -t spatial --partitionStrategy round_robin --numPartitions $NUM_PARTITIONS
# run the ingest for a 10x10 deg bounding box over Europe
geowave ingest localtogw /mnt/tmp/gdelt gdelt gdeltspatial -f gdelt \
--gdelt.cql "BBOX(geometry, 0, 50, 10, 60)"
"""
Explanation: Create datastores and ingest gdelt data.
The ingest process may take a few minutes. If the '*' is present to the left of the cell, the command is still running. Output will not appear below until the process is finished.
End of explanation
"""
%%bash
# We have to source here again because bash runs in a separate sub process each cell.
source /mnt/tmp/geowave-env.sh
# clear out potential old runs
geowave store clear kmeans_gdelt
# configure a spatial index
geowave index add kmeans_gdelt gdeltspatial -t spatial --partitionStrategy round_robin --numPartitions $NUM_PARTITIONS
#grab classes from jvm
# Pull classes to describe core GeoWave classes
hbase_options_class = sc._jvm.org.locationtech.geowave.datastore.hbase.cli.config.HBaseRequiredOptions
query_options_class = sc._jvm.org.locationtech.geowave.core.store.query.QueryOptions
byte_array_class = sc._jvm.org.locationtech.geowave.core.index.ByteArrayId
# Pull core GeoWave Spark classes from jvm
geowave_rdd_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDD
rdd_loader_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDDLoader
rdd_options_class = sc._jvm.org.locationtech.geowave.analytic.spark.RDDOptions
sf_df_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.SimpleFeatureDataFrame
kmeans_runner_class = sc._jvm.org.locationtech.geowave.analytic.spark.kmeans.KMeansRunner
datastore_utils_class = sc._jvm.org.locationtech.geowave.core.store.util.DataStoreUtils
spatial_encoders_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.GeoWaveSpatialEncoders
spatial_encoders_class.registerUDTs()
#Setup input datastore options
input_store = hbase_options_class()
input_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
input_store.setGeowaveNamespace('geowave.gdelt')
#Setup output datastore options
output_store = hbase_options_class()
output_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
output_store.setGeowaveNamespace('geowave.kmeans')
#Create an instance of the runner, and datastore options
kmeans_runner = kmeans_runner_class()
input_store_plugin = input_store.createPluginOptions()
output_store_plugin = output_store.createPluginOptions()
#Set the appropriate properties
kmeans_runner.setSparkSession(sc._jsparkSession)
kmeans_runner.setAdapterId('gdeltevent')
kmeans_runner.setInputDataStore(input_store_plugin)
kmeans_runner.setOutputDataStore(output_store_plugin)
kmeans_runner.setCqlFilter("BBOX(geometry, 0, 50, 10, 60)")
kmeans_runner.setCentroidTypeName('mycentroids_gdelt')
kmeans_runner.setHullTypeName('myhulls_gdelt')
kmeans_runner.setGenerateHulls(True)
kmeans_runner.setComputeHullData(True)
#Execute the kmeans runner
kmeans_runner.run()
"""
Explanation: Run KMeans
Running the KMeans process may take a few minutes. You should be able to track the progress of the task via the console or Spark History Server once the job begins.
End of explanation
"""
# Create the dataframe and get a rdd for the output of kmeans
adapter_id = byte_array_class('mycentroids_gdelt')
query_adapter = datastore_utils_class.getDataAdapter(output_store_plugin, adapter_id)
query_options = query_options_class(query_adapter)
# Create RDDOptions for loader
rdd_options = rdd_options_class()
rdd_options.setQueryOptions(query_options)
output_rdd = rdd_loader_class.loadRDD(sc._jsc.sc(), output_store_plugin, rdd_options)
# Create a SimpleFeatureDataFrame from the GeoWaveRDD
sf_df = sf_df_class(spark._jsparkSession)
sf_df.init(output_store_plugin, adapter_id)
df = sf_df.getDataFrame(output_rdd)
# Convert Java DataFrame to Python DataFrame
import pyspark.mllib.common as convert
py_df = convert._java2py(sc, df)
py_df.createOrReplaceTempView('mycentroids')
df = spark.sql("select * from mycentroids")
display(df)
"""
Explanation: Load resulting Centroids into DataFrame
End of explanation
"""
# Convert the string point information into lat long columns and create a new dataframe for those.
import pyspark
def parseRow(row):
lat=row.geom.y
lon=row.geom.x
return pyspark.sql.Row(lat=lat,lon=lon,ClusterIndex=row.ClusterIndex)
row_rdd = df.rdd
new_rdd = row_rdd.map(lambda row: parseRow(row))
new_df = new_rdd.toDF()
display(new_df)
"""
Explanation: Parse DataFrame data into lat/lon columns and display centroids on map
Using pixiedust's built-in map visualization we can display data on a map, assuming it has the following properties.
- Keys: put your latitude and longitude fields here. They must be floating-point values. These fields must be named latitude, lat or y and longitude, lon or x.
- Values: the field you want to use to thematically color the map. Only one field can be used.
Also you will need an access token from whichever map renderer you choose to use with pixiedust (mapbox, google).
Follow the instructions in the token help on how to create and use the access token.
End of explanation
"""
# Create the dataframe and get a rdd for the output of kmeans
# Grab adapter and setup query options for rdd load
adapter_id = byte_array_class('myhulls_gdelt')
query_adapter = datastore_utils_class.getDataAdapter(output_store_plugin, adapter_id)
query_options = query_options_class(query_adapter)
# Use GeoWaveRDDLoader to load an RDD
rdd_options = rdd_options_class()
rdd_options.setQueryOptions(query_options)
output_rdd_hulls = rdd_loader_class.loadRDD(sc._jsc.sc(), output_store_plugin, rdd_options)
# Create a SimpleFeatureDataFrame from the GeoWaveRDD
sf_df_hulls = sf_df_class(spark._jsparkSession)
sf_df_hulls.init(output_store_plugin, adapter_id)
df_hulls = sf_df_hulls.getDataFrame(output_rdd_hulls)
# Convert Java DataFrame to Python DataFrame
import pyspark.mllib.common as convert
py_df_hulls = convert._java2py(sc, df_hulls)
# Create a sql table view of the hulls data
py_df_hulls.createOrReplaceTempView('myhulls')
# Run SQL Query on Hulls data
df_hulls = spark.sql("select * from myhulls order by Density")
display(df_hulls)
"""
Explanation: Export KMeans Hulls to DataFrame
If you have more complex data to visualize, pixiedust may not be the best option.
The KMeans hull generation outputs polygons that would be difficult for pixiedust to display without
creating a special plugin.
Instead, we can use another map renderer to visualize our data. For the KMeans hulls we will use folium. Folium allows us to easily add wms layers to our notebook, and we can combine that with GeoWave's GeoServer functionality to render the hulls and centroids.
End of explanation
"""
%%bash
# set up geoserver
geowave config geoserver "$HOSTNAME:8000"
# add the centroids layer
geowave gs layer add kmeans_gdelt -id mycentroids_gdelt
geowave gs style set mycentroids_gdelt --styleName point
# add the hulls layer
geowave gs layer add kmeans_gdelt -id myhulls_gdelt
geowave gs style set myhulls_gdelt --styleName line
import owslib
from owslib.wms import WebMapService
url = "http://" + os.environ['HOSTNAME'] + ":8000/geoserver/geowave/wms"
web_map_services = WebMapService(url)
#print layers available wms
print('\n'.join(web_map_services.contents.keys()))
import folium
#grab wms info for centroids
layer = 'mycentroids_gdelt'
wms = web_map_services.contents[layer]
#build center of map off centroid bbox
lon = (wms.boundingBox[0] + wms.boundingBox[2]) / 2.
lat = (wms.boundingBox[1] + wms.boundingBox[3]) / 2.
center = [lat, lon]
m = folium.Map(location = center,zoom_start=3)
name = wms.title
centroids = folium.raster_layers.WmsTileLayer(
url=url,
name=name,
fmt='image/png',
transparent=True,
layers=layer,
overlay=True,
COLORSCALERANGE='1.2,28',
)
centroids.add_to(m)
layer = 'myhulls_gdelt'
wms = web_map_services.contents[layer]
name = wms.title
hulls = folium.raster_layers.WmsTileLayer(
url=url,
name=name,
fmt='image/png',
transparent=True,
layers=layer,
overlay=True,
COLORSCALERANGE='1.2,28',
)
hulls.add_to(m)
m
"""
Explanation: Visualize results using GeoServer and WMS
folium provides an easy way to display Leaflet maps in Jupyter notebooks.
When the data are too complicated or too large for the simple map-display framework pixiedust provides, we can instead turn to GeoServer and WMS to render our layers. First we configure GeoServer, then set up WMS layers for folium to display the KMeans results on the map.
End of explanation
"""
|
edwardd1/phys202-2015-work
|
assignments/assignment06/InteractEx05.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
from IPython.display import SVG
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
#raise NotImplementedError()
"""
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
s = """
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
"""
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
"""
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
"""Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
"""
circle = '<svg width="' + str(width) + '" height="' + str(height) + '"> \n <circle cx="' + str(cx) + '" cy="' + str(cy) + '" r="' + str(r) + '" fill="' + str(fill) + '" /> \n </svg>'
display(SVG(circle))
print (draw_circle(cx=10, cy=10, r=10, fill='blue'))
#raise NotImplementedError()
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
"""
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
"""
width = 300
height = 300
w =interactive(draw_circle, width = fixed(300), height = fixed(300), cx = (0,300), cy = (0,300), r = (0,50), fill = "red")
#raise NotImplementedError()
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
"""
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
"""
display(w)
#raise NotImplementedError()
assert True # leave this to grade the display of the widget
"""
Explanation: Use the display function to show the widgets created by interactive:
End of explanation
"""
|
liufuyang/deep_learning_tutorial
|
jizhi-pytorch-2/03_text_generation/RNNGenerative/MIDIComposer.ipynb
|
mit
|
# Import the required packages
# PyTorch-related packages
import torch
import torch.utils.data as DataSet
import torch.nn as nn
from torch.autograd import Variable
import torch.optim as optim
# Package for processing MIDI music
from mido import MidiFile, MidiTrack, Message
# Packages required for computation and plotting
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Neural Mozart -- learning and generating MIDI music
In this lesson we learn how to train an artificial neural network on a piece of MIDI music, so that it memorizes the patterns in the time series of notes and can generate a new piece of music.
First, we learn how to parse a MIDI file and read it in; next, we train an LSTM network on the processed MIDI sequence data and have it predict the next note;
finally, we use the trained LSTM to generate MIDI music.
This program is adapted from
This file is the companion source code for Lesson VI of "火炬上的深度学习" (Deep Learning with PyTorch), produced by Swarma AI Campus (集智AI学园), http://campus.swarma.org
End of explanation
"""
# Read the MIDI music file from disk
#mid = MidiFile('./music/allegroconspirito.mid') # a Mozart piece
mid = MidiFile('./music/krebs.mid') # a Mozart piece
notes = []
time = float(0)
prev = float(0)
original = [] # original keeps the raw message data for later comparison
# Loop over every message in the MIDI file
for msg in mid:
# Time is measured in seconds, not in frames
time += msg.time
# If the current message is not a meta message
if not msg.is_meta:
# Only extract notes from the first channel
if msg.channel == 0:
# If the current message turns a note on
if msg.type == 'note_on':
# Get the information encoded in the message bytes
note = msg.bytes()
# We only care about the note information. A note message is stored as [type, note, velocity]
note = note[1:3] # after this step, note[0] holds the pitch and note[1] the velocity (loudness)
# note[2] will hold the time elapsed since the previous message
note.append(time - prev)
prev = time
# Append the note to the list notes
notes.append(note)
# Keep a copy of these notes in the original list
original.append([i for i in note])
# Plot a histogram of each component to see the range of values each quantity takes
plt.figure()
plt.hist([i[0] for i in notes])
plt.title('Note')
plt.figure()
plt.hist([i[1] for i in notes])
plt.title('Velocity')
plt.figure()
plt.hist([i[2] for i in notes])
plt.title('Time')
"""
Explanation: Part I. Import the MIDI file and convert it into a standard form
First, we extract the sequence of messages from the MIDI file; each message consists of a note, a velocity, and a time (the time elapsed since the previous note).
Next, we encode each message: based on the value ranges of note, velocity, and time, we one-hot encode them into 0/1 vectors of length 89, 128, and 12 respectively (the code below uses 12 slots for the discretized time).
1. Read the MIDI file from disk
End of explanation
"""
# note and velocity can both be treated as categorical variables
# time is a float; we discretize it into categorical bins as well
# First, find the range of the time variable and split it into intervals. Since many messages have time = 0, we give 0 its own special class
intervals = 10
values = np.array([i[2] for i in notes])
max_t = np.amax(values) # maximum value of the range
min_t = np.amin(values[values > 0]) # minimum (non-zero) value of the range
interval = 1.0 * (max_t - min_t) / intervals
# Next, encode every message as three one-hot vectors and concatenate them into a single slot vector
dataset = []
for note in notes:
slot = np.zeros(89 + 128 + 12)
# note values lie between 24 and 112, so subtract 24
ind1 = note[0]-24
ind2 = note[1]
# Many messages have time = 0, so 0 gets its own class; everything else is binned by interval
ind3 = int((note[2] - min_t) / interval + 1) if note[2] > 0 else 0
slot[ind1] = 1
slot[89 + ind2] = 1
slot[89 + 128 + ind3] = 1
# Append the resulting slot array to dataset
dataset.append(slot)
"""
Explanation: 2. Encode every message
The raw data are triplets of the form (78, 0, 0.0108)
After encoding, each message becomes three one-hot vectors of the form (00...010..., 100..., 0100...), where the first vector has length 89, the second 128, and the third 12
End of explanation
"""
# Build the training and validation sets
X = []
Y = []
# First, following the prediction setup, turn the raw data into (input, target) training pairs
n_prev = 30 # sliding window length of 30
# Loop over all of the data
for i in range(len(dataset)-n_prev):
# Take the next n_prev notes as the input attributes
x = dataset[i:i+n_prev]
# Use the (n_prev+1)-th note (before encoding) as the target attribute
y = notes[i+n_prev]
# Note that time must be converted to its categorical form
ind3 = int((y[2] - min_t) / interval + 1) if y[2] > 0 else 0
y[2] = ind3
# Add x and y to the data sets
X.append(x)
Y.append(y)
# Keep the first n_prev notes of the dataset as a seed for music generation later
seed = dataset[0:n_prev]
# Shuffle the order of all the data
idx = np.random.permutation(range(len(X)))
# Form the training and validation data lists
X = [X[i] for i in idx]
Y = [Y[i] for i in idx]
# Split off 1/10 of the data as the validation set
validX = X[: len(X) // 10]
X = X[len(X) // 10 :]
validY = Y[: len(Y) // 10]
Y = Y[len(Y) // 10 :]
# Convert the lists back into a dataset and load the data with a dataloader
# The dataloader is PyTorch's mechanism for managing data: the data themselves are stored in a dataset, and access to them goes through a data loader
# During preprocessing the data are automatically packed into batches, and on each call we pull out a whole batch (containing multiple records)
# Every element produced by the dataloader is an (x, y) tuple, where x is the input tensor and y the label. The first dimension of both x and y has size batch_size.
batch_size = 30 # each batch contains 30 records; the larger this number, the more data the system processes per training step (which is faster), but the fewer update steps there are per epoch
# Build the training set
train_ds = DataSet.TensorDataset(torch.FloatTensor(np.array(X, dtype = float)), torch.LongTensor(np.array(Y)))
# Build the data loader
train_loader = DataSet.DataLoader(train_ds, batch_size = batch_size, shuffle = True, num_workers=4)
# Validation data
valid_ds = DataSet.TensorDataset(torch.FloatTensor(np.array(validX, dtype = float)), torch.LongTensor(np.array(validY)))
valid_loader = DataSet.DataLoader(valid_ds, batch_size = batch_size, shuffle = True, num_workers=4)
"""
Explanation: 3. Build the training and validation sets and load them into data loaders
We cut the whole sequence of (note, velocity, time) triplets into len(dataset)-n_prev groups using a sliding window of length 31
Within each group, the first 30 positions are the input and the last position is the output, forming the training data
End of explanation
"""
class LSTMNetwork(nn.Module):
def __init__(self, input_size, hidden_size, out_size, n_layers=1):
super(LSTMNetwork, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.out_size = out_size
# A single layer of LSTM units
self.lstm = nn.LSTM(input_size, hidden_size, n_layers, batch_first = True)
# A Dropout component with dropout probability 0.2
self.dropout = nn.Dropout(0.2)
# A fully connected layer
self.fc = nn.Linear(hidden_size, out_size)
# A log-Softmax layer
self.softmax = nn.LogSoftmax()
def forward(self, input, hidden=None):
# One forward pass of the network
hhh1 = hidden[0] # read in the initial hidden state
# Run one step of the LSTM
# input has shape: batch_size, time_step, input_size
output, hhh1 = self.lstm(input, hhh1) #input:batchsize*timestep*3
# Apply dropout to the neuron outputs
output = self.dropout(output)
# Take the hidden-layer output at the last time step
# output has shape: batch_size, time_step, hidden_size
output = output[:, -1, ...]
# At this point output has shape: batch_size, hidden_size
# Feed it into a fully connected layer
out = self.fc(output)
# out has shape: batch_size, output_size
# Split the last dimension of out into three parts x, y, z, corresponding to the predictions for note, velocity and time
x = self.softmax(out[:, :89])
y = self.softmax(out[:, 89: 89 + 128])
z = self.softmax(out[:, 89 + 128:])
# x has shape: batch_size, 89
# y has shape: batch_size, 128
# z has shape: batch_size, 12
# Return x, y, z
return (x,y,z)
def initHidden(self, batch_size):
# Initialize all hidden-state variables to 0
# Note the shape: layer_size, batch_size, hidden_size
out = []
hidden1 = Variable(torch.zeros(1, batch_size, self.hidden_size))
cell1 = Variable(torch.zeros(1, batch_size, self.hidden_size))
out.append((hidden1, cell1))
return out
def criterion(outputs, target):
# Custom loss function for this model: it is the sum of three parts, each a cross-entropy loss,
# corresponding to the cross entropies for note, velocity and time respectively
x, y, z = outputs
loss_f = nn.NLLLoss()
loss1 = loss_f(x, target[:, 0])
loss2 = loss_f(y, target[:, 1])
loss3 = loss_f(z, target[:, 2])
return loss1 + loss2 + loss3
def rightness(predictions, labels):
"""Count correct predictions. predictions is a batch of model outputs (a matrix with batch_size rows and num_classes columns); labels holds the ground-truth answers from the data"""
pred = torch.max(predictions.data, 1)[1] # for each row (one sample), take the argmax along dimension 1 to get the index of the largest element
rights = pred.eq(labels.data).sum() # compare these indices with the classes contained in labels and count how many match
return rights, len(labels) # return the number of correct predictions and the total number of comparisons made
"""
Explanation: Part II. Define an LSTM network
What is special about this network is its output: for every sample it produces three variables x, y, z, each a normalized probability vector,
used to predict the categorical note, velocity and time respectively.
In the network we apply dropout to the LSTM output. Dropout means that during training the system randomly removes some neurons,
while at test time no neurons are removed; this makes it harder for the model to produce the correct output during training and thereby helps avoid overfitting.
End of explanation
"""
# Instantiate an LSTM; the number of input and output units is set by the number of categories of each variable
lstm = LSTMNetwork(89 + 128 + 12, 128, 89 + 128 + 12)
optimizer = optim.Adam(lstm.parameters(), lr=0.001)
num_epochs = 100
train_losses = []
valid_losses = []
records = []
# Start the training loop
for epoch in range(num_epochs):
train_loss = []
# Iterate over the data in the loader
for batch, data in enumerate(train_loader):
# batch is a counter indicating which batch we are on
# data is a tuple holding the inputs and the labels of the records
# the first dimension of each element has size batch_size = 30
lstm.train() # put the LSTM in training mode, so that Dropout is active
init_hidden = lstm.initHidden(len(data[0])) # initialize the hidden state of the LSTM
optimizer.zero_grad()
x, y = Variable(data[0]), Variable(data[1]) # unpack the input/output pair from the data
outputs = lstm(x, init_hidden) # feed the LSTM and produce outputs
loss = criterion(outputs, y) # evaluate the loss function
train_loss.append(loss.data.numpy()[0]) # record the loss
loss.backward() # backpropagation
optimizer.step() # gradient update
if 0 == 0:
# Run once over the validation set and compute the classification accuracy on it
valid_loss = []
lstm.eval() # put the model in evaluation mode, which turns dropout off
rights = []
# Iterate over every element produced by the loader
for batch, data in enumerate(valid_loader):
init_hidden = lstm.initHidden(len(data[0]))
# Run the LSTM computation
x, y = Variable(data[0]), Variable(data[1])
# x has shape: batch_size, length_sequence, input_size
# y has shape: batch_size, 3 (the note, velocity and time-class targets)
outputs = lstm(x, init_hidden)
# outputs: (batch_size x 89, batch_size x 128, batch_size x 12)
loss = criterion(outputs, y)
valid_loss.append(loss.data.numpy()[0])
# Compute the classification accuracy of each of the three targets
right1 = rightness(outputs[0], y[:, 0])
right2 = rightness(outputs[1], y[:, 1])
right3 = rightness(outputs[2], y[:, 2])
rights.append((right1[0] + right2[0] + right3[0]) * 1.0 / (right1[1] + right2[1] + right3[1]))
# Print the results
print('Epoch {}: train loss {:.2f}, validation loss {:.2f}, validation accuracy {:.2f}'.format(epoch,
np.mean(train_loss),
np.mean(valid_loss),
np.mean(rights)
))
records.append([np.mean(train_loss), np.mean(valid_loss), np.mean(rights)])
# Plot the loss curves recorded during training
a = [i[0] for i in records]
b = [i[1] for i in records]
c = [i[2] * 10 for i in records]
plt.plot(a, '-', label = 'Train Loss')
plt.plot(b, '-', label = 'Validation Loss')
plt.plot(c, '-', label = '10 * Accuracy')
plt.legend()
"""
Explanation: Train the LSTM.
End of explanation
"""
# Generate 3000 steps
predict_steps = 3000
# At the initial step, assign seed (the seed notes taken from the music file read in at the beginning) to x
x = seed
# Expand the data into the appropriate shape
x = np.expand_dims(x, axis = 0)
# x now has shape: batch=1, time_step=30, data_dim=229
lstm.eval()
initi = lstm.initHidden(1)
predictions = []
# Iterate step by step
for i in range(predict_steps):
# Predict the next note from the previous n_prev notes
xx = Variable(torch.FloatTensor(np.array(x, dtype = float)))
preds = lstm(xx, initi)
# preds holds the predicted log-probabilities for note, velocity and time
a,b,c = preds
# a has shape 1 x 89, b has shape 1 x 128, c has shape 1 x 12
# Turn the log-probabilities into random draws (torch.multinomial requires num_samples, here 1)
ind1 = torch.multinomial(a.view(-1).exp(), 1)
ind2 = torch.multinomial(b.view(-1).exp(), 1)
ind3 = torch.multinomial(c.view(-1).exp(), 1)
ind1 = ind1.data.numpy()[0] # an integer in 0-88
ind2 = ind2.data.numpy()[0] # an integer in 0-127
ind3 = ind3.data.numpy()[0] # an integer in 0-11
# Convert the draws back into real note values; note that time is split into 12 classes, with class 0 reserved for zero and the rest mapped back onto their intervals
note = [ind1 + 24, ind2, 0 if ind3 ==0 else ind3 * interval + min_t]
# Store the predicted note
predictions.append(note)
# Turn the new prediction back into input data to feed to the LSTM
slot = np.zeros(89 + 128 + 12, dtype = int)
slot[ind1] = 1
slot[89 + ind2] = 1
slot[89 + 128 + ind3] = 1
slot1 = np.expand_dims(slot, axis = 0)
slot1 = np.expand_dims(slot1, axis = 0)
# slot1 has shape: batch=1, time=1, data_dim=229
# Append the new data to x
x = np.concatenate((x, slot1), 1)
# x now has shape: batch_size=1, time_step=31, data_dim=229
# Slide the window forward by one step
x = x[:, 1:, :]
# x now has shape: batch_size=1, time_step=30, data_dim=229
# Convert the generated sequence into MIDI messages and save the MIDI music
mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)
for i, note in enumerate(predictions):
# Insert a leading 147 to mark the message as note_on
note = np.insert(note, 0, 147)
# Convert the integers to bytes
bytes = note.astype(int)
# Create a message
msg = Message.from_bytes(bytes[0:3])
# 0.001025 is an arbitrary value that controls the tempo of the music. The generated times are a series of small intervals, so the time scale is too small once converted to a msg and needs to be scaled up
time = int(note[3]/0.001025)
msg.time = time
# Append the message to the track
track.append(msg)
# Save the file
mid.save('music/new_song.mid')
###########################################
"""
Explanation: Part III. Music generation
We use the trained LSTM to generate notes. First we feed the seed to the LSTM to produce the (n_prev + 1)-th msg, then we append this msg to the end of the input data and drop the first element,
which again forms a standard input sequence; we then obtain the next msg, and so on, looping in this way to generate the whole sequence of notes
End of explanation
"""
|
adrianstaniec/deep-learning
|
08_transfer-learning/Transfer_Learning.ipynb
|
mit
|
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6).
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 100
codes_list = []
labels = []
batch = []
codes = None
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
with tf.Session() as sess:
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
labels_vecs
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
for train_index, test_index in sss.split(codes, labels_vecs):
train_x, rest_x = codes[train_index], codes[test_index]
train_y, rest_y = labels_vecs[train_index], labels_vecs[test_index]
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
for train_index, test_index in sss.split(rest_x, rest_y):
val_x, test_x = rest_x[train_index], rest_x[test_index]
val_y, test_y = rest_y[train_index], rest_y[test_index]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
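# Hedged aside (not part of the original notebook): because ss.split() returns a generator
# (see the markdown below), the index arrays can equivalently be pulled out with next()
# instead of a for loop. This assumes codes and labels_vecs are defined as above.
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, rest_idx = next(ss.split(codes, labels_vecs))
print(len(train_idx), len(rest_idx))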
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices.
End of explanation
"""
from tensorflow import layers
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.layers.dense(inputs_, 2000, activation=tf.nn.relu)
logits = tf.layers.dense(fc, labels_vecs.shape[1], activation=None)
cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels=labels_,
logits=logits))
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
batches = 100
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
b = 0
for x, y in get_batches(train_x, train_y, batches):
feed = {inputs_: x,
labels_: y}
batch_cost, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{} ".format(e+1, epochs),
"Batch: {}/{} ".format(b+1, batches),
"Training loss: {:.4f}".format(batch_cost))
b += 1
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/daisy/144603918_b9de002f60_m.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
batch = []
with tf.Session() as sess:
img = utils.load_image(test_img_path)
batch.append(img.reshape((1, 224, 224, 3)))
images = np.concatenate(batch)
feed_dict = {input_: images}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
with tf.Session() as sess:
saver = tf.train.Saver()
with tf.Session() as sess2:
saver.restore(sess2, tf.train.latest_checkpoint('checkpoints'))
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for file in files:
batch = []
labels = []
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(lb.transform([each])[0])
images = np.concatenate(batch)
feed_dict = {input_: images}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
feed = {inputs_: code, labels_: labels}
correct, prediction = sess2.run([correct_pred, predicted], feed_dict=feed)
if not correct[0]:
#test_img = imread(os.path.join(class_path, file))
#plt.imshow(test_img)
#plt.barh(np.arange(5), prediction)
#_ = plt.yticks(np.arange(5), lb.classes_)
print(os.path.join(class_path, file))
"""
Explanation: Find photos that were misclassified
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session08/Day1/ThisLaptopIsInadequate.ipynb
|
mit
|
nums = # complete
s = # complete
# complete
"""
Explanation: This Laptop Is Inadequate:
An Aperitif for DSFP Session 8
Version 0.1
By AA Miller 2019 Mar 24
When I think about LSST there are a few numbers that always stick in my head:
37 billion (the total number of sources that will be detected by LSST)
10 (the number of years for the baseline survey)
1000 (~the number of observations per source)
37 trillion ($37 \times 10^9 \times 10^3$ = the total number of source observations)
These numbers are eye-popping, though the truth is that there are now several astronomical databases that have $\sim{10^9}$ sources (e.g., PanSTARRS-1, which we will hear more about later today).
A pressing question, for current and future surveys, is: how are we going to deal with all that data?
If you're anything like me - then, you love your laptop.
And if you had it your way, you wouldn't need anything but your laptop... ever.
But is that practical?
Problem 1) The Inadequacy of Laptops
Problem 1a
Suppose you could describe every source detected by LSST with a single number. Assuming you are on a computer with a 64 bit architecture, to within an order of magnitude, how much RAM would you need to store every LSST source within your laptop's memory?
Bonus question - can you think of a single number to describe every source in LSST that could produce a meaningful science result?
Take a minute to discuss with your partner
Solution 1a
As for a single number to perform useful science, I can think of two.
First - you could generate a heirarchical triangular mesh with enough trixels to characterize every LSST resolution element on the night sky. Then you could assign a number to each trixel, and describe the position of every source in LSST with a single number. Under the assumption that every source detected by LSST is a galaxy, this is not a terrible assumption, you could look at the clustering of these positions to (potentially) learn things about structure formation or galaxy formation (though without redshifts you may not learn all that much).
The other number is the flux (or magnitude) of every source in a single filter. Again, under the assumption that everything is a galaxy, the number counts (i.e. a histogram) of the flux measurements tells you a bit about the Universe.
It probably isn't a shock that you won't be able to analyze every individual LSST source on your laptop.
But that raises the question - how should you analyze LSST data?
By buying a large desktop?
On a local or national supercomputer?
In the cloud?
On computers that LSST hosts/maintains?
But that raises the question - how should you analyze LSST data?
By buying a large desktop? (impractical to ask of everyone working on LSST)
On a local supercomputer? (not a bad idea, but not necessarily equitable)
In the cloud? (AWS is expensive)
On computers that LSST hosts/maintains? (probably the most fair, but this also has challenges)
We will discuss some of these issues a bit later in the week...
Problem 2) Laptop or Not You Should Be Worried About the Data
Pop quiz
We will now re-visit a question from a previous session:
Problem 2a
What is data?
Take a minute to discuss with your partner
Solution 2a
This leads to another question:
Q - What is the defining property of a constant?
A - They don't change.
If data are constants, and constants don't change, then we should probably be sure that our data storage solutions do not alter the data in any way.
Within the data science community, the python pandas package is particularly popular for reading, writing, and manipulating data (we will talk more about the utility of pandas later).
The pandas docs state the read_csv() method is the workhorse function for reading text files. Let's now take a look at how well this workhorse "maintains the constant nature of data".
Problem 2b
Create a numpy array, called nums, of length 10000 filled with random numbers. Create a pandas Series object, called s, based on that array, and then write the Series to a file called tmp.txt using the to_csv() method.
Hint - you'll need to name the Series and add the header=True option to the to_csv() call.
End of explanation
"""
s_read = # complete
# complete
"""
Explanation: Problem 2c
Using the pandas read_csv() method, read in the data to a new variable, called s_read. Do you expect s_read and nums to be the same? Check whether or not your expectations are correct.
Hint - take the sum of the difference not equal to zero to identify if any elements are not the same.
End of explanation
"""
print(np.max(np.abs(nums - s_read['nums'].values)))
"""
Explanation: So, it turns out that $\sim{23}\%$ of the time, pandas does not in fact read in the same number that it wrote to disk.
The truth is that these differences are quite small (see next slide), but there are many mathematical operations (e.g., subtraction of very similar numbers) that may lead these tiny differences to compound over time such that your data are not, in fact, constant.
End of explanation
"""
s. # complete
s_read = # complete
# complete
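# One possible approach to Problem 2d, stated in the markdown below (hedged sketch):
# 'nums' is an arbitrary table key, and writing hdf5 from pandas assumes the pytables
# package is installed. s, nums, and pd are defined/imported earlier in this notebook.
s.to_hdf('tmp.h5', 'nums')
s_read = pd.read_hdf('tmp.h5', 'nums')
print(sum(nums - s_read.values != 0))  # expect 0: the binary round trip is exact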
"""
Explanation: So, what is going on?
Sometimes, when you convert a number to ASCII (i.e. text) format, there is some precision that is lost in that conversion.
How do you avoid this?
One way is to write your files directly in binary. Doing so has several advantages: it is possible to reproduce byte-level accuracy, and binary storage is almost always more efficient than text storage (the same number can be written in binary with less space than in ascii).
The downside is that developing your own procedure to write data in binary is a pain, and it places strong constraints on where and how you can interact with the data once it has been written to disk.
Fortunately, we live in a world with pandas. All this hard work has been done for you, as pandas naturally interfaces with the hdf5 binary table format. (You may also want to take a look at pyTables)
(Historically astronomers have used FITS files as a binary storage solution)
Problem 2d
Repeat your procedure from above, but instead of writing to a csv file, use the pandas to_hdf() and read_hdf() methods to see if there are any differences in s and s_read.
Hint - You will need to specify a name for the table that you have written to the hdf5 file in the call to to_hdf() as a required argument. Any string will do.
Hint 2 - Use s_read.values instead of s_read['nums'].values.
End of explanation
"""
s.to_csv('tmp.txt', header=True, index=False)
s_read = pd.read_csv('tmp.txt', float_precision='round_trip')
sum(nums - s_read['nums'].values != 0)
"""
Explanation: So, if you are using pandas anyway (and if you aren't using pandas –– check it out!), then I strongly suggest removing csv files from your workflow and focusing instead on binary hdf5 files. This requires typing the same number of characters, but it ensures byte-level reproducibility.
And reproducibility is the pillar upon which the scientific method is built.
Is that the end of the story? ... No.
In the previous example, I was being a little tricky in order to make a point. It is in fact possible to create reproducible csv files with pandas. By default, pandas sacrifices a little bit of precision in order to gain a lot more speed. If you want to ensure reproducibility then you can specify that the float_precision should be round_trip, meaning you get the same thing back after reading from a file that you wrote to disk.
End of explanation
"""
sdss_spec = pd.read_csv("DSFP_SDSS_spec_train.csv")
sdss_spec.head()
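# Hedged toy illustration (not part of the original notebook) of the two-table objID join
# described in the markdown below: pd.merge connects a master source table and a flux
# table on objID. The values here are illustrative, not real survey data.
source_tbl = pd.DataFrame({'objID': ['0001', '0002'],
                           'RA': [246.98756, 246.98853],
                           'Dec': [-12.06547, -12.04325]})
flux_tbl = pd.DataFrame({'objID': ['0001', '0001', '0002'],
                         'filt': ['r', 'z', 'g'],
                         'mag': [18.21, 17.81, 19.95]})
print(pd.merge(source_tbl, flux_tbl, on='objID'))  # one row per flux measurement, with positions attached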
"""
Explanation: So was all of this in service of a lie?
No. What I said before remains true - text files do not guarantee byte level precision, and they take more space on disk. Text files have some advantages:
anyone, anywhere, on any platform can easily manipulate text files
text files can be easily inspected (and corrected) if necessary
special packages are needed to read/write in binary
binary files, which are not easily interpretable, are difficult to use in version control (and banned by some version control platforms)
To summarize, here is my advice: think of binary as your (new?) default for storing data.
But, as with all things, consider your audience: if you are sharing/working with people that won't be able to deal with binary data, or, you have an incredibly small amount of data, csv (or other text files) should be fine.
Problem 3) Binary or ASCII - Doesn't Matter if You Aren't Organized
While the reproducibility of data is essential, ultimately, concerns about binary vs ascii are useless if you cannot access the data you need when you need it.
Your data are valuable (though cheaper to acquire than ever before), which means you need a good solution for managing that data, or else you are going to run into a significant loss of time and money.
Problem 3a
How would you organize the following: (a) 3 very deep images of a galaxy, (b) 4 nights of optical observations ($\sim$50 images night$^{-1}$) of a galaxy cluster in the $ugrizY$ filters, (c) images from a 50 night time-domain survey ($\sim$250 images night$^{-1}$) covering 1000 deg$^2$?
Similarly, how would you organize: (a) photometric information for your galaxy observations, (b) photometry for all the galaxies in your cluster field, (c) the observations/light curves from your survey?
Take a minute to discuss with your partner
Solution 3a
...
Keeping in mind that there are several suitable answers to each of these questions, here are a few thoughts: (a) the 3 images should be kept together, probably in a single file directory. (b) With 200 images taken over the course of 4 nights, I would create a directory structure that includes every night (top level), with sub-directories based on the individual filters. (c) Similar to (b), I'd create a tree-based file structure, though given that the primary science is time variability, I would likely organize the observations by fieldID at the top level, then by filter and date after that.
As a final note - for each of these data sets, backups are essential! There should be no risk of a single point failure wiping away all that information for good.
The photometric data requires more than just a file structure. In all three cases I would want to store everything in a single file (so directories are not necessary).
For 3 observations of a single galaxy, I would use... a text file (not worth the trouble for binary storage)
Assuming there are 5000 galaxies in the cluster field, I would store the photometric information that I extract for those galaxies in a table. In this table, each row would represent a single galaxy, while the columns would include brightness/shape measurements for the galaxies in each of the observed filters. I would organize this table as a pandas DataFrame (and write it to an hdf5 file).
For the time-domain survey, the organization of all the photometric information is far less straight forward.
Could you use a single table? Yes. Though this would be highly inefficient given that not all sources were observed at the same time. The table would then need columns like obs1_JD, obs1_flux, obs1_flux_unc, obs2_JD, obs2_flux, obs2_flux_unc, ..., all the way up to $N$, the maximum number of observations of any individual source. This will lead to several columns that are empty for several sources.
I would instead use a collection of tables. First, a master source table:
|objID|RA|Dec|mean_mag|mean_mag_unc|
|:--:|:--:|:--:|:--:|:--:|
|0001|246.98756|-12.06547|18.35|0.08|
|0002|246.98853|-12.04325|19.98|0.21|
|.|.|.|.|.|
|.|.|.|.|.|
|.|.|.|.|.|
Coupled with a table holding the individual flux measurements:
|objID|JD|filt|mag|mag_unc|
|:--:|:--:|:--:|:--:|:--:|
|0001|2456785.23465|r|18.21|0.07|
|0001|2456785.23469|z|17.81|0.12|
|.|.|.|.|.|
|.|.|.|.|.|
|.|.|.|.|.|
|0547|2456821.36900|g|16.04|0.02|
|0547|2456821.36906|i|17.12|0.05|
|.|.|.|.|.|
|.|.|.|.|.|
|.|.|.|.|.|
The critical thing to notice about these tables is that they both contain objID. That information allows us to connect the tables via a "join". This table, or relational, structure allows us to easily connect subsets of the data as way to minimize storage (relative to having everything in a single table) while also maintaining computational speed.
Typically, when astronomers (or data scientists) need to organize data into several connected tables capable of performing fast relational algebra operations they use a database. We will hear a lot more about databases over the next few days, so I won't provide a detailed introduction now.
One very nice property of (many) database systems is that provide an efficient means for searching large volumes of data that cannot be stored in memory (recall problem 1a). Whereas, your laptop, or even a specialized high-memory computer, would not be able to open a csv file with all the LSST observations in it.
Another quick aside –– pandas can deal with files that are too large to fit in memory by loading a portion of the file at a time:
light_curves = pd.read_csv(lc_csv_file, chunksize=100000)
If you are building a data structure where loading the data in "chunks" is necessary, I would strongly advise considering an alternative to storing the data in a csv file.
A question you may currently be wondering is: why has there been such an intense focus on pandas today?
The short answer: the developers of pandas wanted to create a product that is good at relational algebra (like traditional database tools) but with lower overhead in construction, and a lot more flexibility (which is essential in a world of heterogeneous data storage and management, see Tuesday's lecture on Data Wrangling).
(You'll get several chances to practice working with databases throughout the week)
We will now run through a few examples that highlight how pandas can be used in a manner similar to a relational database. Throughout the week, as you think about your own data management needs, I think the critical thing to consider is scope. Can my data be organized into something that is smaller than a full-on database?
Problem 3b
Download the SDSS data set that will be used in the exercises for tomorrow.
Read that data, stored in a csv file, into a pandas DataFrame called sdss_spec.
In a lecture where I have spent a great deal of time describing the value of binary data storage, does the fact that I am now providing a (moderate) amount of data as a plain ascii file mean that I am a bad teacher...
probably
End of explanation
"""
mag_diff = # complete
# complete
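# One possible completion of Problem 3c, stated in the markdown below (hedged sketch).
import matplotlib.pyplot as plt
plt.hist(sdss_spec['psfMag_g'] - sdss_spec['modelMag_g'], bins=100)
plt.xlabel('psfMag_g - modelMag_g')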
"""
Explanation: pandas provides many different methods for selecting columns from the DataFrame. Supposing you wanted psfMag, you could use any of the following:
sdss_spec['psfMag_g']
sdss_spec[['psfMag_r', 'psfMag_z']]
sdss_spec.psfMag_g
(notice that selecting multiple columns requires a list within [])
Problem 3c
Plot a histogram of the psfMag_g - modelMag_g distribution for the data set (which requires a selection of those two columns).
Do you notice anything interesting?
Hint - you may want to use more than the default number of bins (=10).
End of explanation
"""
# complete
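# One possible completion of Problem 3d, stated in the markdown below (hedged sketch):
# count the extended sources ('ext') with 19 < modelMag_i < 20 in a single expression.
print(((sdss_spec['type'] == 'ext') &
       (sdss_spec['modelMag_i'] > 19) & (sdss_spec['modelMag_i'] < 20)).sum())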
"""
Explanation: Pandas can also be used to aggregate the results of a search.
Problem 3d
How many extended sources (type = ext) have modelMag_i between 19 and 20? Use as few lines as possible.
End of explanation
"""
grouped = sdss_spec.groupby([sdss_spec.type])
print(grouped['z'].min())
print(grouped['z'].median())
print(grouped['z'].max())
"""
Explanation: pandas also enables GROUP BY operations, where the data are split based on some criterion, a function is then applied to the groups, and the results are then combined back into a data structure.
Problem 3e
Group the data by their type and then report the minimum, median, and maximum redshift of each group. Can you immediately tell anything about these sources based on these results?
Hint - just execute the cell below.
End of explanation
"""
import time
pixel_data = np.random.rand(4000000)
photons = np.empty_like(pixel_data)
tstart = time.time()
for pix_num, pixel in enumerate(pixel_data):
photons[pix_num] = pixel/8
trun = time.time() - tstart
print('It takes {:.6f} s to correct for the gain'.format(trun))
"""
Explanation: Finally, we have only briefly discussed joining tables, but this is where relational databases really shine.
For this example we only have a single table, so we will exclude any examples of a pandas join, but there is functionality to join or merge dataframes in a fashion that is fully analogous to databases.
In summary, there are many different possible solutions for data storage and management.
For "medium" to "big" data that won't easily fit into memory ($\sim$16 GB), it is likely that a database is your best solution. For slightly smaller problems pandas provides a really nice, lightweight alternative to a full blown database that nevertheless maintains a lot of the same functionality and power.
Problem 4) We Aren't Done Talking About Your Laptop's Inadequacies
So far we have been focused on only a single aspect of computing: storage (and your laptop sucks at that).
But here's the thing - your laptop is also incredibly slow.
Supposing for a moment that you could hold all (or even a significant fraction) of the information from LSST in memory on your laptop, you would still be out of luck, as you would die before you could actually process the data and make any meaningful calculations.
(we argued that it would take $\sim$200 yr to process LSST on your laptop in Session 5)
You are in luck, however, as you need not limit yourself to your laptop. You can take advantage of multiple computers, also known as parallel processing.
At a previous session, I asked Robert Lupton, one of the primary developers of the LSST photometric pipeline, "How many CPUs are being used to process LSST data?" To which he replied, "However many are needed to process everything within 1 month."
The critical point here is that if you can figure out how to split a calculation over multiple computers, then you can finish any calculation arbitrarily fast with enough processors (to within some limits, like the speed of light, etc)
We will spend a lot more time talking about both efficient algorithm design and parallel processing later this week, but I want to close with a quick example that touches on each of these things.
Suppose that you have some 2k x 2k detector (i.e. 4 million pixels), and you need to manipulate the data in that array. For instance, the detector will report the number of counts per pixel, but this number is larger than the actual number of detected photons by a factor $g$, the gain of the telescope.
How long does it take divide every pixel by the gain?
(This is where I spend a moment telling you that - if you are going to time portions of your code as a means of measuring performance it is essential that you turn off everything else that may be running on your computer, as background processes can mess up your timing results)
End of explanation
"""
photons = np.empty_like(pixel_data)
tstart = time.time()
photons = pixel_data/8
trun = time.time() - tstart
print('It takes {:.6f} s to correct for the gain'.format(trun))
"""
Explanation: 1.5 s isn't too bad in the grand scheme of things.
Except that this example should make you cringe. There is absolutely no need to use a for loop for these operations.
This brings us to fast coding lesson number 1 - vectorize everything.
End of explanation
"""
from multiprocessing import Pool
def divide_by_gain(number, gain=8):
return number/gain
pool = Pool()
tstart = time.time()
photons = pool.map(divide_by_gain, pixel_data)
trun = time.time() - tstart
print('It takes {:.6f} s to correct for the gain'.format(trun))
"""
Explanation: By removing the for loop we improve the speed of this particular calculation by a factor of $\sim$125. That is a massive win.
Alternatively, we could have sped up the operations via the use of parallel programming. The multiprocessing library in python makes it relatively easy to implement parallel operations. There are many different ways to implement parallel processing in python; here we will just use one simple example.
(again, we will go over multiprocessing in far more detail later this week)
End of explanation
"""
|
ML4DS/ML4all
|
C3.Classification_LogReg/.ipynb_checkpoints/RegresionLogistica_student-checkpoint.ipynb
|
mit
|
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
import csv
import random
import matplotlib
import matplotlib.pyplot as plt
import pylab
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
"""
Explanation: Logistic Regression
Notebook version: 1.0 (Oct 12, 2016)
Author: Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Changes: v.1.0 - First version
v.1.1 - Typo correction. Prepared for slide presentation
End of explanation
"""
# Define the logistic function
def logistic(x):
p = #<FILL IN>
return p
# Plot the logistic function
t = np.arange(-6, 6, 0.1)
z = logistic(t)
plt.plot(t, z)
plt.xlabel('$t$', fontsize=14)
plt.ylabel('$\phi(t)$', fontsize=14)
plt.title('The logistic function')
plt.grid()
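# Hedged numerical sanity check (not part of the original notebook) of properties P2 and
# P3 listed in the accompanying text. A local sigmoid is defined here so that this cell
# runs even if the logistic() exercise above has not been completed yet.
def _sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

tt = np.linspace(-6, 6, 121)
assert np.allclose(_sigmoid(-tt), 1.0 - _sigmoid(tt))  # P2: g(-t) = 1 - g(t)
num_deriv = np.gradient(_sigmoid(tt), tt)
assert np.allclose(num_deriv, _sigmoid(tt) * (1.0 - _sigmoid(tt)), atol=1e-3)  # P3: g'(t) = g(t)[1 - g(t)] >= 0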
"""
Explanation: Logistic Regression
1. Introduction
1.1. Binary classification and decision theory. The MAP criterion
The goal of a classification problem is to assign a class or category to every instance or observation of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or decision. If $y=\hat{y}$, the decision is a hit, otherwise $y\neq \hat{y}$ and the decision is an error.
Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criterion for classification is to select the predictor $\hat{Y}=f({\bf x})$ in such a way that the probability of error, $P\{\hat{Y} \neq Y\}$, is minimum. Noting that
$$
P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} p_{\bf X}({\bf x}) d{\bf x}
$$
the optimal decision is obtained if, for every sample ${\bf x}$, we make the decision minimizing the conditional error probability:
\begin{align}
\hat{y}^* &= \arg\min_{\hat{y}} P\{\hat{y} \neq Y |{\bf x}\} \\
&= \arg\max_{\hat{y}} P\{\hat{y} = Y |{\bf x}\}
\end{align}
Thus, the optimal decision rule can be expressed as
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x})
$$
or, equivalently
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
The classifier implementing this decision rule is usually named MAP (Maximum A Posteriori).
1.2. Parametric classification.
Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal S = \{({\bf x}^{(k)}, y^{(k)}), \,k=1,\ldots,K\}$ of instances and their respective class labels.
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal S = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=1,\ldots,K\}$ of independent and identically distributed (i.i.d.) samples from an unknown distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error.
Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker.
Parametric classifiers based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula:
$$
P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x})
$$
where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated to a different decision maker.
In practice, the dataset ${\mathcal S}$ is used to select a particular parameter vector $\hat{\bf w}$ according to certain criterion. Accordingly, the decision rule becomes
$$
f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
In this lesson, we explore one of the most popular model-based parametric classification methods: logistic regression.
<img src="figs/parametric_decision.png", width=300>
2. Logistic regression.
2.1. The logistic function
The logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation ${\bf X}\in \mathbb{R}^N$ satisfies the expressions
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$
where ${\bf w}$ is a parameter vector and $g(·)$ is the logistic function, which is defined by
$$g(t) = \frac{1}{1+\exp(-t)}$$
It is straightforward to see that the logistic function has the following properties:
P1: Probabilistic output: $\quad 0 \le g(t) \le 1$
P2: Symmetry: $\quad g(-t) = 1-g(t)$
P3: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$
In the following we define a logistic function in python, and use it to plot a graphical representation.
Exercise 1: Verify properties P2 and P3.
Exercise 2: Implement a function to compute the logistic function, and use it to plot that function over the interval $[-6,6]$.
End of explanation
"""
# Weight vector:
w = [1, 4, 8] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
Z = logistic(w[0] + w[1]*xx0 + w[2]*xx1)
# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
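# Hedged illustration (not part of the original notebook): with no bias term, the MAP
# boundary g(w^T x) = 1/2 is the straight line w1*x0 + w2*x1 = 0 through the origin,
# as derived in the accompanying text. A local sigmoid is used so this cell does not
# depend on the logistic() exercise; the weights are illustrative assumptions.
def _sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

w1, w2 = 4, 8
xg = np.arange(-1, 1, 0.01)
gx0, gx1 = np.meshgrid(xg, xg)
P1 = _sigmoid(w1 * gx0 + w2 * gx1)
plt.figure()
plt.contourf(gx0, gx1, P1, cmap=plt.cm.copper)
plt.contour(gx0, gx1, P1, levels=[0.5], colors='w')  # decision boundary P(1|x,w) = 1/2
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.title('The decision boundary passes through the origin')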
"""
Explanation: 2.2. Classifiers based on the logistic model.
The MAP classifier under a logistic model will have the form
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$
Therefore
$$
2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad
1 + \exp(-{\bf w}^\intercal{\bf x}) $$
which is equivalent to
$${\bf w}^\intercal{\bf x}
\quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad
0 $$
Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$.
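As a quick illustration (a minimal sketch with an arbitrary weight vector, not taken from the notebook), such a boundary can be drawn explicitly:

```python
import numpy as np
import matplotlib.pyplot as plt

w = np.array([1.0, 2.0])        # arbitrary example weights (no bias term)
x0 = np.linspace(-1, 1, 100)
x1 = -(w[0] / w[1]) * x0        # points satisfying w[0]*x0 + w[1]*x1 = 0
plt.plot(x0, x1, 'k-', label='decision boundary')
plt.plot(0, 0, 'ko')            # the boundary passes through the origin
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.legend(loc='best')
```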
End of explanation
"""
# SOLUTION TO THE EXERCISE
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
Z = logistic(w[0] + w[1]*xx0 + w[2]*xx1 + w[3]*xx0**2 + w[4]*xx0*xx1 + w[5]*xx1**2)
# Plot the logistic map
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
"""
Explanation: 2.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation
$$
{\bf w}^\intercal{\bf z} = 0
$$
Exercise 2: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by
$$
P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2)
$$
End of explanation
"""
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
xTrain = []
cTrain = []
xTest = []
cTest = []
    with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:4])
cTrain.append(item[4])
else:
xTest.append(item[0:4])
cTest.append(item[4])
return xTrain, cTrain, xTest, cTest
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train: ' + str(nTrain_all))
print('Test: ' + str(nTest_all))
"""
Explanation: 3. Inference
Remember that the idea of parametric classification is to use the training dataset $\mathcal S = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times \{0,1\}, k=1,\ldots,K\}$ to set the parameter vector ${\bf w}$ according to a certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png" width=300>
We will make the following assumptions:
A1. The samples in ${\mathcal S}$ are i.i.d.
A2. Target $Y^{(k)}$ only depends on ${\bf x}^{(k)}$, but not on ${\bf x}^{(l)}$ for any $l\neq k$.
A3. (Logistic Regression): We assume a logistic model for the a posteriori probability of ${Y=1}$ given ${\bf X}$, i.e.,
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})].$$
We still need to choose a criterion for selecting the parameter vector. In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$:
Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w})$
Maximum A Posteriori (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal S}}({\bf w}|{\mathcal S})$
For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g[{\bf w}^\intercal{\bf z}({\bf x})]
= g[-{\bf w}^\intercal{\bf z}({\bf x})]$$
we can write
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[\overline{y}{\bf w}^\intercal{\bf z}({\bf x})]$$
where $\overline{y} = 2y-1$ is a symmetrized label ($\overline{y}\in\{-1, 1\}$).
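The following minimal check (not part of the original material) verifies this symmetrized-label identity numerically:

```python
import numpy as np

def g(t):
    return 1.0 / (1.0 + np.exp(-t))

s = np.random.randn(5)              # a few random values of w'z(x)
for y in (0, 1):
    y_bar = 2 * y - 1               # symmetrized label
    p_direct = g(s) if y == 1 else 1 - g(s)
    assert np.allclose(p_direct, g(y_bar * s))
print("The identity P(y|x,w) = g(y_bar * w'z) holds.")
```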
3.1. ML estimation.
The ML estimate is defined as
$$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w})
= \arg\min_{\bf w} L({\bf w})
$$
where $L({\bf w})$ is the negative log-likelihood function, given by
$$
L({\bf w}) = - \log P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w})
= - \log\left[P\left(y^{(1)},\ldots,y^{(K)}|
{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right)\right]
$$
Using assumption A1,
$$
L({\bf w}) = - \log\left[\prod_{k=1}^K P\left(y^{(k)}|{\bf x}^{(1)},\ldots,{\bf x}^{(K)},{\bf w}\right)\right].
$$
Using A2,
\begin{align}
L({\bf w})
&= - \log\left[\prod_{k=1}^K P_{Y|{\bf X}}\left(y^{(k)}|{\bf x}^{(k)},{\bf w}\right)\right] \\
&= - \sum_{k=1}^K\log\left[P_{Y|{\bf X}}\left(y^{(k)}|{\bf x}^{(k)},{\bf w}\right)\right]
\end{align}
Using A3 (the logistic model)
\begin{align}
L({\bf w})
&= - \sum_{k=1}^K\log\left[g\left(\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \\
&= \sum_{k=1}^K\log\left[1+\exp\left(-\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right]
\end{align}
where ${\bf z}^{(k)}={\bf z}({\bf x}^{(k)})$.
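For reference, this cost can be computed in a couple of NumPy lines (a sketch, assuming a design matrix `Z` with rows ${\bf z}^{(k)}$, a label vector `y` with entries in $\{0,1\}$ and a weight vector `w`):

```python
import numpy as np

def nll(w, Z, y):
    # Negative log-likelihood of the logistic model.
    # Z: (K, N) matrix with rows z(x^(k)); y: (K,) labels in {0, 1}; w: (N,) weights.
    y_bar = 2 * y - 1                      # symmetrized labels in {-1, 1}
    margins = y_bar * Z.dot(w)             # y_bar^(k) * w' z^(k)
    return np.sum(np.log1p(np.exp(-margins)))
```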
It can be shown that $L({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient.
\begin{align}
\nabla_{\bf w} L(\hat{\bf w}_{\text{ML}})
&= - \sum_{k=1}^K
\frac{\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right) \overline{y}^{(k)} {\bf z}^{(k)}}
{1+\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right)} \\
&= - \sum_{k=1}^K \left[y^{(k)}-g(\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)} = 0
\end{align}
Unfortunately, the above equation has no closed-form solution for $\hat{\bf w}_{\text{ML}}$, and some iterative optimization algorithm must be used to search for the minimum.
3.2. Gradient descent.
A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>.
\begin{align}
{\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} L({\bf w}_n)
\end{align}
where $\rho_n >0$ is the learning step.
Applying the gradient descent rule to logistic regression, we get the following algorithm:
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n \sum_{k=1}^K \left[y^{(k)}-g({\bf w}_n^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)}
\end{align}
Defining vectors
\begin{align}
{\bf y} &= [y^{(1)},\ldots,y^{(K)}]^\intercal \\
\hat{\bf p}_n &= [g({\bf w}_n^\intercal {\bf z}^{(1)}), \ldots, g({\bf w}_n^\intercal {\bf z}^{(K)})]^\intercal
\end{align}
and matrix
\begin{align}
{\bf Z} = \left[{\bf z}^{(1)},\ldots,{\bf z}^{(K)}\right]^\intercal
\end{align}
we can write
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
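In code, one iteration of this update is a single line (a sketch, assuming `Z`, `y`, `w` and the step `rho` are already defined; the logregFit function used later in this notebook wraps this same step in a loop):

```python
p_n = 1.0 / (1.0 + np.exp(-Z.dot(w)))   # current estimates g(w' z^(k))
w = w + rho * Z.T.dot(y - p_n)          # w_{n+1} = w_n + rho * Z' (y - p_n)
```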
In the following, we will explore the behavior of the gradient descent method using the Iris Dataset.
3.2.1 Example: Iris Dataset.
As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (setosa, versicolor or virginica). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
We will try to fit the logistic regression model to discriminate between two classes using only two attributes.
First, we load the dataset and split it into training and test subsets.
End of explanation
"""
# Select attributes
i = 0 # Try 0,1,2,3
j = 1 # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
"""
Explanation: Now, we select two classes and two attributes.
End of explanation
"""
def normalize(X, mx=None, sx=None):
# Compute means and standard deviations
if mx is None:
mx = np.mean(X, axis=0)
if sx is None:
sx = np.std(X, axis=0)
# Normalize
X0 = (X-mx)/sx
return X0, mx, sx
"""
Explanation: 3.2.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance.
End of explanation
"""
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)
"""
Explanation: Now, we can normalize training and test data. Observe in the code that the same transformation must be applied to training and test data. This is why normalization of the test data is done using the means and standard deviations computed with the training set.
End of explanation
"""
# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
"""
Explanation: The following code generates a scatter plot of the normalized training data.
End of explanation
"""
def logregFit(Z_tr, Y_tr, rho, n_it):
# Data dimension
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
# Compute posterior probabilities for weight w
p1_tr = logistic(np.dot(Z_tr, w))
p0_tr = logistic(-np.dot(Z_tr, w))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p1_tr)) - np.dot((1-Y_tr).T, np.log(p0_tr))
# Update weights
w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)
return w, nll_tr
def logregPredict(Z, w):
# Compute posterior probability of class 1 for weights w.
p = logistic(np.dot(Z, w))
# Class
D = [int(round(pn)) for pn in p]
return p, D
"""
Explanation: In order to apply the gradient descent rule, we need to define two methods:
- A fit method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A predict method, that receives the model weight and a set of inputs, and returns the posterior class probabilities for that input, as well as their corresponding class predictions.
End of explanation
"""
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 200 # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print "The optimal weights are:"
print w
print "The final error rates are:"
print "- Training: " + str(pe_tr)
print "- Test: " + str(pe_tst)
print "The NLL after training is " + str(nll_tr[len(nll_tr)-1])
"""
Explanation: We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$.
End of explanation
"""
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy / 400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
                     np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute Z's
Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid]
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
"""
Explanation: 3.2.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors:
Number of iterations
Initialization
Learning step
Exercise: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.
Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code cell inside the loop, with some proper modifications. To plot a histogram of the values in an array p with n bins, you can use plt.hist(p, n).
3.2.3.1. Learning step
The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence.
Exercise 3: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ marking the boundary between convergence and divergence?
Exercise 4: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$.
Note that you should explore the values of $\rho$ in a logarithmic scale. For instance, you can take $\rho = 1, 1/10, 1/100, 1/1000, \ldots$
In practice, the selection of $\rho$ may be a matter of trial and error. There is also some theoretical evidence that the learning step should decrease over time towards zero, and that the sequence $\rho_n$ should satisfy two conditions:
- C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (the steps decrease fast enough)
- C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too fast)
For instance, we can take $\rho_n= 1/n$. Another common choice is $\rho_n = \alpha/(1+\beta n)$ where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method.
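As an illustration (a sketch with arbitrary values of $\alpha$ and $\beta$), such a decreasing schedule can be generated and inspected as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

alpha, beta = 1.0, 0.1                 # arbitrary example values
n = np.arange(1, 201)
rho_n = alpha / (1 + beta * n)         # rho_n = alpha / (1 + beta * n)
plt.plot(n, rho_n)
plt.xlabel('Iteration $n$')
plt.ylabel(r'$\rho_n$')
```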
3.2.4. Visualizing the posterior map.
We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights.
End of explanation
"""
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print "The optimal weights are:"
print w
print "The final error rates are:"
print "- Training: " + str(pe_tr)
print "- Test: " + str(pe_tst)
print "The NLL after training is " + str(nll_tr[len(nll_tr)-1])
"""
Explanation: 3.2.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures class in sklearn.preprocessing.
End of explanation
"""
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
pp = pp.reshape(xx.shape)
# Paint output maps
plt.rcParams['figure.figsize'] = 8, 4  # Set figure size
for i in [1, 2]:
ax = plt.subplot(1,2,i)
ax.plot(x0c0, x1c0,'r.', label=labels[c0])
ax.plot(x0c1, x1c1,'g+', label=labels[c1])
ax.set_xlabel('$x_' + str(ind[0]) + '$')
ax.set_ylabel('$x_' + str(ind[1]) + '$')
ax.axis('equal')
if i==1:
ax.contourf(xx, yy, pp, cmap=plt.cm.copper)
else:
ax.legend(loc='best')
ax.contourf(xx, yy, np.round(pp), cmap=plt.cm.copper)
"""
Explanation: Visualizing the posterior map, we can see that the polynomial transformation produces nonlinear decision boundaries.
End of explanation
"""
def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):
# Compute Z's
r = 2.0/C
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
p_tr = logistic(np.dot(Z_tr, w))
sk = np.multiply(p_tr, 1-p_tr)
S = np.diag(np.ravel(sk.T))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))
# Update weights
invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))
w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))
return w, nll_tr
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
C = 1000
g = 4
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(X_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print "The final error rates are:"
print "- Training: " + str(pe_tr)
print "- Test: " + str(pe_tst)
print "The NLL after training is " + str(nll_tr[len(nll_tr)-1])
"""
Explanation: 4. Regularization and MAP estimation.
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal S})
$$
The posterior density $p({\bf w}|{\mathcal S})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal S}) =
\frac{P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right)
p_{\bf W}({\bf w})}
{p\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)}\right)}
$$
The numerator of the above expression is the product of two terms:
The likelihood $P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w})$, which takes large values for parameter vectors $\bf w$ that fit the training data well
The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our a priori preference for some solutions. Usually, we resort to prior distributions that take large values when $\|{\bf w}\|$ is small (associated with soft classification boundaries).
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$.
Therefore, the MAP criterion prefers solutions that simultaneously fit well the data and our a priori belief about which solutions should be preferred.
$$\hat{\bf w}_{\text{MAP}}
= \arg\max_{\bf w} P_{{\mathcal S}|{\bf W}}({\mathcal S}|{\bf w}) \cdot p_{\bf W}({\bf w})$$
We can compute the MAP estimate as
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\max_{\bf w}
P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right)
p_{\bf W}({\bf w}) \\
&= \arg\max_{\bf w} \left\{
\log\left[P\left(y^{(1)},\ldots,y^{(K)}|{\bf x}^{(1)},\ldots, {\bf x}^{(K)},{\bf w}\right) \right]
+ \log\left[ p_{\bf W}({\bf w})\right]
\right\} \\
&= \arg\min_{\bf w} \left\{L({\bf w}) - \log\left[ p_{\bf W}({\bf w})\right]
\right\}
\end{align}
where $L(\cdot)$ is the negative log-likelihood function.
We can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.
4.1 MAP estimation with Gaussian prior
If we assume that ${\bf W}$ is a zero-mean Gaussian random variable with variance matrix $v{\bf I}$,
$$
p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right)
$$
the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2
\right\}
\end{align}
where $C = 2v$. Noting that
$$\nabla_{\bf w}\left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\}
= - {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},
$$
we obtain the following gradient descent rule for MAP estimation
\begin{align}
{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n
+ \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
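As a sketch (assuming the same design matrix `Z`, labels `y`, weights `w`, step `rho` and inverse regularization strength `C` as in the rest of the notebook), the only change with respect to the ML update is the shrinkage factor applied to the current weights:

```python
p_n = 1.0 / (1.0 + np.exp(-Z.dot(w)))                # current estimates g(w' z^(k))
w = (1 - 2 * rho / C) * w + rho * Z.T.dot(y - p_n)   # MAP update with a Gaussian prior
```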
4.2 MAP estimation with Laplacian prior
If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by
$$
p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)
$$
(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate is
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|_1
\right\}
\end{align}
The additional term introduced by the prior in the optimization objective is usually named the regularization term. It is usually very effective at avoiding overfitting when the dimension of the weight vector is high. Parameter $C$ is named the inverse regularization strength.
Exercise 5: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior.
5. Other optimization algorithms
5.1. Stochastic Gradient descent.
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right)
\end{align}
Once all samples in the training set have been used, the algorithm can continue by passing over the training set several times.
The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs more iterations to converge.
Exercise 5: Modify logregFit to implement an algorithm that applies the SGD rule.
5.2. Newton's method
Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$
$$
C({\bf w}) \approx C({\bf w}_0)
+ \nabla_{\bf w}^\intercal C({\bf w}_0)({\bf w}-{\bf w}_0)
+ \frac{1}{2}({\bf w}-{\bf w}_0)^\intercal{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)
$$
where ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> Hessian matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of $C$ around ${\bf w}_0$ can be approximated as
$$
{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0)
$$
Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.
<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second order approximation using a Taylor series expansion around the current estimate. As a result, the learning rule becomes
$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}(\hat{\bf w}_n)^{-1} \nabla_{{\bf w}}C(\hat{\bf w}_n)
$$
For instance, for the MAP estimate with Gaussian prior, the Hessian matrix becomes
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + \sum_{k=1}^K g({\bf w}^\intercal {\bf z}^{(k)}) \left(1-g({\bf w}^\intercal {\bf z}^{(k)})\right){\bf z}^{(k)} ({\bf z}^{(k)})^\intercal
$$
Defining the diagonal matrix
$$
{\mathbf S}({\bf w}) = \text{diag}\left(g({\bf w}^\intercal {\bf z}^{(k)}) \left(1-g({\bf w}^\intercal {\bf z}^{(k)})\right)\right)
$$
the Hessian matrix can be written in more compact form as
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}({\bf w}) {\bf Z}
$$
Therefore, Newton's algorithm for logistic regression becomes
\begin{align}
\hat{\bf w}_{n+1} = \hat{\bf w}_{n} +
\rho_n
\left(\frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}(\hat{\bf w}_{n})
{\bf Z}
\right)^{-1}
{\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
End of explanation
"""
# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print "The final error rates are:"
print "- Training: " + str(pe_tr)
print "- Test: " + str(pe_tst)
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
plt.rcParams['figure.figsize'] = 8, 4  # Set figure size
for i in [1, 2]:
ax = plt.subplot(1,2,i)
ax.plot(x0c0, x1c0,'r.', label=labels[c0])
ax.plot(x0c1, x1c1,'g+', label=labels[c1])
ax.set_xlabel('$x_' + str(ind[0]) + '$')
ax.set_ylabel('$x_' + str(ind[1]) + '$')
ax.axis('equal')
if i==1:
ax.contourf(xx, yy, pp, cmap=plt.cm.copper)
else:
ax.legend(loc='best')
ax.contourf(xx, yy, np.round(pp), cmap=plt.cm.copper)
"""
Explanation: 6. Logistic regression in Scikit Learn.
The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm.
End of explanation
"""
|
pycroscopy/pycroscopy
|
jupyter_notebooks/AFM_simulations/Multifrequency_Viscoelasticity/Simulation_SoftMatter.ipynb
|
mit
|
import sys
sys.path.append(r'd:\Github\pycroscopy')  # raw string so the backslashes in the Windows path are not treated as escapes
from __future__ import division, absolute_import, print_function
from pycroscopy.simulation.afm_lib import dynamic_spectroscopy
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
"""
Explanation: Dynamic atomic force microscopy simulations over a viscoelastic material
Content under Creative Commons Attribution license CC-BY 4.0 version,
Enrique A. López-Guerra.
This notebook contains atomic force microscopy (AFM) dynamic simulations for the case of tapping mode. In this example only the 1st eigenmode is excited, but you can easily modify it to excite up to three eigenmodes. The cantilever dynamics are assumed to be contained in the first three eigenmodes.
The simulation corresponds to the case of an AFM spherical tip interacting with a viscoelastic surface. The viscoelastic model is a generalized Maxwell model (Wiechert model) containing a large number of characteristic times. The contact mechanics have been implemented following the classical theory of Lee and Radok: Lee, E. Ho, and Jens Rainer Maria Radok. "The contact problem for viscoelastic bodies." Journal of Applied Mechanics 27.3 (1960): 438-444.
Let's first start by importing some useful libraries
End of explanation
"""
fo1, fo2, fo3 = 45.0e3, 280.0e3, 17.6*45.0e3 #eigenmodes' resonance frequencies
k_m1, k_m2, k_m3 = 5.80, 210.0, 5.80*(fo3/fo1) #cantilever stiffness of the first three eigenmodes
A1, A2, A3 = 50.0e-9, 0.0, 0.0 #target oscillating free amplitude of the 1st three eigenmodes
Q1, Q2, Q3 = 167.0,340.0, 500.0 #quality factor of the 1st three eigenmodes
R = 10.0e-9 #tip radius
period1, period2 = 1.0/fo1, 1.0/fo2 #oscillating period of first two eigenmodes
dt = period1/10.0e3 #simulation timestep
startprint = 3.0*Q1*period1 #starting point when results will start to get printed
simultime = startprint + 25.0*period1 #total simulation time
printstep = dt*10.0 #how often the results will be stored
z_step = 0.05*A1 #the spatial step between tapping mode runs (i.e., spatial vertical distance the cantilever base moves between tapping mode runs, the smaller this number is the more points your amplitude phase curve will have)
"""
Explanation: Inserting simulation parameters
End of explanation
"""
#Sample parameters for polyisobutylene
df_G = pd.read_csv('PIB.txt', delimiter='\t', header=None)
tau = df_G.iloc[:,0].values #relaxation times of the generalized Maxwell model
G = df_G.iloc[:,1].values #moduli of the springs in the Maxwell arms
Ge = 0.0 #equilibrium modulus (rubbery modulus)
H = 5.0e-19 #Hammaker constant
"""
Explanation: OPTIONAL: Pulling sample parameters corresponding to Polyisobutylene (data obtained from: Brinson, Hal F., and L. Catherine Brinson. "Polymer engineering science and viscoelasticity." An Introduction (2008).)
If you want to use this model, you should place this and the following cell right before the cell labeled as Main portion of the simulation.
End of explanation
"""
G = np.array([1.0e9,1.0e7]) #moduli of the springs in the Maxwell arms
tau = np.array([1.0/fo1/10.0,1.0/fo1]) #relaxation times of the generalized Maxwell model
Ge = 2.0e8 #equilibrium modulus (rubbery modulus)
H = 5.0e-19 #Hammaker constant
"""
Explanation: Defining sample parameters: a simple model with two relaxation times
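For reference (a sketch based on the standard generalized Maxwell / Wiechert form, which may differ in detail from the parameterization used inside the simulation code), the relaxation modulus implied by these parameters is $G(t) = G_e + \sum_i G_i e^{-t/\tau_i}$ and can be plotted directly:

```python
t = np.logspace(-7, -3, 200)   # time axis in seconds (arbitrary range around the chosen relaxation times)
G_t = Ge + np.sum(G[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)
plt.loglog(t, G_t)
plt.xlabel('time, s')
plt.ylabel('G(t)')
```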
End of explanation
"""
print('Note that this cell takes a while to execute')
import time
start_time = time.time()
amp, phase, zeq, Ediss, virial, peakF, maxdepth, t_a, tip_a, Fts_a, xb_a = dynamic_spectroscopy(G, tau, R, dt, startprint, simultime, fo1, fo2, fo3, k_m1, k_m2,k_m3, A1, A2, A3, printstep, Ge, Q1, Q2, Q3, H, z_step)
end_time = time.time()
print("total time taken this loop: %5.2f seconds"%(end_time - start_time))
"""
Explanation: Main portion of the simulation
The simulations described here correspond to dynamic AFM spectroscopy simulations. The cantilever is brought towards the sample in discrete steps and is allowed to oscillate until achieving a quasi-steady state. From the tip trajectories and other information recorded in the steady state (e.g., tip-sample force, sample position) it is possible to retrieve the quantities reported in a dynamic spectroscopy experiment (e.g., amplitude and phase curves).
End of explanation
"""
plt.figure(1)
plt.plot(amp/A1, phase,'-o', color ='b')
plt.xlabel('$A_1/A_0$', fontsize =15)
plt.ylabel('Phase, deg', fontsize=15)
plt.figure(2)
plt.plot(zeq*1.0e9, amp*1.0e9,'-o', color ='g')
plt.xlabel('$Z_{eq}, nm$', fontsize =15)
plt.ylabel('Amplitude, nm', fontsize=15)
"""
Explanation: Plotting dynamic spectroscopy curves (Amplitude, Phase approach curves)
End of explanation
"""
N = 10
setpoint = amp[N]/A1
plt.plot(tip_a[N]*1.0e9, Fts_a[N]*1.0e9, color ='g', label = '$setpoint: %.2f$'%setpoint)
plt.legend(loc='best')
plt.xlabel('$tip, nm$', fontsize =15)
plt.xlim(-5,10)
plt.ylabel('$F_{ts}, nN$', fontsize=15)
"""
Explanation: Plotting a typical force-distance curve
End of explanation
"""
N = 10
setpoint = amp[N]/A1
plt.plot(t_a*1.0e6, tip_a[N]*1.0e9, color ='g', label = '$setpoint: %.2f$'%setpoint)
plt.legend(loc='best')
plt.xlim(0,t_a[int(len(t_a)/10.0)]*1.0e6)
plt.ylim(min(tip_a[N])*5.0e9, max(tip_a[N])*1.2e9)
plt.xlabel('$time, \mu s$', fontsize =15)
plt.ylabel('$tip, nm$', fontsize=15)
"""
Explanation: Plotting the tip trajectory
End of explanation
"""
plt.figure(1)
plt.plot(amp/A1, -maxdepth*1.0e9,'-o', color ='c')
plt.xlabel('$A_1/A_0$', fontsize =15)
plt.ylabel('Maximum Penetration, nm', fontsize=15)
#plt.savefig('Max_penetration.png', bbox_inches='tight') #optional to save the figure in current path
plt.figure(2)
plt.plot(amp/A1, peakF*1.0e9,'-o', color ='m')
plt.xlabel('$A_1/A_0$', fontsize =15)
plt.ylabel('Peak Force, nN', fontsize=15)
#plt.savefig('Max_Force.png', bbox_inches='tight') #optional line to save the figure in current path
"""
Explanation: Plotting more results
End of explanation
"""
|
Nikolay-Lysenko/presentations
|
endogeneity/treatment_effect_with_selection_on_unobservables.ipynb
|
mit
|
from itertools import combinations
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from sklearn.model_selection import train_test_split, KFold, GridSearchCV
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
# Startup settings can not suppress a warning from `XGBRegressor` and so this is needed.
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
from xgboost import XGBRegressor
np.random.seed(seed=361)
"""
Explanation: Introduction
Problem Description
Data-driven approaches are now used in many fields from business to science. Since data storage and computational power have become cheap, machine learning has gained popularity. However, the majority of tools that can extract dependencies from data are designed for prediction problems. In this notebook, a problem of decision support simulation is considered and it is shown that even good predictive models can lead to wrong conclusions. This occurs under some conditions summarized by an umbrella term called endogeneity. Its particular cases are as follows:
* An important variable is omitted;
* Variables that are used as features are measured with biases;
* There is simultaneous or reverse causality between a target variable and some features.
Here, omission of an important variable is the root of the trouble.
Suppose the situation is as follows. There is a freshly-hired manager that can assign treatment to items in order to increase a target metric. Treatment is binary, i.e. for each item it is either assigned or absent. Because treatment costs something, its assignment should be optimized: only some items should be treated. A historical dataset of items' performance is given, but the manager does not know that previously treatment was assigned predominantly based on the values of just one parameter. Moreover, this parameter is not included in the dataset. By the way, the manager wants to create a system that predicts an item's target metric in case of treatment and in case of absence of treatment. If this system is deployed, the manager can compare these two cases and decide whether the effect of treatment is worth its cost.
If the machine learning approach results in good prediction scores, chances are that the manager does not suspect that an important variable is omitted (at least until some expenses are generated by wrong decisions). Hence, domain knowledge and data understanding are still required for modelling based on data. This is of particular importance when datasets contain values that are produced by someone's decisions, because there is no guarantee that future decisions will not change dramatically. On the flip side, if all factors that affect decisions are included in a dataset, i.e., there is selection on observables for treatment assignment, a powerful enough model is able to estimate the treatment effect correctly (but accuracy of predictions still does not ensure causal relationships detection).
References
To read more about causality in data analysis, it is possible to look at these papers:
Angrist J, Pischke J-S. Mostly Harmless Econometrics. Princeton University Press, 2009.
Varian H. Big Data: New Tricks for Econometrics. Journal of Economic Perspectives, 28(2): 3–28, 2014.
Preparations
General
End of explanation
"""
unobserved = np.hstack((np.ones(10000), np.zeros(10000)))
treatment = np.hstack((np.ones(9000), np.zeros(10000), np.ones(1000)))
np.corrcoef(unobserved, treatment)
"""
Explanation: Synthetic Dataset Generation
Let us generate an unobserved parameter and an indicator of treatment such that they are highly correlated.
End of explanation
"""
def synthesize_dataset(unobserved, treatment,
given_exogenous=None, n_exogenous_to_draw=2,
weights_matrix=np.array([[5, 0, 0, 0],
[0, 1, 1, 0],
[0, 1, 2, 1],
[0, 0, 1, 3]])):
"""
A helper function for repetitive
pieces of code.
Creates a dataset, where target depends on
`unobserved`, but `unobserved` is not
included as a feature. Independent features
can be passed as `given_exogenous` as well as
be drawn from Gaussian distribution.
Target is generated as linear combination of
features and their interactions in the
following manner. Order features as below:
unobserved variable, treatment indicator,
given exogenous features, drawn exogenous
features. Then the (i, i)-th element of
`weights_matrix` defines coefficient of
the i-th feature, whereas the (i, j)-th
element of `weights_matrix` (where i != j)
defines coefficient of interaction between
the i-th and j-th features.
@type unobserved: numpy.ndarray
@type treatment: numpy.ndarray
@type given_exogenous: numpy.ndarray
@type n_exogenous_to_draw: int
@type weights_matrix: numpy.ndarray
@rtype: tuple(numpy.ndarray)
"""
if unobserved.shape != treatment.shape:
raise ValueError("`unobserved` and `treatment` are not aligned.")
if (given_exogenous is not None and
unobserved.shape[0] != given_exogenous.shape[0]):
raise ValueError("`unobserved` and `given_exogenous` are not " +
"aligned. Try to transpose `given_exogenous`.")
if weights_matrix.shape[0] != weights_matrix.shape[1]:
raise ValueError("Matrix of weights is not square.")
if not np.array_equal(weights_matrix, weights_matrix.T):
raise ValueError("Matrix of weigths is not symmetric.")
len_of_given = given_exogenous.shape[1] if given_exogenous is not None else 0
if 2 + len_of_given + n_exogenous_to_draw != weights_matrix.shape[0]:
raise ValueError("Number of weights is not equal to that of features.")
drawn_features = []
for i in range(n_exogenous_to_draw):
current_feature = np.random.normal(size=unobserved.shape[0])
drawn_features.append(current_feature)
if given_exogenous is None:
features = np.vstack([unobserved, treatment] + drawn_features).T
else:
features = np.vstack([unobserved, treatment, given_exogenous.T] +
drawn_features).T
target = np.dot(features, weights_matrix.diagonal())
indices = list(range(weights_matrix.shape[0]))
interactions = [weights_matrix[i, j] * features[:, i] * features[:, j]
for i, j in combinations(indices, 2)]
target = np.sum(np.vstack([target] + interactions), axis=0)
return features[:, 1:], target
learning_X, learning_y = synthesize_dataset(unobserved, treatment)
"""
Explanation: Now create the historical dataset that is used for learning the predictive model.
End of explanation
"""
unobserved = np.hstack((np.ones(2500), np.zeros(2500)))
no_treatment = np.zeros(5000)
full_treatment = np.ones(5000)
no_treatment_X, no_treatment_y = synthesize_dataset(unobserved, no_treatment)
full_treatment_X, full_treatment_y = synthesize_dataset(unobserved, full_treatment,
no_treatment_X[:, 1:], 0)
"""
Explanation: Now create two datasets for simulation where the only difference between them is that in the first one treatment is absent and in the second one treatment is assigned to all items.
End of explanation
"""
no_treatment_X[:5, :]
full_treatment_X[:5, :]
no_treatment_y[:5]
full_treatment_y[:5]
"""
Explanation: Look at the data that are used for simulation.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(learning_X, learning_y,
random_state=361)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
def tune_inform(X_train, y_train, rgr, grid_params, kf, scoring):
"""
Just a helper function that combines
all routines related to grid search.
@type X_train: numpy.ndarray
@type y_train: numpy.ndarray
@type rgr: any sklearn regressor
@type grid_params: dict
@type kf: any sklearn folds
@type scoring: str
@rtype: sklearn regressor
"""
grid_search_cv = GridSearchCV(rgr, grid_params, cv=kf,
scoring=scoring)
grid_search_cv.fit(X_train, y_train)
print("Best CV mean score: {}".format(grid_search_cv.best_score_))
means = grid_search_cv.cv_results_['mean_test_score']
stds = grid_search_cv.cv_results_['std_test_score']
print("Detailed results:")
for mean, std, params in zip(means, stds,
grid_search_cv.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, 2 * std, params))
return grid_search_cv.best_estimator_
rgr = LinearRegression()
grid_params = {'fit_intercept': [True, False]}
kf = KFold(n_splits=5, shuffle=True, random_state=361)
"""
Explanation: Good Model...
End of explanation
"""
rgr = tune_inform(X_train, y_train, rgr, grid_params, kf, 'r2')
y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
"""
Explanation: Let us use the coefficient of determination as a scorer rather than MSE. Actually, they are linearly dependent: $R^2 = 1 - \frac{MSE}{\mathrm{Var}(y)}$, but the coefficient of determination is easier to interpret.
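A quick numeric sanity check of that identity (a sketch reusing the linear-regression predictions computed above):

```python
from sklearn.metrics import mean_squared_error

mse = mean_squared_error(y_test, y_hat)
print(r2_score(y_test, y_hat), 1 - mse / np.var(y_test))  # the two numbers coincide
```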
End of explanation
"""
rgr = XGBRegressor()
grid_params = {'n_estimators': [50, 100, 200, 300],
'max_depth': [3, 5],
'subsample': [0.8, 1]}
kf = KFold(n_splits=5, shuffle=True, random_state=361)
rgr = tune_inform(X_train, y_train, rgr, grid_params, kf, 'r2')
"""
Explanation: Although the true relationship is non-linear, the predictive power of linear regression is good. This is indicated by a coefficient of determination close to 1. Since the winner is the model with an intercept, its score can be interpreted as follows: the model explains almost all of the variance of the target around its mean (note that such an interpretation cannot be used for a model without an intercept).
End of explanation
"""
y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
"""
Explanation: It looks like almost all combinations of hyperparameters result in an error that is close to the irreducible error caused by mismatches between the indicator of treatment and the omitted variable.
End of explanation
"""
no_treatment_y_hat = rgr.predict(no_treatment_X)
r2_score(no_treatment_y, no_treatment_y_hat)
full_treatment_y_hat = rgr.predict(full_treatment_X)
r2_score(full_treatment_y, full_treatment_y_hat)
"""
Explanation: The score is even closer to 1 than in the case of the linear model. This decent result deceptively suggests that all important variables are included in the model.
...and Poor Simulation
End of explanation
"""
fig = plt.figure(figsize=(14, 7))
ax_one = fig.add_subplot(121)
ax_one.scatter(no_treatment_y_hat, no_treatment_y)
ax_one.set_title("Simulation of absence of treatment")
ax_one.set_xlabel("Predicted values")
ax_one.set_ylabel("True values")
ax_one.grid()
ax_two = fig.add_subplot(122, sharey=ax_one)
ax_two.scatter(full_treatment_y_hat, full_treatment_y)
ax_two.set_title("Simulation of treatment")
ax_two.set_xlabel("Predicted values")
ax_two.set_ylabel("True values")
_ = ax_two.grid()
"""
Explanation: And now scores are not perfect, are they?
End of explanation
"""
estimated_effects = full_treatment_y_hat - no_treatment_y_hat
true_effects = full_treatment_y - no_treatment_y
np.min(estimated_effects)
"""
Explanation: It can be seen that the effect of treatment is overestimated. In the case of absence of treatment, for items with the unobserved feature equal to 1, predictions are significantly less than the true values. To be more precise, the differences are close to the coefficient of the unobserved feature in the weights_matrix passed to the dataset creation. Similarly, in the case of full treatment, for items with the unobserved feature equal to 0, predictions are higher than the true values and the differences are close to the abovementioned coefficient too.
Finally, let us simulate a wrong decision that the manager can make. Suppose that treatment costs one dollar per item and every unit increase in the target variable leads to creation of value that is equal to one dollar too.
End of explanation
"""
cost_of_one_treatment = 1
estimated_net_improvement = (np.sum(estimated_effects) -
cost_of_one_treatment * estimated_effects.shape[0])
estimated_net_improvement
true_net_improvement = (np.sum(true_effects) -
cost_of_one_treatment * true_effects.shape[0])
true_net_improvement
"""
Explanation: The model recommends treating all items. What happens if all of them are treated?
End of explanation
"""
|
AllenDowney/ModSimPy
|
notebooks/chap01.ipynb
|
mit
|
try:
import pint
except ImportError:
!pip install pint
import pint
try:
from modsim import *
except ImportError:
!pip install modsimpy
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Chapter 1
Copyright 2020 Allen Downey
License: Creative Commons Attribution 4.0 International
Jupyter
Welcome to Modeling and Simulation, welcome to Python, and welcome to Jupyter.
This is a Jupyter notebook, which is a development environment where you can write and run Python code. Each notebook is divided into cells. Each cell contains either text (like this cell) or Python code.
Selecting and running cells
To select a cell, click in the left margin next to the cell. You should see a blue frame surrounding the selected cell.
To edit a code cell, click inside the cell. You should see a green frame around the selected cell, and you should see a cursor inside the cell.
To edit a text cell, double-click inside the cell. Again, you should see a green frame around the selected cell, and you should see a cursor inside the cell.
To run a cell, hold down SHIFT and press ENTER.
If you run a text cell, Jupyter formats the text and displays the result.
If you run a code cell, Jupyter runs the Python code in the cell and displays the result, if any.
To try it out, edit this cell, change some of the text, and then press SHIFT-ENTER to format it.
Adding and removing cells
You can add and remove cells from a notebook using the buttons in the toolbar and the items in the menu, both of which you should see at the top of this notebook.
Try the following exercises:
From the Insert menu select "Insert cell below" to add a cell below this one. By default, you get a code cell, as you can see in the pulldown menu that says "Code".
In the new cell, add a print statement like print('Hello'), and run it.
Add another cell, select the new cell, and then click on the pulldown menu that says "Code" and select "Markdown". This makes the new cell a text cell.
In the new cell, type some text, and then run it.
Use the arrow buttons in the toolbar to move cells up and down.
Use the cut, copy, and paste buttons to delete, add, and move cells.
As you make changes, Jupyter saves your notebook automatically, but if you want to make sure, you can press the save button, which looks like a floppy disk from the 1990s.
Finally, when you are done with a notebook, select "Close and Halt" from the File menu.
Using the notebooks
The notebooks for each chapter contain the code from the chapter along with additional examples, explanatory text, and exercises. I recommend you
Read the chapter first to understand the concepts and vocabulary,
Run the notebook to review what you learned and see it in action, and then
Attempt the exercises.
If you try to work through the notebooks without reading the book, you're gonna have a bad time. The notebooks contain some explanatory text, but it is probably not enough to make sense if you have not read the book. If you are working through a notebook and you get stuck, you might want to re-read (or read!) the corresponding section of the book.
Installing modules
These notebooks use standard Python modules like NumPy and SciPy. I assume you already have them installed in your environment.
They also use two less common modules: Pint, which provides units, and modsim, which contains code I wrote specifically for this book.
The following cells check whether you have these modules already and tries to install them if you don't.
End of explanation
"""
!python --version
!jupyter-notebook --version
"""
Explanation: The first time you run this on a new installation of Python, it might produce a warning message in pink. That's probably ok, but if you get a message that says modsim.py depends on Python 3.7 features, that means you have an older version of Python, and some features in modsim.py won't work correctly.
If you need a newer version of Python, I recommend installing Anaconda. You'll find more information in the preface of the book.
You can find out what version of Python and Jupyter you have by running the following cells.
End of explanation
"""
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
"""
Explanation: Configuring Jupyter
The following cell:
Uses a Jupyter "magic command" to specify whether figures should appear in the notebook, or pop up in a new window.
Configures Jupyter to display some values that would otherwise be invisible.
Select the following cell and press SHIFT-ENTER to run it.
End of explanation
"""
meter = UNITS.meter
second = UNITS.second
"""
Explanation: The penny myth
The following cells contain code from the beginning of Chapter 1.
modsim defines UNITS, which contains variables representing pretty much every unit you've ever heard of. It uses Pint, which is a Python library that provides tools for computing with units.
The following lines create new variables named meter and second.
End of explanation
"""
a = 9.8 * meter / second**2
"""
Explanation: To find out what other units are defined, type UNITS. (including the period) in the next cell and then press TAB. You should see a pop-up menu with a list of units.
Create a variable named a and give it the value of acceleration due to gravity.
End of explanation
"""
t = 4 * second
"""
Explanation: Create t and give it the value 4 seconds.
End of explanation
"""
a * t**2 / 2
"""
Explanation: Compute the distance a penny would fall after t seconds with constant acceleration a. Notice that the units of the result are correct.
End of explanation
"""
# Solution goes here
"""
Explanation: Exercise: Compute the velocity of the penny after t seconds. Check that the units of the result are correct.
End of explanation
"""
# Solution goes here
"""
Explanation: Exercise: Why would it be nonsensical to add a and t? What happens if you try?
End of explanation
"""
h = 381 * meter
"""
Explanation: The error messages you get from Python are big and scary, but if you read them carefully, they contain a lot of useful information.
Start from the bottom and read up.
The last line usually tells you what type of error happened, and sometimes additional information.
The previous lines are a "traceback" of what was happening when the error occurred. The first section of the traceback shows the code you wrote. The following sections are often from Python libraries.
In this example, you should get a DimensionalityError, which is defined by Pint to indicate that you have violated a rule of dimensional analysis: you cannot add quantities with different dimensions.
Before you go on, you might want to delete the erroneous code so the notebook can run without errors.
Falling pennies
Now let's solve the falling penny problem.
Set h to the height of the Empire State Building:
End of explanation
"""
t = sqrt(2 * h / a)
"""
Explanation: Compute the time it would take a penny to fall, assuming constant acceleration.
$ a t^2 / 2 = h $
$ t = \sqrt{2 h / a}$
End of explanation
"""
v = a * t
"""
Explanation: Given t, we can compute the velocity of the penny when it lands.
$v = a t$
End of explanation
"""
mile = UNITS.mile
hour = UNITS.hour
v.to(mile/hour)
"""
Explanation: We can convert from one set of units to another like this:
End of explanation
"""
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: Suppose you bring a 10 foot pole to the top of the Empire State Building and use it to drop the penny from h plus 10 feet.
Define a variable named foot that contains the unit foot provided by UNITS. Define a variable named pole_height and give it the value 10 feet.
What happens if you add h, which is in units of meters, to pole_height, which is in units of feet? What happens if you write the addition the other way around?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: In reality, air resistance limits the velocity of the penny. At about 18 m/s, the force of air resistance equals the force of gravity and the penny stops accelerating.
As a simplification, let's assume that the acceleration of the penny is a until the penny reaches 18 m/s, and then 0 afterwards. What is the total time for the penny to fall 381 m?
You can break this question into three parts:
How long until the penny reaches 18 m/s with constant acceleration a.
How far would the penny fall during that time?
How long to fall the remaining distance with constant velocity 18 m/s?
Suggestion: Assign each intermediate result to a variable with a meaningful name. And assign units to all quantities!
End of explanation
"""
|
janfreyberg/niwidgets
|
report.ipynb
|
cc0-1.0
|
from niwidgets import NiWidget
"""
Explanation: Niwidgets: interactive visualisation of neuroimaging data
Abstract
With a new python package, niwidgets, we attempt to make it easier to interactively visualise neuroimaging data in jupyter notebooks. Interactive visualisations are useful both for the research process, and for the final presentation of results. It takes away the pressure to produce one illustrative snapshot of your complex, multidimensional data, and instead allows the reader to investigate and explore the data themselves.
This first release of niwidgets provides simple, one- or two-line implementations of interactive widgets in python. It provides interactive ways to slice a nifti file, and an interface to add custom interactive options such as thresholding a statistical map or changing the orientation of the plot. Combined with reports written in jupyter notebooks, this could enhance the write-up of neuroimaging studies by giving the reader the write-up, code, and interactive results all in one file.
Introduction
Neuroimaging data is often highly complex. The results are often multidimensional and could be visualised in many different ways. The traditional model of presenting only one snapshot of neuroimaging results means that researchers face tough decisions on how to present their data, and can often lead to misleading figures in papers. One approach that has recently developed in the neuroimaging community is the publication of result maps - uploading the 3D results of an experiment to a repository so that readers of journal articles can investigate the results by themselves. One such example is neurovault.org (Gorgolewski et al. 2015). These tools are extremely useful and provide online visualisation tools, which let readers explore the data they are reading about.
However, this still divorces the data from the article itself, as well as from the code that was used to analyse the data. We wanted to provide a way for neuroscientists to produce a coherent report that includes the article, analysis code, and data alongside each other. We chose to implement this for jupyter notebooks. The jupyter notebook is a language-agnostic file format that combines rich text, code, and inline results and visualisations. Given that MATLAB, python, R and bash are all supported by jupyter notebooks, and that most jupyter notebook kernels support python, a python library that visualises brain data should allow researchers to produce nice interactive combinations of writing, code and data.
Approach
We used the library ipywidgets as a basis and wrote wrapper functions that would handle the loading of neuroimaging data and the creation of interactive tools to let researchers manipulate data themselves. We used established neuroimaging packages for python - nibabel (Brett et al. 2016) and nilearn (Abraham et al. 2010) - to handle the loading of data, and used an established visualisation package for python - matplotlib - to handle the visualisation.
We first wanted to provide a simple way to explore imaging maps, so we wrote a function that simply provided three sliders, for x, y and z, and allowed people to choose a color map.
We then wanted to provide the ability to turn more sophisticated neuroimaging plots into widgets. To this end, we added the ability to provide a plotting function yourself. So far we have only tested this with plotting functions from the package nilearn, as they are simple to use. When provided a custom plotting function, we try to make niwidgets infer basic interactive features about the plot, such as whether it supports interactive x-, y-, and z-coordinates. We also tried to enable custom colormaps for all plots, which could be useful to prevent issues around categorical colormaps (Hawkins 2016) or colormaps unsuited for colorblind readers (Albrecht 2010).
Results
We produced a python package called niwidgets. So far, the package provides one class (NiWidget). To initialise this class, only one input is needed: the path to a nifti file.
The NiWidget class provides one method at the moment (nifti_plotter). This method can be called without any input to provide a default function, and with a custom plotting function as input to provide more versatile plots.
End of explanation
"""
from niwidgets import examplet1 # this is an example T1 dataset
my_widget = NiWidget(examplet1)
my_widget.nifti_plotter()
"""
Explanation: Default plotting function
The default function produces a widget that allows the reader to choose any position within the image using three sliders (x, y and z). It also allows the reader to choose any of the colormaps that are part of the matplotlib package (Fig. 1). Creating this widget only requires two lines of code: 1) Initialise the class using NiWidget('/path/to/file') and 2) Create the widget using .nifti_plotter()
End of explanation
"""
from niwidgets import examplezmap # this is an example statistical map from neurosynth
import nilearn.plotting as nip
my_widget = NiWidget(examplezmap)
my_widget.nifti_plotter(plotting_func=nip.plot_glass_brain, # custom plot function
threshold=(0.0, 6.0, 0.01), # custom slider
display_mode=['ortho','xz']) # custom drop-down menu
"""
Explanation: Custom plotting functions
When the nifti_plotter() method is called with the optional keyword argument plotting_func defined, it produces a plot that uses the provided function. We have tested this primarily with nilearn.plotting functions, but will test it for other functions in the future, too. We attempted to "coerce" these functions to support custom colormaps, and provide the same selection of colormaps we do for the default plotting function. We also tried to detect whether the plotting function supports the specification of x/y/z coordinates, and if so implement interactive sliders for them. An example of this is the nilearn.plotting.plot_glass_brain function (Fig. 2).
End of explanation
"""
|
JuanIgnacioGil/basket-stats
|
sentiment_analysis/sentiment_analysis.ipynb
|
mit
|
%load_ext autoreload
%autoreload 2
import data_collection
import data_cleaning as dcl
import sentiment_analysis as sent
api = data_collection.login_into_twitter()
players = [
'Giannis Antetokounmpo',
'James Harden',
'Rudy Gobert',
'Paul George',
'Kevin Durant',
'Anthony Davis',
'Damian Lillard',
'Karl-Anthony Towns',
'Joel Embiid',
'Clint Capela'
]
players
url_player_stats = data_collection.get_url_player_stats()
stats = data_collection.generate_all_player_stats(url_player_stats)
stats[0].describe()
for cp in zip(players, stats):
cp[1].to_csv('data/{}_stats.csv'.format(cp[0].replace(' ', '')))
final_tw_account = data_collection.get_twitter_accounts(players)
final_tw_account
tweets_df = data_collection.get_all_tweet(api, final_tw_account)
tweets_df
tweets_df.loc[0, 'Tweets']
tweets_df.to_csv('tweets.csv')
"""
Explanation: Sentiment Analysis on NBA top players’ Twitter Account
Original code from Yen-Chen Chou:
Part1 Data Collection
https://towardsdatascience.com/do-tweets-from-nba-leading-players-have-correlations-with-their-performance-7358c79aa216
End of explanation
"""
df_tweet = [0]*len(tweets_df["Tweets"])
token_ls = [0]*len(tweets_df["Tweets"])
snowstemmer_token_ls = [0]*len(tweets_df["Tweets"])
for num, text in enumerate(tweets_df["Tweets"]):
token_ls[num], snowstemmer_token_ls[num] = dcl.tokenization_and_stem(dcl.tweets_cleaner(text))
sentence_tokenized = [0]*len(tweets_df["Tweets"])
for num, token in enumerate(token_ls):
sentence_tokenized[num] = dcl.back_to_clean_sent(token)
sentence_snowstemmeed = [0]*len(tweets_df["Tweets"])
for num, token in enumerate(snowstemmer_token_ls):
sentence_snowstemmeed[num] = dcl.back_to_clean_sent(token)
sentence_tokenized[0][0]
sentence_snowstemmeed[0][0]
sentiment_original = sent.sentiment_analysis(tweets_df["Tweets"])
sentiment_token = sent.sentiment_analysis(sentence_tokenized)
sentiment_snowstemmed = sent.sentiment_analysis(sentence_snowstemmeed)
sentiment_original[0][0]
sentiment_token[0][0]
sentiment_snowstemmed[0][0]
sentence_tokenized[1][0], sentiment_original[1][0], sentiment_token[1][0], sentiment_snowstemmed[1][0]
new_df = sent.proccess_sentiment(sentiment_token, sentiment_snowstemmed, tweets_df)
new_df.head()
new_df[['Name', 'sentiment_token_compound', 'sentiment_stem_compound']].groupby('Name').mean().sort_values(
'sentiment_token_compound', ascending=False)
"""
Explanation: Data cleaning
End of explanation
"""
tfidf_model, tfidf_matrix = sent.fit_vectorizer(sentence_snowstemmeed)
print ("In total, there are " + str(tfidf_matrix.shape[0]) + \
" synoposes and " + str(tfidf_matrix.shape[1]) + " terms.")
"""
Explanation: Clustering
End of explanation
"""
clusters, clusters_df, km = sent.fit_kmeans(tfidf_matrix, tweets_df)
clusters_df.sort_values('Cluster')
vocab_frame_dict, tf_selected_words,cluster_keywords_summary, cluster_nba = sent.find_common_words(
sentence_tokenized, sentence_snowstemmeed, tfidf_model)
print ("Clustering result by K-means")
# km.cluster_centers_ gives the importance of each term in each centroid.
# We need to sort it in decreasing order and take the top k terms.
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
Cluster_keywords_summary = {}
for i in range(km.n_clusters):
print ("Cluster " + str(i) + " words: ", end='')
Cluster_keywords_summary[i] = []
for ind in order_centroids[i, :5]: #replace 5 with n words per cluster
Cluster_keywords_summary[i].append(vocab_frame_dict[tf_selected_words[ind]])
print (vocab_frame_dict[tf_selected_words[ind]] + ",", end='')
cluster_NBA = clusters_df.loc[i]['Name'].values
print("\n", ", ".join(cluster_NBA), "\n")
"""
Explanation: K-Means Clustering
End of explanation
"""
|
AlexGascon/playing-with-keras
|
#3 - Improving text generation/3.2 - Increasing dataset size.ipynb
|
apache-2.0
|
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
"""
Explanation: 3.2. Increasing dataset size
The next thing we're going to try is to increase the size of our dataset. In the previous trainings we used a small subset of the book "Don Quijote de La Mancha" that contained 169KB of text.
The problem is that what we're really doing is teaching Spanish to our RNN. And, let's be honest, it's quite difficult to learn a language from scratch by reading only 169K characters (a few chapters of a book); we'll learn some words and maybe even a few sentences, but it's very difficult to really learn the language.
Therefore, in order to solve this, we'll greatly increase the size of the dataset. We'll use the entire "Don Quijote de la Mancha" book, and to it we'll append another very famous Spanish book, "La Regenta" by Leopoldo Alas. Combining both, we'll get a dataset of about 4MB (more than 20x the previous one). And, although this will slow down our training a lot, it will very likely be a huge improvement in our results.
Let's start the code:
End of explanation
"""
# Load the books and merge them into a single string
filename1 = "El ingenioso hidalgo don Quijote de la Mancha.txt"
book1 = open(filename1).read()
filename2 = "La Regenta.txt"
book2 = open(filename2).read()
book = book1 + book2
# Create mapping of unique chars to integers, and its reverse
chars = sorted(list(set(book)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
# Summarizing the loaded data
n_chars = len(book)
n_vocab = len(chars)
print "Total Characters: ", n_chars
print "Total Vocab: ", n_vocab
# Prepare the dataset of input to output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
# Iterating over the book
for i in range(0, n_chars - seq_length, 1):
sequence_in = book[i:i + seq_length]
sequence_out = book[i + seq_length]
# Converting each char to its corresponding int
sequence_in_int = [char_to_int[char] for char in sequence_in]
sequence_out_int = char_to_int[sequence_out]
# Appending the result to the current data
dataX.append(sequence_in_int)
dataY.append(sequence_out_int)
n_patterns = len(dataX)
print "Total Patterns: ", n_patterns
# Reshaping X to be [samples, time steps, features]
X = np.reshape(dataX, (n_patterns, seq_length, 1))
# Normalizing
X = X / float(n_vocab)
# One hot encode the output variable
y = np_utils.to_categorical(dataY)
# Define the LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(256))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
# Starting from a checkpoint (if we set one)
checkpoint = ""
if checkpoint:
model.load_weights(checkpoint)
# Amount of epochs that we still have to run
epochs_run = 0
epochs_left = 50 - epochs_run
# Define the checkpoints structure
filepath="weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# Compiling the model
model.compile(loss='categorical_crossentropy', optimizer='adam')
# Fitting the model
model.fit(X, y, nb_epoch=epochs_left, batch_size=64, callbacks=callbacks_list)
"""
Explanation: The next step will be to read both books and to combine them into a single dataset, and then we'll proceed with the usual calculations
End of explanation
"""
# Load the network weights
filename = "weights-improvement-09-1.5410.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# Pick a random seed
start = np.random.randint(0, len(dataX)-1)
pattern = dataX[start]
starting_pattern = list(pattern) # saving a real copy, since pattern is mutated below
seed = ''.join([int_to_char[value] for value in pattern])
print "Seed:"
print "\"", seed, "\""
result_str = ""
"""
Explanation: (We won't see the results here because I've actually executed this code on another machine, not directly in the notebook; as you can imagine, this takes a loooooong time.)
Well, so here we are again! If you're reading this once I've finished the notebook you won't notice the pause, but I'm writing this two weeks later than the previous paragraph.
As I predicted, the NN took a looooooooong time to learn. Each one of the epochs required about 11 hours to finish! And besides, there's another important thing to take into account: the NN stopped generating weights after the 10th one, although the code is still running. I'd like to think that it happened because the loss stopped decreasing at that point (which wouldn't be as bad as it may seem, because due to the big size of the dataset we achieved quite good results even with few epochs), but we can't know it for sure at the moment; however, I will for sure update this notebook when I analyse it more precisely, so keep checking back.
And now that this part is explained, let's go back to what really matters: the results!
In order to test our neural net, we'll use the two approaches tried before, in order to see the results achieved with each one: choosing the most probable character at each iteration and using the output probabilities as the Probability Density Function.
3.2.1. Preparing the prediction
In this section we're going to include all the code that is common to both prediction methods (loading the weights, preparing the seed...) in order to avoid executing the same code twice
End of explanation
"""
# Generate characters
for i in range(500):
x = np.reshape(pattern, (1, len(pattern), 1))
x = x / float(n_vocab)
prediction = model.predict(x, verbose=0)
index = np.argmax(prediction)
result = int_to_char[index]
seq_in = [int_to_char[value] for value in pattern]
result_str += result
pattern.append(index)
pattern = pattern[1:len(pattern)]
print "\nDone."
"""
Explanation: 3.2.2. Most probable character
The code to use here doesn't need much explanation, as it is exactly the one we used in previous notebooks. You can check them to see the reason for using this method.
End of explanation
"""
pattern = list(starting_pattern) # Restoring the seed to its initial state
result_str = ""
# Generate characters
for i in range(500):
x = np.reshape(pattern, (1, len(pattern), 1))
x = x / float(n_vocab)
# Choosing the character randomly
prediction = model.predict(x, verbose=0)
prob_cum = np.cumsum(prediction[0])
rand_ind = np.random.rand()
for i in range(len(prob_cum)):
if rand_ind <= prob_cum[i] and (i == 0 or rand_ind > prob_cum[i - 1]):  # the i == 0 guard keeps prob_cum[-1] from being used for the first bucket
index = i
break
result = int_to_char[index]
seq_in = [int_to_char[value] for value in pattern]
result_str += result
pattern.append(index)
pattern = pattern[1:len(pattern)]
print "\nDone."
"""
Explanation: 3.2.3. Randomized prediction
The code to use here doesn't need much explanation, as it is exactly the one we used in previous notebooks. You can check them to see the reason for using this method.
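As a side note (not the code actually used in this run), the manual cumulative-sum search can be replaced by a single call to np.random.choice, which samples an index directly from the predicted probability distribution:
```python
index = np.random.choice(len(prediction[0]), p=prediction[0])
```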
End of explanation
"""
|
petspats/pyhacores
|
under_construction/fsk_modulator/doc.ipynb
|
apache-2.0
|
samples_per_symbol = 64 # this is so high to make stuff plottable
symbols = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0]
data = []
for x in symbols:
data.extend([1 if x else -1] * samples_per_symbol)
plt.plot(data)
plt.title('Data to send')
plt.show()
"""
Explanation: Overview
Digital data to be transmitted
End of explanation
"""
fs = 300e3
deviation = 70e3 # deviation from center frequency
sensitivity = 2 * np.pi * deviation / fs
print(sensitivity)
d_phase = 0
phl = []
for symbol in data:
d_phase += symbol * sensitivity # this is FSK
d_phase = ((d_phase + np.pi) % (2.0 * np.pi)) - np.pi # keep in pi range
phl.append(d_phase * 1j)
sig = np.exp(phl)
# awgn channel
# sig = sig + np.random.normal(scale=np.sqrt(0.1))
Pxx, freqs, bins, im = plt.specgram(sig, Fs=fs, NFFT=64, noverlap=0)
plt.show()
"""
Explanation: Modulation
End of explanation
"""
import inspect
def get_objects_rednode(obj):
source_path = inspect.getsourcefile(type(obj))
source = open(source_path).read()
print(source)
from pyhacores.moving_average.model import MovingAverage
obj = MovingAverage(2)
get_objects_rednode(obj)
"""
Explanation: The spectrogram shows that we have synthesized a positive frequency for a True bit and a negative one for a False bit.
This complex data can be sent to SDR at this point.
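As a quick sanity check (a sketch, not part of the original core), the instantaneous frequency can be recovered by taking the phase difference between consecutive samples; dividing by `sensitivity` should give back roughly ±1, matching the transmitted symbols:
```python
demod = np.angle(sig[1:] * np.conj(sig[:-1])) / sensitivity
plt.plot(demod)
plt.title('Recovered data')
plt.show()
```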
End of explanation
"""
|
ctenix/pytheway
|
MachineL/notes/ML13-监督学习-基本分类模型.ipynb
|
gpl-3.0
|
X=[[0],[1],[2],[3]]
y=[0,0,1,1]
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X,y)
"""
Explanation: ML13 -- Supervised Learning
Basic classification models
K-nearest neighbors classifier
Create a set of data X and its corresponding labels y
End of explanation
"""
print(neigh.predict([[1.1]]))
"""
Explanation: Call the predict() function to classify the unknown sample [1.1]
End of explanation
"""
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
"""
Explanation: Using a decision tree
Import the iris dataset that ships with sklearn
End of explanation
"""
clf = DecisionTreeClassifier()
iris = load_iris()
cross_val_score(clf,iris.data,iris.target,cv=10)
"""
Explanation: Create a decision tree based on the Gini index
End of explanation
"""
clf.fit(iris.data, iris.target)   # train the tree on the iris data loaded above
clf.predict(iris.data[:5])        # predict the first few samples
"""
Explanation: Train the model with the decision tree's fit() function, then make predictions with predict()
End of explanation
"""
import numpy as np
X = np.array([[-1,-1],[-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1,1,1,2,2,2])
"""
Explanation: A decision tree essentially searches for a partition of the feature space, aiming to build a tree that fits the training data well while keeping its complexity small
Naive Bayes classifiers
Introduction to naive Bayes classifiers
The sklearn library includes three naive Bayes classifiers:
naive_bayes.GaussianNB: Gaussian naive Bayes
naive_bayes.MultinomialNB: naive Bayes for multinomial models
naive_bayes.BernoulliNB: naive Bayes for multivariate Bernoulli models
In the sklearn library, sklearn.naive_bayes.GaussianNB can be used to create a Gaussian naive Bayes classifier (a short sketch of the multinomial variant follows below)
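A minimal sketch of the multinomial variant, with made-up count features (not part of the original notebook):
```python
from sklearn.naive_bayes import MultinomialNB

X_counts = [[2, 1, 0], [0, 1, 3], [1, 0, 2], [3, 2, 0]]   # e.g. word-count features
y_counts = [0, 1, 1, 0]
mnb = MultinomialNB()
mnb.fit(X_counts, y_counts)
print(mnb.predict([[1, 1, 1]]))
```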
Using a naive Bayes classifier
Import the numpy library and construct the training data X and y
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
"""
Explanation: Import the Gaussian naive Bayes classifier
End of explanation
"""
clf = GaussianNB(priors=None)
clf.fit(X,y)
print(clf.predict([[-0.8,-1]]))
"""
Explanation: Create a Gaussian naive Bayes classifier with default parameters and assign it to the variable clf.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
dev/_downloads/1537c1215a3e40187a4513e0b5f1d03d/eeg_csd.ipynb
|
bsd-3-clause
|
# Authors: Alex Rockhill <aprockhill@mailbox.org>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD
:footcite:PerrinEtAl1987,PerrinEtAl1989,Cohen2014,KayserTenke2015.
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
End of explanation
"""
meg_path = data_path / 'MEG' / 'sample'
raw = mne.io.read_raw_fif(meg_path / 'sample_audvis_raw.fif')
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,
exclude=raw.info['bads']).load_data()
events = mne.find_events(raw)
raw.set_eeg_reference(projection=True).apply_proj()
"""
Explanation: Load sample subject data
End of explanation
"""
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
"""
Explanation: Plot the raw data and CSD-transformed raw data:
End of explanation
"""
raw.plot_psd()
raw_csd.plot_psd()
"""
Explanation: Also look at the power spectral densities:
End of explanation
"""
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
"""
Explanation: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
End of explanation
"""
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
"""
Explanation: First let's look at how CSD affects scalp topography:
End of explanation
"""
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2))
"""
Explanation: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution:
End of explanation
"""
|
salman-jpg/maya
|
stemming_and_transliteration/Bangla Stemming and Transliteration.ipynb
|
mit
|
from indicnlp.morph import unsupervised_morph
morph = unsupervised_morph.UnsupervisedMorphAnalyzer("bn")
text = u"""\
করা করেছিলাম করেছি করতে করেছিল হয়েছে হয়েছিল হয় হওয়ার হবে আবিষ্কৃত আবিষ্কার অভিষিক্ত অভিষেক অভিষেকের আমি আমার আমাদের তুমি তোমার তোমাদের বসা বসেছিল বসে বসি বসেছিলাম বস বসার\
"""
word_token = text.split(" ")
word_morph = []
for i in word_token:
word_morph.append(morph.morph_analyze(i))
import pandas as pd
indic = pd.DataFrame({"1_Word": word_token, "2_Morpheme": word_morph})
indic
"""
Explanation: Using Indic NLP Library
https://github.com/anoopkunchukuttan/indic_nlp_library
Morphological Analysis
End of explanation
"""
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
bangla_text = "ami apni tumi tomar tomader amar apnar apnader akash"
text_trans = ItransTransliterator.from_itrans(bangla_text, "bn")
print repr(text_trans).decode("unicode_escape")
"""
Explanation: Transliteration
End of explanation
"""
from transliteration import getInstance
trans = getInstance()
text_trans = trans.transliterate(bangla_text, "bn_IN")
print repr(text_trans).decode("unicode_escape")
"""
Explanation: Using Silpa
https://github.com/libindic/Silpa-Flask
Transliteration
End of explanation
"""
import rbs
word_stem1 = []
for i in word_token:
word_stem1.append(rbs.stemWord(i, True))
bs1 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem1})
bs1
"""
Explanation: Using BengaliStemmer
https://github.com/gdebasis/BengaliStemmer
Stemming
End of explanation
"""
import jnius_config
jnius_config.set_classpath(".", "path to class")
from jnius import autoclass
cls = autoclass("RuleFileParser")
stemmer = cls()
word_stem2 = []
for i in word_token:
word_stem2.append(stemmer.stemOfWord(i))
bs2 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem2})
bs2
"""
Explanation: Using BanglaStemmer
https://github.com/rafi-kamal/Bangla-Stemmer
Stemming
End of explanation
"""
from pyavrophonetic import avro
trans_text = avro.parse(bangla_text)
print repr(trans_text).decode("unicode_escape")
"""
Explanation: Using Avro
https://github.com/kaustavdm/pyAvroPhonetic
Transliteration
End of explanation
"""
|
diazmazzaro/UC2K17_DEV
|
demos/05_jupyter/Move+existing+user+content+to+a+new+user.ipynb
|
gpl-3.0
|
from arcgis.gis import *
"""
Explanation: Move an existing user's content to a new user
End of explanation
"""
gis = GIS("https://ags-enterprise4.aeroterra.com/arcgis/", "PythonApi", "test123456", verify_cert=False)
"""
Explanation: Create a connection to the portal.
End of explanation
"""
orig_userid = "afernandez"
new_userid = "pmayo"
"""
Explanation: Set variables for the current user being transitioned and for the new user ID that will be created
End of explanation
"""
olduser = gis.users.get(orig_userid)
olduser
"""
Explanation: Validate that the original user ID is valid and accessible.
End of explanation
"""
newuser = gis.users.create(new_userid, "pm123456", "Pablo", "Mayo", \
new_userid, description=olduser.description, \
role=olduser.role, provider='arcgis', level=2)
newuser = gis.users.get(new_userid)
newuser
"""
Explanation: Create a new user ID
End of explanation
"""
usergroups = olduser['groups']
for group in usergroups:
grp = gis.groups.get(group['id'])
if (grp.owner == orig_userid):
grp.reassign_to(new_userid)
else:
grp.add_users(new_userid)
grp.remove_users(orig_userid)
"""
Explanation: Once the new user has been created successfully, reassign the old user's group ownership and group memberships to the new user.
End of explanation
"""
usercontent = olduser.items()
folders = olduser.folders
for item in usercontent:
try:
item.reassign_to(new_userid)
except:
print(item)
for folder in folders:
gis.content.create_folder(folder['title'], new_userid)
folderitems = olduser.items(folder=folder['title'])
for item in folderitems:
item.reassign_to(new_userid, target_folder=folder['title'])
"""
Explanation: Once group ownership/membership has been changed successfully, reassign all of the original user's content to the new user. This happens in 2 passes. First, reassign everything in the root 'My Content' folder. Then loop over each folder, create the same folder in the new user account, and reassign the items in each folder to the new user into the correct folder.
End of explanation
"""
|
mayank-johri/LearnSeleniumUsingPython
|
Section 2 - Advance Python/Chapter S2.02 - XML/Chapter 8 - Parsing XML.ipynb
|
gpl-3.0
|
import xml.etree.ElementTree as ET
"""
Explanation: XML
In Core Python, we discussed text files. In this chapter, we will discuss XML.
What is XML
Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. The W3C's XML 1.0 Specification and several other related specifications (all of them free open standards) define XML. XML is also plain text, so it can be viewed and edited in any text editor.
Design Goal
The design of XML emphasizes
- simplicity,
- generality, and
- usability.
Although the design of XML focused on documents, it is widely used for the representation of data structures used in web services and configurations of desktop applications.
The two most common document file formats, "Office Open XML" and "OpenDocument", are based on XML.
XML Examples
```xml
<?xml version="1.0"?>
<books>
<book title="Ṛg-Veda Khilāni">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2008</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/rvkh/rvkh.htm</web_page>
</book>
<book title="Ṛgveda-Saṃhitā">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2000</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/mt/rv.htm</web_page>
</book>
</books>
```
Detailed explanation of XML components
An XML document can be visualized as a tree, with one parent and its children. A node can have zero or more child nodes, but a child node always has exactly one parent. For example, each book node has books as its parent; you will observe that both book nodes share the same parent, and each book node has several different child nodes describing that book.
The editor, publication, year and web_page nodes share the same parent book; similarly, the book nodes have the single parent node books.
Also note that at the top of the document only one node, books, is present.
Each node can have attributes; for example, title is an attribute of the book node.
<xml>
<book title="Ṛg-Veda Khilāni">
XML support in Python
Python has rich support for XML, with multiple libraries for parsing XML documents. Let's discuss them in detail. The following sub-modules are supported natively by Python:
xml.etree.ElementTree: the ElementTree API, a simple and lightweight XML processor
xml.dom: the DOM API definition
xml.dom.minidom: a minimal DOM implementation
xml.dom.pulldom: support for building partial DOM trees
xml.sax: SAX2 base classes and convenience functions
xml.parsers.expat: the Expat parser binding
xml.etree.ElementTree
we can import ET using the following command
End of explanation
"""
old_books = 'code/data/old_books.xml'
nasa_data = 'code/data/nasa.xml'
"""
Explanation: ElementTree can parse either an XML file, using the following code,
End of explanation
"""
# Opening an xml file is actually quite simple: you open it and you parse it. Who would have guessed?
tree = ET.parse(old_books)
root = tree.getroot()
print(tree)
"""
Explanation: Opening xml file
End of explanation
"""
xml_book = """<?xml version="1.0"?>
<books>
<book title="Ṛg-Veda Khilāni">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2008</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/rvkh/rvkh.htm</web_page>
</book>
<book title="Ṛgveda-Saṃhitā">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2000</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/mt/rv.htm</web_page>
</book>
</books>
"""
root = ET.fromstring(xml_book)
"""
Explanation: or parse an XML string, using the following code
End of explanation
"""
print(root.tag)
"""
Explanation: As an Element, root also has a tag, and the following code can be used to find it
End of explanation
"""
print(len(root))
"""
Explanation: We can use len to find the number of direct child nodes; in our example there are two book nodes.
End of explanation
"""
print(ET.tostring(root))
"""
Explanation: Reading root as binary text
End of explanation
"""
dec_root = ET.tostring(root).decode()
print(dec_root)
print(type(dec_root))
"""
Explanation: Reading element as formatted text
End of explanation
"""
print(dir(root))
# We can use a for loop to traverse the direct child nodes.
for ele in root:
print(ele)
"""
Explanation: All attributes available to an element
End of explanation
"""
for ele in root:
print(ele.tag, ele.attrib)
"""
Explanation: As shown above, we get element nodes using a for loop; let's get more information from them by enhancing the existing code
End of explanation
"""
print(root[1])
"""
Explanation: we can also find the nodes using indexes.
End of explanation
"""
print(root[1].attrib['title'])
print(root[0][1].text)
"""
Explanation: If more than one attribute is present, the individual attributes can be accessed like dictionary entries
End of explanation
"""
for event, elem in ET.iterparse(old_books):
print(event, elem)
# ########################## NOTE ##########################
# Please run the commented code on command prompt to appreciate
# its power, the working code has been saved as `read_nasa.py`
# in code folder
# file_name = 'data/nasa.xml'
# for event, elem in ET.iterparse(file_name):
# print(event, elem)
"""
Explanation: Reading Large XML file using iterparse
End of explanation
"""
tree = ET.parse(old_books)
root = tree.getroot()
parser = ET.XMLPullParser(['start', 'end'])
print(parser)
"""
Explanation: !!!TODO!!! : Reading Large XML file using XMLPullParser
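A minimal sketch of how XMLPullParser could be fed data incrementally (assuming the same old_books file; events become available from read_events() as the data is fed in):
```python
parser = ET.XMLPullParser(['start', 'end'])
with open(old_books) as f:
    for chunk in iter(lambda: f.read(1024), ''):   # read the file in small pieces
        parser.feed(chunk)
        for event, elem in parser.read_events():
            print(event, elem.tag)
```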
End of explanation
"""
for editor in root.iter('editor'):
print(editor)
print(editor.text)
"""
Explanation: ------------------ END
Finding interesting elements
Say we are only interested in part of the whole XML document; in this section we will discuss techniques that help in such situations
Using iter
End of explanation
"""
for editor in root.findall('book'):
print(editor)
print(editor.tag)
print(root.findall('editor'))
for editor in root.findall('editor'):
print(editor)
print(editor.tag)
"""
Explanation: As you can see, we were able to directly select the editor tags
Using findall
It finds only elements with the given tag that are direct children of the current element.
End of explanation
"""
print(root.find('book'))
print(root.find('editor'))
"""
Explanation: As you can see, editor is not a direct child of the current element root, so we got an empty result
Using find
It finds the first child with a particular tag
End of explanation
"""
ele = root.find('book')
ele.get('title')
"""
Explanation: Accessing Element Attributes
End of explanation
"""
a = ET.Element('a')
b = ET.SubElement(a, 'b')
b.attrib["B"] = "TEST"
c = ET.SubElement(a, 'c')
d = ET.SubElement(a, 'd')
e = ET.SubElement(d, 'e')
f = ET.SubElement(e, 'f')
ET.dump(a)
print(ET.tostring(a).decode())
"""
Explanation: Building XML documents
We can build an XML document using the Element & SubElement functions of ElementTree
End of explanation
"""
xml_text = """<?xml version="1.0"?>
<actors xmlns:fictional="http://characters.example.com"
xmlns="http://people.example.com">
<actor>
<name>John Cleese</name>
<fictional:character>Lancelot</fictional:character>
<fictional:character>Archie Leach</fictional:character>
</actor>
<actor>
<name>Eric Idle</name>
<fictional:character>Sir Robin</fictional:character>
<fictional:character>Gunther</fictional:character>
<fictional:character>Commander Clement</fictional:character>
</actor>
</actors>"""
root = ET.fromstring(xml_text)
for actor in root.findall('{http://people.example.com}actor'):
name = actor.find('{http://people.example.com}name')
print(name.text)
for char in actor.findall('{http://characters.example.com}character'):
print(' |->', char.text)
"""
Explanation: Parsing XML with Namespaces
```xml
<?xml version="1.0"?>
<actors xmlns:fictional="http://characters.example.com"
xmlns="http://people.example.com">
<actor>
<name>John Cleese</name>
<fictional:character>Lancelot</fictional:character>
<fictional:character>Archie Leach</fictional:character>
</actor>
<actor>
<name>Eric Idle</name>
<fictional:character>Sir Robin</fictional:character>
<fictional:character>Gunther</fictional:character>
<fictional:character>Commander Clement</fictional:character>
</actor>
</actors>
```
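The namespace URIs can also be collected into a mapping and passed to find/findall via the namespaces argument, which keeps the paths shorter (a sketch of the same loop):
```python
ns = {'people': 'http://people.example.com',
      'fictional': 'http://characters.example.com'}
for actor in root.findall('people:actor', ns):
    name = actor.find('people:name', ns)
    print(name.text)
    for char in actor.findall('fictional:character', ns):
        print(' |->', char.text)
```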
End of explanation
"""
root = tree.getroot()  # point root back at the books tree parsed earlier (it was reassigned to the actors document above)
ele = root.find('book')
ele.attrib['title'] = "Rig-Veda Khilāni"
updated_xml = 'code/data/updated_old_book.xml'
tree.write(updated_xml)
with open(updated_xml) as f:
print(f.read())
"""
Explanation: XPath support
| Syntax | Meaning |
|-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| tag | Selects all child elements with the given tag. For example, spam selects all child elements named spam, and spam/egg selects all grandchildren named egg in all children named spam. |
| * | Selects all child elements. For example, */egg selects all grandchildren named egg. |
| . | Selects the current node. This is mostly useful at the beginning of the path, to indicate that it’s a relative path. |
| // | Selects all subelements, on all levels beneath the current element. For example, .//egg selects all eggelements in the entire tree. |
| .. | Selects the parent element. Returns None if the path attempts to reach the ancestors of the start element (the element find was called on). |
| [@attrib] | Selects all elements that have the given attribute. |
| [@attrib='value'] | Selects all elements for which the given attribute has the given value. The value cannot contain quotes. |
| [tag] | Selects all elements that have a child named tag. Only immediate children are supported. |
| [tag='text'] | Selects all elements that have a child named tag whose complete text content, including descendants, equals the given text. |
| [position] | Selects all elements that are located at the given position. The position can be either an integer (1 is the first position), the expression last() (for the last position), or a position relative to the last position (e.g. last()-1). |
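For example, with root pointing at the books tree parsed earlier, these expressions could be used (a short sketch):
```python
root.findall('book')                              # all direct <book> children
root.findall('.//editor')                         # all <editor> elements anywhere in the tree
root.findall("book[@title='Ṛgveda-Saṃhitā']")     # books with a given title attribute
root.findall('book[1]')                           # the first <book> element
```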
Modifying an XML File
The ElementTree.write() method can be used to save the updated document to the specified file.
End of explanation
"""
xml_book = """
<?xml version="1.0"?>
<books>
<book title="Ṛg-Veda Khilāni">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2008</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/rvkh/rvkh.htm</web_page>
</book>
<book title="Ṛgveda-Saṃhitā">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2000</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/mt/rv.htm</web_page>
</book>
</books>
"""
root = ET.fromstring(xml_book)
"""
Explanation: XML vulnerabilities
The XML processing modules are not secure against maliciously constructed data. An attacker can abuse XML features to carry out denial of service attacks, access local files, generate network connections to other machines, or circumvent firewalls.
The following table gives an overview of the known attacks and whether the various modules are vulnerable to them.
| kind | sax | etree | minidom | pulldom | xmlrpc |
|---------------------------|------------|------------|------------|------------|------------|
| billion laughs | Vulnerable | Vulnerable | Vulnerable | Vulnerable | Vulnerable |
| quadratic blowup | Vulnerable | Vulnerable | Vulnerable | Vulnerable | Vulnerable |
| external entity expansion | Vulnerable | Safe (1) | Safe (2) | Vulnerable | Safe (3) |
| DTD retrieval | Vulnerable | Safe | Safe | Vulnerable | Safe |
| decompression bomb | Safe | Safe | Safe | Safe | Vulnerable |
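One common mitigation for untrusted input is the third-party defusedxml package, which provides drop-in replacements for the standard parsers (a sketch, assuming the package is installed):
```python
import defusedxml.ElementTree as DET

# parses normal documents as usual, but raises an exception on
# entity-expansion tricks instead of expanding them
safe_root = DET.fromstring("<books><book title='demo'/></books>")
```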
Common Errors and causes
End of explanation
"""
xml_book = """<?xml version="1.0"?>
<books>
<book title="Ṛg-Veda Khilāni">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2008</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/rvkh/rvkh.htm</web_page>
</book>
<book title="Ṛgveda-Saṃhitā">
<editor>Jost Gippert</editor>
<publication>Frankfurt: TITUS</publication>
<year>2000</year>
<web_page>http://titus.uni-frankfurt.de/texte/etcs/ind/aind/ved/rv/mt/rv.htm</web_page>
</book>
</books>
"""
root = ET.fromstring(xml_book)
"""
Explanation: This error happens because of the blank first line; to avoid it, remove the blank space from the start of the string, as shown below
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/mohc/cmip6/models/ukesm1-0-ll/ocnbgchem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-LL
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
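# Example (hypothetical choice; the string must match one of the valid
# choices listed above exactly):
# DOC.set_value("OMIP protocol")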
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry in ocean biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
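# Example (hypothetical choice; this property is only relevant if the OMIP
# protocol is not used, and the value must be one of the choices above):
# DOC.set_value("Sea water")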
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
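# Example (hypothetical overview text, for illustration only):
# DOC.set_value("Prognostic nutrient, phytoplankton, zooplankton and "
#               "detritus tracers, plus DIC, alkalinity and oxygen.")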
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sulfur cycle modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
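# Example (hypothetical selection; cardinality is 1.N, so it is assumed that
# DOC.set_value is called once per nutrient that is present):
# DOC.set_value("Nitrogen (N)")
# DOC.set_value("Iron (Fe)")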
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in the ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic levels (e.g. based on size)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
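# Example (hypothetical; only applicable if a PFT-based type was chosen in
# 10.1, with one DOC.set_value call assumed per functional type):
# DOC.set_value("Diatoms")
# DOC.set_value("Calcifiers")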
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are bacteria represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
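# Example (hypothetical choice from the valid choices listed above):
# DOC.set_value("Diagnostic (Martin profile)")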
"""
Explanation: 13. Tracers --> Particles
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
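# Example (hypothetical; only applicable when a prognostic scheme is
# selected in 13.1):
# DOC.set_value("No size spectrum used")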
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, the method used to calculate the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
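# Example (hypothetical; copy the valid-choice strings above verbatim,
# including the trailing parenthesis in "C14)", with one call per isotope):
# DOC.set_value("C13")
# DOC.set_value("C14)")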
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
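# Example (hypothetical choice; copy one of the valid-choice strings above
# exactly as listed):
# DOC.set_value("Prognostic")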
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled?
End of explanation
"""