repo_name: string (length 6-77) | path: string (length 8-215) | license: string (15 classes) | content: string (length 335-154k)
cjdrake/pyeda
ipynb/Survey.ipynb
bsd-2-clause
a, b, c, d = map(exprvar, 'abcd') """ Explanation: Abstract This paper introduces PyEDA, a Python library for electronic design automation (EDA). PyEDA provides both a high level interface to the representation of Boolean functions, and blazingly-fast C extensions for fundamental algorithms where performance is essential. PyEDA is a hobby project which has the simple but audacious goal of improving the state of digital design by using Python. Introduction Chip design and verification is a complicated undertaking. You must assemble a large team of engineers with many different specialties: front-end design entry, logic verification, power optimization, synthesis, place and route, physical verification, and so on. Unfortunately, the tools, languages, and work flows offered by the electronic design automation (EDA) industry are, in this author's opinion, largely a pit of despair. The languages most familiar to chip design and verification engineers are Verilog (now SystemVerilog), C/C++, TCL, and Perl. Flows are patched together from several proprietary tools with incompatible data representations. Even with Python's strength in scientific computing, it has largely failed to penetrate this space. In short, EDA needs more Python! This paper surveys some of the features and applications of PyEDA, a Python library for electronic design automation. PyEDA provides both a high level interface to the representation of Boolean functions, and blazingly-fast C extensions for fundamental algorithms where performance is essential. PyEDA is a hobby project, but in the past year it has seen some interesting adoption from University students. For example, students at Vanderbilt University used it to model system reliability, and students at Saarland University used as part of a fast DQBF Refutation tool. Even though the name "PyEDA" implies that the library is specific to EDA, it is actually general in nature. Some of the techniques used for designing and verifying digital logic are fundamental to computer science. For example, we will discuss applications of Boolean satisfiability (SAT), the definitive NP-complete problem. PyEDA's repository is hosted at https://github.com/cjdrake/pyeda, and its documentation is hosted at http://pyeda.rtfd.org. Boolean Variables and Functions At its core, PyEDA provides a powerful API for creating and manipulating Boolean functions. First, let us provide the standard definitions. A Boolean variable is an abstract numerical quantity that can take any value in the set ${0, 1}$. A Boolean function is a rule that maps points in an $N$-dimensional Boolean space to an element in ${0, 1}$. Formally, $f: B^N \Rightarrow B$, where $B^N$ means the Cartesian product of $N$ sets of type ${0, 1}$. For example, if you have three input variables, $a, b, c$, each defined on ${0, 1}$, then $B^3 = {0, 1}^3 = {(0, 0, 0), (0, 0, 1), ..., (1, 1, 1)}$. $B^3$ is the domain of the function (the input part), and $B = {0, 1}$ is the range of the function (the output part). The set of all input variables a function depends on is called its support. There are several ways to represent a Boolean function, and different data structures have different tradeoffs. In the following sections, we will give a brief overview of PyEDA's API for logic expressions, truth tables, and binary decision diagrams. In addition, we will provide implementation notes for several useful applications. Logic Expressions Logic expressions are a powerful and flexible way to represent Boolean functions. 
They are implemented as a graph, with atoms at the branches, and operators at the leaves. Atomic elements are literals (variables and complemented variables), and constants (zero and one). The supported algebraic operators are Not, Or, And, Xor, Equal, Implies, and ITE (if-then-else). For general purpose use, symbolic logic expressions are PyEDA's central data type. Since release 0.27, they have been implemented using a high performance C library. Expressions are fast, and reasonably compact. On the other hand, they are generally not canonical, and determining expression equivalence is NP-complete. Conversion to a canonical expression form can result in exponential size. Construction To construct a logic expression, first start by defining some symbolic variables of type Expression: End of explanation """ F = a | ~b & c ^ ~d """ Explanation: By overloading Python's logical operators, you can build expression algebraically: End of explanation """ F.support list (F.iter_relation()) """ Explanation: Use methods from the Function base class to explore the function's basic properties: End of explanation """ a ^ b ^ c Xor(a, b, c) """ Explanation: There are also several factory functions that offer more power than Python's built-in binary operators. For example, operators such as Or, And, and Xor allow you to construct N-ary expressions: End of explanation """ OneHot(a, b, c) Majority(a, b, c) """ Explanation: Also, functions such as OneHot, and Majority implement powerful, higher order functions: End of explanation """ F = ~a | a F F.simplify() Xor(a, ~b, Xnor(~a, b), c) """ Explanation: Simplification The laws of Boolean Algebra can be used to simplify expressions. For example, this table enumerates a partial list of Boolean identities for the Or and And operators. | Name | OR | AND | |:-------------:|:---------------:|:-----------------------:| | Commutativity | $x + y = y + x$ | $x \cdot y = y \cdot x$ | | Associativity | $x + (y + z) = (x + y) + z$ | $x \cdot (y \cdot z) = (x \cdot y) \cdot z$ | | Identity | $x + 0 = x$ | $x \cdot 1 = x$ | | Domination | $x + 1 = 1$ | $x \cdot 0 = 0$ | | Idempotence | $x + x = x$ | $x \cdot x = x$ | | Inverse | $x + x' = 1$ | $x \cdot x' = 0$ | Most laws are computationally easy to apply. PyEDA allows you to construct unsimplified Boolean expressions, and provides the simplify method to perform such inexpensive transformations. For example: End of explanation """ F = Xor(a >> b, c.eq(d)) F.to_nnf() """ Explanation: Performing simplification can dramatically reduce the size and depth of your logic expressions. Transformation PyEDA also supports a growing list of expression transformations. Since expressions are not a canonical form, transformations can help explore tradeoffs in time and space, as well as convert an expression to a form suitable for a particular algorithm. For example, in addition to the primary operators Not, Or, and And, expressions also natively support the secondary Xor, Equal, Implies, and ITE (if-then-else) operators. By transforming all secondary operators into primary operators, and pushing all Not operators down towards the leaf nodes, you arrive at what is known as "negation normal form". End of explanation """ F = Majority(a, b, c, d) %dotobj F """ Explanation: Currently, expressions also support conversion to the following forms: Binary operator (only two args per Or, And, etc) Disjunctive Normal Form (DNF) Conjunctive Normal Form (CNF) DNF and CNF expressions are "two-level" forms. 
That is, the entire expression is either an Or of And clauses (DNF), or an And of Or clauses (CNF). DNF expressions are also called "covers", and are important in both two-level and multi-level logic minimization. CNF expressions play an important role in satisfiability. We will briefly cover both of these topics in subsequent sections. Visualizaton Boolean expressions support a to_dot() method, which can be used to convert the graph structure to DOT format for consumption by Graphviz. For example, this figure shows the Graphviz output on the majority function in four variables: End of explanation """ expr(False) expr(1) expr("0") """ Explanation: Expression Parsing The expr function is a factory function that attempts to transform any input into a logic expression. It does the obvious thing when converting inputs that look like Boolean values: End of explanation """ expr("a | b ^ c & d") expr("s ? x[0] ^ x[1] : y[0] <=> y[1]") expr("a[0,1] & a[1,0] => y[0,1] | y[1,0]") """ Explanation: But it also implements a full top-down parser of expressions. For example: End of explanation """ F = OneHot(a, b, c) F.is_cnf() F.satisfy_one() list(F.satisfy_all()) """ Explanation: See the documentation for a complete list of supported operators accepted by the expr function. Satisfiability One of the most interesting questions in computer science is whether a given Boolean function is satisfiable, or SAT. That is, for a given function $F$, is there a set of input assignments that will produce an output of $1$? PyEDA Boolean functions implement two functions for this purpose, satisfy_one, and satisfy_all. The former answers the question in a yes/no fashion, returning a satisfying input point if the function is satisfiable, and None otherwise. The latter returns a generator that will iterate through all satisfying input points. SAT has all kinds of applications in both digital design and verification. In digital design, it can be used in equivalence checking, test pattern generation, model checking, formal verification, and constrained-random verification, among others. SAT finds its way into other areas as well. For example, modern package management systems such as apt and yum might use SAT to guarantee that certain dependencies are satisfied for a given configuration. The pyeda.boolalg.picosat module provides an interface to the modern SAT solver PicoSAT. When a logic expression is in conjunctive normal form (CNF), calling the satisfy_* methods will invoke PicoSAT transparently. For example: End of explanation """ Or(And(a, b), And(c, d)).to_cnf() """ Explanation: When an expression is not a CNF, PyEDA will resort to a standard, backtracking algorithm. The worst-case performance of this implementation is exponential, but is acceptable for many real-world scenarios. Tseitin Transformation The worst case memory consumption when converting to CNF is exponential. This is due to the fact that distribution of $M$ Or clauses over $N$ And clauses (or vice-versa) requires $M \times N$ clauses. End of explanation """ F = Xor(a, b, c, d) soln = F.tseitin().satisfy_one() soln """ Explanation: Logic expressions support the tseitin method, which perform's Tseitin's transformation on the input expression. For more information about this transformation, see (ref needed). The Tseitin transformation does not produce an equivalent expression, but rather an equisatisfiable CNF, with the addition of auxiliary variables. 
The important feature is that it can convert any expression into a CNF, which can be solved using PicoSAT. End of explanation """ {k: v for k, v in soln.items() if k.name != 'aux'} """ Explanation: You can safely discard the aux variables to get the solution: End of explanation """ truthtable([a, b], [False, False, False, True]) # This also works truthtable([a, b], "0001") """ Explanation: Truth Tables The most straightforward way to represent a Boolean function is to simply enumerate all possible mappings from input assignment to output values. This is known as a truth table, It is implemented as a packed list, where the index of the output value corresponds to the assignment of the input variables. The nature of this data structure implies an exponential size. For $N$ input variables, the table will be size $2^N$. It is therefore mostly useful for manual definition and inspection of functions of reasonable size. To construct a truth table from scratch, use the truthtable factory function. For example, to represent the And function: End of explanation """ expr2truthtable(OneHot0(a, b, c)) """ Explanation: You can also convert expressions to truth tables using the expr2truthtable function: End of explanation """ X = ttvars('x', 4) F1 = truthtable(X, "0000011111------") F2 = truthtable(X, "0001111100------") """ Explanation: Partial Definitions Another use for truth tables is the representation of partially defined functions. Logic expressions and binary decision diagrams are completely defined, meaning that their implementation imposes a complete mapping from all points in the domain to ${0, 1}$. Truth tables allow you to specify some function outputs as "don't care". You can accomplish this by using either "-" or "X" with the truthtable function. For example, a seven segment display is used to display decimal numbers. The codes "0000" through "1001" are used for 0-9, but codes "1010" through "1111" are not important, and therefore can be labeled as "don't care". End of explanation """ truthtable2expr(F1) """ Explanation: To convert a table to a two-level, disjunctive normal form (DNF) expression, use the truthtable2expr function: End of explanation """ F1M, F2M = espresso_tts(F1, F2) F1M F2M """ Explanation: Two-Level Logic Minimization When choosing a physical implementation for a Boolean function, the size of the logic network is proportional to its cost, in terms of area and power. Therefore it is desirable to reduce the size of that network. Logic minimization of two-level forms is an NP-complete problem. It is equivalent to finding a minimal-cost set of subsets of a set $S$ that covers $S$. This is sometimes called the "paving problem", because it is conceptually similar to finding the cheapest configuration of tiles that cover a floor. Due to the complexity of this operation, PyEDA uses a C extension to the Berkeley Espresso library. After calling the espresso_tts function on the F1 and F2 truth tables from above, observe how much smaller (and therefore cheaper) the resulting DNF expression is: End of explanation """ a, b, c = map(bddvar, 'abc') F = a & b & c F.support F.restrict({a: 1, b: 1}) F & 0 """ Explanation: Binary Decision Diagrams A binary decision diagram is a directed acyclic graph used to represent a Boolean function. They were originally introduced by Lee, and later by Akers. In 1986, Randal Bryant introduced the reduced, ordered BDD (ROBDD). 
The ROBDD is a canonical form, which means that given an identical ordering of input variables, equivalent Boolean functions will always reduce to the same ROBDD. This is a desirable property for determining formal equivalence. Also, it means that unsatisfiable functions will be reduced to zero, making SAT/UNSAT calculations trivial. Due to these auspicious properties, the term BDD almost always refers to some minor variation of the ROBDD devised by Bryant. The downside of BDDs is that certain functions, no matter how cleverly you order their input variables, will result in an exponentially-sized graph data structure. Construction Like logic expressions, you can construct a BDD by starting with symbolic variables and combining them with operators. For example: End of explanation """ expr2bdd(expr("(s ? d1 : d0) <=> (s & d1 | ~s & d0)")) """ Explanation: The expr2bdd function can also be used to convert any expression into an equivalent BDD: End of explanation """ ~a & a ~a & ~b | ~a & b | a & ~b | a & b F = a ^ b G = ~a & b | a & ~b F.equivalent(G) F is G """ Explanation: Equivalence As we mentioned before, BDDs are a canonical form. This makes checking for SAT, UNSAT, and formal equivalence trivial. End of explanation """ %dotobj expr2bdd(expr("Majority(a, b, c)")) """ Explanation: PyEDA's BDD implementation uses a unique table, so F and G from the previous example are actually just two different names for the same object. Visualization Like expressions, binary decision diagrams also support a to_dot() method, which can be used to convert the graph structure to DOT format for consumption by Graphviz. For example, this figure shows the Graphviz output on the majority function in three variables: End of explanation """ a, b, c, d = map(exprvar, 'abcd') F = farray([a, b, And(a, c), Or(b, d)]) F.ndim F.size F.shape """ Explanation: Function Arrays When dealing with several related Boolean functions, it is usually convenient to index the inputs and outputs. For this purpose, PyEDA includes a multi-dimensional array (MDA) data type, called an farray (function array). The most pervasive example is computation involving any numeric data type. If these numbers are 32-bit integers, there are 64 total inputs, not including a carry-in. The conventional way of labeling the input variables is $a_0, a_1, \ldots, a_{31}$, and $b_0, b_1, \ldots, b_{31}$. Furthermore, you can extend the symbolic algebra of Boolean functions to arrays. For example, the element-wise XOR of A and B is also an array. In this section, we will briefly discuss farray construction, slicing operations, and algebraic operators. Function arrays can be constructed using any Function implementation, but for simplicity we will restrict the discussion to logic expressions. Construction The farray constructor can be used to create an array of arbitrary expressions. End of explanation """ G = farray([ [a, b], [And(a, c), Or(b, d)], [Xor(b, c), Equal(c, d)] ]) G.ndim G.size G.shape """ Explanation: As you can see, this produces a one-dimensional array of size 4. The shape of the previous array uses Python's conventional, exclusive indexing scheme in one dimension. The farray constructor also supports multi-dimensional arrays: End of explanation """ xs = exprvars('x', 8) xs ys = exprvars('y', 4, 4) ys """ Explanation: Though arrays can be constructed from arbitrary functions in arbitrary shapes, it is far more useful to start with arrays of variables and constants, and build more complex arrays from them using operators. 
To construct arrays of expression variables, use the exprvars factory function: End of explanation """ uint2exprs(42, 8) int2exprs(-42, 8) """ Explanation: Use the uint2exprs and int2exprs function to convert integers to their binary encoding in unsigned, and twos-complement, respectively. End of explanation """ xs = exprvars('x', 4, 4, 4) xs[1,2,3] xs[2,:,2] xs[...,1] """ Explanation: Note that the bits are in order from LSB to MSB, so the conventional bitstring representation of $-42$ in eight bits would be "11010110". Slicing PyEDA's function arrays support numpy-style slicing operators: End of explanation """ X = exprvars('x', 4) S = exprvars('s', 2) X[S].simplify() """ Explanation: A special feature of PyEDA farray slicing that is useful for digital logic is the ability to multiplex (mux) array items over a select input. For example, to create a simple, 4:1 mux: End of explanation """ from pyeda.logic.addition import kogge_stone_add A = exprvars('a', 8) B = exprvars('b', 8) S, C = kogge_stone_add(A, B) S.vrestrict({A: "01000000", B: "01000000"}) """ Explanation: Algebraic Operations Function arrays are algebraic data types, which support the following symbolic operators: unary reductions (uor, uand, uxor, ...) bitwise logic (~ | &amp; ^) shifts (&lt;&lt; &gt;&gt;) concatenation (+) repetition (*) Combining function and array operators allows us to implement a reasonably complete domain-specific language (DSL) for symbolic Boolean algebra in Python. Consider, for example, the implementation of the xtime function, which is an integral part of the AES algorithm. The Verilog implementation, as a function: verilog function automatic logic [7:0] xtime(logic [7:0] b, int n); xtime = b; for (int i = 0; i &lt; n; i++) xtime = {xtime[6:0], 1'b0} ^ (8'h1b &amp; {8{xtime[7]}}); endfunction And the PyEDA implementation: python def xtime(b, n): for _ in range(n): b = (exprzeros(1) + b[:7] ^ uint2exprs(0x1b, 8) &amp; b[7]*8) return b Practical Applications Arrays of functions have many practical applications. For example, the pyeda.logic.addition module contains implementations of ripple-carry, brent-kung, and kogge-stone addition logic. Here is the digital logic implementation of $2 + 2 = 4$: End of explanation """
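As a small supplement in the same spirit as the addition check above (this part is an added sketch, not from the original survey), the xtime() helper quoted earlier can also be exercised on a constant byte with vrestrict and the LSB-first bit strings already introduced. The import path pyeda.inter and the behaviour of b[7] * 8 are taken on trust from the text above, so treat this as a sketch rather than a tested snippet; xtime of 0x57 should give 0xae.

```python
# Hedged sketch: evaluate the xtime() helper quoted above on the byte 0x57.
# Bit strings are LSB-first, so 0x57 is "11101010" and the expected result
# 0xae is "01110101". Helper imports are assumed to live in pyeda.inter.
from pyeda.inter import exprvars, exprzeros, uint2exprs

def xtime(b, n):
    for _ in range(n):
        b = (exprzeros(1) + b[:7]) ^ (uint2exprs(0x1b, 8) & (b[7] * 8))
    return b

B = exprvars('b', 8)
Y = xtime(B, 1)
print(Y.vrestrict({B: "11101010"}))  # expect the constant array encoding 0xae
```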
GoogleCloudPlatform/asl-ml-immersion
notebooks/end-to-end-structured/solutions/3a_bqml_baseline_babyweight.ipynb
apache-2.0
%%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_train LIMIT 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_eval LIMIT 0 """ Explanation: LAB 3a: BigQuery ML Model Baseline. Learning Objectives Create baseline model with BQML Evaluate baseline model Calculate RMSE of baseline model Introduction In this notebook, we will create a baseline model to predict the weight of a baby before it is born. We will use BigQuery ML to build a linear babyweight prediction model with the base features and no feature engineering, yet. We will create a baseline model with BQML, evaluate our baseline model, and calculate the its RMSE. Verify tables exist Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them. End of explanation """ %%bigquery CREATE OR REPLACE MODEL babyweight.baseline_model OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_data_train """ Explanation: Create the baseline model Next, we'll create a linear regression baseline model with no feature engineering. We'll use this to compare our later, more complex models against. Train the "Baseline Model". When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (weight_pounds). Note also that we are using the training data table as the data source and we don't need BQML to split the data because we have already split it ourselves. End of explanation """ %%bigquery -- Information from model training SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.baseline_model) """ Explanation: REMINDER: The query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results. You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes. Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Evaluate the baseline model Even though BigQuery can automatically split the data it is given, and training on only a part of the data and using the rest for evaluation, to compare with our custom models later we wanted to decide the split ourselves so that it is completely reproducible. NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab. End of explanation """ %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.baseline_model, ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_data_eval )) """ Explanation: Get evaluation statistics for the baseline_model. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data. 
End of explanation """ %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.baseline_model, ( SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_data_eval )) """ Explanation: Resource for an explanation of the Regression Metrics. Write a SQL query to find the RMSE of the evaluation data. Since this is regression, we typically use the RMSE, but it is not natively included in the output of our evaluation metrics above. However, we can simply take the SQRT() of the mean squared error from the evaluation of the baseline_model to get the RMSE. End of explanation """
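If the same metrics are wanted inside Python for further analysis, one possible continuation (an added sketch, not part of the original lab) is to run ML.EVALUATE through the google-cloud-bigquery client and derive the RMSE with pandas. This assumes the google-cloud-bigquery package is installed and that default credentials point at the project that owns the babyweight dataset.

```python
# Sketch: fetch the evaluation metrics into a DataFrame and compute the RMSE.
from google.cloud import bigquery

client = bigquery.Client()  # assumes default project and credentials
sql = """
SELECT * FROM ML.EVALUATE(MODEL babyweight.baseline_model,
    (SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks
     FROM babyweight.babyweight_data_eval))
"""
metrics = client.query(sql).to_dataframe()
metrics["rmse"] = metrics["mean_squared_error"] ** 0.5
print(metrics[["mean_squared_error", "rmse"]])
```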
albahnsen/ML_SecurityInformatics
exercises/05-IntrusionDetection.ipynb
mit
import pandas as pd pd.set_option('display.max_columns', 500) import zipfile with zipfile.ZipFile('../datasets/UNB_ISCX_NSL_KDD.csv.zip', 'r') as z: f = z.open('UNB_ISCX_NSL_KDD.csv') data = pd.io.parsers.read_table(f, sep=',') data.head() """ Explanation: Exercise 05 Logistic regression exercise to detect network intrusions Software to detect network intrusions protects a computer network from unauthorized users, including perhaps insiders. The intrusion detector learning task is to build a predictive model (i.e. a classifier) capable of distinguishing between bad connections, called intrusions or attacks, and good normal connections. The 1998 DARPA Intrusion Detection Evaluation Program was prepared and managed by MIT Lincoln Labs. The objective was to survey and evaluate research in intrusion detection. A standard set of data to be audited, which includes a wide variety of intrusions simulated in a military network environment, was provided. The 1999 KDD intrusion detection contest uses a version of this dataset. Lincoln Labs set up an environment to acquire nine weeks of raw TCP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. They operated the LAN as if it were a true Air Force environment, but peppered it with multiple attacks. The raw training data was about four gigabytes of compressed binary TCP dump data from seven weeks of network traffic. This was processed into about five million connection records. Similarly, the two weeks of test data yielded around two million connection records. description A connection is a sequence of TCP packets starting and ending at some well defined times, between which data flows to and from a source IP address to a target IP address under some well defined protocol. Each connection is labeled as either normal, or as an attack, with exactly one specific attack type. Each connection record consists of about 100 bytes. Attacks fall into four main categories: DOS: denial-of-service, e.g. syn flood; R2L: unauthorized access from a remote machine, e.g. guessing password; U2R: unauthorized access to local superuser (root) privileges, e.g., various buffer overflow attacks; probing: surveillance and other probing, e.g., port scanning. It is important to note that the test data is not from the same probability distribution as the training data, and it includes specific attack types not in the training data. This makes the task more realistic. Some intrusion experts believe that most novel attacks are variants of known attacks and the "signature" of known attacks can be sufficient to catch novel variants. The datasets contain a total of 24 training attack types, with an additional 14 types in the test data only. Read the data into Pandas End of explanation """ y = (data['class'] == 'anomaly').astype(int) y.value_counts() X = data[['same_srv_rate','dst_host_srv_count']] """ Explanation: Create X and y Use only same_srv_rate and dst_host_srv_count End of explanation """
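One reasonable way to continue the exercise from here (an added sketch, not the official solution) is to split the data, fit a logistic regression on the two selected features with scikit-learn, and check accuracy on the held-out portion.

```python
# Sketch: baseline logistic regression on the two selected features.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = LogisticRegression()
clf.fit(X_train, y_train)
print("Held-out accuracy: %.3f" % clf.score(X_test, y_test))
print("Coefficients:", clf.coef_, "Intercept:", clf.intercept_)
```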
mathLab/RBniCS
tutorials/12_stokes/tutorial_stokes_2_rb.ipynb
lgpl-3.0
from dolfin import * from rbnics import * from sampling import LinearlyDependentUniformDistribution """ Explanation: TUTORIAL 12 - Stokes Equations Keywords: geometrical parametrization, reduced basis method, mixed formulation, inf sup condition 1. Introduction This tutorial addresses geometrical parametrization and the reduced basis method applied to the steady Stokes equations in a domain $\Omega_o \subset \mathbb{R}^2$ divided into 4 parts with boundary $\Gamma_o$ shown below: <img src="data/t_bypass.png" width="50%"/> The problem is characterized by six parameters. We introduce a vector of parameters $\boldsymbol{\mu} = {t,D,L,S,H,\theta }$ that control the shape of the subdomains. The ranges of the six parameters are the following: The parameter vector $\boldsymbol{\mu}$ is thus given by $$\boldsymbol{\mu}=(\mu_0,\mu_1,\mu_2,\mu_3,\mu_4,\mu_5)$$ which corresponds to $\boldsymbol{\mu} = {t,D,L,S,H,\theta }$, respectively, on the parameter domain $$\mathbb{P}=[0.5,1.5]\times[0.5,1.5]\times[0.5,1.5]\times[0.5,1.5]\times[0.5,1.5]\times[0,\pi/6]$$ In this program, we apply the following conditions on the boundaries: * Zero velocity on the left boundary $\Gamma_{o,w}$ * Constant inflow on the right boundary $\Gamma_{o,in}$ * Stress free Neumann condition on the bottom boundary $\Gamma_{o,out}$ In order to obtain a faster approximation of the problem we pursue a model reduction by means of a reduced order method from a fixed reference domain. 2. Parametrized formulation Let $\boldsymbol{u_o}(\boldsymbol{\mu})$ be the velocity vector and $p_o(\boldsymbol{\mu})$ be the pressure in the domain $\Omega_o(\boldsymbol{\mu})$. We will directly provide a weak formulation for this problem: for a given parameter $\boldsymbol{\mu} \in\mathbb{P}$, find $\boldsymbol{u_o}(\boldsymbol{\mu}) \in\mathbb{V_o}(\boldsymbol{\mu})$, $p_o \in\mathbb{M_o}$ such that <center> $ \begin{cases} \nu \int_{\Omega_o} \nabla \boldsymbol{u_o} : \nabla \boldsymbol{v_o} \ d\Omega - \int_{\Omega_o} p_o \nabla \cdot \boldsymbol{v_o} \ d\Omega = \int_{\Omega_o} \boldsymbol{f_o} \cdot \boldsymbol{v_o} \ d\Omega, \quad \forall \boldsymbol{v_o} \in\mathbb{V_o}, \ \int_{\Omega_o} q_o \nabla \cdot \boldsymbol{u_o} \ d\Omega = 0, \quad \forall q_o \in\mathbb{M_o} \end{cases} $ </center> where $\nu$ represents kinematic viscosity the function space $\mathbb{V_o}(\boldsymbol{\mu})$ is defined as $$\mathbb{V_o}(\boldsymbol{\mu}) = [H_{\Gamma_{o,w}}^{1}(\Omega_o)]^2$$ the function space $\mathbb{M_o}(\boldsymbol{\mu})$ is defined as $$\mathbb{M_o}(\boldsymbol{\mu}) = L^2(\Omega_o)$$ Note that the function spaces are parameter dependent due to the shape variation Since this problem utilizes mixed finite element discretization with the velocity and pressure as solution variables, the inf-sup condition is necessary for the well posedness of this problem. Thus, the supremizer operator $T^{\mu}: \mathbb{M_o}_h \rightarrow \mathbb{V_o}_h$ will be used. End of explanation """ @PullBackFormsToReferenceDomain() @AffineShapeParametrization("data/t_bypass_vertices_mapping.vmp") class Stokes(StokesProblem): # Default initialization of members def __init__(self, V, **kwargs): # Call the standard initialization StokesProblem.__init__(self, V, **kwargs) # ... 
and also store FEniCS data structures for assembly assert "subdomains" in kwargs assert "boundaries" in kwargs self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"] up = TrialFunction(V) (self.u, self.p) = split(up) vq = TestFunction(V) (self.v, self.q) = split(vq) self.dx = Measure("dx")(subdomain_data=self.subdomains) self.ds = Measure("ds")(subdomain_data=self.boundaries) # ... as well as forcing terms and inlet velocity self.inlet = Expression(("- 1./0.25*(x[1] - 1)*(2 - x[1])", "0."), degree=2) self.f = Constant((0.0, 0.0)) self.g = Constant(0.0) # Return custom problem name def name(self): return "Stokes2RB" # Return the lower bound for inf-sup constant. def get_stability_factor_lower_bound(self): return 1. # Return theta multiplicative terms of the affine expansion of the problem. @compute_theta_for_supremizers def compute_theta(self, term): if term == "a": theta_a0 = 1.0 return (theta_a0, ) elif term in ("b", "bt"): theta_b0 = 1.0 return (theta_b0, ) elif term == "f": theta_f0 = 1.0 return (theta_f0, ) elif term == "g": theta_g0 = 1.0 return (theta_g0, ) elif term == "dirichlet_bc_u": theta_bc0 = 1. return (theta_bc0, ) else: raise ValueError("Invalid term for compute_theta().") # Return forms resulting from the discretization of the affine expansion of the problem operators. @assemble_operator_for_supremizers def assemble_operator(self, term): dx = self.dx if term == "a": u = self.u v = self.v a0 = inner(grad(u), grad(v)) * dx return (a0, ) elif term == "b": u = self.u q = self.q b0 = - q * div(u) * dx return (b0, ) elif term == "bt": p = self.p v = self.v bt0 = - p * div(v) * dx return (bt0, ) elif term == "f": v = self.v f0 = inner(self.f, v) * dx return (f0, ) elif term == "g": q = self.q g0 = self.g * q * dx return (g0, ) elif term == "dirichlet_bc_u": bc0 = [DirichletBC(self.V.sub(0), self.inlet, self.boundaries, 1), DirichletBC(self.V.sub(0), Constant((0.0, 0.0)), self.boundaries, 3)] return (bc0,) elif term == "inner_product_u": u = self.u v = self.v x0 = inner(grad(u), grad(v)) * dx return (x0, ) elif term == "inner_product_p": p = self.p q = self.q x0 = inner(p, q) * dx return (x0, ) else: raise ValueError("Invalid term for assemble_operator().") """ Explanation: 3. Affine decomposition In order to obtain an affine decomposition, we recast the problem on a fixed, parameter independent, reference domain $\Omega$. We choose one characterized by $\mu_0=\mu_1=\mu_2=\mu_3=\mu_4=1$ and $\mu_5=0$, which we generate through the generate_mesh notebook provided in the data folder. End of explanation """ mesh = Mesh("data/t_bypass.xml") subdomains = MeshFunction("size_t", mesh, "data/t_bypass_physical_region.xml") boundaries = MeshFunction("size_t", mesh, "data/t_bypass_facet_region.xml") """ Explanation: 4. Main program 4.1. Read the mesh for this problem The mesh was generated by the data/generate_mesh.ipynb notebook. End of explanation """ element_u = VectorElement("Lagrange", mesh.ufl_cell(), 2) element_p = FiniteElement("Lagrange", mesh.ufl_cell(), 1) element = MixedElement(element_u, element_p) V = FunctionSpace(mesh, element, components=[["u", "s"], "p"]) """ Explanation: 4.2. Create Finite Element space (Taylor-Hood P2-P1) End of explanation """ problem = Stokes(V, subdomains=subdomains, boundaries=boundaries) mu_range = [ (0.5, 1.5), (0.5, 1.5), (0.5, 1.5), (0.5, 1.5), (0.5, 1.5), (0., pi / 6.) ] problem.set_mu_range(mu_range) """ Explanation: 4.3. 
Allocate an object of the Stokes class End of explanation """ reduction_method = ReducedBasis(problem) reduction_method.set_Nmax(25) reduction_method.set_tolerance(1e-6) """ Explanation: 4.4. Prepare reduction with a reduced basis method End of explanation """ lifting_mu = (1.0, 1.0, 1.0, 1.0, 1.0, 0.0) problem.set_mu(lifting_mu) reduction_method.initialize_training_set(100, sampling=LinearlyDependentUniformDistribution()) reduced_problem = reduction_method.offline() """ Explanation: 4.5. Perform the offline phase End of explanation """ online_mu = (1.0, 1.0, 1.0, 1.0, 1.0, pi / 6.) reduced_problem.set_mu(online_mu) reduced_solution = reduced_problem.solve() plot(reduced_solution, reduced_problem=reduced_problem, component="u") plot(reduced_solution, reduced_problem=reduced_problem, component="p") """ Explanation: 4.6. Perform an online solve End of explanation """ reduction_method.initialize_testing_set(100, sampling=LinearlyDependentUniformDistribution()) reduction_method.error_analysis() """ Explanation: 4.7. Perform an error analysis End of explanation """ reduction_method.speedup_analysis() """ Explanation: 4.8. Perform a speedup analysis End of explanation """
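As an optional exploration beyond the tutorial (an added sketch that only reuses the set_mu and solve calls already shown), the reduced problem can be solved online for a few bypass angles and each solve timed, which makes the payoff of the offline/online split concrete.

```python
# Sketch: time a handful of online reduced solves for different angles.
import time
from math import pi

for theta in (0.0, pi / 12., pi / 6.):
    online_mu = (1.0, 1.0, 1.0, 1.0, 1.0, theta)
    reduced_problem.set_mu(online_mu)
    t0 = time.time()
    reduced_problem.solve()
    print("theta = {:.3f} rad: reduced solve took {:.4f} s".format(
        theta, time.time() - t0))
```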
deepchem/deepchem
examples/tutorials/Working_With_Datasets.ipynb
mit
!pip install --pre deepchem """ Explanation: Working With Datasets Data is central to machine learning. This tutorial introduces the Dataset class that DeepChem uses to store and manage data. It provides simple but powerful tools for efficiently working with large amounts of data. It also is designed to easily interact with other popular Python frameworks such as NumPy, Pandas, TensorFlow, and PyTorch. Colab This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. End of explanation """ import deepchem as dc dc.__version__ """ Explanation: We can now import the deepchem package to play with. End of explanation """ tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv') train_dataset, valid_dataset, test_dataset = datasets """ Explanation: Anatomy of a Dataset In the last tutorial we loaded the Delaney dataset of molecular solubilities. Let's load it again. End of explanation """ print(test_dataset) """ Explanation: We now have three Dataset objects: the training, validation, and test sets. What information does each of them contain? We can start to get an idea by printing out the string representation of one of them. End of explanation """ test_dataset.y """ Explanation: There's a lot of information there, so let's start at the beginning. It begins with the label "DiskDataset". Dataset is an abstract class. It has a few subclasses that correspond to different ways of storing data. DiskDataset is a dataset that has been saved to disk. The data is stored in a way that can be efficiently accessed, even if the total amount of data is far larger than your computer's memory. NumpyDataset is an in-memory dataset that holds all the data in NumPy arrays. It is a useful tool when manipulating small to medium sized datasets that can fit entirely in memory. ImageDataset is a more specialized class that stores some or all of the data in image files on disk. It is useful when working with models that have images as their inputs or outputs. Now let's consider the contents of the Dataset. Every Dataset stores a list of samples. Very roughly speaking, a sample is a single data point. In this case, each sample is a molecule. In other datasets a sample might correspond to an experimental assay, a cell line, an image, or many other things. For every sample the dataset stores the following information. The features, referred to as X. This is the input that should be fed into a model to represent the sample. The labels, referred to as y. This is the desired output from the model. During training, it tries to make the model's output for each sample as close as possible to y. The weights, referred to as w. This can be used to indicate that some data values are more important than others. In later tutorials we will see examples of how this is useful. An ID, which is a unique identifier for the sample. This can be anything as long as it is unique. Sometimes it is just an integer index, but in this dataset the ID is a SMILES string describing the molecule. Notice that X, y, and w all have 113 as the size of their first dimension. That means this dataset contains 113 samples. The final piece of information listed in the output is task_names. Some datasets contain multiple pieces of information for each sample. For example, if a sample represents a molecule, the dataset might record the results of several different experiments on that molecule. 
This dataset has only a single task: "measured log solubility in mols per litre". Also notice that y and w each have shape (113, 1). The second dimension of these arrays usually matches the number of tasks. Accessing Data from a Dataset There are many ways to access the data contained in a dataset. The simplest is just to directly access the X, y, w, and ids properties. Each of these returns the corresponding information as a NumPy array. End of explanation """ for X, y, w, id in test_dataset.itersamples(): print(y, id) """ Explanation: This is a very easy way to access data, but you should be very careful about using it. This requires the data for all samples to be loaded into memory at once. That's fine for small datasets like this one, but for large datasets it could easily take more memory than you have. A better approach is to iterate over the dataset. That lets it load just a little data at a time, process it, then free the memory before loading the next bit. You can use the itersamples() method to iterate over samples one at a time. End of explanation """ for X, y, w, ids in test_dataset.iterbatches(batch_size=50): print(y.shape) """ Explanation: Most deep learning models can process a batch of multiple samples all at once. You can use iterbatches() to iterate over batches of samples. End of explanation """ test_dataset.to_dataframe() """ Explanation: iterbatches() has other features that are useful when training models. For example, iterbatches(batch_size=100, epochs=10, deterministic=False) will iterate over the complete dataset ten times, each time with the samples in a different random order. Datasets can also expose data using the standard interfaces for TensorFlow and PyTorch. To get a tensorflow.data.Dataset, call make_tf_dataset(). To get a torch.utils.data.IterableDataset, call make_pytorch_dataset(). See the API documentation for more details. The final way of accessing data is to_dataframe(). This copies the data into a Pandas DataFrame. This requires storing all the data in memory at once, so you should only use it with small datasets. End of explanation """ import numpy as np X = np.random.random((10, 5)) y = np.random.random((10, 2)) dataset = dc.data.NumpyDataset(X=X, y=y) print(dataset) """ Explanation: Creating Datasets Now let's talk about how you can create your own datasets. Creating a NumpyDataset is very simple: just pass the arrays containing the data to the constructor. Let's create some random arrays, then wrap them in a NumpyDataset. End of explanation """ dataset.to_dataframe() """ Explanation: Notice that we did not specify weights or IDs. These are optional, as is y for that matter. Only X is required. Since we left them out, it automatically built w and ids arrays for us, setting all weights to 1 and setting the IDs to integer indices. End of explanation """ import tempfile with tempfile.TemporaryDirectory() as data_dir: disk_dataset = dc.data.DiskDataset.from_numpy(X=X, y=y, data_dir=data_dir) print(disk_dataset) """ Explanation: What about creating a DiskDataset? If you have the data in NumPy arrays, you can call DiskDataset.from_numpy() to save it to disk. Since this is just a tutorial, we will save it to a temporary directory. End of explanation """
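To make the iterbatches() options mentioned above concrete, here is a short added illustration (not part of the original tutorial text) run on the small in-memory dataset just created: two shuffled epochs in batches of four.

```python
# Sketch: shuffled, multi-epoch iteration over the NumpyDataset created above.
for X_b, y_b, w_b, ids_b in dataset.iterbatches(
        batch_size=4, epochs=2, deterministic=False):
    print(X_b.shape, ids_b)
```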
mne-tools/mne-tools.github.io
0.17/_downloads/956c2e52efc7e768d096c7da98299333/plot_stats_cluster_spatio_temporal_2samp.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Eric Larson <larson.eric.d@gmail.com> # License: BSD (3-clause) import os.path as op import numpy as np from scipy import stats as stats import mne from mne import spatial_src_connectivity from mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc from mne.datasets import sample print(__doc__) """ Explanation: 2 samples permutation test on source data with spatio-temporal clustering Tests if the source space data are significantly different between 2 groups of subjects (simulated here using one subject's data). The multiple comparisons problem is addressed with a cluster-level permutation test across space and time. End of explanation """ data_path = sample.data_path() stc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc' subjects_dir = data_path + '/subjects' src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif' # Load stc to in common cortical space (fsaverage) stc = mne.read_source_estimate(stc_fname) stc.resample(50, npad='auto') # Read the source space we are morphing to src = mne.read_source_spaces(src_fname) fsave_vertices = [s['vertno'] for s in src] morph = mne.compute_source_morph(stc, 'sample', 'fsaverage', spacing=fsave_vertices, smooth=20, subjects_dir=subjects_dir) stc = morph.apply(stc) n_vertices_fsave, n_times = stc.data.shape tstep = stc.tstep n_subjects1, n_subjects2 = 7, 9 print('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2)) # Let's make sure our results replicate, so set the seed. np.random.seed(0) X1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10 X2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10 X1[:, :, :] += stc.data[:, :, np.newaxis] # make the activity bigger for the second set of subjects X2[:, :, :] += 3 * stc.data[:, :, np.newaxis] # We want to compare the overall activity levels for each subject X1 = np.abs(X1) # only magnitude X2 = np.abs(X2) # only magnitude """ Explanation: Set parameters End of explanation """ print('Computing connectivity.') connectivity = spatial_src_connectivity(src) # Note that X needs to be a list of multi-dimensional array of shape # samples (subjects_k) x time x space, so we permute dimensions X1 = np.transpose(X1, [2, 1, 0]) X2 = np.transpose(X2, [2, 1, 0]) X = [X1, X2] # Now let's actually do the clustering. This can take a long time... # Here we set the threshold quite high to reduce computation. p_threshold = 0.0001 f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2., n_subjects1 - 1, n_subjects2 - 1) print('Clustering.') T_obs, clusters, cluster_p_values, H0 = clu =\ spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1, threshold=f_threshold, buffer_size=None) # Now select the clusters that are sig. at p < 0.05 (note that this value # is multiple-comparisons corrected). 
good_cluster_inds = np.where(cluster_p_values < 0.05)[0] """ Explanation: Compute statistic To use an algorithm optimized for spatio-temporal clustering, we just pass the spatial connectivity matrix (instead of spatio-temporal) End of explanation """ print('Visualizing clusters.') # Now let's build a convenient representation of each cluster, where each # cluster becomes a "time point" in the SourceEstimate fsave_vertices = [np.arange(10242), np.arange(10242)] stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep, vertices=fsave_vertices, subject='fsaverage') # Let's actually plot the first "time point" in the SourceEstimate, which # shows all the clusters, weighted by duration subjects_dir = op.join(data_path, 'subjects') # blue blobs are for condition A != condition B brain = stc_all_cluster_vis.plot('fsaverage', hemi='both', views='lateral', subjects_dir=subjects_dir, time_label='Duration significant (ms)', clim=dict(kind='value', lims=[0, 1, 40])) brain.save_image('clusters.png') """ Explanation: Visualize the clusters End of explanation """
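For a quick textual summary of the significant clusters (an added sketch, not in the original example), each entry of clusters is assumed to be a (time_indices, vertex_indices) pair, which is the layout returned by spatio_temporal_cluster_test.

```python
# Sketch: print the temporal and spatial extent of each significant cluster.
for ind in good_cluster_inds:
    t_inds, v_inds = clusters[ind]
    print('Cluster %d: p = %.4f, %d vertices, time samples %d-%d'
          % (ind, cluster_p_values[ind], len(np.unique(v_inds)),
             t_inds.min(), t_inds.max()))
```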
antoniomezzacapo/qiskit-tutorial
community/terra/qis_adv/single-qubit_quantum_random_access_coding.ipynb
apache-2.0
# useful math functions from math import pi # importing the QISKit from qiskit import Aer, IBMQ from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute # import basic plot tools from qiskit.tools.visualization import plot_histogram # useful additional packages from qiskit.wrapper.jupyter import * from qiskit.backends.ibmq import least_busy IBMQ.load_accounts() """ Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left"> Single-qubit Quantum Random Access Coding (QRAC) The latest version of this notebook is available on https://github.com/QISKit/qiskit-tutorial. Contributors Rudy Raymond, Takashi Imamichi Introduction Random Access Coding (RAC) is one of few examples where a small number of quantum bits can exhibit properties that cannot be achieved by the same amount of classical bits. To demonstrate the power of Quantum RAC with a single qubit, consider a case where the sender, say, Alice, wants to encode $n$ bits of information into a qubit. The qubit is then sent to the receiver, say, Bob, who will decode it to obtain one bit (out of $n$ bits sent by Alice) of his choice (which is unknown to Alice). It is known that there is no way for Alice to encode her $2$ bits into one classical bit so that Bob can then recover any bit with success probability better than half. On the other hand, with a single qubit it is possible to encode up to $3$ bits of information such that the receiver is quaranteed to observe his choice of bit with probability better than random guessing. The encoding and decoding scheme for encoding 2 bits and 3 bits of information into 1 qubit are denoted as, respectively, $(2,1)$-QRAC and $(3,1)$-QRAC. Here, using QuantumProgram of IBM Q Experience we describe how to construct the $(2,1)$-QRAC, i.e., encoding $2$ bits of information into $1$ qubit so that any bit can be recovered with probability at least $0.85$, as well as the $(3,1)$-QRAC, i.e., encoding $3$-bits of information into $1$ qubit so that any bit can be recovered with probability at least $0.78$. All necessary unitary gates and measurements can be realized by the $u3$ rotation gates included in the qelib1.inc whose mathematical definition can be found here. First, we prepare the environment. 
End of explanation """ backend = Aer.get_backend('qasm_simulator') # the device to run on shots = 1024 # the number of shots in the experiment #to record the rotation number for encoding 00, 10, 11, 01 rotationNumbers = {"00":1, "10":3, "11":5, "01":7} # Creating registers # qubit for encoding 2 bits of information qr = QuantumRegister(1) # bit for recording the measurement of the qubit cr = ClassicalRegister(1) # dictionary for encoding circuits encodingCircuits = {} # Quantum circuits for encoding 00, 10, 11, 01 for bits in ("00", "01", "10", "11"): circuitName = "Encode"+bits encodingCircuits[circuitName] = QuantumCircuit(qr, cr, name=circuitName) encodingCircuits[circuitName].u3(rotationNumbers[bits]*pi/4.0, 0, 0, qr[0]) encodingCircuits[circuitName].barrier() # dictionary for decoding circuits decodingCircuits = {} # Quantum circuits for decoding the first and second bit for pos in ("First", "Second"): circuitName = "Decode"+pos decodingCircuits[circuitName] = QuantumCircuit(qr, cr, name=circuitName) if pos == "Second": #if pos == "First" we can directly measure decodingCircuits[circuitName].h(qr[0]) decodingCircuits[circuitName].measure(qr[0], cr[0]) #combine encoding and decoding of QRACs to get a list of complete circuits circuitNames = [] circuits = [] for k1 in encodingCircuits.keys(): for k2 in decodingCircuits.keys(): circuitNames.append(k1+k2) circuits.append(encodingCircuits[k1]+decodingCircuits[k2]) print("List of circuit names:", circuitNames) #list of circuit names """ Explanation: Encoding 2 bits into 1 qubit with $(2,1)$-QRAC We follow Example 1 described in the paper here. Alice encodes her $2$ bits $x_1x_2$ by preparing the following 1-qubit states $|\phi_{x_1x_2}\rangle$. \begin{eqnarray} |\phi_{00}\rangle &=& \cos\left(\pi/8\right)|0\rangle + \sin\left(\pi/8\right)|1\rangle\ |\phi_{10}\rangle &=& \cos\left(3\pi/8\right)|0\rangle + \sin\left(3\pi/8\right)|1\rangle\ |\phi_{11}\rangle &=& \cos\left(5\pi/8\right)|0\rangle + \sin\left(5\pi/8\right)|1\rangle\ |\phi_{01}\rangle &=& \cos\left(7\pi/8\right)|0\rangle + \sin\left(7\pi/8\right)|1\rangle\ \end{eqnarray} Bob recovers his choice of bit by measuring the qubit in the following way. If he wants to recover the first bit (i.e., $x_1$), he measures the qubit in the $\left{|0\rangle, |1\rangle\right}$ basis, namely, he concludes $0$ if he observes $|0\rangle$, and $1$ otherwise. On the other hand, if he wants to recover the second bit (i.e., $x_2$), he measures the qubit in the $\left{|+\rangle, |-\rangle\right}$ basis, where $|+\rangle = 1/\sqrt{2}\left(|0\rangle + |1\rangle\right)$, and $|-\rangle = 1/\sqrt{2}\left(|0\rangle - |1\rangle\right)$. Below is the code to create quantum circuits for performing experiments of $(2,1)$-QRAC. Each of the circuits consits of encoding $2$ bits of information into $1$ qubit and decoding either the first or the second bit by performing measurement on the qubit. Notice that because in the IBM Q Experience we can only perform measurement in the $\left{|0\rangle, |1\rangle\right}$ basis, the measurement in the $\left{|+\rangle, |-\rangle\right}$ basis is performed by first applying the Hadamard gate. 
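As a quick numeric sanity check (added here for illustration, not part of the original notebook), the theoretical success probability of this decoding is cos^2(pi/8) for either bit, which is the value the histograms below should approach.

```python
# Sketch: theoretical (2,1)-QRAC decoding success probability.
from math import cos, pi
print("cos^2(pi/8) =", cos(pi / 8.0) ** 2)  # roughly 0.854
```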
End of explanation """ job = execute(circuits, backend=backend, shots=shots) results = job.result() print("Experimental Result of Encode01DecodeFirst") #We should measure "0" with probability 0.85 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode01DecodeFirst")])) print("Experimental Result of Encode01DecodeSecond") #We should measure "1" with probability 0.85 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode01DecodeSecond")])) print("Experimental Result of Encode11DecodeFirst") #We should measure "1" with probability 0.85 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode11DecodeFirst")])) print("Experimental Result of Encode11DecodeSecond") #We should measure "1" with probability 0.85 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode11DecodeSecond")])) """ Explanation: Now, we can perform various experiments of $(2,1)$-QRAC. Below, we execute all circuits of QRACs and plot some experimental results. End of explanation """ %%qiskit_job_status # Use the IBM Quantum Experience backend = least_busy(IBMQ.backends(simulator=False)) job_exp = execute(circuits, backend=backend, shots=shots) results = job_exp.result() print("Experimental Result of Encode01DecodeFirst") #We should measure "0" with probability 0.85 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode01DecodeFirst")])) print("Experimental Result of Encode01DecodeSecond") #We should measure "1" with probability 0.85 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode01DecodeSecond")])) """ Explanation: From the above simulations, we can see that each of the encoded bits can be decoded with probability closed to the theoretical values. We can now proceed to perform experiments on the real devices or local simulator, as below. 
End of explanation """ backend = Aer.get_backend('qasm_simulator') # the device to run on shots = 1024 # the number of shots in the experiment from math import sqrt, cos, acos #compute the value of theta theta = acos(sqrt(0.5 + sqrt(3.0)/6.0)) #to record the u3 parameters for encoding 000, 010, 100, 110, 001, 011, 101, 111 rotationParams = {"000":(2*theta, pi/4, -pi/4), "010":(2*theta, 3*pi/4, -3*pi/4), "100":(pi-2*theta, pi/4, -pi/4), "110":(pi-2*theta, 3*pi/4, -3*pi/4), "001":(2*theta, -pi/4, pi/4), "011":(2*theta, -3*pi/4, 3*pi/4), "101":(pi-2*theta, -pi/4, pi/4), "111":(pi-2*theta, -3*pi/4, 3*pi/4)} # Creating registers # qubit for encoding 3 bits of information qr = QuantumRegister(1) # bit for recording the measurement of the qubit cr = ClassicalRegister(1) # dictionary for encoding circuits encodingCircuits = {} # Quantum circuits for encoding 000, ..., 111 for bits in rotationParams.keys(): circuitName = "Encode"+bits encodingCircuits[circuitName] = QuantumCircuit(qr, cr, name=circuitName) encodingCircuits[circuitName].u3(*rotationParams[bits], qr[0]) encodingCircuits[circuitName].barrier() # dictionary for decoding circuits decodingCircuits = {} # Quantum circuits for decoding the first, second and third bit for pos in ("First", "Second", "Third"): circuitName = "Decode"+pos decodingCircuits[circuitName] = QuantumCircuit(qr, cr, name=circuitName) if pos == "Second": #if pos == "First" we can directly measure decodingCircuits[circuitName].h(qr[0]) elif pos == "Third": decodingCircuits[circuitName].u3(pi/2, -pi/2, pi/2, qr[0]) decodingCircuits[circuitName].measure(qr[0], cr[0]) #combine encoding and decoding of QRACs to get a list of complete circuits circuitNames = [] circuits = [] for k1 in encodingCircuits.keys(): for k2 in decodingCircuits.keys(): circuitNames.append(k1+k2) circuits.append(encodingCircuits[k1]+decodingCircuits[k2]) print("List of circuit names:", circuitNames) #list of circuit names """ Explanation: Encoding 3 bits into 1 qubit with $(3,1)$-QRAC We follow Example 2 described in the paper here. Alice encodes her $3$ bits $x_1x_2x_3$ by preparing the following 1-qubit states $|\phi_{x_1x_2x_3}\rangle$ as below, where $\cos^2\left(\theta\right) = 1/2 + \sqrt{3}/6 > 0.788$. \begin{eqnarray} |\phi_{000}\rangle &=& \cos\left(\theta\right)|0\rangle + e^{i\pi/4}\sin\left(\theta\right)|1\rangle\ |\phi_{001}\rangle &=& \cos\left(\theta\right)|0\rangle + e^{-i\pi/4}\sin\left(\theta\right)|1\rangle\ |\phi_{010}\rangle &=& \cos\left(\theta\right)|0\rangle + e^{3i\pi/4}\sin\left(\theta\right)|1\rangle\ |\phi_{011}\rangle &=& \cos\left(\theta\right)|0\rangle + e^{-3i\pi/4}\sin\left(\theta\right)|1\rangle\ |\phi_{100}\rangle &=& \sin\left(\theta\right)|0\rangle + e^{i\pi/4}\cos\left(\theta\right)|1\rangle\ |\phi_{101}\rangle &=& \sin\left(\theta\right)|0\rangle + e^{-i\pi/4}\cos\left(\theta\right)|1\rangle\ |\phi_{110}\rangle &=& \sin\left(\theta\right)|0\rangle + e^{3i\pi/4}\cos\left(\theta\right)|1\rangle\ |\phi_{111}\rangle &=& \sin\left(\theta\right)|0\rangle + e^{-i\pi/4}\cos\left(\theta\right)|1\rangle\ \end{eqnarray} Bob recovers his choice of bit by measuring the qubit similarly as in the $(2,1,0.85)$-QRAC for the first and second bit, and to recover the third bit, he measures the qubit in the $\left{|+'\rangle, |-'\rangle\right}$ basis, where $|+'\rangle = 1/\sqrt{2}\left(|0\rangle + i |1\rangle\right)$, and $|-'\rangle = 1/\sqrt{2}\left(|0\rangle - i |1\rangle\right)$. 
Intuitively, the encoding of $(3,1)$-QRAC correspond to assigning the $8$ states to the corners of the unit cube inside the Bloch Sphere as depicted in the figure below. <img src="../../images/blochsphere31.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="300 px" align="center"> Below is the code to create quantum circuits for performing experiments of $(3,1)$-QRAC. Similarly to the $(2,1)$-QRAC, each of the circuits consists of encoding $3$ bits of information into $1$ qubit and decoding either the first, the second, or the third bit by performing measurement on the qubit. End of explanation """ job = execute(circuits, backend=backend, shots=shots) results = job.result() print("Experimental Result of Encode010DecodeFirst") # We should measure "0" with probability 0.78 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode010DecodeFirst")])) print("Experimental Result of Encode010DecodeSecond") # We should measure "1" with probability 0.78 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode010DecodeSecond")])) print("Experimental Result of Encode010DecodeThird") # We should measure "0" with probability 0.78 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode010DecodeThird")])) """ Explanation: Now, we can perform various experiments of $(3,1)$-QRAC. Below, we execute all circuits of QRACs and plot some experimental results. End of explanation """ %%qiskit_job_status # Use the IBM Quantum Experience backend = least_busy(IBMQ.backends(simulator=False)) job_exp = execute(circuits, backend=backend, shots=shots) results = job_exp.result() print("Experimental Result of Encode010DecodeFirst") plot_histogram(results.get_counts(circuits[circuitNames.index("Encode010DecodeFirst")])) #We should measure "0" with probability 0.78 print("Experimental Result of Encode010DecodeSecond") #We should measure "1" with probability 0.78 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode010DecodeSecond")])) print("Experimental Result of Encode010DecodeThird") #We should measure "0" with probability 0.78 plot_histogram(results.get_counts(circuits[circuitNames.index("Encode010DecodeThird")])) """ Explanation: Now, we can perform various experiments of $(3,1)$-QRAC. Below, we execute all circuits of QRACs and plot some experimental results. End of explanation """
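A small post-processing sketch (an addition, not in the original notebook) can turn the raw counts into empirical success probabilities by reading the encoded bits and the decoded position off each circuit name; it reuses the circuitNames, circuits, results and shots objects defined above.

```python
# Sketch: empirical decoding success probability per (3,1)-QRAC circuit.
positions = {"First": 0, "Second": 1, "Third": 2}
for name, circuit in zip(circuitNames, circuits):
    bits = name[len("Encode"):len("Encode") + 3]        # e.g. "010"
    pos = name[len("Encode") + 3 + len("Decode"):]      # e.g. "First"
    expected = bits[positions[pos]]
    counts = results.get_counts(circuit)
    success = counts.get(expected, 0) / float(shots)
    print("%s: observed success probability %.3f (theory ~0.78)" % (name, success))
```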
GoogleCloudPlatform/covid-19-open-data
examples/logistic_modeling.ipynb
apache-2.0
ESTIMATE_DAYS = 3 data_key = 'KR' date_limit = '2020-03-18' import pandas as pd import seaborn as sns sns.set() df = pd.read_csv(f'https://storage.googleapis.com/covid19-open-data/v3/location/{data_key}.csv').set_index('date') """ Explanation: Logistic Modeling of COVID-19 Confirmed Cases This notebook explores modeling the spread of COVID-19 confirmed cases as a logistic function. It compares the accuracy of two sigmoid models: simple logistic function and Gompertz function, and finds the Gompertz function to be a fairly accurate short-term predictor of future confirmed cases. Defining our parameters and loading the data Here we are looking at the confirmed and fatal cases for Korea through March 18. To apply the model to other countries or dates, just change the code below. End of explanation """ def get_outbreak_mask(data: pd.DataFrame, threshold: int = 10): ''' Returns a mask for > N confirmed cases ''' return data['total_confirmed'] > threshold cols = ['total_confirmed', 'total_deceased'] # Get data only for the columns we care about df = df[cols] # Get data only for the selected dates df = df[df.index <= date_limit] # Get data only after the outbreak begun df = df[get_outbreak_mask(df)] """ Explanation: Looking at the outbreak There are months of data, but we only care about when the number of cases started to grow. We define outbreak as whenever the number of cases exceeded certain threshold. In this case, we are using 10. End of explanation """ df.plot(kind='bar', figsize=(16, 8)); """ Explanation: Plotting the data Let's take a first look at the data. A visual inspection will typically give us a lot of information. End of explanation """ import math import numpy as np from scipy import optimize def logistic_function(x: float, a: float, b: float, c: float): ''' 1 / (1 + e^-x) ''' return a / (1.0 + np.exp(-b * (x - c))) X, y = list(range(len(df))), df['total_confirmed'].tolist() # Providing a reasonable initial guess is crucial for this model params, _ = optimize.curve_fit(logistic_function, X, y, maxfev=int(1E5), p0=[max(y), 1, np.median(X)]) print('Estimated function: {0:.3f} / (1 + e^({1:.3f} * (X - {2:.3f}))'.format(*params)) confirmed = df[['total_confirmed']].rename(columns={'total_confirmed': 'Ground Truth'}) ax = confirmed.plot(kind='bar', figsize=(16, 8)) estimate = [logistic_function(x, *params) for x in X] ax.plot(df.index, estimate, color='red', label='Estimate') ax.legend(); """ Explanation: Modeling the data The data appears to follow very closely a sigmoid function (S-shaped curve). Logically, it makes sense: by the time the outbreak is discovered, there are many undiagnosed (and even asymptomatic) cases which lead to very rapid initial growth; later on, after a combination of aggressive measures to avoid further spread and immunity developed by potential hosts, the growth becomes much slower. 
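To make the shape concrete, the three-parameter curve being fit here is f(x) = a / (1 + e^(-b * (x - c))). The gloss on the parameters is an addition, not part of the original notebook, but it follows directly from the code: a is the plateau that the cumulative count levels off at, b controls how quickly the curve rises, and c is the day of fastest growth (the inflection point, where the count reaches a / 2).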
Let's see if we can model it using some parameter fitting:
End of explanation
"""
def logistic_function(x: float, a: float, b: float, c: float):
    ''' a * e^(-b * e^(-cx)) '''
    # Note: this redefines logistic_function with the Gompertz form,
    # which is what the rest of the notebook uses from here on.
    return a * np.exp(-b * np.exp(-c * x))

X, y = list(range(len(df))), df['total_confirmed'].tolist()

# Providing a reasonable initial guess is crucial for this model
params, _ = optimize.curve_fit(logistic_function, X, y, maxfev=int(1E5), p0=[max(y), np.median(X), .1])
print('Estimated function: {0:.3f} * e^(-{1:.3f} * e^(-{2:.3f}X))'.format(*params))

confirmed = df[['total_confirmed']].rename(columns={'total_confirmed': 'Ground Truth'})
ax = confirmed.plot(kind='bar', figsize=(16, 8))
estimate = [logistic_function(x, *params) for x in X]
ax.plot(df.index, estimate, color='red', label='Estimate')
ax.legend();
"""
Explanation: Gompertz function
While the simple logistic function provides a reasonably good fit, it appears to underestimate the growth rate after the initial outbreak. A better fit might be the Gompertz function, an asymmetric logistic function whose growth decays more slowly before the curve goes flat over time. Let's take a look at using this new function to find the best parameters that fit the data:
End of explanation
"""
params_validate, _ = optimize.curve_fit(logistic_function, X[:-ESTIMATE_DAYS], y[:-ESTIMATE_DAYS])

# Project zero for all values except for the last ESTIMATE_DAYS
projected = [0] * len(X[:-ESTIMATE_DAYS]) + [logistic_function(x, *params_validate) for x in X[-ESTIMATE_DAYS:]]
projected = pd.Series(projected, index=df.index, name='Projected')

confirmed = pd.DataFrame({'Ground Truth': df['total_confirmed'], 'Projected': projected})
ax = confirmed.plot(kind='bar', figsize=(16, 8))
estimate = [logistic_function(x, *params_validate) for x in X]
ax.plot(df.index, estimate, color='red', label='Estimate')
ax.legend();
"""
Explanation: Evaluating the model
That curve looks like a very good fit! Traditional epidemiology models generally capture a number of different parameters representing biology and social factors; however, the COVID-19 pandemic might be very challenging to fit for traditional models for a number of reasons:
* It's a completely new disease, never seen before
* Unprecedented, very aggressive measures have been taken by many nations to try to stop the spread of the disease
* Testing has been held back by a combination of shortage of tests and political reasons
If a known model is not being used, then a simpler model is more likely to be a better fit; too many parameters have a tendency to overfit the data, which diminishes the model's ability to make predictions. In other words, the model may appear to be able to perfectly follow known data, but when asked to make a prediction about future data it will likely be wrong. This is one of the main reasons why machine learning is not a good tool for this task, since there is not enough data to avoid overfitting a model.
Validating the model
To validate our model, let's try to fit it again without looking at the last 3 days of data. Then, we can estimate the missing days using our model, and verify whether the results still hold by comparing what the model thought was going to happen with the actual data.
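To put a number on that comparison rather than just eyeballing the bars (this check is an addition to the original notebook), we can compute the average relative error of the projection over the held-out days, reusing params_validate and the X, y, ESTIMATE_DAYS values defined above:
holdout_error = np.mean([
    abs(logistic_function(x, *params_validate) - actual) / actual
    for x, actual in zip(X[-ESTIMATE_DAYS:], y[-ESTIMATE_DAYS:])
])
print('Mean relative error over the last {} days: {:.1%}'.format(ESTIMATE_DAYS, holdout_error))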
End of explanation """ import datetime # Append N new days to our indices date_format = '%Y-%m-%d' date_range = [datetime.datetime.strptime(date, date_format) for date in df.index] for _ in range(ESTIMATE_DAYS): date_range.append(date_range[-1] + datetime.timedelta(days=1)) date_range = [datetime.datetime.strftime(date, date_format) for date in date_range] # Perform projection with the previously estimated parameters projected = [0] * len(X) + [logistic_function(x, *params) for x in range(len(X), len(X) + ESTIMATE_DAYS)] projected = pd.Series(projected, index=date_range, name='Projected') df_ = pd.DataFrame({'Confirmed': df['total_confirmed'], 'Projected': projected}) ax = df_.plot(kind='bar', figsize=(16, 8)) estimate = [logistic_function(x, *params) for x in range(len(date_range))] ax.plot(date_range, estimate, color='red', label='Estimate') ax.legend(); """ Explanation: Projecting future data It looks like our logistic model slightly underestimates the confirmed cases. This indicates that the model is optimistic about the slowdown of new cases being reported. A number of factors could affect this, like wider availability of tests. Ultimately, it is also possible that the logistic model is not an appropriate function to use. However, the predictions are close enough to the real data that this is probably a good starting point for a rough estimate over a short time horizon. Now, let's use the model we fitted earlier which used all the data, and try to predict what the next 3 days will look like. End of explanation """
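# A possible refactor (an addition, not part of the original notebook): wrap the
# download / filter / fit / project steps above into one helper so other location keys
# can be tried with a single call. It reuses logistic_function (the Gompertz form),
# get_outbreak_mask, ESTIMATE_DAYS and the imports defined earlier in this notebook.
def fit_and_project(data_key: str, date_limit: str, days_ahead: int = ESTIMATE_DAYS):
    url = f'https://storage.googleapis.com/covid19-open-data/v3/location/{data_key}.csv'
    data = pd.read_csv(url).set_index('date')
    data = data[data.index <= date_limit]
    data = data[get_outbreak_mask(data)]
    X_, y_ = list(range(len(data))), data['total_confirmed'].tolist()
    params_, _ = optimize.curve_fit(logistic_function, X_, y_,
                                    maxfev=int(1E5), p0=[max(y_), np.median(X_), .1])
    projection = [logistic_function(x, *params_) for x in range(len(X_) + days_ahead)]
    return data, params_, projection

# Example usage -- 'JP' is assumed to be another valid location key in the dataset:
# df_jp, params_jp, proj_jp = fit_and_project('JP', date_limit)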
niazangels/CADL
session-3/lecture-3.ipynb
apache-2.0
# imports %matplotlib inline # %pylab osx import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors import matplotlib.cm as cmx # Some additional libraries which we'll use just # to produce some visualizations of our training from libs.utils import montage from libs import gif import IPython.display as ipyd plt.style.use('ggplot') # Bit of formatting because I don't like the default inline code style: from IPython.core.display import HTML HTML("""<style> .rendered_html code { padding: 2px 4px; color: #c7254e; background-color: #f9f2f4; border-radius: 4px; } </style>""") """ Explanation: Session 3: Unsupervised and Supervised Learning <p class="lead"> Parag K. Mital<br /> <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br /> <a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br /> <a href="https://twitter.com/hashtag/CADL">#CADL</a> </p> <a name="learning-goals"></a> Learning Goals Build an autoencoder w/ linear and convolutional layers Understand how one hot encodings work Build a classification network w/ linear and convolutional layers <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> Introduction Unsupervised vs. Supervised Learning Autoencoders MNIST Fully Connected Model Convolutional Autoencoder Denoising Autoencoder Variational Autoencoders Predicting Image Labels One-Hot Encoding Using Regression for Classification Fully Connected Network Convolutional Networks Saving/Loading Models Checkpoint Protobuf Wrap Up Reading <!-- /MarkdownTOC --> <a name="introduction"></a> Introduction In the last session we created our first neural network. We saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network <TODO: Insert animation of gradient descent from previous session>. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. <TODO: Insert graphic of activation functions from previous session>. We then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image. In this session, we'll see how to use some simple deep nets with about 3 or 4 layers capable of performing unsupervised and supervised learning, and I'll explain those terms in a bit. The components we learn here will let us explore data in some very interesting ways. <a name="unsupervised-vs-supervised-learning"></a> Unsupervised vs. Supervised Learning Machine learning research in deep networks performs one of two types of learning. You either have a lot of data and you want the computer to reason about it, maybe to encode the data using less data, and just explore what patterns there might be. That's useful for clustering data, reducing the dimensionality of the data, or even for generating new data. That's generally known as unsupervised learning. In the supervised case, you actually know what you want out of your data. You have something like a label or a class that is paired with every single piece of data. In this first half of this session, we'll see how unsupervised learning works using something called an autoencoder and how it can be extended using convolution.. 
Then we'll get into supervised learning and show how we can build networks for performing regression and classification. By the end of this session, hopefully all of that will make a little more sense. Don't worry if it doesn't yet! Really the best way to learn is to put this stuff into practice in the homeworks. <a name="autoencoders"></a> Autoencoders <TODO: Graphic of autoencoder network diagram> An autoencoder is a type of neural network that learns to encode its inputs, often using much less data. It does so in a way that it can still output the original input with just the encoded values. For it to learn, it does not require "labels" as its output. Instead, it tries to output whatever it was given as input. So in goes an image, and out should also go the same image. But it has to be able to retain all the details of the image, even after possibly reducing the information down to just a few numbers. We'll also explore how this method can be extended and used to cluster or organize a dataset, or to explore latent dimensions of a dataset that explain some interesting ideas. For instance, we'll see how with handwritten numbers, we will be able to see how each number can be encoded in the autoencoder without ever telling it which number is which. <TODO: place teaser of MNIST video learning> But before we get there, we're going to need to develop an understanding of a few more concepts. First, imagine a network that takes as input an image. The network can be composed of either matrix multiplications or convolutions to any number of filters or dimensions. At the end of any processing, the network has to be able to recompose the original image it was input. In the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Instead if having 2 inputs, we'll now have an entire image as an input, the brightness of every pixel in our image. And as output, we're going to have the same thing, the entire image being output. <a name="mnist"></a> MNIST Let's first get some standard imports: End of explanation """ from libs.datasets import MNIST ds = MNIST() """ Explanation: Then we're going to try this with the MNIST dataset, which I've included a simple interface for in the libs module. End of explanation """ # ds.<tab> """ Explanation: Let's take a look at what this returns: End of explanation """ print(ds.X.shape) """ Explanation: So we can see that there are a few interesting accessors. ... we're not going to worry about the labels until a bit later when we talk about a different type of model which can go from the input image to predicting which label the image is. But for now, we're going to focus on trying to encode the image and be able to reconstruct the image from our encoding. let's take a look at the images which are stored in the variable X. Remember, in this course, we'll always use the variable X to denote the input to a network. and we'll use the variable Y to denote its output. End of explanation """ plt.imshow(ds.X[0].reshape((28, 28))) # Let's get the first 1000 images of the dataset and reshape them imgs = ds.X[:1000].reshape((-1, 28, 28)) # Then create a montage and draw the montage plt.imshow(montage(imgs), cmap='gray') """ Explanation: So each image has 784 features, and there are 70k of them. If we want to draw the image, we're going to have to reshape it to a square. 28 x 28 is 784. 
So we're just going to reshape it to a square so that we can see all the pixels arranged in rows and columns instead of one giant vector. End of explanation """ # Take the mean across all images mean_img = np.mean(ds.X, axis=0) # Then plot the mean image. plt.figure() plt.imshow(mean_img.reshape((28, 28)), cmap='gray') """ Explanation: Let's take a look at the mean of the dataset: End of explanation """ # Take the std across all images std_img = np.std(ds.X, axis=0) # Then plot the std image. plt.figure() plt.imshow(std_img.reshape((28, 28))) """ Explanation: And the standard deviation End of explanation """ dimensions = [512, 256, 128, 64] """ Explanation: So recall from session 1 that these two images are really saying whats more or less contant across every image, and what's changing. We're going to try and use an autoencoder to try to encode everything that could possibly change in the image. <a name="fully-connected-model"></a> Fully Connected Model To try and encode our dataset, we are going to build a series of fully connected layers that get progressively smaller. So in neural net speak, every pixel is going to become its own input neuron. And from the original 784 neurons, we're going to slowly reduce that information down to smaller and smaller numbers. It's often standard practice to use other powers of 2 or 10. I'll create a list of the number of dimensions we'll use for each new layer. End of explanation """ # So the number of features is the second dimension of our inputs matrix, 784 n_features = ds.X.shape[1] # And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs. X = tf.placeholder(tf.float32, [None, n_features]) """ Explanation: So we're going to reduce our 784 dimensions down to 512 by multiplyling them by a 784 x 512 dimensional matrix. Then we'll do the same thing again using a 512 x 256 dimensional matrix, to reduce our dimensions down to 256 dimensions, and then again to 128 dimensions, then finally to 64. To get back to the size of the image, we're going to just going to do the reverse. But we're going to use the exact same matrices. We do that by taking the transpose of the matrix, which reshapes the matrix so that the rows become columns, and vice-versa. So our last matrix which was 128 rows x 64 columns, when transposed, becomes 64 rows x 128 columns. So by sharing the weights in the network, we're only really learning half of the network, and those 4 matrices are going to make up the bulk of our model. We just have to find out what they are using gradient descent. We're first going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. We're going to pass our entire dataset in minibatches. So we'll send 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible in the graph. 
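As a tiny illustration of that flexibility (an aside, not part of the lecture code), the very same placeholder can be fed a single image or a whole minibatch, and the rest of the graph adapts:
p = tf.placeholder(tf.float32, [None, 784])
doubled = p * 2.0
with tf.Session() as s:
    one = s.run(doubled, feed_dict={p: np.zeros((1, 784))})     # a single image
    many = s.run(doubled, feed_dict={p: np.zeros((100, 784))})  # a minibatch of 100
    print(one.shape, many.shape)                                # (1, 784) (100, 784)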
End of explanation """ # let's first copy our X placeholder to the name current_input current_input = X n_input = n_features # We're going to keep every matrix we create so let's create a list to hold them all Ws = [] # We'll create a for loop to create each layer: for layer_i, n_output in enumerate(dimensions): # just like in the last session, # we'll use a variable scope to help encapsulate our variables # This will simply prefix all the variables made in this scope # with the name we give it. with tf.variable_scope("encoder/layer/{}".format(layer_i)): # Create a weight matrix which will increasingly reduce # down the amount of information in the input by performing # a matrix multiplication W = tf.get_variable( name='W', shape=[n_input, n_output], initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02)) # Now we'll multiply our input by our newly created W matrix # and add the bias h = tf.matmul(current_input, W) # And then use a relu activation function on its output current_input = tf.nn.relu(h) # Finally we'll store the weight matrix so we can build the decoder. Ws.append(W) # We'll also replace n_input with the current n_output, so that on the # next iteration, our new number inputs will be correct. n_input = n_output """ Explanation: Now we're going to create a network which will perform a series of multiplications on X, followed by adding a bias, and then wrapping all of this in a non-linearity: End of explanation """ print(current_input.get_shape()) """ Explanation: So now we've created a series of multiplications in our graph which take us from our input of batch size times number of features which started as None x 784, and then we're multiplying it by a series of matrices which will change the size down to None x 64. End of explanation """ # We'll first reverse the order of our weight matrices Ws = Ws[::-1] # then reverse the order of our dimensions # appending the last layers number of inputs. dimensions = dimensions[::-1][1:] + [ds.X.shape[1]] print(dimensions) for layer_i, n_output in enumerate(dimensions): # we'll use a variable scope again to help encapsulate our variables # This will simply prefix all the variables made in this scope # with the name we give it. with tf.variable_scope("decoder/layer/{}".format(layer_i)): # Now we'll grab the weight matrix we created before and transpose it # So a 3072 x 784 matrix would become 784 x 3072 # or a 256 x 64 matrix, would become 64 x 256 W = tf.transpose(Ws[layer_i]) # Now we'll multiply our input by our transposed W matrix h = tf.matmul(current_input, W) # And then use a relu activation function on its output current_input = tf.nn.relu(h) # We'll also replace n_input with the current n_output, so that on the # next iteration, our new number inputs will be correct. n_input = n_output """ Explanation: In order to get back to the original dimensions of the image, we're going to reverse everything we just did. Let's see how we do that: End of explanation """ Y = current_input """ Explanation: After this, our current_input will become the output of the network: End of explanation """ # We'll first measure the average difference across every pixel cost = tf.reduce_mean(tf.squared_difference(X, Y), 1) print(cost.get_shape()) """ Explanation: Now that we have the output of the network, we just need to define a training signal to train the network with. 
To do that, we create a cost function which will measure how well the network is doing: End of explanation """ cost = tf.reduce_mean(cost) """ Explanation: And then take the mean again across batches: End of explanation """ learning_rate = 0.001 optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) """ Explanation: We can now train our network just like we did in the last session. We'll need to create an optimizer which takes a parameter learning_rate. And we tell it that we want to minimize our cost, which is measuring the difference between the output of the network and the input. End of explanation """ # %% # We create a session to use the graph sess = tf.Session() sess.run(tf.global_variables_initializer()) """ Explanation: Now we'll create a session to manage the training in minibatches: End of explanation """ # Some parameters for training batch_size = 100 n_epochs = 5 # We'll try to reconstruct the same first 100 images and show how # The network does over the course of training. examples = ds.X[:100] # We'll store the reconstructions in a list imgs = [] fig, ax = plt.subplots(1, 1) for epoch_i in range(n_epochs): for batch_X, _ in ds.train.next_batch(): sess.run(optimizer, feed_dict={X: batch_X - mean_img}) recon = sess.run(Y, feed_dict={X: examples - mean_img}) recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255) img_i = montage(recon).astype(np.uint8) imgs.append(img_i) ax.imshow(img_i, cmap='gray') fig.canvas.draw() print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img})) gif.build_gif(imgs, saveto='ae.gif', cmap='gray') ipyd.Image(url='ae.gif?{}'.format(np.random.rand()), height=500, width=500) """ Explanation: Now we'll train: End of explanation """ from tensorflow.python.framework.ops import reset_default_graph reset_default_graph() # And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs. X = tf.placeholder(tf.float32, [None, n_features]) """ Explanation: <a name="convolutional-autoencoder"></a> Convolutional Autoencoder To get even better encodings, we can also try building a convolutional network. Why would a convolutional network perform any different to a fully connected one? Let's see what we were doing in the fully connected network. For every pixel in our input, we have a set of weights corresponding to every output neuron. Those weights are unique to each pixel. Each pixel gets its own row in the weight matrix. That really doesn't make a lot of sense, since we would guess that nearby pixels are probably not going to be so different. And we're not really encoding what's happening around that pixel, just what that one pixel is doing. In a convolutional model, we're explicitly modeling what happens around a pixel. And we're using the exact same convolutions no matter where in the image we are. But we're going to use a lot of different convolutions. Recall in session 1 we created a Gaussian and Gabor kernel and used this to convolve an image to either blur it or to accentuate edges. Armed with what you know now, you could try to train a network to learn the parameters that map an untouched image to a blurred or edge filtered version of it. What you should find is the kernel will look sort of what we built by hand. I'll leave that as an excercise for you. But in fact, that's too easy really. That's just 1 filter you would have to learn. We're going to see how we can use many convolutional filters, way more than 1, and how it will help us to encode the MNIST dataset. 
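If you want to attempt the blur-filter exercise mentioned a few paragraphs above, a minimal sketch could look like the following. This is an aside rather than lecture code, and it assumes img and img_blurred are numpy arrays of shape [1, H, W, 1] that you prepare yourself (for instance with the Gaussian kernel from session 1):
x_in = tf.placeholder(tf.float32, [1, None, None, 1])
y_true = tf.placeholder(tf.float32, [1, None, None, 1])
kernel = tf.get_variable('learned_kernel', shape=[5, 5, 1, 1],
                         initializer=tf.random_normal_initializer(stddev=0.02))
y_out = tf.nn.conv2d(x_in, kernel, strides=[1, 1, 1, 1], padding='SAME')
loss = tf.reduce_mean(tf.squared_difference(y_out, y_true))
train_step = tf.train.AdamOptimizer(0.01).minimize(loss)
with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    for it_i in range(100):
        s.run(train_step, feed_dict={x_in: img, y_true: img_blurred})
After training, visualizing the learned kernel should give you something close to the hand-built filter that produced the target image.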
To begin we'll need to reset the current graph and start over. End of explanation """ X_tensor = tf.reshape(X, [-1, 28, 28, 1]) """ Explanation: Since X is currently [batch, height*width], we need to reshape it to a 4-D tensor to use it in a convolutional graph. Remember back to the first session that in order to perform convolution, we have to use 4-dimensional tensors describing the: N x H x W x C We'll reshape our input placeholder by telling the shape parameter to be these new dimensions. However, since our batch dimension is None, we cannot reshape without using the special value -1, which says that the size of that dimension should be computed so that the total size remains constant. Since we haven't defined the batch dimension's shape yet, we use -1 to denote this dimension should not change size. End of explanation """ n_filters = [16, 16, 16] filter_sizes = [4, 4, 4] """ Explanation: We'll now setup the first convolutional layer. Remember from Session 2 that the weight matrix for convolution should be [height x width x input_channels x output_channels] Think a moment about how this is different to the fully connected network. In the fully connected network, every pixel was being multiplied by its own weight to every other neuron. With a convolutional network, we use the extra dimensions to allow the same set of filters to be applied everywhere across an image. This is also known in the literature as weight sharing, since we're sharing the weights no matter where in the input we are. That's unlike the fully connected approach, which has unique weights for every pixel. What's more is after we've performed the convolution, we've retained the spatial organization of the input. We still have dimensions of height and width. That's again unlike the fully connected network which effectively shuffles or takes int account information from everywhere, not at all caring about where anything is. That can be useful or not depending on what we're trying to achieve. Often, it is something we might want to do after a series of convolutions to encode translation invariance. Don't worry about that for now. With MNIST especially we won't need to do that since all of the numbers are in the same position. Now with our tensor ready, we're going to do what we've just done with the fully connected autoencoder. Except, instead of performing matrix multiplications, we're going to create convolution operations. To do that, we'll need to decide on a few parameters including the filter size, how many convolution filters we want, and how many layers we want. I'll start with a fairly small network, and let you scale this up in your own time. End of explanation """ current_input = X_tensor # notice instead of having 784 as our input features, we're going to have # just 1, corresponding to the number of channels in the image. # We're going to use convolution to find 16 filters, or 16 channels of information # in each spatial location we perform convolution at. n_input = 1 # We're going to keep every matrix we create so let's create a list to hold them all Ws = [] shapes = [] # We'll create a for loop to create each layer: for layer_i, n_output in enumerate(n_filters): # just like in the last session, # we'll use a variable scope to help encapsulate our variables # This will simply prefix all the variables made in this scope # with the name we give it. 
with tf.variable_scope("encoder/layer/{}".format(layer_i)): # we'll keep track of the shapes of each layer # As we'll need these for the decoder shapes.append(current_input.get_shape().as_list()) # Create a weight matrix which will increasingly reduce # down the amount of information in the input by performing # a matrix multiplication W = tf.get_variable( name='W', shape=[ filter_sizes[layer_i], filter_sizes[layer_i], n_input, n_output], initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02)) # Now we'll convolve our input by our newly created W matrix h = tf.nn.conv2d(current_input, W, strides=[1, 2, 2, 1], padding='SAME') # And then use a relu activation function on its output current_input = tf.nn.relu(h) # Finally we'll store the weight matrix so we can build the decoder. Ws.append(W) # We'll also replace n_input with the current n_output, so that on the # next iteration, our new number inputs will be correct. n_input = n_output """ Explanation: Now we'll create a loop to create every layer's convolution, storing the convolution operations we create so that we can do the reverse. End of explanation """ # We'll first reverse the order of our weight matrices Ws.reverse() # and the shapes of each layer shapes.reverse() # and the number of filters (which is the same but could have been different) n_filters.reverse() # and append the last filter size which is our input image's number of channels n_filters = n_filters[1:] + [1] print(n_filters, filter_sizes, shapes) # and then loop through our convolution filters and get back our input image # we'll enumerate the shapes list to get us there for layer_i, shape in enumerate(shapes): # we'll use a variable scope to help encapsulate our variables # This will simply prefix all the variables made in this scope # with the name we give it. with tf.variable_scope("decoder/layer/{}".format(layer_i)): # Create a weight matrix which will increasingly reduce # down the amount of information in the input by performing # a matrix multiplication W = Ws[layer_i] # Now we'll convolve by the transpose of our previous convolution tensor h = tf.nn.conv2d_transpose(current_input, W, tf.stack([tf.shape(X)[0], shape[1], shape[2], shape[3]]), strides=[1, 2, 2, 1], padding='SAME') # And then use a relu activation function on its output current_input = tf.nn.relu(h) """ Explanation: Now with our convolutional encoder built and the encoding weights stored, we'll reverse the whole process to decode everything back out to the original image. End of explanation """ Y = current_input Y = tf.reshape(Y, [-1, n_features]) Y.get_shape() """ Explanation: Now we have the reconstruction through the network: End of explanation """ cost = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(X, Y), 1)) learning_rate = 0.001 # pass learning rate and cost to optimize optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) # Session to manage vars/train sess = tf.Session() sess.run(tf.global_variables_initializer()) # Some parameters for training batch_size = 100 n_epochs = 5 # We'll try to reconstruct the same first 100 images and show how # The network does over the course of training. 
examples = ds.X[:100] # We'll store the reconstructions in a list imgs = [] fig, ax = plt.subplots(1, 1) for epoch_i in range(n_epochs): for batch_X, _ in ds.train.next_batch(): sess.run(optimizer, feed_dict={X: batch_X - mean_img}) recon = sess.run(Y, feed_dict={X: examples - mean_img}) recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255) img_i = montage(recon).astype(np.uint8) imgs.append(img_i) ax.imshow(img_i, cmap='gray') fig.canvas.draw() print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img})) gif.build_gif(imgs, saveto='conv-ae.gif', cmap='gray') ipyd.Image(url='conv-ae.gif?{}'.format(np.random.rand()), height=500, width=500) """ Explanation: We can measure the cost and train exactly like before with the fully connected network: End of explanation """ from libs import datasets # ds = datasets.MNIST(one_hot=True) """ Explanation: <a name="denoising-autoencoder"></a> Denoising Autoencoder The denoising autoencoder is a very simple extension to an autoencoder. Instead of seeing the input, it is corrupted, for instance by masked noise. but the reconstruction loss is still measured on the original uncorrupted image. What this does is lets the model try to interpret occluded or missing parts of the thing it is reasoning about. It would make sense for many models, that not every datapoint in an input is necessary to understand what is going on. Denoising autoencoders try to enforce that, and as a result, the encodings at the middle most layer are often far more representative of the actual classes of different objects. In the resources section, you'll see that I've included a general framework autoencoder allowing you to use either a fully connected or convolutional autoencoder, and whether or not to include denoising. If you interested in the mechanics of how this works, I encourage you to have a look at the code. <a name="variational-autoencoders"></a> Variational Autoencoders A variational autoencoder extends the traditional autoencoder by using an additional layer called the variational layer. It is actually two networks that are cleverly connected using a simple reparameterization trick, to help the gradient flow through both networks during backpropagation allowing both to be optimized. We dont' have enough time to get into the details, but I'll try to quickly explain: it tries to optimize the likelihood that a particular distribution would create an image, rather than trying to optimize simply the L2 loss at the end of the network. Or put another way it hopes that there is some distribution that a distribution of image encodings could be defined as. This is a bit tricky to grasp, so don't worry if you don't understand the details. The major difference to hone in on is that instead of optimizing distance in the input space of pixel to pixel distance, which is actually quite arbitrary if you think about it... why would we care about the exact pixels being the same? Human vision would not care for most cases, if there was a slight translation of our image, then the distance could be very high, but we would never be able to tell the difference. So intuitively, measuring error based on raw pixel to pixel distance is not such a great approach. Instead of relying on raw pixel differences, the variational autoencoder tries to optimize two networks. One which says that given my pixels, I am pretty sure I can encode them to the parameters of some well known distribution, like a set of Gaussians, instead of some artbitrary density of values. 
And then I can optimize the latent space, by saying that particular distribution should be able to represent my entire dataset, and I try to optimize the likelihood that it will create the images I feed through a network. So distance is somehow encoded in this latent space. Of course I appreciate that is a difficult concept so forgive me for not being able to expand on it in more details. But to make up for the lack of time and explanation, I've included this model under the resources section for you to play with! Just like the "vanilla" autoencoder, this one supports both fully connected, convolutional, and denoising models. This model performs so much better than the vanilla autoencoder. In fact, it performs so well that I can even manage to encode the majority of MNIST into 2 values. The following visualization demonstrates the learning of a variational autoencoder over time. <mnist visualization> There are of course a lot more interesting applications of such a model. You could for instance, try encoding a more interesting dataset, such as CIFAR which you'll find a wrapper for in the libs/datasets module. <TODO: produce GIF visualization madness> Or the celeb faces dataset: <celeb dataset> Or you could try encoding an entire movie. We tried it with the copyleft movie, "Sita Sings The Blues". Every 2 seconds, we stored an image of this movie, and then fed all of these images to a deep variational autoencoder. This is the result. <show sita sings the blues training images> And I'm sure we can get closer with deeper nets and more train time. But notice how in both celeb faces and sita sings the blues, the decoding is really blurred. That is because of the assumption of the underlying representational space. We're saying the latent space must be modeled as a gaussian, and those factors must be distributed as a gaussian. This enforces a sort of discretization of my representation, enforced by the noise parameter of the gaussian. In the last session, we'll see how we can avoid this sort of blurred representation and get even better decodings using a generative adversarial network. For now, consider the applications that this method opens up. Once you have an encoding of a movie, or image dataset, you are able to do some very interesting things. You have effectively stored all the representations of that movie, although its not perfect of course. But, you could for instance, see how another movie would be interpretted by the same network. That's similar to what Terrance Broad did for his project on reconstructing blade runner and a scanner darkly, though he made use of both the variational autoencoder and the generative adversarial network. We're going to look at that network in more detail in the last session. We'll also look at how to properly handle very large datasets like celeb faces or the one used here to create the sita sings the blues autoencoder. Taking every 60th frame of Sita Sings The Blues gives you aobut 300k images. And that's a lot of data to try and load in all at once. We had to size it down considerably, and make use of what's called a tensorflow input pipeline. I've included all the code for training this network, which took about 1 day on a fairly powerful machine, but I will not get into the details of the image pipeline bits until session 5 when we look at generative adversarial networks. I'm delaying this because we'll need to learn a few things along the way before we can build such a network. 
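Before moving on, here is a minimal sketch of the reparameterization trick mentioned above. It is an aside, not the course's implementation: h_enc is a stand-in for the output of whatever encoder you have built (assumed here to have 128 features), n_code is the size of the latent code, and biases are omitted for brevity:
n_code = 2
h_enc = tf.placeholder(tf.float32, [None, 128], name='h_enc')
W_mu = tf.get_variable('W_mu', shape=[128, n_code],
                       initializer=tf.random_normal_initializer(stddev=0.02))
W_ls = tf.get_variable('W_log_sigma', shape=[128, n_code],
                       initializer=tf.random_normal_initializer(stddev=0.02))
z_mu = tf.matmul(h_enc, W_mu)              # predicted mean of the latent code
z_log_sigma = tf.matmul(h_enc, W_ls)       # predicted log standard deviation
eps = tf.random_normal(tf.shape(z_mu))     # noise that does not depend on any parameters
z = z_mu + tf.exp(z_log_sigma) * eps       # sampled code, differentiable w.r.t. mu and sigma
kl = -0.5 * tf.reduce_sum(
    1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma), 1)
Because the randomness enters only through eps, gradients can flow through z_mu and z_log_sigma during backpropagation, which is what lets the two networks be optimized together; the kl term is what pushes the distribution of codes towards a standard Gaussian.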
<a name="predicting-image-labels"></a> Predicting Image Labels We've just seen a variety of types of autoencoders and how they are capable of compressing information down to its inner most layer while still being able to retain most of the interesting details. Considering that the CelebNet dataset was nearly 200 thousand images of 64 x 64 x 3 pixels, and we're able to express those with just an inner layer of 50 values, that's just magic basically. Magic. Okay, let's move on now to a different type of learning often called supervised learning. Unlike what we just did, which is work with a set of data and not have any idea what that data should be labeled as, we're going to explicitly tell the network what we want it to be labeled by saying what the network should output for a given input. In the previous cause, we just had a set of Xs, our images. Now, we're going to have Xs and Ys given to us, and use the Xs to try and output the Ys. With MNIST, the outputs of each image are simply what numbers are drawn in the input image. The wrapper for grabbing this dataset from the libs module takes an additional parameter which I didn't talk about called one_hot. End of explanation """ ds = datasets.MNIST(one_hot=False) # let's look at the first label print(ds.Y[0]) # okay and what does the input look like plt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray') # great it is just the label of the image plt.figure() # Let's look at the next one just to be sure print(ds.Y[1]) # Yea the same idea plt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray') """ Explanation: To see what this is doing, let's compare setting it to false versus true: End of explanation """ ds = datasets.MNIST(one_hot=True) plt.figure() plt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray') print(ds.Y[0]) # array([ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]) # Woah a bunch more numbers. 10 to be exact, which is also the number # of different labels in the dataset. plt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray') print(ds.Y[1]) # array([ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]) """ Explanation: And now let's look at what the one hot version looks like: End of explanation """ print(ds.X.shape) """ Explanation: So instead of have a number from 0-9, we have 10 numbers corresponding to the digits, 0-9, and each value is either 0 or 1. Whichever digit the image represents is the one that is 1. To summarize, we have all of the images of the dataset stored as: n_observations x n_features tensor (n-dim array) End of explanation """ print(ds.Y.shape) print(ds.Y[0]) """ Explanation: And labels stored as n_observations x n_labels where each observation is a one-hot vector, where only one element is 1 indicating which class or label it is. End of explanation """ # cost = tf.reduce_sum(tf.abs(y_pred - y_true)) """ Explanation: <a name="one-hot-encoding"></a> One-Hot Encoding Remember in the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Just like in our unsupervised model, instead of having 2 inputs, we'll now have 784 inputs, the brightness of every pixel in our image. And instead of 3 outputs, like in our painting network from last session, or the 784 outputs we had in our unsupervised MNIST network, we'll now have 10 outputs representing the one-hot encoding of its label. So why don't we just have 1 output? A number from 0-9? Wouldn't having 10 different outputs instead of just 1 be harder to learn? 
Consider how we normally train the network. We have to give it a cost which it will use to minimize. What could our cost be if our output was just a single number, 0-9? We would still have the true label, and the predicted label. Could we just take the subtraction of the two values? e.g. the network predicted 0, but the image was really the number 8. Okay so then our distance could be: End of explanation """ import tensorflow as tf from libs import datasets ds = datasets.MNIST(split=[0.8, 0.1, 0.1]) n_input = 28 * 28 """ Explanation: But in this example, the cost would be 8. If the image was a 4, and the network predicted a 0 again, the cost would be 4... but isn't the network still just as wrong, not half as much as when the image was an 8? In a one-hot encoding, the cost would be 1 for both, meaning they are both just as wrong. So we're able to better measure the cost, by separating each class's label into its own dimension. <a name="using-regression-for-classification"></a> Using Regression for Classification The network we build will be trained to output values between 0 and 1. They won't output exactly a 0 or 1. But rather, they are able to produce any value. 0, 0.1, 0.2, ... and that means the networks we've been using are actually performing regression. In regression, the output is "continuous", rather than "discrete". The difference is this: a discrete output means the network can only output one of a few things. Like, 0, 1, 2, or 3, and that's it. But a continuous output means it can output any real number. In order to perform what's called classification, we're just simply going to look at whichever value is the highest in our one hot encoding. In order to do that a little better, we're actually going interpret our one hot encodings as probabilities by scaling the total output by their sum. What this does is allows us to understand that as we grow more confident in one prediction, we should grow less confident in all other predictions. We only have so much certainty to go around, enough to add up to 1. If we think the image might also be the number 1, then we lose some certainty of it being the number 0. It turns out there is a better cost function that simply measuring the distance between two vectors when they are probabilities. It's called cross entropy: \begin{align} \Large{H(x) = -\sum{y_{\text{t}}(x) * \log(y_{\text{p}}(x))}} \end{align} What this equation does is measures the similarity of our prediction with our true distribution, by exponentially increasing error whenever our prediction gets closer to 1 when it should be 0, and similarly by exponentially increasing error whenever our prediction gets closer to 0, when it should be 1. I won't go into more detail here, but just know that we'll be using this measure instead of a normal distance measure. <a name="fully-connected-network"></a> Fully Connected Network Defining the Network Let's see how our one hot encoding and our new cost function will come into play. We'll create our network for predicting image classes in pretty much the same way we've created previous networks: We will have as input to the network 28 x 28 values. End of explanation """ n_output = 10 """ Explanation: As output, we have our 10 one-hot-encoding values End of explanation """ X = tf.placeholder(tf.float32, [None, n_input]) """ Explanation: We're going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. 
Remember from our unsupervised model, this is just something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. Since we're going to pass our entire dataset in batches, we'll need this to be, say, 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible.
End of explanation
"""
X = tf.placeholder(tf.float32, [None, n_input])
"""
Explanation: For the output, we'll have None again, since for every input, we'll have the same number of images that have outputs.
End of explanation
"""
Y = tf.placeholder(tf.float32, [None, n_output])
"""
Explanation: Now we'll connect our input to the output with a linear layer. Instead of relu, we're going to use softmax. This will perform our exponential scaling of the outputs and make sure the output sums to 1, making it a probability.
End of explanation
"""
# We'll use the linear layer we created in the last session, which I've stored in the libs file:
# NOTE: The lecture used an older version of this function which had a slightly different definition.
from libs import utils
Y_pred, W = utils.linear(
    x=X,
    n_output=n_output,
    activation=tf.nn.softmax,
    name='layer1')
"""
Explanation: We then write our loss function as the cross entropy, and give our optimizer the cross_entropy measure just like we would with GradientDescent. The formula for cross entropy is:
\begin{align}
\Large{H(x) = -\sum{\text{Y}_{\text{true}} * \log(\text{Y}_{\text{pred}})}}
\end{align}
End of explanation
"""
# We add 1e-12 because the log is undefined at 0.
cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)
"""
Explanation: To determine the correct class from our regression output, we have to take the maximum index.
End of explanation
"""
predicted_y = tf.argmax(Y_pred, 1)
actual_y = tf.argmax(Y, 1)
"""
Explanation: We can then measure the accuracy by seeing how often these are equal. Note, this is just for us to see, and is not at all used to "train" the network!
End of explanation
"""
correct_prediction = tf.equal(predicted_y, actual_y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
"""
Explanation: Training the Network
The rest of the code will be exactly the same as before. We chunk the training dataset into batch_size chunks, and let these images help train the network over a number of iterations.
End of explanation
"""
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Now actually do some training:
batch_size = 50
n_epochs = 5
for epoch_i in range(n_epochs):
    for batch_xs, batch_ys in ds.train.next_batch():
        sess.run(optimizer, feed_dict={
            X: batch_xs,
            Y: batch_ys
        })
    valid = ds.valid
    print(sess.run(accuracy, feed_dict={
        X: valid.images,
        Y: valid.labels
    }))

# Print final test accuracy:
test = ds.test
print(sess.run(accuracy, feed_dict={
    X: test.images,
    Y: test.labels
}))
"""
Explanation: What we should see is the accuracy being printed after each "epoch", or after every run over the entire dataset. Since we're using batches, we use the notion of an "epoch" to denote whenever we've gone through the entire dataset.
<a name="inspecting-the-network"></a>
Inspecting the Trained Network
Let's now inspect how the network is accomplishing this task.
We know that our network is a single matrix multiplication of our 784 pixel values. The weight matrix, W, should therefore have 784 rows. As outputs, it has 10 values. So the matrix is composed in the linear function as n_input x n_output values. So the matrix is 784 rows x 10 columns. <TODO: graphic w/ wacom showing network and matrix multiplication and pulling out single neuron/column> In order to get this matrix, we could have had our linear function return the tf.Tensor. But since everything is part of the tensorflow graph, and we've started using nice names for all of our operations, we can actually find this tensor using tensorflow: End of explanation """ W = g.get_tensor_by_name('layer1/W:0') """ Explanation: Looking at the names of the operations, we see there is one linear/W. But this is the tf.Operation. Not the tf.Tensor. The tensor is the result of the operation. To get the result of the operation, we simply add ":0" to the name of the operation: End of explanation """ W_arr = np.array(W.eval(session=sess)) print(W_arr.shape) """ Explanation: We can use the existing session to compute the current value of this tensor: End of explanation """ fig, ax = plt.subplots(1, 10, figsize=(20, 3)) for col_i in range(10): ax[col_i].imshow(W_arr[:, col_i].reshape((28, 28)), cmap='coolwarm') """ Explanation: And now we have our tensor! Let's try visualizing every neuron, or every column of this matrix: End of explanation """ from tensorflow.python.framework.ops import reset_default_graph reset_default_graph() """ Explanation: We're going to use the coolwarm color map, which will use "cool" values, or blue-ish colors for low values. And "warm" colors, red, basically, for high values. So what we begin to see is that there is a weighting of all the input values, where pixels that are likely to describe that number are being weighted high, and pixels that are not likely to describe that number are being weighted low. By summing all of these multiplications together, the network is able to begin to predict what number is in the image. This is not a very good network though, and the representations it learns could still do a much better job. We were only right about 93% of the time according to our accuracy. State of the art models will get about 99.9% accuracy. <a name="convolutional-networks"></a> Convolutional Networks To get better performance, we can build a convolutional network. We've already seen how to create a convolutional network with our unsupervised model. We're going to make the same modifications here to help us predict the digit labels in MNIST. Defining the Network I'll first reset the current graph, so we can build a new one. We'll use tensorflow's nice helper function for doing this. End of explanation """ # We first get the graph that we used to compute the network g = tf.get_default_graph() # And can inspect everything inside of it [op.name for op in g.get_operations()] """ Explanation: And just to confirm, let's see what's in our graph: End of explanation """ # We'll have placeholders just like before which we'll fill in later. ds = datasets.MNIST(one_hot=True, split=[0.8, 0.1, 0.1]) X = tf.placeholder(tf.float32, [None, 784]) Y = tf.placeholder(tf.float32, [None, 10]) """ Explanation: Great. Empty. Now let's get our dataset, and create some placeholders like before: End of explanation """ X_tensor = tf.reshape(X, [-1, 28, 28, 1]) """ Explanation: Since X is currently [batch, height*width], we need to reshape to a 4-D tensor to use it in a convolutional graph. 
Remember, in order to perform convolution, we have to use 4-dimensional tensors describing the: N x H x W x C We'll reshape our input placeholder by telling the shape parameter to be these new dimensions and we'll use -1 to denote this dimension should not change size. End of explanation """ filter_size = 5 n_filters_in = 1 n_filters_out = 32 W_1 = tf.get_variable( name='W', shape=[filter_size, filter_size, n_filters_in, n_filters_out], initializer=tf.random_normal_initializer()) """ Explanation: We'll now setup the first convolutional layer. Remember that the weight matrix for convolution should be [height x width x input_channels x output_channels] Let's create 32 filters. That means every location in the image, depending on the stride I set when we perform the convolution, will be filtered by this many different kernels. In session 1, we convolved our image with just 2 different types of kernels. Now, we're going to let the computer try to find out what 32 filters helps it map the input to our desired output via our training signal. End of explanation """ b_1 = tf.get_variable( name='b', shape=[n_filters_out], initializer=tf.constant_initializer()) """ Explanation: Bias is always [output_channels] in size. End of explanation """ h_1 = tf.nn.relu( tf.nn.bias_add( tf.nn.conv2d(input=X_tensor, filter=W_1, strides=[1, 2, 2, 1], padding='SAME'), b_1)) """ Explanation: Now we can build a graph which does the first layer of convolution: We define our stride as batch x height x width x channels. This has the effect of resampling the image down to half of the size. End of explanation """ n_filters_in = 32 n_filters_out = 64 W_2 = tf.get_variable( name='W2', shape=[filter_size, filter_size, n_filters_in, n_filters_out], initializer=tf.random_normal_initializer()) b_2 = tf.get_variable( name='b2', shape=[n_filters_out], initializer=tf.constant_initializer()) h_2 = tf.nn.relu( tf.nn.bias_add( tf.nn.conv2d(input=h_1, filter=W_2, strides=[1, 2, 2, 1], padding='SAME'), b_2)) """ Explanation: And just like the first layer, add additional layers to create a deep net. End of explanation """ # We'll now reshape so we can connect to a fully-connected/linear layer: h_2_flat = tf.reshape(h_2, [-1, 7 * 7 * n_filters_out]) """ Explanation: 4d -> 2d End of explanation """ # NOTE: This uses a slightly different version of the linear function than the lecture! h_3, W = utils.linear(h_2_flat, 128, activation=tf.nn.relu, name='fc_1') """ Explanation: Create a fully-connected layer: End of explanation """ # NOTE: This uses a slightly different version of the linear function than the lecture! Y_pred, W = utils.linear(h_3, n_output, activation=tf.nn.softmax, name='fc_2') """ Explanation: And one last fully-connected layer which will give us the correct number of outputs, and use a softmax to expoentially scale the outputs and convert them to a probability: End of explanation """ cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12)) optimizer = tf.train.AdamOptimizer().minimize(cross_entropy) """ Explanation: <TODO: Draw as graphical representation> Training the Network The rest of the training process is the same as the previous network. 
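One detail worth spelling out from the layers above: each stride-2 convolution with 'SAME' padding halves the spatial resolution, so the 28 x 28 input becomes 14 x 14 after the first layer and 7 x 7 after the second. That makes h_2 a [batch, 7, 7, 64] tensor, and flattening it gives the 7 * 7 * n_filters_out = 3136 features used in the reshape; printing h_2.get_shape() is a quick way to confirm this.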
We'll define loss/eval/training functions: End of explanation """ correct_prediction = tf.equal(tf.argmax(Y_pred, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float')) """ Explanation: Monitor accuracy: End of explanation """ sess = tf.Session() sess.run(tf.global_variables_initializer()) """ Explanation: And create a new session to actually perform the initialization of all the variables: End of explanation """ batch_size = 50 n_epochs = 10 for epoch_i in range(n_epochs): for batch_xs, batch_ys in ds.train.next_batch(): sess.run(optimizer, feed_dict={ X: batch_xs, Y: batch_ys }) valid = ds.valid print(sess.run(accuracy, feed_dict={ X: valid.images, Y: valid.labels })) # Print final test accuracy: test = ds.test print(sess.run(accuracy, feed_dict={ X: test.images, Y: test.labels })) """ Explanation: Then we'll train in minibatches and report accuracy: End of explanation """ from libs.utils import montage_filters W1 = sess.run(W_1) plt.figure(figsize=(10, 10)) plt.imshow(montage_filters(W1), cmap='coolwarm', interpolation='nearest') """ Explanation: <TODO: Fun timelapse of waiting> Inspecting the Trained Network Let's take a look at the kernels we've learned using the following montage function, similar to the one we've been using for creating image montages, except this one is suited for the dimensions of convolution kernels instead of 4-d images. So it has the height and width first, unlike images which have batch then height then width. We'll use this function to visualize every convolution kernel in the first and second layers of our network. End of explanation """ W2 = sess.run(W_2) plt.imshow(montage_filters(W2 / np.max(W2)), cmap='coolwarm') """ Explanation: What we're looking at are all of the convolution kernels that have been learned. Compared to the previous network we've learned, it is much harder to understand what's happening here. But let's try and explain these a little more. The kernels that have been automatically learned here are responding to edges of different scales, orientations, and rotations. It's likely these are really describing parts of letters, or the strokes that make up letters. Put another way, they are trying to get at the "information" in the image by seeing what changes. That's a pretty fundamental idea. That information would be things that change. Of course, there are filters for things that aren't changing as well. Some filters may even seem to respond to things that are mostly constant. However, if our network has learned a lot of filters that look like that, it's likely that the network hasn't really learned anything at all. The flip side of this is if the filters all look more or less random. That's also a bad sign. Let's try looking at the second layer's kernels: End of explanation """ import os sess = tf.Session() init_op = tf.global_variables_initializer() saver = tf.train.Saver() sess.run(init_op) if os.path.exists("model.ckpt"): saver.restore(sess, "model.ckpt") print("Model restored.") """ Explanation: It's really difficult to know what's happening here. There are many more kernels in this layer. They've already passed through a set of filters and an additional non-linearity. How can we really know what the network is doing to learn its objective function? The important thing for now is to see that most of these filters are different, and that they are not all constant or uniformly activated. 
That means it's really doing something, but we aren't really sure yet how to see how that affects the way we think of and perceive the image. In the next session, we'll learn more about how we can start to interrogate these deeper representations and try to understand what they are encoding. Along the way, we'll learn some pretty amazing tricks for producing entirely new aesthetics that eventually led to the "deep dream" viral craze.
<a name="savingloading-models"></a>
Saving/Loading Models
Tensorflow provides a few ways of saving/loading models. The easiest way is to use a checkpoint. Though, this is really useful mainly while you are training your network. When you are ready to deploy or hand out your network to others, you don't want to pass checkpoints around as they contain a lot of unnecessary information, and it also requires you to still write code to create your network. Instead, you can create a protobuf which contains the definition of your graph and the model's weights. Let's see how to do both:
<a name="checkpoint"></a>
Checkpoint
Creating a checkpoint requires you to have already created a set of operations in your tensorflow graph. Once you've done this, you'll create a session like normal and initialize all of the variables. After this, you create a tf.train.Saver which can restore a previously saved checkpoint, overwriting all of the variables with your saved parameters.
End of explanation
"""
save_path = saver.save(sess, "./model.ckpt")
print("Model saved in file: %s" % save_path)
"""
Explanation: Creating the checkpoint is easy. After a few iterations of training (depending on your application, say after every 1/10th of the time it takes to train the full model), you'll want to write out the saved model. You can do this like so:
End of explanation
"""
path='./'
ckpt_name = './model.ckpt'
fname = 'model.tfmodel'
dst_nodes = ['Y']
g_1 = tf.Graph()
with tf.Session(graph=g_1) as sess:
    x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))

    # Replace this with some code which will create your tensorflow graph:
    net = create_network()

    sess.run(tf.global_variables_initializer())
    saver.restore(sess, ckpt_name)

    # Convert the trained variables into constants stored inside the graph definition:
    graph_def = tf.python.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, dst_nodes)

g_2 = tf.Graph()
with tf.Session(graph=g_2) as sess:
    tf.train.write_graph(
        tf.python.graph_util.extract_sub_graph(
            graph_def, dst_nodes), path, fname, as_text=False)
"""
Explanation: <a name="protobuf"></a>
Protobuf
The second way of saving a model is really useful for when you don't want to pass around the code for producing the tensors or computational graph itself. It is also useful for moving the code to deployment or for use in the C++ version of Tensorflow. To do this, you'll want to run an operation to convert all of your trained parameters into constants. Then, you'll create a second graph which copies the necessary tensors, extracts the subgraph, and writes this to a model. The summarized code below shows you how you could use a checkpoint to restore your model's parameters, and then export the saved model as a protobuf.
End of explanation
"""
# Parse the exported protobuf and import its graph definition:
with open("model.tfmodel", mode='rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='model')
"""
Explanation: When you want to import this model, you no longer need to refer to the checkpoint or create the network by specifying its placeholders or operations. Instead, you'd use the import_graph_def operation like so:
End of explanation
"""
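"""
Explanation: A minimal sketch of how the imported graph can then be used, assuming (as in the dst_nodes list above) that the output operation was named 'Y' before export: tensors imported with import_graph_def are prefixed with the name given there, so they can be looked up by name from the current graph and fed like any other tensor.
End of explanation
"""
# Look up the imported output tensor; 'model/' is the prefix passed to import_graph_def above.
g = tf.get_default_graph()
Y_imported = g.get_tensor_by_name('model/Y:0')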
dcavar/python-tutorial-for-ipython
notebooks/Python Parsing with NLTK and Foma.ipynb
apache-2.0
import nltk """ Explanation: Python Parsing with NLTK and Foma (C) 2017-2019 by Damir Cavar Download: This and various other Jupyter notebooks are available from my GitHub repo. License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0) This is a tutorial related to the discussion of grammar engineering and parsing in the classes Alternative Syntactic Theories and Advanced Natural Language Processing taught at Indiana University in Spring 2017, Fall 2018, and Spring 2019. The following code examples require the foma application and library to be installed, as well as the foma.py module. Since I am using Python 3.x here, I would recommend to use my version of foma.py and install it in the local modules folder of your Python distribution. In my case, since I use Anaconda, the file goes in anaconda/lib/python3.6/ in my home directory. On a Linux distribution you might have a folder /usr/local/lib/python3.6 or similar, where the module foma.py has to be copied to. Grammars and Parsers We will use NLTK in the following: End of explanation """ fstr = nltk.FeatStruct("[POS='N', AGR=[PER=3, NUM='pl', GND='fem']]") print(fstr) """ Explanation: We can declare a feature structure and display it using NLTK: End of explanation """ from nltk.grammar import FeatureGrammar, FeatDict """ Explanation: We will also use Feature grammars: End of explanation """ grammarText = """ % start S # ############################ # Grammar Rules # ############################ S[TYPE=decl] -> NP[NUM=?n, PERS=?p, CASE='nom'] VP[NUM=?n, PERS=?p] '.' S[TYPE=inter] -> NP[NUM=?n, PERS=?p, CASE='nom', PRONTYPE=inter] VP[NUM=?n, PERS=?p] '?' S[TYPE=inter] -> NP[NUM=?n, PERS=?p, CASE='acc', PRONTYPE=inter] AUX NP[NUM=?n, PERS=?p, CASE='nom'] VP[NUM=?n, PERS=?p, VAL=2] '?' NP[NUM=?n, PERS=?p, CASE=?c] -> N[NUM=?n, PERS=?p, CASE=?c] NP[NUM=?n, PERS=?p, CASE=?c] -> D[NUM=?n, CASE=?c] N[NUM=?n,PERS=?p, CASE=?c] NP[NUM=?n, PERS=?p, CASE=?c] -> Pron[NUM=?n, PERS=?p, CASE=?c] VP[NUM=?n, PERS=?p] -> V[NUM=?n, PERS=?p] VP[NUM=?n, PERS=?p, VAL=2] -> V[NUM=?n, PERS=?p, VAL=2] NP[CASE='acc'] VP[NUM=?n, PERS=?p, VAL=2] -> V[NUM=?n, PERS=?p, VAL=2] """ grammar = FeatureGrammar.fromstring(grammarText) """ Explanation: We can define a feature grammar in the following way: End of explanation """ parser = nltk.parse.FeatureChartParser(grammar) sentences = ["John loves Mary", "Mary loves John"] for sentence in sentences: result = [] try: result = list(parser.parse(sentence.split())) except ValueError: print("The grammar is missing token definitions.") if result: for x in result: print(x) else: print("*", sentence) """ Explanation: Testing the grammar with the input sentence John loves Mary fails, because there is not lexical defintion of these entries in the grammar. 
End of explanation """ import foma """ Explanation: We can include foma and a morphology using the following code: End of explanation """ fst = foma.FST.load(b'eng.fst') """ Explanation: The following line will load the eng.fst Foma morphology: End of explanation """ tokens = "John loves Mary".split() for token in tokens: result = list(fst.apply_up(str.encode(token))) for r in result: print(r.decode('utf8')) """ Explanation: We can print out the analysis for each single token by submitting it to the FST: End of explanation """ featureMapping = { 'Q' : "PRONTYPE = inter", 'Animate' : "ANIMATE = 1", 'Def' : "DETTYPE = def", 'Indef' : "DETTYPE = indef", 'Sg' : "NUM = sg", 'Pl' : "NUM = pl", '3P' : "PERS = 3", 'Masc' : "GENDSEM = male", 'Fem' : "GENDSEM = female", 'Dat' : "CASE = dat", 'Acc' : "CASE = acc", 'Nom' : "CASE = nom", 'NEPersonName' : """NTYPE = [NSYN = proper, NSEM = [PROPER = [PROPERTYPE = name, NAMETYPE = first_name]]], HUMAN = 1""" } """ Explanation: If we want to convert the flat string annotation from the morphology to a NLTK feature structure, we need to translate some entires to a corresponding Attribute Value Matrix (AVM). In the following table we define a feature in the morphology output and the corresponging feature structure that it corresponds with: End of explanation """ def feat2LFG(f): result = featureMapping.get(f, "") return(nltk.FeatStruct("".join( ("[", result, "]") ))) """ Explanation: We use a function feat2LFG to convert the feature tag in the morphological analysis to a LFG-compatible AVM: End of explanation """ def flatFStructure(f): res = "" for key in f.keys(): val = f[key] if res: res += ', ' if (isinstance(val, FeatDict)): res += key + '=' + flatFStructure(val) else: res += key + "=" + str(val) return('[' + res + ']') """ Explanation: The following helper function is recursive. It mapps the AVM to a bracketed string annotation of feature structures, as used in the NLTK feature grammar format: End of explanation """ def parseFoma(sentence): tokens = sentence.split() tokenAnalyses = {} rules = [] count = 0 for token in tokens: aVal = [] result = list(fst.apply_up(str.encode(token))) for r in result: elements = r.decode('utf8').split('+') lemma = elements[0] tokFeat = nltk.FeatStruct("[PRED=" + lemma + "]") cat = elements[1] if len(elements) > 2: feats = tuple(elements[2:]) else: feats = () for x in feats: fRes2 = feat2LFG(x) fRes = tokFeat.unify(fRes2) if fRes: tokFeat = fRes else: print("Error unifying:", tokFeat, fRes2) flatFStr = flatFStructure(tokFeat) aVal.append(cat + flatFStr) rules.append(cat + flatFStr + " -> " + "'" + token + "'") tokenAnalyses[count] = aVal count += 1 grammarText2 = grammarText + "\n" + "\n".join(rules) grammar = FeatureGrammar.fromstring(grammarText2) parser = nltk.parse.FeatureChartParser(grammar) result = list(parser.parse(tokens)) if result: for x in result: print(x) else: print("*", sentence) """ Explanation: The following function is a parse function that prints out parse trees for an input, maintaining the extended feature structures at the lexical level. It can now parse sentences that contain words that are not specified as lexical words in the grammar, but rather as paths in the morphological finite state transducer. End of explanation """ parseFoma("who called John ?") """ Explanation: We can call this function using the following code: End of explanation """
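"""
Explanation: To see what the flatFStructure helper produces before it is used inside the parser, here is a small hand-made feature structure (the values are invented purely for illustration) flattened into the bracketed string notation used by the feature grammar:
End of explanation
"""
toy = nltk.FeatStruct("[PRED='love', AGR=[NUM='sg', PERS=3]]")
# Prints something like [PRED=love, AGR=[NUM=sg, PERS=3]] (key order may vary)
print(flatFStructure(toy))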
geektoni/shogun
doc/ipython-notebooks/neuralnets/rbms_dbns.ipynb
bsd-3-clause
import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') import networkx as nx import shogun as sg import numpy as np import matplotlib.pyplot as plt import matplotlib %matplotlib inline G = nx.Graph() pos = {} for i in range(8): pos['V'+str(i)] = (i,0) pos['H'+str(i)] = (i,1) for j in range(8): G.add_edge('V'+str(j),'H'+str(i)) plt.figure(figsize=(7,2)) nx.draw(G, pos, node_color='y', node_size=750) """ Explanation: Restricted Boltzmann Machines & Deep Belief Networks by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn In this notebook we'll take a look at training and evaluating restricted Boltzmann machines and deep belief networks in Shogun. Introduction Restricted Boltzmann Machines An RBM is an energy based probabilistic model. It consists of two groups of variables: the visible variables $ v $ and the hidden variables $ h $. The key assumption that RBMs make is that the hidden units are conditionally independent given the visible units, and vice versa. The RBM defines its distribution through an energy function $E(v,h)$, which is a function that assigns a number (called energy) to each possible state of the visible and hidden variables. The probability distribution is defined as: $$ P(v,h) := \frac{\exp(-E(v,h))}{Z} , \qquad Z = \sum_{v,h} \exp(-E(v,h))$$ where $Z$ is a constant that makes sure that the distribution sums/integrates to $1$. This distribution is also called a Gibbs distribution and $Z$ is sometimes called the partition function. From the definition of $P(v,h)$ we can see that the probability of a configuration increases as its energy decreases. Training an RBM in an unsupervised manner involves manipulating the energy function so that it would assign low energy (and therefore high probability) to values of $v$ that are similar to the training data, and high energy to values that are far from the training data. For an RBM with binary visible and hidden variables the energy function is defined as: $ E(v,h) = -\sum_i \sum_j h_i W_{ij} v_j - \sum_i h_i c_i - \sum_j v_j b_j $ where $b \in \mathbb{R^n} $ is the bias for the visible units, $c \in \mathbb{R^m}$ is the bias for hidden units and $ W \in \mathbb{R^{mxn}}$ is the weight matrix between the hidden units and the visible units. Plugging that definition into the definition of the probability distribution will yield the following conditional distributions for each of the hidden and visible variables: $$ P(h=1|v) = \frac{1}{1+exp(-Wv-c)}, \quad P(v=1|h) = \frac{1}{1+exp(-W^T h-b)} $$ We can do a quick visualization of an RBM: End of explanation """ G = nx.DiGraph() pos = {} for i in range(8): pos['V'+str(i)] = (i,0) pos['H'+str(i)] = (i,1) pos['P'+str(i)] = (i,2) pos['Q'+str(i)] = (i,3) for j in range(8): G.add_edge('H'+str(j),'V'+str(i)) G.add_edge('P'+str(j),'H'+str(i)) G.add_edge('Q'+str(j),'P'+str(i)) G.add_edge('P'+str(j),'Q'+str(i)) plt.figure(figsize=(5,5)) nx.draw(G, pos, node_color='y', node_size=750) """ Explanation: The nodes labeled V are the visible units, the ones labeled H are the hidden units. There's an indirected connection between each hidden unit and all the visible units and similarly for visible unit. There are no connections among the visible units or among the hidden units, which implies the the hidden units are independent of each other given the visible units, and vice versa. 
Deep Belief Networks If an RBM is properly trained, the hidden units learn to extract useful features from training data. An obvious way to go further would be transform the training data using the trained RBM, and train yet another RBM on the transformed data. The second RBM will learn to extract useful features from the features that the first RBM extracts. The process can be repeated to add a third RBM, and so. When stacked on top of each other, those RBMs form a Deep Belief Network [1]. The network has directed connections going from the units in each layer to units in the layer below it. The connections between the top layer and the layer below it are undirected. The process of stacking RBMs to form a DBN is called pre-training the DBN. After pre-training, the DBN can be used to initialize a similarly structured neural network which can be used for supervised classification. We can do a visualization of a 4-layer DBN: End of explanation """ rbms = [] for i in range(10): # 25 hidden units, 256 visible units (one for each pixel in a 16x16 binary image) layer = sg.RBM(25, 256, sg.RBMVUT_BINARY) layer.put("seed", 10) rbms.append(layer) rbms[i].initialize_neural_network() """ Explanation: RBMs in Shogun RBMs in Shogun are handled through the RBM class. We create one by specifying the number of visible units and their type (binary, Gaussian, and Softmax visible units are supported), and the number of hidden units (only binary hidden units are supported). In this notebook we'll train a few RBMs on the USPS dataset for handwritten digits. We'll have one RBM for each digit class, making 10 RBMs in total: End of explanation """ from scipy.io import loadmat dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat')) Xall = dataset['data'] # the usps dataset has the digits labeled from 1 to 10 # we'll subtract 1 to make them in the 0-9 range instead Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1 """ Explanation: Next we'll load the USPS dataset: End of explanation """ # uncomment this line to allow the training progress to be printed on the console #from shogun import MSG_INFO; rbms[0].io.set_loglevel(MSG_INFO) for i in range(10): # obtain the data for digit i X_i = Xall[:,Yall==i] # binarize the data for use with the RBM X_i = (X_i>0).astype(np.float64) # set the number of contrastive divergence steps rbms[i].cd_num_steps = 5 # set the gradient descent parameters rbms[i].gd_learning_rate = 0.005 rbms[i].gd_mini_batch_size = 100 rbms[i].max_num_epochs = 30 # set the monitoring method to pseudo-likelihood rbms[i].monitoring_method = sg.RBMMM_PSEUDO_LIKELIHOOD # start training rbms[i].train(sg.create_features(X_i)) """ Explanation: Now we'll move on to training the RBMs using Persistent Contrastive Divergence [2]. Training using regular Contrastive Divergence [3] is also supported.The optimization is performed using Gradient Descent. The training progress can be monitored using the reconstruction error or the psuedo-likelihood. Check the public attributes of the RBM class for all the available training options. 
End of explanation """ samples = np.zeros((256,100)) for i in range(10): # initialize the sampling chain with a random state for the visible units rbms[i].reset_chain() # run 10 chains for a 1000 steps to obtain the samples samples[:,i*10:i*10+10] = rbms[i].sample_group(0, 1000, 10).get("feature_matrix") # plot the samples plt.figure(figsize=(7,7)) for i in range(100): ax=plt.subplot(10,10,i+1) ax.imshow(samples[:,i].reshape((16,16)), interpolation='nearest', cmap = matplotlib.cm.Greys_r) ax.set_xticks([]) ax.set_yticks([]) """ Explanation: After training, we can draw samples from the RBMs to see what they've learned. Samples are drawn using Gibbs sampling. We'll draw 10 samples from each RBM and plot them: End of explanation """ dbn = sg.DeepBeliefNetwork(256) # 256 visible units dbn.put("seed", 10) dbn.add_hidden_layer(200) # 200 units in the first hidden layer dbn.add_hidden_layer(300) # 300 units in the second hidden layer dbn.initialize_neural_network() """ Explanation: DBNs in Shogun Now we'll create a DBN, pre-train it on the digits dataset, and use it initialize a neural network which we can use for classification. DBNs are handled in Shogun through the DeepBeliefNetwork class. We create a network by specifying the number of visible units it has, and then add the desired number of hidden layers using add_hidden_layer(). When done, we call initialize_neural_network() to initialize the network: End of explanation """ # take 3000 examples for training, the rest for testing Xtrain = Xall[:,0:3000] Ytrain = Yall[0:3000] Xtest = Xall[:,3000:-1] Ytest = Yall[3000:-1] # set the number of contrastive divergence steps dbn.put("pt_cd_num_steps", np.array([5,5,5], dtype=np.int32)) # set the gradient descent parameters dbn.put("pt_gd_learning_rate", np.array([0.01,0.01,0.01])) dbn.put("pt_gd_mini_batch_size", np.array([100,100,100], dtype=np.int32)) dbn.put("pt_max_num_epochs", np.array([30,30,30], dtype=np.int32)) # binarize the data and start pre-training dbn.pre_train(sg.create_features((Xtrain>0).astype(np.float64))) """ Explanation: Then we'll pre-train the DBN on the USPS dataset. Since we have 3 layers, the DBN will be pre-trained as two RBMs: one that consists of the first hidden layer and the visible layer, the other consists of the first hidden layer and the second hidden layer. Pre-training parameters can be specified using the pt_* public attributes of the class. Each of those attributes is an SGVector whose length is the number of RBMs (2 in our case). It can be used to set the parameters for each RBM indiviually. SGVector's set_const() method can also be used to assign the same parameter value for all RBMs. 
End of explanation """ # obtain the weights of the first hidden layer w1 = dbn.get_weights(0) # plot the weights between the first 100 units in the hidden layer and the visible units plt.figure(figsize=(7,7)) for i in range(100): ax1=plt.subplot(10,10,i+1) ax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = matplotlib.cm.Greys_r) ax1.set_xticks([]) ax1.set_yticks([]) """ Explanation: After pre-training, we can visualize the features learned by the first hidden layer by plotting the weights between some hidden units and the visible units: End of explanation """ # get the neural network nn = dbn.convert_to_neural_network(sg.create_layer("NeuralSoftmaxLayer", num_neurons=10)) # add some L2 regularization nn.put("l2_coefficient", 0.0001) # start training nn.put('labels', sg.create_labels(Ytrain)) nn.train(sg.create_features(Xtrain)) """ Explanation: Now, we'll use the DBN to initialize a NeuralNetwork. This is done through the convert_to_neural_network() method. The neural network will consist of a NeuralInputLayer with 256 neurons, a NeuralLogisticLayer with 200 neurons, and another NeuralLogisticLayer with 300 neurons. We'll also add a NeuralSoftmaxLayer as an output layer so that we can train the network in a supervised manner. We'll also train the network on the training set: End of explanation """ predictions = nn.apply(sg.create_features(Xtest)) accuracy = sg.create_evaluation("MulticlassAccuracy").evaluate(predictions, sg.create_labels(Ytest)) * 100 print("Classification accuracy on the test set =", accuracy, "%") """ Explanation: And finally we'll measure the classification accuracy on the test set: End of explanation """
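"""
Explanation: The conditional distributions given in the introduction reduce, for binary units, to elementwise logistic functions, so a single Gibbs sampling step can be sketched directly in NumPy. The parameters below are random stand-ins for illustration, not the weights of the trained Shogun models:
End of explanation
"""
rng = np.random.RandomState(0)
W = rng.randn(25, 256)                        # hidden x visible weights (stand-in values)
b = rng.randn(256)                            # visible bias (stand-in values)
c = rng.randn(25)                             # hidden bias (stand-in values)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
v = (rng.rand(256) < 0.5).astype(np.float64)  # a random binary visible state
p_h = sigmoid(W.dot(v) + c)                   # P(h=1|v)
h = (rng.rand(25) < p_h).astype(np.float64)   # sample the hidden units
p_v = sigmoid(W.T.dot(h) + b)                 # P(v=1|h)
v_new = (rng.rand(256) < p_v).astype(np.float64)  # sample a new visible state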
dprn/CDC15
Invariants-computation.ipynb
gpl-2.0
import numpy as np
from numpy import fft
from numpy import linalg as LA
from scipy import ndimage
from scipy import signal
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import os
%matplotlib inline
"""
Explanation: Computation and comparison of the bispectrum and the rotational bispectrum
We show how to compute the bispectrum and the rotational bispectrum, as presented in the paper Image processing in the semidiscrete group of rototranslations by D. Prandi, U. Boscain and J.-P. Gauthier.
End of explanation
"""
def int2intvec(a):
    """
    Auxiliary function to recover a vector with the digits
    of a given integer (in inverse order)

    `a` : integer
    """
    digit = a%10
    vec = np.array([digit],dtype=int)
    a = (a-digit)/10
    while a!=0:
        digit = a%10
        vec = np.append(vec,int(digit))
        a = (a-digit)/10
    return vec

ALPHABET7 = "0123456"
ALPHABET10 = "0123456789"

def base_encode(num, alphabet):
    """
    Encode a number in Base X

    `num`: The number to encode
    """
    if (str(num) == alphabet[0]):
        return int(0)
    arr = []
    base = len(alphabet)
    while num:
        rem = num % base
        num = num // base
        arr.append(alphabet[rem])
    arr.reverse()
    return int(''.join(arr))

def base7to10(num):
    """
    Convert a number from base 7 to base 10

    `num`: The number to convert
    """
    arr = int2intvec(num)
    num = 0
    for i in range(len(arr)):
        num += arr[i]*(7**(i))
    return num

def base10to7(num):
    """
    Convert a number from base 10 to base 7

    `num`: The number to convert
    """
    return base_encode(num, ALPHABET7)

def rgb2gray(rgb):
    """
    Convert an image from RGB to grayscale

    `rgb`: The image to convert
    """
    r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
    return gray

def oversampling(image, factor = 7):
    """
    Oversample a grayscale image by a certain factor, dividing
    each pixel into factor*factor subpixels with the same intensity.

    `image`: The image to oversample
    `factor`: The oversampling factor
    """
    old_shape = image.shape
    new_shape = (factor*old_shape[0], factor*old_shape[1])
    new_image = np.zeros(new_shape, dtype = image.dtype)
    for i in range(old_shape[0]):
        for j in range(old_shape[1]):
            new_image[factor*i:factor*i+factor,factor*j:factor*j+factor] = image[i,j]*np.ones((factor,factor))
    return new_image
"""
Explanation: Auxiliary functions
End of explanation
"""
# The centered hyperpel
hyperpel = np.array([\
                [-1,4],[0,4],[1,4],[2,4],[3,4],\
         [-2,3],[-1,3], [0,3], [1,3], [2,3], [3,3], [4,3],\
         [-2,2],[-1,2], [0,2], [1,2], [2,2], [3,2], [4,2],\
  [-3,1],[-2,1],[-1,1], [0,1], [1,1], [2,1], [3,1], [4,1],[5,1],\
  [-3,0],[-2,0],[-1,0], [0,0], [1,0], [2,0], [3,0], [4,0],[5,0],\
         [-2,-1],[-1,-1], [0,-1], [1,-1], [2,-1], [3,-1], [4,-1],\
         [-2,-2],[-1,-2], [0,-2], [1,-2], [2,-2], [3,-2], [4,-2],\
                [-1,-3], [0,-3], [1,-3], [2,-3], [3,-3]])

hyperpel_sa = hyperpel - np.array([1,1])
"""
Explanation: Spiral architecture implementation
Spiral architecture was introduced by Sheridan in Spiral Architecture for Machine Vision (PhD thesis) and in Pseudo-invariant image transformations on a hexagonal lattice, P. Sheridan, T. Hintz, and D. Alexander, Image Vis. Comput. 18, 907 (2000). The implementation with hyperpels that we use in the following is presented in A New Simulation of Spiral Architecture, X. He, T. Hintz, Q. Wu, H. Wang, and W. Jia, Proceedings of International Conference on Image Processing, Computer Vision, and Pattern Recognition (2006), and in Hexagonal structure for intelligent vision, X. He and W. Jia, in Proc. 1st Int. Conf. Inf. Commun. Technol. ICICT 2005 (2005), pp. 52–64. For a more detailed implementation, see the notebook Hexagonal grid.
We start by defining the centered hyperpel, which is defined on a 9x9 grid and is composed of 56 pixels. It has the shape # o o x x x x x o o # o x x x x x x x o # o x x x x x x x o # x x x x x x x x x # x x x C x x x x x # o x x x x x x x o # o x x x x x x x o # o o x x x x x o o End of explanation """ def sa2hex(spiral_address): # Split the number in basic unit and call the auxiliary function # Here we reverse the order, so that the index corresponds to the # decimal position digits = str(spiral_address)[::-1] hex_address = np.array([0,0]) for i in range(len(digits)): if int(digits[i])<0 or int(digits[i])>6: print("Invalid spiral address!") return elif digits[i]!= '0': hex_address += sa2hex_aux(int(digits[i]),i) return hex_address # This computes the row/column positions of the base cases, # that is, in the form a*10^(zeros). def sa2hex_aux(a, zeros): # Base cases if zeros == 0: if a == 0: return np.array([0,0]) elif a == 1: return np.array([0,8]) elif a == 2: return np.array([-7,4]) elif a == 3: return np.array([-7,-4]) elif a == 4: return np.array([0,-8]) elif a == 5: return np.array([7,-4]) elif a == 6: return np.array([7,4]) return sa2hex_aux(a,zeros-1)+ 2*sa2hex_aux(a%6 +1,zeros-1) """ Explanation: We now compute, in sa2hex, the address of the center of the hyperpel corresponding to a certain spiral address. End of explanation """ def sa_value(oversampled_image,spiral_address): """ Computes the value of the hyperpel corresponding to the given spiral coordinate. """ hp = hyperpel_sa + sa2hex(spiral_address) val = 0. for i in range(56): val += oversampled_image[hp[i,0],hp[i,1]] return val/56 """ Explanation: Then, we compute the value of the hyperpel corresponding to the spiral address, by averaging the values on the subpixels. End of explanation """ def spiral_add(a,b,mod=0): addition_table = [ [0,1,2,3,4,5,6], [1,63,15,2,0,6,64], [2,15,14,26,3,0,1], [3,2,26,25,31,4,0], [4,0,3,31,36,42,5], [5,6,0,4,42,41,53], [6,64,1,0,5,53,52] ] dig_a = int2intvec(a) dig_b = int2intvec(b) if (dig_a<0).any() or (dig_a>7).any() \ or (dig_b<0).any() or (dig_b>7).any(): print("Invalid spiral address!") return if len(dig_a) == 1 and len(dig_b)==1: return addition_table[a][b] if len(dig_a) < len(dig_b): dig_a.resize(len(dig_b)) elif len(dig_b) < len(dig_a): dig_b.resize(len(dig_a)) res = 0 for i in range(len(dig_a)): if i == len(dig_a)-1: res += spiral_add(dig_a[i],dig_b[i])*(10**i) else: temp = spiral_add(dig_a[i],dig_b[i]) res += (temp%10)*(10**i) carry_on = spiral_add(dig_a[i+1],(temp - temp%10)/10) dig_a[i+1] = str(carry_on) if mod!=0: return res%mod return res def spiral_mult(a,b, mod=0): multiplication_table = [ [0,0,0,0,0,0,0], [0,1,2,3,4,5,6], [0,2,3,4,5,6,1], [0,3,4,5,6,1,2], [0,4,5,6,1,2,3], [0,5,6,1,2,3,4], [0,6,1,2,3,4,5], ] dig_a = int2intvec(a) dig_b = int2intvec(b) if (dig_a<0).any() or (dig_a>7).any() \ or (dig_b<0).any() or (dig_b>7).any(): print("Invalid spiral address!") return sa_mult = int(0) for i in range(len(dig_b)): for j in range(len(dig_a)): temp = multiplication_table[dig_a[j]][dig_b[i]]*(10**(i+j)) sa_mult=spiral_add(sa_mult,temp) if mod!=0: return sa_mult%mod return sa_mult """ Explanation: Spiral addition and multiplication End of explanation """ def omegaf(fft_oversampled, sa): """ Evaluates the vector omegaf corresponding to the given spiral address sa. 
`fft_oversampled`: the oversampled FFT of the image `sa`: the spiral address where to compute the vector """ omegaf = np.zeros(6, dtype=fft_oversampled.dtype) for i in range(1,7): omegaf[i-1] = sa_value(fft_oversampled,spiral_mult(sa,i)) return omegaf """ Explanation: Computation of the bispectrum We start by computing the vector $\omega_f(\lambda)$, where $\lambda$ is a certain spiral address. End of explanation """ def invariant(fft_oversampled, sa1,sa2,sa3): """ Evaluates the generalized invariant of f on sa1, sa2 and sa3 `fft_oversampled`: the oversampled FFT of the image `sa1`, `sa2`, `sa3`: the spiral addresses where to compute the invariant """ omega1 = omegaf(fft_oversampled,sa1) omega2 = omegaf(fft_oversampled,sa2) omega3 = omegaf(fft_oversampled,sa3) # Attention: np.vdot uses the scalar product with the complex # conjugation at the first place! return np.vdot(omega1*omega2,omega3) """ Explanation: Then, we can compute the "generalized invariant" corresponding to $\lambda_1$, $\lambda_2$ and $\lambda_3$, starting from the FFT of the image. That is $$ I^3_f(\lambda_1,\lambda_2,\lambda_3) = \langle\omega_f(\lambda_1)\odot\omega_f(\lambda_2),\omega_f(\lambda_3)\rangle. $$ End of explanation """ def bispectral_inv(fft_oversampled_example, rotational = False): """ Computes the (rotational) bispectral invariants for any sa1 and any sa2 in the above picture. `fft_oversampled_example`: oversampled FFT of the image `rotational`: if True, we compute the rotational bispectrum """ if rotational == True: bispectrum = np.zeros(9**2*6,dtype = fft_oversampled_example.dtype) else: bispectrum = np.zeros(9**2,dtype = fft_oversampled_example.dtype) indexes = [0,1,10,11,12,13,14,15,16] count = 0 for i in range(9): sa1 = indexes[i] sa1_base10 = base7to10(sa1) for k in range(9): sa2 = indexes[k] if rotational == True: for r in range(6): sa2_rot = spiral_mult(sa2,r) sa2_rot_base10 = base7to10(sa2_rot) sa3 = base10to7(sa1_base10+sa2_rot_base10) bispectrum[count]=invariant(fft_oversampled_example,sa1,sa2,sa3) count += 1 else: sa2_base10 = base7to10(sa2) sa3 = base10to7(sa1_base10+sa2_base10) bispectrum[count]=invariant(fft_oversampled_example,sa1,sa2,sa3) count += 1 return bispectrum """ Explanation: Finally, this function computes the bispectrum (or the rotational bispectrum) corresponding to the spiral addresses in the following picture. <img src="./pixels.png" alt="Hexagonal pixels" style="width: 200px;"/> End of explanation """ example = 1 - rgb2gray(plt.imread('./test-images/butterfly.png')) fft_example = np.fft.fftshift(np.fft.fft2(example)) fft_oversampled_example = oversampling(fft_example) %%timeit bispectral_inv(fft_oversampled_example) %%timeit bispectral_inv(fft_oversampled_example, rotational=True) """ Explanation: Some timing tests. End of explanation """ folder = './test-images' def evaluate_invariants(image, rot = False): """ Evaluates the invariants of the given image. `image`: the matrix representing the image (not oversampled) `rot`: if True we compute the rotational bispectrum """ # compute the normalized FFT fft = np.fft.fftshift(np.fft.fft2(image)) fft /= fft / LA.norm(fft) # oversample it fft_oversampled = oversampling(fft) return bispectral_inv(fft_oversampled, rotational = rot) """ Explanation: Tests Here we define various functions to batch test the images in the test folder. 
End of explanation """ %%timeit evaluate_invariants(example) %%timeit evaluate_invariants(example, rot = True) def bispectral_folder(folder_name = folder, rot = False): """ Evaluates all the invariants of the images in the selected folder, storing them in a dictionary with their names as keys. `folder_name`: path to the folder `rot`: if True we compute the rotational bispectrum """ # we store the results in a dictionary results = {} for filename in os.listdir(folder_name): infilename = os.path.join(folder_name, filename) if not os.path.isfile(infilename): continue base, extension = os.path.splitext(infilename) if extension == '.png': test_img = 1 - rgb2gray(plt.imread(infilename)) bispectrum = evaluate_invariants(test_img, rot = rot) results[os.path.splitext(filename)[0]] = bispectrum return results def bispectral_comparison(bispectrums, comparison = 'triangle', plot = True, log_scale = True): """ Returns the difference of the norms of the given invariants w.r.t. the comparison element. `bispectrums`: a dictionary with as keys the names of the images and as values their invariants `comparison`: the element to use as comparison """ if comparison not in bispectrums: print("The requested comparison is not in the folder") return bispectrum_diff = {} for elem in bispectrums: diff = LA.norm(bispectrums[elem]-bispectrums[comparison]) # we remove nan results if not np.isnan(diff): bispectrum_diff[elem] = diff return bispectrum_diff def bispectral_plot(bispectrums, comparison = 'triangle', log_scale = True): """ Plots the difference of the norms of the given invariants w.r.t. the comparison element (by default in logarithmic scale). `bispectrums`: a dictionary with as keys the names of the images and as values their invariants `comparison`: the element to use as comparison `log_scale`: wheter the plot should be in log_scale """ bispectrum_diff = bispectral_comparison(bispectrums, comparison = comparison) plt.plot(bispectrum_diff.values(),'ro') if log_scale == True: plt.yscale('log') for i in range(len(bispectrum_diff.values())): # if we plot in log scale, we do not put labels on items that are # too small, otherwise they exit the plot area. if log_scale and bispectrum_diff.values()[i] < 10**(-3): continue plt.text(i,bispectrum_diff.values()[i],bispectrum_diff.keys()[i][:3]) plt.title("Comparison with as reference '"+ comparison +"'") """ Explanation: Some timing tests. End of explanation """ comparisons_paper = ['triangle', 'rectangle', 'ellipse', 'etoile', 'diamond'] def extract_table_values(bispectrums, comparisons = comparisons_paper): """ Extract the values for the table of the paper. `bispectrums`: a dictionary with as keys the names of the images and as values their invariants `comparison`: list of elements to use as comparison Returns a list of tuples. Each tuple contains the name of the comparison element, the maximal value of the difference of the norm of the invariants with its rotated and the minimal values of the same difference with the other images. 
""" table_values = [] for elem in comparisons: diff = bispectral_comparison(bispectrums, comparison= elem, plot=False) l = len(elem) match = [x for x in diff.keys() if x[:l]==elem] not_match = [x for x in diff.keys() if x[:l]!=elem] max_match = max([ diff[k] for k in match ]) min_not_match = min([ diff[k] for k in not_match ]) table_values.append((elem,'%.2E' % (max_match),'%.2E' % min_not_match)) return table_values bispectrums = bispectral_folder() bispectrums_rotational = bispectral_folder(rot=True) extract_table_values(bispectrums) extract_table_values(bispectrums_rotational) """ Explanation: Construction of the table for the paper End of explanation """
samgoodgame/sf_crime
iterations/KK_scripts/KK_development_work/W207_Final_Project_errorAnalysis_updated_08_21_1930.ipynb
mit
# Additional Libraries %matplotlib inline import matplotlib.pyplot as plt # Import relevant libraries: import time import numpy as np import pandas as pd from sklearn.neighbors import KNeighborsClassifier from sklearn import preprocessing from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler from sklearn.naive_bayes import BernoulliNB from sklearn.naive_bayes import MultinomialNB from sklearn.naive_bayes import GaussianNB from sklearn.grid_search import GridSearchCV from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import log_loss from sklearn.linear_model import LogisticRegression from sklearn import svm from sklearn.neural_network import MLPClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier # Import Meta-estimators from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import BaggingClassifier from sklearn.ensemble import GradientBoostingClassifier # Import Calibration tools from sklearn.calibration import CalibratedClassifierCV # Set random seed and format print output: np.random.seed(0) np.set_printoptions(precision=3) """ Explanation: Kaggle San Francisco Crime Classification Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore Environment and Data End of explanation """ # Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above data_path = "./data/x_data_3.csv" df = pd.read_csv(data_path, header=0) x_data = df.drop('category', 1) y = df.category.as_matrix() # Impute missing values with mean values: #x_complete = df.fillna(df.mean()) x_complete = x_data.fillna(x_data.mean()) X_raw = x_complete.as_matrix() # Scale the data between 0 and 1: X = MinMaxScaler().fit_transform(X_raw) #### #X = np.around(X, decimals=2) #### # Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time: np.random.seed(0) shuffle = np.random.permutation(np.arange(X.shape[0])) X, y = X[shuffle], y[shuffle] # Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare # crimes from the data for quality issues. X_minus_trea = X[np.where(y != 'TREA')] y_minus_trea = y[np.where(y != 'TREA')] X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')] y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')] # Separate training, dev, and test data: test_data, test_labels = X_final[800000:], y_final[800000:] dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000] train_data, train_labels = X_final[100000:700000], y_final[100000:700000] calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000] # Create mini versions of the above sets mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000] mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000] mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000] # Create list of the crime type labels. 
This will act as the "labels" parameter for the log loss functions that follow crime_labels = list(set(y_final)) crime_labels_mini_train = list(set(mini_train_labels)) crime_labels_mini_dev = list(set(mini_dev_labels)) crime_labels_mini_calibrate = list(set(mini_calibrate_labels)) print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate)) #print(len(train_data),len(train_labels)) #print(len(dev_data),len(dev_labels)) print(len(mini_train_data),len(mini_train_labels)) print(len(mini_dev_data),len(mini_dev_labels)) #print(len(test_data),len(test_labels)) print(len(mini_calibrate_data),len(mini_calibrate_labels)) #print(len(calibrate_data),len(calibrate_labels)) """ Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets. End of explanation """ tuned_DT_calibrate_isotonic = RandomForestClassifier(min_impurity_split=1, n_estimators=100, bootstrap= True, max_features=15, criterion='entropy', min_samples_leaf=10, max_depth=None ).fit(train_data, train_labels) ccv_isotonic = CalibratedClassifierCV(tuned_DT_calibrate_isotonic, method = 'isotonic', cv = 'prefit') ccv_isotonic.fit(calibrate_data, calibrate_labels) ccv_predictions = ccv_isotonic.predict(dev_data) ccv_prediction_probabilities_isotonic = ccv_isotonic.predict_proba(dev_data) working_log_loss_isotonic = log_loss(y_true = dev_labels, y_pred = ccv_prediction_probabilities_isotonic, labels = crime_labels) print("Multi-class Log Loss with RF and calibration with isotonic is:", working_log_loss_isotonic) pd.DataFrame(np.amax(ccv_prediction_probabilities_isotonic, axis=1)).hist() """ Explanation: The Best RF Classifier End of explanation """ #clf_probabilities, clf_predictions, labels def error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels): """inputs: clf_probabilities = clf.predict_proba(dev_data) clf_predictions = clf.predict(dev_data) labels = dev_labels""" #buckets = [0.05, 0.15, 0.3, 0.5, 0.8] #buckets = [0.15, 0.25, 0.3, 1.0] correct = [0 for i in buckets] total = [0 for i in buckets] lLimit = 0 uLimit = 0 for i in range(len(buckets)): uLimit = buckets[i] for j in range(clf_probabilities.shape[0]): if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit): if clf_predictions[j] == labels[j]: correct[i] += 1 total[i] += 1 lLimit = uLimit print(sum(correct)) print(sum(total)) print(correct) print(total) #here we report the classifier accuracy for each posterior probability bucket accuracies = [] for k in range(len(buckets)): print(1.0*correct[k]/total[k]) accuracies.append(1.0*correct[k]/total[k]) print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \ %(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k])) plt.plot(buckets,accuracies) plt.title("Calibration Analysis") plt.xlabel("Posterior Probability") plt.ylabel("Classifier Accuracy") return buckets, accuracies #i think you'll need to look at how the posteriors are distributed in order to set the best bins in 'buckets' pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist() buckets = [0.15, 0.25, 0.3, 1.0] calibration_buckets, calibration_accuracies = error_analysis_calibration(buckets, clf_probabilities=bestLRPredictionProbabilities, \ clf_predictions=bestLRPredictions, \ labels=mini_dev_labels) """ Explanation: Error Analysis: Calibration End of explanation """ def error_analysis_classification_report(clf_predictions, labels): """inputs: clf_predictions = 
clf.predict(dev_data) labels = dev_labels""" print('Classification Report:') report = classification_report(labels, clf_predictions) print(report) return report classification_report = error_analysis_classification_report(clf_predictions=bestLRPredictions, \ labels=mini_dev_labels) """ Explanation: Error Analysis: Classification Report End of explanation """ crime_labels_mini_dev def error_analysis_confusion_matrix(label_names, clf_predictions, labels): """inputs: clf_predictions = clf.predict(dev_data) labels = dev_labels""" cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names)) cm.columns=label_names cm.index=label_names cm.to_csv(path_or_buf="./confusion_matrix.csv") #print(cm) return cm error_analysis_confusion_matrix(label_names=crime_labels_mini_dev, clf_predictions=bestLRPredictions, \ labels=mini_dev_labels) """ Explanation: Error Analysis: Confusion Matrix End of explanation """
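"""
Explanation: The raw counts are easier to scan as a heatmap than as a CSV; a minimal sketch using the matplotlib import above (the figure size and color map are arbitrary choices):
End of explanation
"""
cm = error_analysis_confusion_matrix(label_names=crime_labels_mini_dev, clf_predictions=bestLRPredictions, \
                                     labels=mini_dev_labels)
plt.figure(figsize=(12, 10))
plt.imshow(cm.values, interpolation='nearest', cmap='viridis')
plt.colorbar()
plt.xticks(range(len(cm.columns)), cm.columns, rotation=90)
plt.yticks(range(len(cm.index)), cm.index)
plt.xlabel('Predicted label')
plt.ylabel('True label')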
MBARIMike/biofloat
notebooks/save_to_odv.ipynb
mit
from biofloat import ArgoData, converters from os.path import join, expanduser ad = ArgoData(cache_file=join(expanduser('~'),'6881StnP_5903891.hdf'), verbosity=2) wmo_list = ad.get_cache_file_all_wmo_list() df = ad.get_float_dataframe(wmo_list) """ Explanation: Save to Ocean Data View file Load a biofloat DataFrame, apply WOA calibrated gain factor, and save it as an ODV spreadsheet Use the local cache file for float 5903891 that drifted around ocean station Papa. It's the file that was produced for compare_oxygen_calibrations.ipynb. End of explanation """ df.head() """ Explanation: Show top 5 records. End of explanation """ corr_df = df.dropna().copy() corr_df['DOXY_ADJUSTED'] *= 1.12 corr_df.head() """ Explanation: Remove NaNs and apply the gain factor from compare_oxygen_calibrations.ipynb. End of explanation """ converters.to_odv(corr_df, '6881StnP_5903891.txt') """ Explanation: Convert to ODV format and save in a .txt file. End of explanation """ from IPython.display import Image Image('../doc/screenshots/Screen_Shot_2015-11-25_at_1.42.00_PM.png') """ Explanation: Import as an ODV Spreadsheet and use the tool. End of explanation """
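"""
Explanation: As a quick sanity check, a minimal sketch: open the exported file and print its first few lines to confirm the header and the gain-corrected DOXY_ADJUSTED column were written (the exact header layout depends on the converter).
End of explanation
"""
# Print the first few lines of the exported ODV spreadsheet (a plain text file)
with open('6881StnP_5903891.txt') as odv_file:
    for _ in range(5):
        print(odv_file.readline().rstrip())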
Jhanelle/Jhanelle_New_Version_of_final_project
bin/Compiled_Codes_for_Final_Project.ipynb
mit
# Identify version of software used
pd.__version__
#Identify version of software used
np.__version__
# import libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#stats library
import statsmodels.api as sm
import scipy
#T-test is imported to complete the statistical analysis
from scipy.stats import ttest_ind
from scipy import stats
#The function below is used to show the plots within the notebook
%matplotlib inline
"""
Explanation: Loading Data Statistics for my data
End of explanation
"""
data=pd.read_csv('../data/Testdata.dat', delimiter=' ')
# Print the first 8 rows of the dataset
data.head(8)
#Print the last 8 rows of the dataset
data.tail(8)
# Commands used to check the title names in each column as some of the titles were omitted
data.dtypes.head()
"""
Explanation: Loading Data using Pandas
End of explanation
"""
# Here we extract only two columns from the data as these are the main variables for the statistical analysis
strain_df=data[['strain','speed']]
strain_df.head()
# Eliminate NaN from the dataset
strain_df=strain_df.dropna()
strain_df.head()
#Resample the data to group by strain
strain_resampled=strain_df.groupby('strain')
strain_resampled.head()
#Created a histogram to check the normal distribution of the data
strain_resampled.hist(column='speed', bins=50)
# I need help adding titles to these histograms
"""
Explanation: Hypothesis and Questions ..........
Things I need to do:
Format the date properly. I need help to use a regular expression to fix the format of my date
End of explanation
"""
# I know I should apply Kruskal-Wallis statistics to the data; to deal with the arrays,
# each strain's speed values are passed below as a separate positional argument:
scipy.stats.mstats.kruskalwallis(*[group['speed'].values for name, group in strain_resampled])
def test_mean1():
    '''This function was created to give the mean values for the different strains in the dataset.
The input of the function is the raw speed of the strains while the output is the mean of the strains tested''' mean=MX1027_mean.mean() assert mean < -1, 'The mean should be greater than 0.00' return(mean) #assert mean == speed > 0.00, 'The mean should be greater than 0' #assert mean == speed < 0.00, 'The mean should not be less than 0' MX1027=strain_df.groupby('strain').get_group('MX1027') MX1027.head() MX1027.mean() N2=strain_df.groupby('strain').get_group('N2') N2.head() N2.mean() def test_mean(): n=('N2') for n in N2: assert n == -1, 'The mean is greater than 0' assert n > 0, 'Yes, the mean is greater than 0' mean = N2.mean() return(mean) def test_mean2(): n= ('MX1027') for n in MX1027: assert n >= 0.1, "The mean is greater than 0.1" mean_2 = MX1027.mean() return (mean_2) print('mean is:', mean_2) test_mean() test_mean2() N2.mean() print(MX1027_mean.mean()) mean() MX1027_mean=['0.127953'] N2_mean= ['0.084662'] new_data= strain_df.iloc[3,1] new_data1= strain_df.iloc[4,1] new_data.mean() new_data new_data.mean() #def mean(): #mean=strain_resampled.mean() #return(mean) #mean() # more generic fucntion to find the mean def hope_mean(strain_df, speed): n= len(strain_df) if n == 0.127953: return 0.127953 hope_mean =(sum(strain_df.speed))/n print(hope_mean) return hope_mean #Create test functions of the mean of N2 def test_mean1(): """The input of the function is the mean of N2 where the output is the expected mean""" #obs = N2.mean() #exp = 0.084662 #assert obs == exp, ' The mean of N2 should be 0.084662' #Create test function for the mean of MX1027 #def test_mean2(): #"""The input of the function is the mean of MX1027 #where the output is the expected mean""" #obs = MX1027.mean() #exp= 0.127953 #assert obs == exp, ' The mean of MX1027 should be 0.127953' """ Explanation: Interpretation of Histograms Based on the histograms of the respective strain it is clear that the data does not follow a normal distribution. Therefore t-tests and linear regression cannot be applied to this data set as planned. End of explanation """
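"""
Explanation: Given the conclusion above that the data are not normally distributed, the Kruskal-Wallis test mentioned earlier can be applied directly to the two strains of interest; a minimal sketch, passing each group's speed values as a separate sample:
End of explanation
"""
# Non-parametric comparison of the two strains' speeds
h_stat, p_value = stats.kruskal(N2['speed'], MX1027['speed'])
print('Kruskal-Wallis H =', h_stat, ', p-value =', p_value)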
retnuh/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 3 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ return np.array(x)/255 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. End of explanation """ import tensorflow as tf def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. 
: x: List of sample Labels : return: Numpy array of one-hot encoded labels """ one_hot = np.array(tf.one_hot(x, 10).eval(session=tf.Session())) return one_hot """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode) """ Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) """ Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function return tf.placeholder(tf.float32, [None] + list(image_shape), name='x') def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function return tf.placeholder(tf.float32, [None, n_classes], name='y') def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ # TODO: Implement Function return tf.placeholder(tf.float32, name='keep_prob') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) """ Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. 
In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # Weight and bias weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.shape[-1].value, conv_num_outputs] , mean=0.0, stddev=0.1)) bias = tf.Variable(tf.zeros(conv_num_outputs)) # Apply Convolution conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='VALID') # Add bias conv_layer = tf.nn.bias_add(conv_layer, bias) # Apply activation function conv_layer = tf.nn.relu(conv_layer) pool_layer = tf.nn.max_pool(conv_layer, [1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[1], pool_strides[1], 1], padding='VALID') return pool_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. 
* We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation """ def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function t = 1 for d in map(lambda d: d.value, x_tensor.shape[1:]): t *= d dim = [-1, t] return tf.reshape(x_tensor, dim) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten) """ Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # Weight and bias weight = tf.Variable(tf.truncated_normal([x_tensor.shape[-1].value, num_outputs], mean=0.0, stddev=0.1)) bias = tf.Variable(tf.zeros(num_outputs)) # Apply Convolution # Apply activation function layer = tf.nn.relu_layer(x_tensor, weight, bias) return layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # Weight and bias weight = tf.Variable(tf.truncated_normal([x_tensor.shape[-1].value, num_outputs], mean=0.0, stddev=0.1)) bias = tf.Variable(tf.zeros(num_outputs)) layer = tf.nn.bias_add(tf.matmul(x_tensor, weight), bias) return layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ net = x # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: net = conv2d_maxpool(net, 10, (3,3), (1,1), (2,2), (1,1)) net = conv2d_maxpool(net, 20, (3,3), (1,1), (2,2), (1,1)) # net = conv2d_maxpool(net, 30, (3,3), (1,1), (2,2), (1,1)) # TODO: Apply a Flatten Layer # Function Definition from Above: flattened = flatten(net) # flattened = tf.nn.dropout(flattened, keep_prob) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: fc1 = fully_conn(flattened, 512) fc1 = tf.nn.dropout(fc1, keep_prob) fc2 = fully_conn(fc1, 256) # fc2 = tf.nn.dropout(fc2, keep_prob) fc3 = fully_conn(fc2, 128) # fc3 = tf.nn.dropout(fc3, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: out = output(fc3, 10) # TODO: return output return out """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) """ Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ session.run(optimizer, feed_dict={ x: feature_batch, y: label_batch, keep_prob: keep_probability}) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. 
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy, end='\n'):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # Evaluate loss and accuracy on the validation set, using the session that was passed in
    stats_cost = session.run(cost, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.0})
    stats_acc = session.run(accuracy, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.0})
    print("Cost:", stats_cost, "Acc:", stats_acc, end=end)

"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 30
batch_size = 1024
keep_probability = 0.4

"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory size:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
import timeit

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        start = timeit.default_timer()
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy, end='')
        end = timeit.default_timer()
        print(" time:", end - start)

"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)

"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
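"""
Explanation: For reference, the cell below sketches one possible implementation of the three input functions described in the Input section near the top of this project; their code is not included in this excerpt. This is only an illustrative sketch, not the graded solution: it assumes the same TensorFlow 1.x tf.placeholder API used throughout the notebook, and the tensor names "x", "y" and "keep_prob" follow the naming requirements stated in that section so the saved model can be reloaded.
End of explanation
"""
import tensorflow as tf


def neural_net_image_input(image_shape):
    # Placeholder for a batch of images; None allows a dynamic batch size.
    return tf.placeholder(
        tf.float32,
        shape=[None, image_shape[0], image_shape[1], image_shape[2]],
        name='x')


def neural_net_label_input(n_classes):
    # Placeholder for a batch of one-hot encoded labels.
    return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')


def neural_net_keep_prob_input():
    # Scalar placeholder for the dropout keep probability.
    return tf.placeholder(tf.float32, name='keep_prob')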
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/custom/custom-tabular-bq-managed-dataset.ipynb
apache-2.0
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" ! pip install {USER_FLAG} --upgrade google-cloud-aiplatform """ Explanation: Training a TensorFlow model on BigQuery data <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/custom-tabular-bq-managed-dataset.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/custom-tabular-bq-managed-dataset.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/custom/custom-tabular-bq-managed-dataset.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> Overview This tutorial demonstrates how to use the Vertex AI SDK for Python to train and deploy a custom tabular classification model for online prediction. Dataset The dataset used for this tutorial is the penguins dataset from BigQuery public datasets. In this version of the dataset, you will use only the fields culmen_length_mm, culmen_depth_mm, flipper_length_mm, body_mass_g to predict the penguins species (species). Objective In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then get a prediction from the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console. The steps performed include: Create a Vertex AI custom TrainingPipeline for training a model. Train a TensorFlow model. Deploy the Model resource to a serving Endpoint resource. Make a prediction. Undeploy the Model resource. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex AI SDK for Python. End of explanation """ ! pip install {USER_FLAG} -U google-cloud-storage """ Explanation: Install the latest version of google-cloud-storage library as well. End of explanation """ ! pip install {USER_FLAG} -U "google-cloud-bigquery[all]" """ Explanation: Install the latest version of google-cloud-bigquery library as well. End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed everything, you need to restart the notebook kernel so it can find the packages. 
End of explanation """ import os PROJECT_ID = "" if not os.getenv("IS_TESTING"): # Get your Google Cloud project ID from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) """ Explanation: Before you begin Select a GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU" Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you might be able to get your project ID using gcloud. End of explanation """ if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} """ Explanation: Otherwise, set your project ID here. End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. End of explanation """ import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. 
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_URI = "gs://[your-bucket-name]" REGION = "[your-region]" # @param {type:"string"} if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]": BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP if REGION == "[your-region]": REGION = "us-central1" """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI. End of explanation """ ! gsutil mb -l $REGION $BUCKET_URI """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! gsutil ls -al $BUCKET_URI """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import json import os import sys import numpy as np from google.cloud import aiplatform, bigquery from google.cloud.aiplatform import gapic as aip aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI) """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import Vertex AI SDK for Python Import the Vertex AI SDK for Python into your Python environment and initialize it. End of explanation """ TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) """ Explanation: Set hardware accelerators You can set hardware accelerators for both training and prediction. Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) See the locations where accelerators are available. Otherwise specify (None, None) to use a container image to run on a CPU. Learn which accelerators are available in your region. End of explanation """ TRAIN_VERSION = "tf-gpu.2-8" DEPLOY_VERSION = "tf2-gpu.2-8" TRAIN_IMAGE = "us-docker.pkg.dev/vertex-ai/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(DEPLOY_VERSION) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) """ Explanation: Set pre-built containers Vertex AI provides pre-built containers to run training and prediction. 
For the latest list, see Pre-built containers for training and Pre-built containers for prediction End of explanation """ MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE) """ Explanation: Set machine types Next, set the machine types to use for training and prediction. Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction. machine type n1-standard: 3.75GB of memory per vCPU n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: The following is not supported for training: standard: 2 vCPUs highcpu: 2, 4 and 8 vCPUs Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. Learn which machine types are available for training and which machine types are available for prediction End of explanation """ BQ_SOURCE = "bq://bigquery-public-data.ml_datasets.penguins" # Calculate mean and std across all rows NA_VALUES = ["NA", "."] # Set up BigQuery clients bqclient = bigquery.Client(project=PROJECT_ID) # Download a table def download_table(bq_table_uri: str): # Remove bq:// prefix if present prefix = "bq://" if bq_table_uri.startswith(prefix): bq_table_uri = bq_table_uri[len(prefix) :] table = bigquery.TableReference.from_string(bq_table_uri) rows = bqclient.list_rows( table, ) return rows.to_dataframe() # Remove NA values def clean_dataframe(df): return df.replace(to_replace=NA_VALUES, value=np.NaN).dropna() def calculate_mean_and_std(df): # Calculate mean and std for each applicable column mean_and_std = {} dtypes = list(zip(df.dtypes.index, map(str, df.dtypes))) # Normalize numeric columns. for column, dtype in dtypes: if dtype == "float32" or dtype == "float64": mean_and_std[column] = { "mean": df[column].mean(), "std": df[column].std(), } return mean_and_std dataframe = download_table(BQ_SOURCE) dataframe = clean_dataframe(dataframe) mean_and_std = calculate_mean_and_std(dataframe) print("The mean and stds for each column are: " + str(mean_and_std)) # Write to a file MEAN_AND_STD_JSON_FILE = "mean_and_std.json" with open(MEAN_AND_STD_JSON_FILE, "w") as outfile: json.dump(mean_and_std, outfile) # Save to the staging bucket ! gsutil cp {MEAN_AND_STD_JSON_FILE} {BUCKET_URI} """ Explanation: Prepare the data To improve the convergence of the custom deep learning model, normalize the data. To prepare for this, calculate the mean and standard deviation for each numeric column. Pass these summary statistics to the training script to normalize the data before training. Later, during prediction, use these summary statistics again to normalize the testing data. End of explanation """ dataset = aiplatform.TabularDataset.create( display_name="sample-penguins", bq_source=BQ_SOURCE ) """ Explanation: Create a managed tabular dataset from BigQuery dataset Your first step in training a model is to create a managed dataset instance. 
End of explanation """ JOB_NAME = "custom_job_" + TIMESTAMP if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 BATCH_SIZE = 10 CMDARGS = [ "--epochs=" + str(EPOCHS), "--batch_size=" + str(BATCH_SIZE), "--distribute=" + TRAIN_STRATEGY, "--mean_and_std_json_file=" + f"{BUCKET_URI}/{MEAN_AND_STD_JSON_FILE}", ] """ Explanation: Train a model There are two ways you can train a model using a container image: Use a Vertex AI pre-built container. If you use a pre-built training container, you must additionally specify a Python package to install into the container image. This Python package contains your training code. Use your own custom container image. If you use your own container, the container image must contain your training code. Define the command args for the training script Prepare the command-line arguments to pass to your training script. - args: The command line arguments to pass to the corresponding Python module. In this example, they are: - "--epochs=" + EPOCHS: The number of epochs for training. - "--batch_size=" + BATCH_SIZE: The number of batch size for training. - "--distribute=" + TRAIN_STRATEGY : The training distribution strategy to use for single or distributed training. - "single": single device. - "mirror": all GPU devices on a single compute instance. - "multi": all GPU devices on all compute instances. - "--mean_and_std_json_file=" + FILE_PATH: The file on Google Cloud Storage with pre-calculated means and standard deviations. End of explanation """ %%writefile task.py import argparse import tensorflow as tf import numpy as np import os import pandas as pd import tensorflow as tf from google.cloud import bigquery from google.cloud import storage # Read environmental variables training_data_uri = os.getenv("AIP_TRAINING_DATA_URI") validation_data_uri = os.getenv("AIP_VALIDATION_DATA_URI") test_data_uri = os.getenv("AIP_TEST_DATA_URI") # Read args parser = argparse.ArgumentParser() parser.add_argument('--epochs', dest='epochs', default=10, type=int, help='Number of epochs.') parser.add_argument('--batch_size', dest='batch_size', default=10, type=int, help='Batch size.') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='Distributed training strategy.') parser.add_argument('--mean_and_std_json_file', dest='mean_and_std_json_file', type=str, help='GCS URI to the JSON file with pre-calculated column means and standard deviations.') args = parser.parse_args() def download_blob(bucket_name, source_blob_name, destination_file_name): """Downloads a blob from the bucket.""" # bucket_name = "your-bucket-name" # source_blob_name = "storage-object-name" # destination_file_name = "local/path/to/file" storage_client = storage.Client() bucket = storage_client.bucket(bucket_name) # Construct a client side representation of a blob. # Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve # any content from Google Cloud Storage. As we don't need additional data, # using `Bucket.blob` is preferred here. blob = bucket.blob(source_blob_name) blob.download_to_filename(destination_file_name) print( "Blob {} downloaded to {}.".format( source_blob_name, destination_file_name ) ) def extract_bucket_and_prefix_from_gcs_path(gcs_path: str): """Given a complete GCS path, return the bucket name and prefix as a tuple. 
Example Usage: bucket, prefix = extract_bucket_and_prefix_from_gcs_path( "gs://example-bucket/path/to/folder" ) # bucket = "example-bucket" # prefix = "path/to/folder" Args: gcs_path (str): Required. A full path to a Google Cloud Storage folder or resource. Can optionally include "gs://" prefix or end in a trailing slash "/". Returns: Tuple[str, Optional[str]] A (bucket, prefix) pair from provided GCS path. If a prefix is not present, a None will be returned in its place. """ if gcs_path.startswith("gs://"): gcs_path = gcs_path[5:] if gcs_path.endswith("/"): gcs_path = gcs_path[:-1] gcs_parts = gcs_path.split("/", 1) gcs_bucket = gcs_parts[0] gcs_blob_prefix = None if len(gcs_parts) == 1 else gcs_parts[1] return (gcs_bucket, gcs_blob_prefix) # Download means and std def download_mean_and_std(mean_and_std_json_file): """Download mean and std for each column""" import json bucket, file_path = extract_bucket_and_prefix_from_gcs_path(mean_and_std_json_file) download_blob(bucket_name=bucket, source_blob_name=file_path, destination_file_name=file_path) with open(file_path, 'r') as file: return json.loads(file.read()) mean_and_std = download_mean_and_std(args.mean_and_std_json_file) # Single Machine, single compute device if args.distribute == 'single': if tf.test.is_gpu_available(): strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") else: strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0") # Single Machine, multiple compute device elif args.distribute == 'mirror': strategy = tf.distribute.MirroredStrategy() # Multiple Machine, multiple compute device elif args.distribute == 'multi': strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # Set up training variables LABEL_COLUMN = "species" UNUSED_COLUMNS = [] NA_VALUES = ["NA", "."] # Possible categorical values SPECIES = ['Adelie Penguin (Pygoscelis adeliae)', 'Chinstrap penguin (Pygoscelis antarctica)', 'Gentoo penguin (Pygoscelis papua)'] ISLANDS = ['Dream', 'Biscoe', 'Torgersen'] SEXES = ['FEMALE', 'MALE'] # Set up BigQuery clients bqclient = bigquery.Client() # Download a table def download_table(bq_table_uri: str): # Remove bq:// prefix if present prefix = "bq://" if bq_table_uri.startswith(prefix): bq_table_uri = bq_table_uri[len(prefix):] table = bigquery.TableReference.from_string(bq_table_uri) rows = bqclient.list_rows( table, ) return rows.to_dataframe(create_bqstorage_client=False) df_train = download_table(training_data_uri) df_validation = download_table(validation_data_uri) df_test = download_table(test_data_uri) # Remove NA values def clean_dataframe(df): return df.replace(to_replace=NA_VALUES, value=np.NaN).dropna() df_train = clean_dataframe(df_train) # df_validation = clean_dataframe(df_validation) df_validation = clean_dataframe(df_validation) _CATEGORICAL_TYPES = { "island": pd.api.types.CategoricalDtype(categories=ISLANDS), "species": pd.api.types.CategoricalDtype(categories=SPECIES), "sex": pd.api.types.CategoricalDtype(categories=SEXES), } def standardize(df, mean_and_std): """Scales numerical columns using their means and standard deviation to get z-scores: the mean of each numerical column becomes 0, and the standard deviation becomes 1. This can help the model converge during training. Args: df: Pandas df Returns: Input df with the numerical columns scaled to z-scores """ dtypes = list(zip(df.dtypes.index, map(str, df.dtypes))) # Normalize numeric columns. 
for column, dtype in dtypes: if dtype == "float32": df[column] -= mean_and_std[column]["mean"] df[column] /= mean_and_std[column]["std"] return df def preprocess(df): """Converts categorical features to numeric. Removes unused columns. Args: df: Pandas df with raw data Returns: df with preprocessed data """ df = df.drop(columns=UNUSED_COLUMNS) # Drop rows with NaN's df = df.dropna() # Convert integer valued (numeric) columns to floating point numeric_columns = df.select_dtypes(["int32", "float32", "float64"]).columns df[numeric_columns] = df[numeric_columns].astype("float32") # Convert categorical columns to numeric cat_columns = df.select_dtypes(["object"]).columns df[cat_columns] = df[cat_columns].apply( lambda x: x.astype(_CATEGORICAL_TYPES[x.name]) ) df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes) return df def convert_dataframe_to_dataset( df_train, df_validation, mean_and_std ): df_train = preprocess(df_train) df_validation = preprocess(df_validation) df_train_x, df_train_y = df_train, df_train.pop(LABEL_COLUMN) df_validation_x, df_validation_y = df_validation, df_validation.pop(LABEL_COLUMN) # Join train_x and eval_x to normalize on overall means and standard # deviations. Then separate them again. all_x = pd.concat([df_train_x, df_validation_x], keys=["train", "eval"]) all_x = standardize(all_x, mean_and_std) df_train_x, df_validation_x = all_x.xs("train"), all_x.xs("eval") y_train = np.asarray(df_train_y).astype("float32") y_validation = np.asarray(df_validation_y).astype("float32") # Convert to numpy representation x_train = np.asarray(df_train_x) x_test = np.asarray(df_validation_x) # Convert to one-hot representation y_train = tf.keras.utils.to_categorical(y_train, num_classes=len(SPECIES)) y_validation = tf.keras.utils.to_categorical(y_validation, num_classes=len(SPECIES)) dataset_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)) dataset_validation = tf.data.Dataset.from_tensor_slices((x_test, y_validation)) return (dataset_train, dataset_validation) # Create datasets dataset_train, dataset_validation = convert_dataframe_to_dataset(df_train, df_validation, mean_and_std) # Shuffle train set dataset_train = dataset_train.shuffle(len(df_train)) def create_model(num_features): # Create model Dense = tf.keras.layers.Dense model = tf.keras.Sequential( [ Dense( 100, activation=tf.nn.relu, kernel_initializer="uniform", input_dim=num_features, ), Dense(75, activation=tf.nn.relu), Dense(50, activation=tf.nn.relu), Dense(25, activation=tf.nn.relu), Dense(3, activation=tf.nn.softmax), ] ) # Compile Keras model optimizer = tf.keras.optimizers.RMSprop(lr=0.001) model.compile( loss="categorical_crossentropy", metrics=["accuracy"], optimizer=optimizer ) return model # Create the model with strategy.scope(): model = create_model(num_features=dataset_train._flat_shapes[0].dims[0].value) # Set up datasets NUM_WORKERS = strategy.num_replicas_in_sync # Here the batch size scales up by number of workers since # `tf.data.Dataset.batch` expects the global batch size. GLOBAL_BATCH_SIZE = args.batch_size * NUM_WORKERS dataset_train = dataset_train.batch(GLOBAL_BATCH_SIZE) dataset_validation = dataset_validation.batch(GLOBAL_BATCH_SIZE) # Train the model model.fit(dataset_train, epochs=args.epochs, validation_data=dataset_validation) tf.saved_model.save(model, os.getenv("AIP_MODEL_DIR")) df_test.head() """ Explanation: Training script In the next cell, write the contents of the training script, task.py. 
In summary, the script does the following:
Loads the data from the BigQuery table using the BigQuery Python client library.
Loads the pre-calculated mean and standard deviation from the Google Cloud Storage bucket.
Builds a model using the TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and batch size according to the arguments args.epochs and args.batch_size.
Gets the directory where the model artifacts should be saved from the environment variable AIP_MODEL_DIR. This variable is set by the training service.
Saves the trained model to the model directory.
End of explanation
"""
job = aiplatform.CustomTrainingJob(
    display_name=JOB_NAME,
    script_path="task.py",
    container_uri=TRAIN_IMAGE,
    requirements=["google-cloud-bigquery>=2.20.0", "db-dtypes"],
    model_serving_container_image_uri=DEPLOY_IMAGE,
)

MODEL_DISPLAY_NAME = "penguins-" + TIMESTAMP

# Start the training
if TRAIN_GPU:
    model = job.run(
        dataset=dataset,
        model_display_name=MODEL_DISPLAY_NAME,
        bigquery_destination=f"bq://{PROJECT_ID}",
        args=CMDARGS,
        replica_count=1,
        machine_type=TRAIN_COMPUTE,
        accelerator_type=TRAIN_GPU.name,
        accelerator_count=TRAIN_NGPU,
    )
else:
    model = job.run(
        dataset=dataset,
        model_display_name=MODEL_DISPLAY_NAME,
        bigquery_destination=f"bq://{PROJECT_ID}",
        args=CMDARGS,
        replica_count=1,
        machine_type=TRAIN_COMPUTE,
        accelerator_count=0,
    )

"""
Explanation: Train the model
Define your custom TrainingPipeline on Vertex AI. Use the CustomTrainingJob class to define the TrainingPipeline. The class takes the following parameters:
display_name: The user-defined name of this training pipeline.
script_path: The local path to the training script.
container_uri: The URI of the training container image.
requirements: The list of Python package dependencies of the script.
model_serving_container_image_uri: The URI of a container that can serve predictions for your model, either a pre-built container or a custom container.
Use the run function to start training. The function takes the following parameters:
args: The command line arguments to be passed to the Python script.
replica_count: The number of worker replicas.
model_display_name: The display name of the Model if the script produces a managed Model.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.
End of explanation
"""
DEPLOYED_NAME = "penguins_deployed-" + TIMESTAMP

TRAFFIC_SPLIT = {"0": 100}

MIN_NODES = 1
MAX_NODES = 1

if DEPLOY_GPU:
    endpoint = model.deploy(
        deployed_model_display_name=DEPLOYED_NAME,
        traffic_split=TRAFFIC_SPLIT,
        machine_type=DEPLOY_COMPUTE,
        accelerator_type=DEPLOY_GPU.name,
        accelerator_count=DEPLOY_NGPU,
        min_replica_count=MIN_NODES,
        max_replica_count=MAX_NODES,
    )
else:
    # CPU-only deployment: no accelerator type or count is needed
    endpoint = model.deploy(
        deployed_model_display_name=DEPLOYED_NAME,
        traffic_split=TRAFFIC_SPLIT,
        machine_type=DEPLOY_COMPUTE,
        accelerator_count=0,
        min_replica_count=MIN_NODES,
        max_replica_count=MAX_NODES,
    )

"""
Explanation: Deploy the model
Before you use your model to make predictions, you must deploy it to an Endpoint. You can do this by calling the deploy function on the Model resource.
This will do two things: Create an Endpoint resource for deploying the Model resource to. Deploy the Model resource to the Endpoint resource. The function takes the following parameters: deployed_model_display_name: A human readable name for the deployed model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify { "0": percent, model_id: percent, ... }, where model_id is the ID of an existing DeployedModel on the endpoint. The percentages must add up to 100. machine_type: The type of machine to use for training. accelerator_type: The hardware accelerator type. accelerator_count: The number of accelerators to attach to a worker replica. starting_replica_count: The number of compute instances to initially provision. max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned. Traffic split The traffic_split parameter is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint, and then set the percentage of traffic that goes to each instance. You can use a traffic split to introduce a new model gradually into production. For example, if you had one existing model in production with 100% of the traffic, you could deploy a new model to the same endpoint, direct 10% of traffic to it, and reduce the original model's traffic to 90%. This allows you to monitor the new model's performance while minimizing the distruption to the majority of users. Compute instance scaling You can specify a single instance (or node) to serve your online prediction requests. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1. If you want to use multiple nodes to serve your online prediction requests, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes. Endpoint The method will block until the model is deployed and eventually return an Endpoint object. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources. End of explanation """ import pandas as pd from google.cloud import bigquery UNUSED_COLUMNS = [] LABEL_COLUMN = "species" # Possible categorical values SPECIES = [ "Adelie Penguin (Pygoscelis adeliae)", "Chinstrap penguin (Pygoscelis antarctica)", "Gentoo penguin (Pygoscelis papua)", ] ISLANDS = ["Dream", "Biscoe", "Torgersen"] SEXES = ["FEMALE", "MALE"] _CATEGORICAL_TYPES = { "island": pd.api.types.CategoricalDtype(categories=ISLANDS), "species": pd.api.types.CategoricalDtype(categories=SPECIES), "sex": pd.api.types.CategoricalDtype(categories=SEXES), } def standardize(df, mean_and_std): """Scales numerical columns using their means and standard deviation to get z-scores: the mean of each numerical column becomes 0, and the standard deviation becomes 1. This can help the model converge during training. 
Args: df: Pandas df Returns: Input df with the numerical columns scaled to z-scores """ dtypes = list(zip(df.dtypes.index, map(str, df.dtypes))) # Normalize numeric columns. for column, dtype in dtypes: if dtype == "float32": df[column] -= mean_and_std[column]["mean"] df[column] /= mean_and_std[column]["std"] return df def preprocess(df, mean_and_std): """Converts categorical features to numeric. Removes unused columns. Args: df: Pandas df with raw data Returns: df with preprocessed data """ df = df.drop(columns=UNUSED_COLUMNS) # Drop rows with NaN's df = df.dropna() # Convert integer valued (numeric) columns to floating point numeric_columns = df.select_dtypes(["int32", "float32", "float64"]).columns df[numeric_columns] = df[numeric_columns].astype("float32") # Convert categorical columns to numeric cat_columns = df.select_dtypes(["object"]).columns df[cat_columns] = df[cat_columns].apply( lambda x: x.astype(_CATEGORICAL_TYPES[x.name]) ) df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes) return df def convert_dataframe_to_list(df, mean_and_std): df = preprocess(df, mean_and_std) df_x, df_y = df, df.pop(LABEL_COLUMN) # Normalize on overall means and standard deviations. df = standardize(df, mean_and_std) y = np.asarray(df_y).astype("float32") # Convert to numpy representation x = np.asarray(df_x) # Convert to one-hot representation return x.tolist(), y.tolist() x_test, y_test = convert_dataframe_to_list(dataframe, mean_and_std) """ Explanation: Make an online prediction request Send an online prediction request to your deployed model. Prepare test data Prepare test data by normalizing it and converting categorical values to numeric values. You must normalize these values in the same way that your normalized training data. In this example, perform testing with the same dataset that you used for training. In practice, you generally want to use a separate test dataset to verify your results. End of explanation """ predictions = endpoint.predict(instances=x_test) y_predicted = np.argmax(predictions.predictions, axis=1) correct = sum(y_predicted == np.array(y_test)) accuracy = len(y_predicted) print( f"Correct predictions = {correct}, Total predictions = {accuracy}, Accuracy = {correct/accuracy}" ) """ Explanation: Send the prediction request Now that you have test data, you can use it to send a prediction request. Use the Endpoint object's predict function, which takes the following parameters: instances: A list of penguin measurement instances. According to your custom model, each instance should be an array of numbers. You prepared this list in the previous step. The predict function returns a list, where each element in the list corresponds to the an instance in the request. In the output for each prediction, you will see the following: Confidence level for the prediction (predictions), between 0 and 1, for each of the ten classes. You can then run a quick evaluation on the prediction results: 1. np.argmax: Convert each list of confidence levels to a label 2. Compare the predicted labels to the actual labels 3. Calculate accuracy as correct/total End of explanation """ deployed_model_id = endpoint.list_models()[0].id endpoint.undeploy(deployed_model_id=deployed_model_id) """ Explanation: Undeploy the model To undeploy your Model resource from the serving Endpoint resource, use the endpoint's undeploy method with the following parameter: deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed. 
You can retrieve the deployed models using the endpoint's deployed_models property. Since this is the only deployed model on the Endpoint resource, you can omit traffic_split. End of explanation """ # Warning: Setting this to true will delete everything in your bucket delete_bucket = False # Delete the training job job.delete() # Delete the model model.delete() # Delete the endpoint endpoint.delete() if delete_bucket or os.getenv("IS_TESTING"): ! gsutil rm -r $BUCKET_URI """ Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Training Job Model Endpoint Cloud Storage Bucket End of explanation """
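"""
Explanation: As an optional sanity check that is not part of the original tutorial, you could list the models and endpoints that remain in the project after cleanup. The sketch below assumes the google-cloud-aiplatform SDK's Model.list() and Endpoint.list() class methods and that aiplatform.init() was already called with your project and region, as done earlier in this notebook.
End of explanation
"""
# Optional: confirm that the tutorial resources were removed.
remaining_models = aiplatform.Model.list()
remaining_endpoints = aiplatform.Endpoint.list()

print("Models still in the project:")
for m in remaining_models:
    print(" ", m.display_name, m.resource_name)

print("Endpoints still in the project:")
for e in remaining_endpoints:
    print(" ", e.display_name, e.resource_name)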
certik/climate
CO2 temperature analysis.ipynb
mit
%pylab inline
import urllib.request

"""
Explanation: Since the current concentration $N$ of $CO_2$ in the atmosphere is so high, the direct dependence of the surface temperature $T$ on $N$ should be given approximately by
$$ T = T_0 + \Delta T {\log{N \over N_0} \over \log 2}\quad\quad\quad\text{(1)} $$
Here $T_0$ is a reference temperature, say the temperature in 1980, and $N_0$ is the corresponding atmospheric concentration of $CO_2$, say 340 ppm at Mauna Loa in 1980.
From (1) we see that doubling the CO2 concentration (letting $N = 2N_0$) will increase $T$ by the temperature sensitivity $\Delta T$. Let's express $\Delta T$:
$$ \Delta T = (T - T_0) {\log 2 \over \log (N/N_0)}\quad\quad\quad\text{(2)} $$
$CO_2$ Data
End of explanation
"""
# Only execute this if you want to regenerate the downloaded file
open("data/co2_mm_mlo.txt", "wb").write(urllib.request.urlopen("ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_mm_mlo.txt").read())

D = loadtxt("data/co2_mm_mlo.txt")
years = D[:, 2]
average = D[:, 3]
interpolated = D[:, 4]
trend = D[:, 5]

"""
Explanation: Let's fetch the raw data of $CO_2$ measurements at Mauna Loa from the noaa.gov website:
End of explanation
"""
plot(years, interpolated, "r-", lw=1.5, label="monthly average")
plot(years, trend, "k-", label="trend")
xlabel("Year")
ylabel("$CO_2$ concentration [ppm]")
title("Atmospheric $CO_2$ concentrations at Mauna Loa")
legend(loc="upper left");

"""
Explanation: As explained in the file co2_mm_mlo.txt, the average column contains the raw $CO_2$ values averaged over each month, and some months are missing. The trend column then removes the "seasonal cycle" computed over a 7-year window. The missing values in trend are then linearly interpolated. Finally, the interpolated column contains the trend value plus the average seasonal cycle (i.e. average and interpolated contain the same values except for missing months, which are "intelligently" interpolated). We should do this analysis ourselves in the notebook directly from the average data only, but for now let's reuse this analysis.
End of explanation
"""
idx1980 = sum(years < 1980)
idx2010 = sum(years < 2010)
N0 = trend[idx1980]
N = trend[idx2010]
print("N0 = %.2f ppm (year %.3f)" % (N0, years[idx1980]))
print("N = %.2f ppm (year %.3f)" % (N, years[idx2010]))

"""
Explanation: Let's get the numbers for $N$ (year 2010) and $N_0$ (year 1980) by reading them off the trend curve:
End of explanation
"""
dTdt = 0.24995728742972512 # C / decade

"""
Explanation: Temperature changes
Warming from Berkeley Earth in the last 30 years (see a separate notebook for this calculation):
End of explanation
"""
dTdt = 0.13764588789937693 # C / decade

"""
Explanation: Warming over the past 30 years using satellite measurements (see a separate notebook for this calculation):
End of explanation
"""
dT = dTdt * 3 # 3 decades
dT

"""
Explanation: We'll use the satellite measurements, as arguably they have fewer systematic errors. Temperature difference:
End of explanation
"""
from math import log
deltaT = dT * log(2) / log(1.0*N/N0)
print("∆T = ", deltaT, "C")

"""
Explanation: Calculation of temperature sensitivity
From equation (2) we then directly calculate:
End of explanation
"""
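"""
Explanation: As a small illustrative extension that is not part of the original analysis, the cell below evaluates equation (2) for both of the 30-year warming trends quoted above (Berkeley Earth and satellite). It reuses N and N0 computed earlier and shows how strongly the estimate of $\Delta T$ depends on which temperature record is chosen.
End of explanation
"""
from math import log

# The two warming trends quoted earlier in this notebook, in C / decade.
trends = {
    "Berkeley Earth": 0.24995728742972512,
    "satellite": 0.13764588789937693,
}

for name, rate in trends.items():
    dT_30yr = rate * 3                                  # warming over 3 decades
    sensitivity = dT_30yr * log(2) / log(1.0 * N / N0)  # equation (2)
    print("%-15s dT = %.3f C, Delta T = %.2f C" % (name, dT_30yr, sensitivity))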
tpin3694/tpin3694.github.io
machine-learning/bag_of_words.ipynb
mit
# Load library import numpy as np from sklearn.feature_extraction.text import CountVectorizer import pandas as pd """ Explanation: Title: Bag Of Words Slug: bag_of_words Summary: How to encode unstructured text data as bags of words for machine learning in Python. Date: 2017-09-09 12:00 Category: Machine Learning Tags: Preprocessing Text Authors: Chris Albon <a alt="Bag Of Words" href="https://machinelearningflashcards.com"> <img src="bag_of_words/Bag_Of_Words_print.png" class="flashcard center-block"> </a> Preliminaries End of explanation """ # Create text text_data = np.array(['I love Brazil. Brazil!', 'Sweden is best', 'Germany beats both']) """ Explanation: Create Text Data End of explanation """ # Create the bag of words feature matrix count = CountVectorizer() bag_of_words = count.fit_transform(text_data) # Show feature matrix bag_of_words.toarray() """ Explanation: Create Bag Of Words End of explanation """ # Get feature names feature_names = count.get_feature_names() # View feature names feature_names """ Explanation: View Bag Of Words Matrix Column Headers End of explanation """ # Create data frame pd.DataFrame(bag_of_words.toarray(), columns=feature_names) """ Explanation: View As A Data Frame End of explanation """
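"""
Explanation: A brief addition that is not part of the original recipe: once the vectorizer has been fit, new documents can be encoded with the same vocabulary using transform. The sentence below is a made-up example; any word that was not seen during fitting (here "loves") is simply ignored.
End of explanation
"""
# Encode new, unseen text with the vocabulary learned above
new_text = np.array(['Sweden loves Brazil'])
new_bag_of_words = count.transform(new_text)

# View as a data frame with the same feature columns
pd.DataFrame(new_bag_of_words.toarray(), columns=feature_names)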
mne-tools/mne-tools.github.io
stable/_downloads/33d5dd5786fed13908838e94d55ac785/90_compute_covariance.ipynb
bsd-3-clause
import os.path as op

import mne
from mne.datasets import sample

"""
Explanation: Computing a covariance matrix
Many methods in MNE, including source estimation and some classification algorithms, require covariance estimations from the recordings. In this tutorial we cover the basics of sensor covariance computations and construct a noise covariance matrix that can be used when computing the minimum-norm inverse solution. For more information, see minimum_norm_estimates.
End of explanation
"""
data_path = sample.data_path()
raw_empty_room_fname = op.join(
    data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)
raw.info['bads'] += ['EEG 053']  # bads + 1 more

"""
Explanation: Source estimation methods such as MNE require a noise estimation from the recordings. In this tutorial we cover the basics of noise covariance and construct a noise covariance matrix that can be used when computing the inverse solution. For more information, see minimum_norm_estimates.
End of explanation
"""
raw_empty_room.info['bads'] = [
    bb for bb in raw.info['bads'] if 'EEG' not in bb]
raw_empty_room.add_proj(
    [pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']])

noise_cov = mne.compute_raw_covariance(
    raw_empty_room, tmin=0, tmax=None)

"""
Explanation: The definition of noise depends on the paradigm. In MEG it is quite common to use empty room measurements for the estimation of sensor noise. However, if you are dealing with evoked responses, you might want to also consider resting state brain activity as noise.
First we compute the noise covariance using the empty room recording. Note that you can also use only a part of the recording with the tmin and tmax arguments. That can be useful if you use resting state as a noise baseline. Here we use the whole empty room recording to compute the noise covariance (tmax=None is the same as the end of the recording, see :func:mne.compute_raw_covariance).
Keep in mind that you want to match your empty room dataset to your actual MEG data, processing-wise. Ensure that filters are all the same and if you use ICA, apply it to your empty-room and subject data equivalently. In this case we did not filter the data and we don't use ICA. However, we do have bad channels and projections in the MEG data, and, hence, we want to make sure they get stored in the covariance object.
End of explanation
"""
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
                    baseline=(-0.2, 0.0), decim=3,  # we'll decimate for speed
                    verbose='error')  # and ignore the warning about aliasing

"""
Explanation: Now that you have the covariance matrix in an MNE-Python object, you can save it to a file with :func:mne.write_cov. Later you can read it back using :func:mne.read_cov.
You can also use the pre-stimulus baseline to estimate the noise covariance. First we have to construct the epochs. When computing the covariance, you should use baseline correction when constructing the epochs. Otherwise the covariance matrix will be inaccurate. In MNE this is done by default, but just to be sure, we define it here manually.
End of explanation
"""
noise_cov_baseline = mne.compute_covariance(epochs, tmax=0)

"""
Explanation: Note that this method also attenuates any activity in your source estimates that resembles the baseline, whether you like it or not.
End of explanation """ noise_cov.plot(raw_empty_room.info, proj=True) noise_cov_baseline.plot(epochs.info, proj=True) """ Explanation: Plot the covariance matrices Try setting proj to False to see the effect. Notice that the projectors in epochs are already applied, so proj parameter has no effect. End of explanation """ noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto', rank=None) """ Explanation: How should I regularize the covariance matrix? The estimated covariance can be numerically unstable and tends to induce correlations between estimated source amplitudes and the number of samples available. The MNE manual therefore suggests to regularize the noise covariance matrix (see cov_regularization_math), especially if only few samples are available. Unfortunately it is not easy to tell the effective number of samples, hence, to choose the appropriate regularization. In MNE-Python, regularization is done using advanced regularization methods described in :footcite:p:EngemannGramfort2015. For this the 'auto' option can be used. With this option cross-validation will be used to learn the optimal regularization: End of explanation """ evoked = epochs.average() evoked.plot_white(noise_cov_reg, time_unit='s') """ Explanation: This procedure evaluates the noise covariance quantitatively by how well it whitens the data using the negative log-likelihood of unseen data. The final result can also be visually inspected. Under the assumption that the baseline does not contain a systematic signal (time-locked to the event of interest), the whitened baseline signal should be follow a multivariate Gaussian distribution, i.e., whitened baseline signals should be between -1.96 and 1.96 at a given time sample. Based on the same reasoning, the expected value for the :term:global field power (GFP) &lt;GFP&gt; is 1 (calculation of the GFP should take into account the true degrees of freedom, e.g. ddof=3 with 2 active SSP vectors): End of explanation """ noise_covs = mne.compute_covariance( epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True, rank=None) evoked.plot_white(noise_covs, time_unit='s') """ Explanation: This plot displays both, the whitened evoked signals for each channels and the whitened :term:GFP. The numbers in the GFP panel represent the estimated rank of the data, which amounts to the effective degrees of freedom by which the squared sum across sensors is divided when computing the whitened :term:GFP. The whitened :term:GFP also helps detecting spurious late evoked components which can be the consequence of over- or under-regularization. Note that if data have been processed using signal space separation (SSS) :footcite:TauluEtAl2005, gradiometers and magnetometers will be displayed jointly because both are reconstructed from the same SSS basis vectors with the same numerical rank. This also implies that both sensor types are not any longer statistically independent. These methods for evaluation can be used to assess model violations. Additional introductory materials can be found here. 
For expert use cases or debugging, the alternative estimators can also be
compared (see ex-evoked-whitening):
End of explanation
"""

evoked_meg = evoked.copy().pick('meg')
noise_cov['method'] = 'empty_room'
noise_cov_baseline['method'] = 'baseline'
evoked_meg.plot_white([noise_cov_baseline, noise_cov], time_unit='s')

"""
Explanation: This will plot the whitened evoked for the optimal estimator and display the
:term:GFP for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between the empty-room and event-related
covariances, hacking the "method" option so that their types are shown in the
legend of the plot.
End of explanation
"""
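# As mentioned earlier in this tutorial, covariance objects can be saved with
# mne.write_cov and read back later with mne.read_cov. This short sketch is an
# addition to the original tutorial; the file name is an assumption (MNE expects
# covariance file names to end in "-cov.fif").
cov_fname = 'sample_audvis-baseline-cov.fif'
mne.write_cov(cov_fname, noise_cov_baseline)
noise_cov_loaded = mne.read_cov(cov_fname)
print(noise_cov_loaded)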
sraejones/phys202-2015-work
assignments/assignment10/ODEsEx02.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.integrate import odeint from IPython.html.widgets import interact, fixed from IPython.html.widgets import interact, interactive, fixed """ Explanation: Ordinary Differential Equations Exercise 1 Imports End of explanation """ def lorentz_derivs(yvec, t, sigma, rho, beta): """Compute the the derivatives for the Lorentz system at yvec(t).""" # YOUR CODE HERE # raise NotImplementedError() x = yvec[0] y = yvec[1] z = yvec[2] dx = sigma*(y-x) dy = x*(rho-z)-y dz = x*y - beta*z return np.array([dx,dy,dz]) assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0]) """ Explanation: Lorenz system The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read: $$ \frac{dx}{dt} = \sigma(y-x) $$ $$ \frac{dy}{dt} = x(\rho-z) - y $$ $$ \frac{dz}{dt} = xy - \beta z $$ The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions. Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system. End of explanation """ def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0): """Solve the Lorenz system for a single initial condition. Parameters ---------- ic : array, list, tuple Initial conditions [x,y,z]. max_time: float The max time to use. Integrate with 250 points per time unit. sigma, rho, beta: float Parameters of the differential equation. Returns ------- soln : np.ndarray The array of the solution. Each row will be the solution vector at that time. t : np.ndarray The array of time points used. """ # YOUR CODE HERE # raise NotImplementedError() t = np.linspace(0, max_time, 250*max_time) soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho,beta), atol=1e-9, rtol=1e-9) return np.array(soln), np.array(t) assert True # leave this to grade solve_lorenz """ Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array. End of explanation """ N = 5 colors = plt.cm.hot(np.linspace(0,1,N)) for i in range(N): # To use these colors with plt.plot, pass them as the color argument print(colors[i]) def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0): """Plot [x(t),z(t)] for the Lorenz system. Parameters ---------- N : int Number of initial conditions and trajectories to plot. max_time: float Maximum time to use. sigma, rho, beta: float Parameters of the differential equation. """ # YOUR CODE HERE # raise NotImplementedError() plt.figure(figsize=(15,8)) np.random.seed(1) r = [] for i in range(0,10): data = (np.random.random(3)-0.5)*30.0 r.append(solve_lorentz(data, max_time, sigma, rho, beta)) for j in r: x = [p[0] for p in j[0]] z = [p[2] for p in j[0]] color = plt.cm.summer((x[0] + z[0]/60.0 - 0.5)) plt.plot(x, z, color=color) plt.xlabel('$x(t)$') plt.ylabel('$z(t)$') plt.title('Lorentz system') plot_lorentz() assert True # leave this to grade the plot_lorenz function """ Explanation: Write a function plot_lorentz that: Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. 
Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
"""

# YOUR CODE HERE
# raise NotImplementedError()
w = interactive(plot_lorentz, max_time=(1, 10, 1), N=(1, 50, 1),
                sigma=(0.0, 50.0, 0.1), rho=(0.0, 50.0, 0.1),
                beta=fixed(8.0/3.0))
w

"""
Explanation: Use interact to explore your plot_lorentz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation
"""
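# Optional follow-up sketch (an addition, not part of the assignment): the Lorenz
# system is sensitive to initial conditions, so two trajectories started a tiny
# distance apart should separate rapidly. This reuses solve_lorentz from above;
# the perturbation size and time span are arbitrary choices.
ic1 = np.array([1.0, 1.0, 1.0])
ic2 = ic1 + 1e-6
soln1, t = solve_lorentz(ic1, max_time=10)
soln2, _ = solve_lorentz(ic2, max_time=10)
separation = np.linalg.norm(soln1 - soln2, axis=1)
plt.figure(figsize=(10, 6))
plt.semilogy(t, separation)
plt.xlabel('$t$')
plt.ylabel('distance between trajectories')
plt.title('Sensitive dependence on initial conditions')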
chinapnr/python_study
Python 基础课程/Python Basic Lesson 05 - 字典 dict, 元组 tuple.ipynb
gpl-3.0
# 定义字典 # 访问字典中的 key-value d = {'Tom': 95, 'Mary': 90, 'Tracy': 92} print(d) print(d['Tom']) # 字典增加元素,直接定义值即可 d['Hugo'] = 85 print(d) # 修改字典元素的值 d['Tom'] = 97 print(d) # 字典是否存在某个 key print('Tom' in d) # 如果要获得不存在的 key 的 value,可以设置默认值 print(d.get('Tommy',80)) # 去获得不存在的 key 的 value,会报错 print(d['Tommy']) # 字典删除 key d = {'Tom': 95, 'Mary': 90, 'Tracy': 92} print(d) # 删除 key d.pop('Tom') print(d) # 获得字典的长度 # 定义一个空的字典 d = {} # 造一些字典内容 # str 函数将整数转换为字符串 for i in range(30): d['id_'+str(i)] = i*3 print(d) print(len(d)) """ Explanation: Lesson 5 v1.0.0, 2016.12 by David Yi v1.0.1, 2017.02 modified by Yimeng Zhang v1.1, 2020.4 edit by David Yi 本次内容要点 字典dict 用法 元组tuple 用法 思考:中文分词 字典dict 用法 字典是另一种可变容器模型,可存储任意类型对象。 字典的每个键值(key=>value)对用冒号(:)分割,每个对之间用逗号(,)分割,整个字典包括在花括号({})中,格式如 d = {key1 : value1, key2 : value2 } 字典中的 key 值不可以重复(第一次定义后不能重复定义)。 字典的几个特点: 1. 查找和插入的速度极快,不会随着key的增加而变慢; 2. 需要占用大量的内存,内存浪费多。 而 list 相反: 1. 查找和插入的时间随着元素的增加而增加; 2. 占用空间小,浪费内存很少。 字典的基本操作: * 创建字典 * 访问字典中的 key-value * 修改字典中的 key-value * 获得字典中指定 key 的 value * 删除字典中的 key(value 也就消失了) End of explanation """ # 创建元组,用小括号 t = ('Tom', 'Jerry', 'Mary') print(t) # 访问元组的元素 print(t[1]) # 元组创建后是不能修改的 # 会报错 t.append('Someone') # 元组创建后是不能修改的 # 像列表一样去定义值也会报错 # 'tuple' object does not support item assignment t[1] = 'aaa' # 看看 tuple 有什么方法,你会发现很少 print(dir(tuple)) # 创建复杂一点的元组 t1 = ['A', 'B', 'C'] t2 =(t1, 100, 200) print(t2) # 变通的实现"可变"元组内容 t1 = ['A', 'B', 'C'] t2 =(t1, 100, 200) print(t1) print(t2) # tuple的每个元素,指向永远不变,但指向的元素本身是可变的 t1.append('D') print(t1) print(t2) # 创建只有1个元素的元组 # 下面这样是不行的 t = (1) # l成了一个整数,因为这里的括号有歧义,被认作数学计算里的小括号 print(type(t)) # 1个元素的元组必须加逗号来消除歧义 t = (1,) print(type(t)) print(t) """ Explanation: 元组Tuple 用法 Tuple 也是一种有序列表,在存储数据方面和列表很相似,为了区分,我们叫它元组。 Tuple 一旦内容存储后,就不能修改;这样的好处是数据很安全。 应用范围:在我们需要使用列表功能的时候,但是又不需要改变这个列表的内容,用元组 Tuple 功能会很安全,不用担心程序中不小心修改了其内容。Python 在向函数传递多个参数的时候,就是采用元组,保证参数在被调用的过程中的安全。 End of explanation """ !pip install jieba import jieba # 全模式 # 把句子中所有的可以称此的词语都扫描出来,速度非常快,但是不能解决歧义 seg_list = jieba.cut("今天上海的天气怎么样", cut_all = True) print("Full Mode: " + "/ ".join(seg_list)) # 精确模式 # 试图将句子最精确的切开,适合文本分析 seg_list = jieba.cut("明天纽约下雨么", cut_all = False) print("Default Mode: " + "/ ".join(seg_list)) # 默认是精确模式 seg_list = jieba.cut("现在天气怎么样") print(", ".join(seg_list)) # 默认是精确模式 seg_list = jieba.cut("小明硕士毕业于中国科学院计算所,后在日本京都大学深造") print(", ".join(seg_list)) # 搜索引擎模式 # 在精确模式的基础上,对长词再次切分,提高召回率,适合用于搜索引擎分词 seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所,后在日本京都大学深造") print(", ".join(seg_list)) # 看看网络上的段子,分词带来的烦恼 seg_list = jieba.cut_for_search("黑夜总会过去") print(", ".join(seg_list)) seg_list = jieba.cut("黑夜总会过去", cut_all = True) print(", ".join(seg_list)) # 默认是精确模式 seg_list = jieba.cut("2016年第一季度支付事业部交易量报表") print(','.join(seg_list)) # 默认是精确模式 seg_list = jieba.cut("2016年第一季度支付事业部交易量报表") for i in seg_list: print(i) import jieba.posseg as pseg words = pseg.cut("我爱北京天安门") for word, flag in words: print('%s %s' % (word, flag)) """ Explanation: 思考 中文分词,是一个很有趣的话题,也是机器学习中关于语言处理的最基本的概念。今天我们看看 Python 怎么处理中文分词; 需要先安装 jieba 这个 Python 的库,使用 pip install jieba; 在 notebook 中执行命令可以在命令前面价格 !来进行; 导入 jieba 之后,第一次运行会有一个初始化过程,稍稍产生一些影响。 End of explanation """ print('\n'.join([''.join([('ILOVEYOU'[(x-y)%8]if((x*0.05)**2+(y*0.1)**2-1)**3-(x*0.05)**2*(y*0.1)**3<=0 else' ')for x in range(-30,30)])for y in range(15,-15,-1)])) """ Explanation: 词性 北大词性标注集 Ag     形语素     形容词性语素。形容词代码为a,语素代码g前面置以A。 a       形容词      取英语形容词adjective的第1个字母。 ad 副形词 直接作状语的形容词。形容词代码a和副词代码d并在一起。 an 名形词 具有名词功能的形容词。形容词代码a和名词代码n并在一起。 b       区别词      取汉字“别”的声母。 c       连词        
取英语连词conjunction的第1个字母。 Dg     副语素     副词性语素。副词代码为d,语素代码g前面置以D。 d       副词     取adverb的第2个字母,因其第1个字母已用于形容词。 e       叹词     取英语叹词exclamation的第1个字母。 f        方位词      取汉字“方”的声母。 g  语素    绝大多数语素都能作为合成词的“词根”,取汉字“根”的声母。 h       前接成分   取英语head的第1个字母。 i        成语        取英语成语idiom的第1个字母。 j        简称略语  取汉字“简”的声母。 k       后接成分 l        习用语     习用语尚未成为成语,有点“临时性”,取“临”的声母。 m       数词     取英语numeral的第3个字母,n,u已有他用。 Ng     名语素     名词性语素。名词代码为n,语素代码g前面置以N。 n   名词        取英语名词noun的第1个字母。 nr  人名        名词代码n和“人(ren)”的声母并在一起。 ns      地名     名词代码n和处所词代码s并在一起。 nt      机构团体    “团”的声母为t,名词代码n和t并在一起。 nz     其他专名    “专”的声母的第1个字母为z,名词代码n和z并在一起。  o       拟声词     取英语拟声词onomatopoeia的第1个字母。 p       介词     取英语介词prepositional的第1个字母。 q       量词        取英语quantity的第1个字母。 r       代词        取英语代词pronoun的第2个字母,因p已用于介词。 s       处所词     取英语space的第1个字母。 Tg     时语素      时间词性语素。时间词代码为t,在语素的代码g前面置以T。 t     时间词      取英语time的第1个字母。 u       助词        取英语助词auxiliary 的第2个字母,因a已用于形容词。 Vg     动语素      动词性语素。动词代码为v。在语素的代码g前面置以V。 v       动词        取英语动词verb的第一个字母。 vd     副动词      直接作状语的动词。动词和副词的代码并在一起。 vn     名动词      指具有名词功能的动词。动词和名词的代码并在一起。 w      标点符号    x       非语素字    非语素字只是一个符号,字母x通常用于代表未知数、符号。 y       语气词      取汉字“语”的声母。 z      状态词      取汉字“状”的声母的前一个字母。 延展思考: 能够根据分词和词性的情况,做一个简单的机器人聊天软件? 彩蛋: End of explanation """
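# A small follow-up sketch (an addition to the lesson, combining the dict and
# jieba sections above): use a dict to count how often each segmented word
# appears in a sentence. The example sentence is arbitrary.
import jieba

text = "今天上海的天气怎么样,明天上海会下雨么"
word_counts = {}
for word in jieba.cut(text):
    word_counts[word] = word_counts.get(word, 0) + 1
print(word_counts)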
drvinceknight/cfm
docs/_static/example-coursework/main.ipynb
mit
### BEGIN SOLUTION import sympy as sym x = sym.Symbol("x") y = 2 * x * (x - 3) * (x - 5) sym.diff(y, x) ### END SOLUTION q1_a_answer = _ feedback_text = """Your output is not a symbolic expression. You are expected to use sympy for this question. """ try: assert q1_a_answer.is_algebraic_expr(), feedback_text except AttributeError: print(feedback_text) x = sym.Symbol("x") expected_answer = 2 * x * (x - 5) + 2 * x * (x - 3) + (x - 3) * (2 * x - 10) feedback_text = f"""Your answer is not correct. The expected answer is {expected_answer}.""" assert sym.simplify(q1_a_answer - expected_answer) == 0, feedback_text """ Explanation: Computing for Mathematics - Example individual coursework This jupyter notebook contains questions that will resemble the questions in your individual coursework. Important Do not delete the cells containing: ``` BEGIN SOLUTION END SOLUTION ``` write your solution attempts in those cells. Note that this notebook also includes some cells that check your answers. Question 1 For each of the following functions output \(\frac{dy}{dx}\). a. \(y= 2 x (x - 3) (x - 5)\) End of explanation """ ### BEGIN SOLUTION y = (3 * x ** 3 + 6 * sym.sqrt(x) + 3) / (3 * x ** (sym.S(1) / 4)) sym.diff(y, x) ### END SOLUTION q1_b_answer = _ feedback_text = """Your output is not a symbolic expression. You are expected to use sympy for this question. """ try: assert q1_b_answer.is_algebraic_expr(), feedback_text except AttributeError: print(feedback_text) x = sym.Symbol("x") expected_answer = sym.diff((3 * x ** 3 + 6 * sym.sqrt(x) + 3) / (3 * x ** (sym.S(1) / 4)), x) feedback_text = f"""Your answer is not correct. The expected answer is {expected_answer}.""" assert sym.simplify(q1_b_answer - expected_answer) == 0, feedback_text """ Explanation: b. y =\(\frac{3x ^ 3 + 6 \sqrt{x} + 3) }{ (3 x ^{(1 / 4)})}\) End of explanation """ ### BEGIN SOLUTION y = 2 * x * (x - 3) * (x - 5) sym.diff(y, x) ### END SOLUTION q1_c_answer = _ feedback_text = """Your output is not a symbolic expression. You are expected to use sympy for this question. """ try: assert q1_c_answer.is_algebraic_expr(), feedback_text except AttributeError: print(feedback_text) x = sym.Symbol("x") expected_answer = sym.diff(2 * x * (x - 3) * (x - 5), x) feedback_text = f"""Your answer is not correct. The expected answer is {expected_answer}.""" assert sym.simplify(q1_c_answer - expected_answer) == 0, feedback_text """ Explanation: \(y=2 x (x - 3) (x - 5)\) End of explanation """ ### BEGIN SOLUTION f = - x ** 3 + 2 * x ** 2 + 3 * x f_dash = sym.diff(f, x) turning_points = sym.solveset(f_dash, x) ### END SOLUTION feedback_text = """ Your output is not a sympy Finite Set which is what is expected here. """ try: assert type(turning_points) is sym.FiniteSet except NameError: print("You did not create a variable called `turning_points`") expected_answer = sym.solveset(- 3 * x ** 2 + 4 * x + 3, x) feedback_text = f"""Your answer is not correct. The expected answer is obtained by equating the derivative of f to 0 which gives: {expected_answer} """ assert set(turning_points) == set(expected_answer), feedback_text """ Explanation: Question 2 Consider the functions \(f(x)=-x^3+2x^2+3x\) and \(g(x)=-x^3+3x^2-x+3\). a. Create a variable turning_points which has value the turning points of \(f(x)\). End of explanation """ ### BEGIN SOLUTION g = - x ** 3 + 3 * x ** 2 - x + 3 intersection_points = sym.solveset(sym.Eq(f, g), x) ### END SOLUTION feedback_text = """ Your output is not a sympy Finite Set which is what is expected here. 
""" try: assert type(intersection_points) is sym.FiniteSet except NameError: print("You did not create a variable called `turning_points`") expected_answer = sym.FiniteSet(1, 3) feedback_text = f"""Your answer is not correct. The expected answer is obtained by equating the f and g: {expected_answer} """ assert set(intersection_points) == set(expected_answer), feedback_text """ Explanation: b. Create a variable intersection_points which has value the points where \(f(x)\) and \(g(x)\) intersect. End of explanation """ ### BEGIN SOLUTION area_of_shaded_region = abs(sym.integrate(f, (x, 1, 3)) - sym.integrate(g, (x, 1, 3))) area_of_shaded_region ### END SOLUTION feedback_text = "Your output is not a sympy rational which is expected here" try: assert type(area_of_shaded_region) is sym.Rational, feedback_text except NameError: print("You did not create a variable called `area_of_shaded_region`") expected_answer = sym.S(4) / 3 feedback_text = f"""The expected answer is {expected_answer} which is obtained by calculating the definite integral of |f - g| between 1 and 3. """ assert float(area_of_shaded_region) == float(expected_answer), expected_answer """ Explanation: c. Using your answers to parts b., calculate the area of the region between \(f\) and \(g\). Assign this value to a variable area_of_shaded_region. End of explanation """ import itertools ### BEGIN SOLUTION letters = "MONGOLIA" words = list(itertools.combinations(letters, 4)) number_of_selections = len(words) ### END SOLUTION expected_answer = 70 feedback_text = f"""The expected number of selections is {expected_answer}. This is obtained using the `itertools.combinations` function.""" try: assert number_of_selections == expected_answer, feedback_text except NameError: print("You did not create a variable `number_of_selections`") """ Explanation: Question 3 Three letters are selected at random from the 8 letters of the word MONGOLIA, without regard to order. a. Create a variable number_of_selections with value the number of possible selections of 4 letters. End of explanation """ ### BEGIN SOLUTION probability_of_selecting_N = sum('N' in word for word in words) / len(words) ### END SOLUTION expected_answer = 35 / 70 feedback_text = f"""The expected probability is {expected_answer}. This is obtained using a conditional summation over the generated words.""" try: assert float(probability_of_selecting_N) == float(expected_answer), feedback_text except NameError: print("You did not create a variable `probability_of_selecting_P`") """ Explanation: b. Create a variable probability_of_selecting_N with value the probability that the letter N is included in the selection. End of explanation """ ### BEGIN SOLUTION words = list(itertools.permutations(letters, 4)) probability_of_selecting_GOAL = sum((["A", "G", "L", "O"] == sorted(word)) for word in words) / len(words) ### END SOLUTION expected_answer = 48 / 1680 feedback_text = f"""The expected probability is {expected_answer}. This is obtained using a conditional summation over the words generated using `itertools.permutations`.""" try: assert np.isclose(probability_of_selecting_GOAL, expected_answer), feedback_text except NameError: print("You did not create a variable `probability_of_selecting_TOP`") """ Explanation: c. letters are now selected at random, one at a time, from the 8 letters of the word MONGOLIA, and are placed in order in a line. Create a variable probability_of_selecting_GOAL with value the probability that the 4 letters can form the word GOAL. 
End of explanation """ def generate_x(n, p): ### BEGIN SOLUTION """ Gives the nth term of the sequence for a given value of p """ if n == 1: return 1 previous_x = generate_x(n - 1, p) return previous_x * (p + previous_x) ### END SOLUTION feedback_text = """You did not include a docstring. This is important to help document your code. It is done using triple quotation marks. For example: def get_remainder(m, n): \"\"\" This function returns the remainder of m when dividing by n \"\"\" ... Using that it's possible to access the docstring, one way to do this is to type: `get_remainder?` (which only works in Jupyter) or help(get_remainder). We can also comment code using `#` but this is completely ignored by Python so cannot be accessed in the same way. """ try: assert generate_x.__doc__ is not None, feedback_text except NameError: print("You did not create a variable called `area_of_shaded_region`") try: assert generate_x(n=1, p=2) == 1, f"Your function did not give the expected answer for n=1, p=2" assert generate_x(n=1, p=1) == 1, f"Your function did not give the expected answer for n=1, p=2" assert generate_x(n=5, p=1) == 1806, f"Your function did not give the expected answer for n=5, p=1" assert generate_x(n=5, p=2) == 65535, f"Your function did not give the expected answer for n=5, p=2" except NameError: print("You did not create a variable called `area_of_shaded_region`") """ Explanation: Question 4 A sequence is given by: \[ x_1 = 1\ x_{n + 1}= x_n(p + x_n) \] for \(p\ne0\). a. Define a python function generate_x(n, p) that returns the terms of that sequence. End of explanation """ ### BEGIN SOLUTION p = sym.Symbol("p") generate_x(n=2, p=p) ### END SOLUTION q4_a_answer = _ feedback_text = """Your output is not a symbolic expression. You are expected to use sympy for this question. """ try: assert q4_a_answer.is_algebraic_expr(), feedback_text except AttributeError: print(feedback_text) p = sym.Symbol("p") expected_answer = p + 1 feedback_text = f"""Your answer is not correct. The expected answer is {expected_answer}.""" assert sym.simplify(q4_a_answer - expected_answer) == 0, feedback_text """ Explanation: b. Output an expression for \(x_2\) in terms of \(p\). End of explanation """ ### BEGIN SOLUTION generate_x(3, p=p) ### END SOLUTION q4_b_answer = _ feedback_text = """Your output is not a symbolic expression. You are expected to use sympy for this question. """ try: assert q4_b_answer.is_algebraic_expr(), feedback_text except AttributeError: print(feedback_text) p = sym.Symbol("p") expected_answer = 2 * p ** 2 + 3 * p + 1 feedback_text = f"""Your answer is not correct. The expected answer is {expected_answer}.""" assert sym.simplify(q4_b_answer - expected_answer) == 0, feedback_text """ Explanation: b. Output an expression for \(x_3\) in terms of \(p\) End of explanation """ ### BEGIN SOLUTION sym.solveset(sym.Eq(generate_x(3, p=p), 1), p) ### END SOLUTION q4_c_answer = _ feedback_text = """ Your output is not a sympy Finite Set which is what is expected here. """ assert type(q4_c_answer) is sym.FiniteSet, feedback_text """ Explanation: c. Output the values of \(p\) for which \(x_3=1\). 
End of explanation
"""

values_of_n = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 200, 300]

### BEGIN SOLUTION
values = [generate_x(n=n, p=-sym.S(3) / 2) for n in values_of_n]
range_of_values = min(values), max(values)
### END SOLUTION

import numpy as np

expected_answer = (-sym.S(1) / 2, 1)
expected_answer = np.array(sorted(expected_answer), dtype=np.float64)
range_of_values = np.array(sorted(range_of_values), dtype=np.float64)

feedback_text = f"""Your answer is not correct, the expected answer is:

{expected_answer}

"""

try:
    assert np.allclose(expected_answer, range_of_values), feedback_text
except NameError:
    print("You did not create a variable called `range_of_values`")

"""
Explanation: d. Using the non-zero value of \(p\) calculated in the previous question, calculate \(x_n\) for \(n\in\{1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 200, 300\}\) and create a variable range_of_values with the minimum and maximum values of \(x_n\).
End of explanation
"""
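# Optional verification sketch (an addition, not part of the coursework): check
# that the non-zero value of p found in part c. really gives x_3 = 1, and look
# at the first few terms of the sequence, which turn out to alternate.
p_value = -sym.S(3) / 2
print(generate_x(n=3, p=p_value))  # expected to be 1
print([generate_x(n=n, p=p_value) for n in range(1, 9)])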
chengsoonong/crowdastro
notebooks/50_yan_rgz.ipynb
mit
from pprint import pprint import sys from astropy.coordinates import SkyCoord import h5py import numpy import sklearn.neighbors import seaborn sys.path.insert(1, '..') import crowdastro.active_learning.active_crowd as active_crowd import crowdastro.active_learning.passive_crowd as passive_crowd import crowdastro.active_learning.active_crowd_scalar as active_crowd_scalar CROWDASTRO_H5_PATH = '../data/crowdastro.h5' TRAINING_H5_PATH = '../data/training.h5' NORRIS_DAT_PATH = '../data/norris_2006_atlas_classifications_ra_dec_only.dat' # Load Norris labels. with h5py.File(TRAINING_H5_PATH, 'r') as training_f: ir_positions = training_f['positions'].value ir_tree = sklearn.neighbors.KDTree(ir_positions) with open(NORRIS_DAT_PATH, 'r') as norris_dat: norris_coords = [r.strip().split('|') for r in norris_dat] norris_labels = numpy.zeros((len(ir_positions))) for ra, dec in norris_coords: # Find a neighbour. skycoord = SkyCoord(ra=ra, dec=dec, unit=('hourangle', 'deg')) ra = skycoord.ra.degree dec = skycoord.dec.degree ((dist,),), ((ir,),) = ir_tree.query([(ra, dec)]) if dist < 0.1: norris_labels[ir] = 1 """ Explanation: Testing Yan et al. (2010, 2011) on the Radio Galaxy Zoo Let's run the crowd learning algorithm on the Radio Galaxy Zoo. End of explanation """ with h5py.File(CROWDASTRO_H5_PATH) as f_h5: print(sum(1 for i in f_h5['/atlas/cdfs/']['classification_usernames'] if not i) / len(f_h5['/atlas/cdfs/']['classification_usernames'])) """ Explanation: How many annotators do we have? How many labels are anonymously contributed? At the moment, I can only use the algorithm for non-anonymous users. How many are there? End of explanation """ with h5py.File(CROWDASTRO_H5_PATH) as f_h5: print(len({i for i in f_h5['/atlas/cdfs/']['classification_usernames'] if i})) """ Explanation: Only 15% of labels are contributed by anonymous users! That's great for the algorithm. How many users are there? End of explanation """ with h5py.File(CROWDASTRO_H5_PATH) as f_h5: annotators = sorted({i for i in f_h5['/atlas/cdfs/classification_usernames'] if i}) n_annotators = len(annotators) annotator_to_index = {j:i for i, j in enumerate(annotators)} n_examples = f_h5['/wise/cdfs/numeric'].shape[0] ir_tree = sklearn.neighbors.KDTree(f_h5['/wise/cdfs/numeric'][:, :2], metric='chebyshev') with h5py.File(CROWDASTRO_H5_PATH) as f_h5: labels = numpy.ma.MaskedArray(numpy.zeros((n_annotators, n_examples)), mask=numpy.ones((n_annotators, n_examples))) for (atlas_idx, ra, dec), c_user in zip( f_h5['/atlas/cdfs/classification_positions'], f_h5['/atlas/cdfs/classification_usernames'], ): if not c_user: continue t = annotator_to_index[c_user] atlas_ra, atlas_dec = f_h5['/atlas/cdfs/numeric'][atlas_idx, :2] # t has seen this ATLAS subject, so unmask everything within 1' Chebyshev distance (the radius of an RGZ subject). nearby = ir_tree.query_radius([[atlas_ra, atlas_dec]], 1 / 60)[0] labels.mask[t, nearby] = 0 # Label the point nearest the classification as 1. # (The others are 0 by default.) if numpy.isnan(ra) or numpy.isnan(dec): continue point = ir_tree.query([[ra, dec]], return_distance=False)[0] labels[t, point] = 1 import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(15, 10)) plt.subplot(1, 2, 1) plt.hist((~labels.mask).sum(axis=0)) plt.xlabel('Number of viewings') plt.ylabel('IR objects') plt.subplot(1, 2, 2) plt.hist((~labels.mask).sum(axis=1), bins=numpy.linspace(0, 25000, 200)) plt.xlim((0, 1000)) plt.xlabel('Number of annotations') plt.ylabel('Annotators') """ Explanation: There are 1193 labellers. 
That's big but hopefully my code can handle it (and if not I'll have to change my methodology a bit). Retrieving labels Let's pull out some labels. This involves matching each IR object to a label for each annotator. If a IR object never appears in a subject that the annotator has labelled, then it should be masked. End of explanation """ import sklearn.metrics accuracies = [] annotator_to_accuracy = {} for t in range(n_annotators): mask = labels[t].mask cm = sklearn.metrics.confusion_matrix(norris_labels[~mask], labels[t, ~mask]).astype(float) if cm.shape == (1, 1): continue tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] if not (n and p): continue ba = (tp / p + tn / n) / 2 accuracies.append(ba) annotator_to_accuracy[t] = ba print('{:.02%} of labellers have a balanced accuracy.'.format(len(accuracies) / n_annotators)) plt.hist(accuracies, color='grey', bins=20) plt.xlabel('Balanced accuracy') plt.ylabel('Number of annotators') plt.show() print('Average: ({:.02f} +- {:.02f})%'.format(numpy.mean(accuracies) * 100, numpy.std(accuracies) * 100)) """ Explanation: How good are the annotators? #127 What is the distribution of balanced accuracies for each annotator? Can we estimate $p(y_i^{(t)} | x_i, z_i)$? End of explanation """ experts = ("42jkb", "ivywong", "stasmanian", "klmasters", "Kevin", "akapinska", "enno.middelberg", "xDocR", "DocR", "vrooje", "KWillett") print([expert for expert in experts if expert.encode('ascii') in annotators]) """ Explanation: How many annotators are experts? End of explanation """ counts = [(numpy.ma.sum(labels[t] == 1), t) for t in range(n_annotators)] counts.sort() pprint([(annotator_to_accuracy[t], t, count) for count, t in counts[-10:]]) top_10 = sorted([t for _, t in counts[-10:]]) for annotator, count in [(annotators[t], count) for count, t in reversed(counts)]: print(annotator.decode('utf-8'), '\t', count) """ Explanation: How many positive examples have the top 10 annotators labelled? End of explanation """ top_labels = labels[top_10] positive_bool = numpy.any(top_labels, axis=0) positives = numpy.arange(top_labels.shape[1])[positive_bool] non_positives = numpy.arange(top_labels.shape[1])[~positive_bool] while positives.shape[0] < non_positives.shape[0]: new_positives = positives[:] numpy.random.shuffle(new_positives) positives = numpy.concatenate([positives, new_positives]) positives = positives[:non_positives.shape[0]] upsampled = numpy.concatenate([positives, non_positives]) upsampled.sort() upsampled_train, upsampled_test = sklearn.cross_validation.train_test_split(downsampled) upsampled_train.sort() upsampled_test.sort() """ Explanation: Running the algorithm on the top 10 annotators Let's throw in just the top 10 annotators #126 and see how it goes. First, I'll upsample the positive examples. I'll count a "negative" example as anything that doesn't have any positive classifications. 
End of explanation """ print(upsampled_train.shape, upsampled_test.shape) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][upsampled_train, :] res = passive_crowd.train(x, top_labels.astype(bool)[:, upsampled_train], lr_init=True) import sklearn.metrics with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.predict(res[0], res[1], x[upsampled_test, :]) cm = sklearn.metrics.confusion_matrix(norris_labels[upsampled_test], pred) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 print(ba) print(cm) import seaborn, matplotlib.pyplot as plt %matplotlib inline with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.logistic_regression(res[0], res[1], x[upsampled_test, :]) pos_pred = pred[norris_labels[upsampled_test] == 1] neg_pred = pred[norris_labels[upsampled_test] == 0] assert pos_pred.shape[0] + neg_pred.shape[0] == pred.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_pred, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_pred, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.xlim((0, 1)) plt.show() """ Explanation: Now I can run the algorithm. End of explanation """ simulated_norris_labels = numpy.ma.MaskedArray(numpy.tile(norris_labels, (2, 1)), False) # mask=numpy.random.binomial(1, 0.5, size=(5, 24140))) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][upsampled_train, :] res = passive_crowd.train(x, simulated_norris_labels.astype(bool)[:, upsampled_train], lr_init=True) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.predict(res[0], res[1], x[upsampled_test, :]) cm = sklearn.metrics.confusion_matrix(norris_labels[upsampled_test], pred) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 print(ba) print(cm) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = res[0].dot(x[upsampled_test, :].T) + res[1] pos_score = score[norris_labels[upsampled_test] == 1] neg_score = score[norris_labels[upsampled_test] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.show() with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = res[0].dot(x[upsampled_train, :].T) + res[1] pos_score = score[norris_labels[upsampled_train] == 1] neg_score = score[norris_labels[upsampled_train] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.xlim((-10, 25)) plt.show() """ Explanation: Running the algorithm on simulated labellers, no noise I'll use the Norris labels to generate true labels for a fully observed crowd labelling scenario. 
End of explanation """ import sklearn.linear_model lr = sklearn.linear_model.LogisticRegression(C=100) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] lr.fit(x[downsampled_train, :], norris_labels[downsampled_train]) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = lr.predict(x[downsampled_test, :]) cm = sklearn.metrics.confusion_matrix(norris_labels[downsampled_test], pred) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 print(ba) print(cm) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = lr.decision_function(x[downsampled_test, :]) pos_score = score[norris_labels[downsampled_test] == 1] neg_score = score[norris_labels[downsampled_test] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.show() with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = lr.decision_function(x[downsampled_train, :]) pos_score = score[norris_labels[downsampled_train] == 1] neg_score = score[norris_labels[downsampled_train] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.show() """ Explanation: Norris Baseline Just a quick comparison with the Norris labels, fully observed and one annotator. End of explanation """ with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][downsampled_train, :] res = active_crowd_scalar.train(x, top_labels.astype(bool)[:, downsampled_train], lr_init=True) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.predict(res[0], res[1], x[downsampled_test, :]) cm = sklearn.metrics.confusion_matrix(norris_labels[downsampled_test], pred) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 print(ba) print(cm) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = res[0].dot(x[downsampled_test, :].T) + res[1] pos_score = score[norris_labels[downsampled_test] == 1] neg_score = score[norris_labels[downsampled_test] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.xlim((-20, 20)) plt.show() with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = res[0].dot(x[downsampled_train, :].T) + res[1] pos_score = score[norris_labels[downsampled_train] == 1] neg_score = score[norris_labels[downsampled_train] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.xlim((-20, 20)) plt.show() res[2] """ Explanation: Top 10 annotators with scalar $\eta_t$ End of explanation """ cov = numpy.ma.cov(labels) print(cov, cov.shape) plt.imshow(cov, interpolation='None') """ Explanation: Clustering annotators First, let's make a covariance matrix. 
End of explanation """ import sklearn.cluster, collections kmc = sklearn.cluster.KMeans(5) kmc.fit(cov) clusters = kmc.predict(cov) """ Explanation: As expected, lots of unknowns. We'll press on nevertheless! End of explanation """ cluster_labels = numpy.ma.MaskedArray(numpy.zeros((5, labels.shape[1])), mask=numpy.zeros((5, labels.shape[1]))) for c in range(5): for i in range(labels.shape[1]): this_cluster_labels = labels[clusters == c, i] # Compute the majority vote. counter = collections.Counter(this_cluster_labels[~this_cluster_labels.mask]) if counter: cluster_labels[c, i] = max(counter, key=counter.get) else: cluster_labels.mask[c, i] = True """ Explanation: Now, we'll do a majority vote over these clusters. End of explanation """ def balanced_accuracy(y_true, y_pred): try: cm = sklearn.metrics.confusion_matrix(y_true[~y_pred.mask], y_pred[~y_pred.mask]) except AttributeError: cm = sklearn.metrics.confusion_matrix(y_true, y_pred) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 return ba i_tr, i_te = sklearn.cross_validation.train_test_split(numpy.arange(labels.shape[1])) with h5py.File(TRAINING_H5_PATH) as f_h5: features = f_h5['features'].value for c in range(5): labels_ = cluster_labels[c] lr = sklearn.linear_model.LogisticRegression(class_weight='balanced') lr.fit(features[i_tr], labels_[i_tr]) print(balanced_accuracy(norris_labels[i_te], lr.predict(features[i_te]))) """ Explanation: Now let's try a basic logistic regression on each of them. End of explanation """ with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][downsampled_train, :] res = active_crowd_scalar.train(x, cluster_labels.astype(bool)[:, downsampled_train], lr_init=True) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.predict(res[0], res[1], x[downsampled_test, :]) cm = sklearn.metrics.confusion_matrix(norris_labels[downsampled_test], pred) ba = balanced_accuracy(norris_labels[downsampled_test], pred) print(cm) print(ba) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = res[0].dot(x[downsampled_test, :].T) + res[1] pos_score = score[norris_labels[downsampled_test] == 1] neg_score = score[norris_labels[downsampled_test] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.xlim((-20, 20)) plt.show() """ Explanation: Next, we'll put it into the scalar $\eta_t$ implementation. End of explanation """ with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][downsampled_train, :] res = active_crowd.train(x, cluster_labels.astype(bool)[:, downsampled_train], lr_init=True) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.predict(res[0], res[1], x[downsampled_test, :]) cm = sklearn.metrics.confusion_matrix(norris_labels[downsampled_test], pred) ba = balanced_accuracy(norris_labels[downsampled_test], pred) print(cm) print(ba) plt.plot(res[2].T) plt.xscale('log') plt.legend(range(5)) print(res[3]) """ Explanation: Now let's try the full algorithm. 
End of explanation """ import sklearn.decomposition pca = sklearn.decomposition.PCA(n_components=10) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][downsampled_train, :] pca.fit(x) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'][downsampled_train, :] res = active_crowd.train(pca.transform(x), labels.astype(bool)[:, downsampled_train], lr_init=True) seaborn.distplot(res[3]) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] pred = passive_crowd.predict(res[0], res[1], pca.transform(x[downsampled_test, :])) cm = sklearn.metrics.confusion_matrix(norris_labels[downsampled_test], pred) ba = balanced_accuracy(norris_labels[downsampled_test], pred) print(cm) print(ba) with h5py.File(TRAINING_H5_PATH) as f_h5: x = f_h5['features'] score = res[0].dot(pca.transform(x[downsampled_test, :]).T) + res[1] pos_score = score[norris_labels[downsampled_test] == 1] neg_score = score[norris_labels[downsampled_test] == 0] assert pos_score.shape[0] + neg_score.shape[0] == score.shape[0] plt.figure(figsize=(10, 5)) seaborn.distplot(pos_score, rug=True, hist=False, color='green', rug_kws={'alpha': 0.1}) seaborn.distplot(neg_score, rug=True, hist=False, color='red', rug_kws={'alpha': 0.1}) plt.xlim((-100, 100)) plt.show() """ Explanation: PCA on $X$? End of explanation """
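# A rough majority-vote baseline over all annotators, for comparison with the
# models above (an added sketch, not part of the original analysis; the 0.5
# threshold and treating completely unseen examples as negative are assumptions).
majority_vote = numpy.ma.mean(labels, axis=0) >= 0.5
majority_vote = numpy.ma.filled(majority_vote, False).astype(float)
print(balanced_accuracy(norris_labels[i_te], majority_vote[i_te]))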
planetlabs/notebooks
jupyter-notebooks/analytics/case_study_syria_idp_camps.ipynb
apache-2.0
import os # if your Planet API Key is not set as an environment variable, you can paste it below if os.environ.get('PL_API_KEY', ''): API_KEY = os.environ.get('PL_API_KEY', '') else: API_KEY = 'PASTE YOUR API KEY HERE' # construct auth tuple for use in the requests library BASIC_AUTH = (API_KEY, '') """ Explanation: Planet Analytics API Tutorial <h1 style="margin-top:10px;">Case Study: Flood and Displacement Mapping in Syria</h1> </div> <div class="content-block"> ## Overview 1. [Setup](#1.-Setup) 2. [Case Study](#2.-Case-Study) 3. [Mapping our Area of Interest](#3.-Mapping-AOI) 4. [Working with Collections](#4.-Working-with-Collections) 5. [Parsing Results](#5.-Parse-Results-Links) 6. [Segmentation Results](#6.-Segmentation-Results) 7. [Quantifying Change](#7.-Quantifying-Change) 8. [Line Charts](#8.-Line-Charts) </div> 1. Setup To run through this notebook, you will need access to the following: - A Planet account and Planet API Key - Access to the Analytics API End of explanation """ BASE_URL = "https://api.planet.com/analytics/" """ Explanation: Set the base url for the Planet Analytic Feeds product See the Analytics API Docs for more details. End of explanation """ import requests feed_list_url = BASE_URL + 'feeds' resp = requests.get(feed_list_url, auth=BASIC_AUTH, params={'limit': 1}) if resp.status_code == 200: print('Yay, you can access the Analytics API') else: print('Something is wrong:', resp.content) """ Explanation: Check API Connection End of explanation """ buildings_sub_id = '76d06ec1-8507-4035-97cd-b3ea87b5b699' roads_sub_id = '6696da5c-88b8-49c2-a423-c936c0f386a5' """ Explanation: 2. Case Study Near the end of 2018, Syria experienced severe rainfall and flooding across much of the Northern latitudes of the country. This flooding had devastating impacts to several Internally Displaced Persons (IDP) camps across the region, as reported in several small media outlets. Today, we are interested using Planet's Analytic Feeds to explore both the development of an IDP Camp south of Al Hasakah, and the impacts of the flooding of this area in subsequent years. We will use the following buildings and roads suscription IDs to collect our data. Note: If you do not have access to these subscriptions, please get in touch. End of explanation """ subscriptions_url = BASE_URL + 'subscriptions/' syria_buildings = requests.get(subscriptions_url + buildings_sub_id, auth=BASIC_AUTH).json() syria_roads = requests.get(subscriptions_url + roads_sub_id, auth=BASIC_AUTH) .json() """ Explanation: Let's create a new url to request the subscriptions endpoint. End of explanation """ from pprint import pprint pprint(syria_buildings) print('') pprint(syria_roads) """ Explanation: We can use the pprint library to structure our json responses. End of explanation """ if syria_buildings['geometry'] == syria_roads['geometry']: print('The geometries are the same!') aoi = syria_buildings['geometry'] from ipyleaflet import Map, GeoJSON, LocalTileLayer, LayersControl, SplitMapControl, WidgetControl from ipywidgets import SelectionSlider, VBox from numpy import mean """ Explanation: 3. Mapping AOI Inspecting subscription details Subscriptions have a spatial area of interest described by a geojson geometry. We can visualize the area of interest for a subscription on a map. First, let's just confirm that the geometries are the same for both roads and buildings subscriptions. 
End of explanation """ aoi min_lat = min(coord[1] for coord in aoi['coordinates'][0]) max_lat = max(coord[1] for coord in aoi['coordinates'][0]) min_lon = min(coord[0] for coord in aoi['coordinates'][0]) max_lon = max(coord[0] for coord in aoi['coordinates'][0]) map_center = (mean([min_lat, max_lat]), mean([min_lon, max_lon])) print(map_center) # make a map, and draw the subscription geometry m = Map(center=map_center, zoom=12) # convert to leaflet GeoJSON object map_AOI = GeoJSON( data = aoi, style = {'color': 'blue', 'opacity':0.5, 'weight':1.5, 'dashArray':'5', 'fillOpacity':0.1} ) m.add_layer(map_AOI) m """ Explanation: First, let's center the map at the centroid of our geometry. End of explanation """ buildings_collection_endpoint = [link['href'] for link in syria_buildings['links'] if link['rel'] == 'results'][0] roads_collection_endpoint = [link['href'] for link in syria_roads['links'] if link['rel'] == 'results'][0] print('Buildings Collection URL: {}'.format(buildings_collection_endpoint)) print('Roads Collection URL: {}'.format(roads_collection_endpoint)) """ Explanation: Now that we have a sense of our study AOI, let's inspect Planet's source imagery and Analytic Feeds results. 4. Working with Collections As should now be familiar, Planet Analytic Feeds results can be accessed via the collections endpoint. We can find the results for our particular subscriptions in the links property from the results of our last subscriptions requests. End of explanation """ building_results = requests.get(buildings_collection_endpoint, auth=BASIC_AUTH).json() roads_results = requests.get(roads_collection_endpoint, auth=BASIC_AUTH).json() print('We got {} buildings results!'.format(len(building_results['features']))) print('We got {} roads results!'.format(len(roads_results['features']))) """ Explanation: Request the Collections API End of explanation """ import pandas as pd """ Explanation: 5. Parse Results Links Our results come back nicely wrapped as GeoJSON FeatureCollection. We can easily put them into a Pandas Dataframe. 
End of explanation """ # Convert links and properties to dataframe buildings = pd.json_normalize(building_results['features']).loc[:, ['links', 'properties.observed']] # extract links buildings['source_tiles'] = buildings['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'source-tiles'][0]) buildings['buildings_tiles'] = buildings['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'target-tiles'][0]).map(lambda x: x + '&exp=bincat:0|39039e') buildings['b_source_quad'] = buildings['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'source-quad'][0]) buildings['buildings_quad'] = buildings['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'target-quad'][0]) # drop links column buildings.drop(labels=['links'], axis=1, inplace=True) # change column names buildings.rename(columns = {'properties.observed': 'date'}, inplace=True) buildings['date'] = buildings['date'].map(lambda x: x.split('T')[0]) buildings.head() """ Explanation: Parse Buildings Results End of explanation """ roads = pd.json_normalize(roads_results['features']).loc[:, ['links', 'properties.observed']] # extract links roads['source_tiles'] = roads['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'source-tiles'][0]) roads['roads_tiles'] = roads['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'target-tiles'][0]).map(lambda x: x + '&exp=bincat:0|d65a45') roads['r_source_quad'] = roads['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'source-quad'][0]) roads['roads_quad'] = roads['links'].map(lambda links: [link['href'] for link in links if link['rel'] == 'target-quad'][0]) # drop links column roads.drop(labels=['links'], axis=1, inplace=True) # change column names roads.rename(columns = {'properties.observed': 'date'}, inplace=True) roads['date'] = roads['date'].map(lambda x: x.split('T')[0]) roads.head() """ Explanation: Now, let's do the same as above for our Roads Segmentation results. End of explanation """ rnb = buildings.merge(roads, on=['date', 'source_tiles']) rnb.head() """ Explanation: Next, let's combine our roads and buildings results into a single feature dataframe. End of explanation """ tiles = rnb.loc[:, ['date', 'source_tiles', 'buildings_tiles', 'roads_tiles']].drop_duplicates() # Sort dataframe by time tiles = tiles.sort_values(by='date').reset_index(drop=True) tiles.head() """ Explanation: Finally, let's create separate dataframes for tiles and quads. We'll use the tiles to visualize our AOI in a few web maps, and the quads for a later analysis. End of explanation """ quads = rnb.loc[:, ['date', 'r_source_quad', 'b_source_quad', 'roads_quad', 'buildings_quad']] # Sort dataframe by time quads.sort_values(by='date', inplace=True) quads.reset_index(drop=True, inplace=True) quads.head() """ Explanation: Quads dataframe End of explanation """ times = tiles['date'].unique() print('First date: {}'.format(times[0])) m = Map(center=map_center, zoom=13) building_mask = LocalTileLayer( path=tiles.loc[0, 'buildings_tiles'], name=f"Buildings: {tiles.loc[0, 'date']}" ) road_mask = LocalTileLayer( path=tiles.loc[0, 'roads_tiles'], name=f"Roads: {tiles.loc[0, 'date']}" ) basemap = LocalTileLayer( path=tiles.loc[0, 'source_tiles'], name='Source image' ) m.add_layer(basemap) m.add_layer(road_mask) m.add_layer(building_mask) m.add_control(LayersControl(position='topright')) m """ Explanation: 6. 
Segmentation Results First, let's take a look at our results for the first date in our subscription. End of explanation """ m = Map(center=map_center, zoom=13) # first time-point building_mask_1 = LocalTileLayer( path=tiles.loc[16, 'buildings_tiles'], name=f"Buildings: {tiles.loc[16, 'date']}") road_mask_1 = LocalTileLayer( path=tiles.loc[16, 'roads_tiles'], name=f"Roads: {tiles.loc[16, 'date']}") basemap_1 = LocalTileLayer( path=tiles.loc[16, 'source_tiles'], name=tiles.loc[17, 'date']) # add second time series building_mask_2 = LocalTileLayer( path=tiles.loc[18, 'buildings_tiles'], name=f"Buildings: {tiles.loc[18, 'date']}") road_mask_2 = LocalTileLayer( path=tiles.loc[18, 'roads_tiles'], name=f"Roads: {tiles.loc[18, 'date']}") basemap_2 = LocalTileLayer( path=tiles.loc[18, 'source_tiles'], name=tiles.loc[18, 'date']) # add layers m.add_layer(road_mask_1) m.add_layer(building_mask_1) m.add_layer(road_mask_2) m.add_layer(building_mask_2) splitter = SplitMapControl(left_layer=[building_mask_1, road_mask_1, basemap_1], right_layer=[building_mask_2, road_mask_2, basemap_2]) # add controls m.add_control(LayersControl(position='topright')) m.add_control(splitter) m """ Explanation: Split Map Using the SplitMapControl, we can easily swipe between pre- and post-flood imagery and building overlays. End of explanation """ m = Map(center=map_center, zoom=13) date_slider = SelectionSlider(description='Time:', options=times) current_date = '2017-07-01' # create and add initial layers source_layer=LocalTileLayer(path=tiles.loc[tiles['date'] == current_date, 'source_tiles'].iloc[0]) roads_layer=LocalTileLayer(path=tiles.loc[tiles['date'] == current_date, 'roads_tiles'].iloc[0]) buildings_layer=LocalTileLayer(path=tiles.loc[tiles['date'] == current_date, 'buildings_tiles'].iloc[0]) m.add_layer(source_layer) m.add_layer(roads_layer) m.add_layer(buildings_layer) def get_source_url(change): global tiles source_url = tiles.loc[tiles['date'] == change, 'source_tiles'].iloc[0] return source_url def get_road_url(change): global tiles roads_url = tiles.loc[tiles['date'] == change, 'roads_tiles'].iloc[0] return roads_url def get_building_url(change): global tiles buildings_url = tiles.loc[tiles['date'] == change, 'buildings_tiles'].iloc[0] return buildings_url def display_tiles(change): global source_layer global roads_layer global buildings_layer global current_date if current_date != date_slider.value: current_date = date_slider.value # update source imagery source_layer = LocalTileLayer(path=get_source_url(current_date)) # update roads mask roads_layer= LocalTileLayer(path=get_road_url(current_date)) # update buildings mask buildings_layer = LocalTileLayer(path=get_building_url(current_date)) # add new layers m.add_layer(source_layer) m.add_layer(roads_layer) m.add_layer(buildings_layer) # link date_slider.observe(display_tiles, 'value') """ Explanation: Date Slider We can toggle through our time points with greater granularity using a Date Slider widget. End of explanation """ VBox([date_slider, m]) """ Explanation: Display Map End of explanation """ new_aoi = {'type': 'Polygon', 'coordinates': [[[40.746897, 36.270941], [40.746897, 36.294431], [40.790485, 36.294431], [40.790485, 36.270941], [40.746897, 36.270941]]] } """ Explanation: 7. Quantifying Change Calculating Buildings and Roads Pixels The above visualizations allow for a great qualitative analysis of our AOI, with the roads and buildings masks drawing our attention to the changing features over time. 
Now, let's use some raster tools to quantitatively measure the changes. Specify New AOI We'll use a condensed polygon centered around the region of development and flooding we observed above. End of explanation """ import rasterio as rio from rasterio.warp import transform_geom from shapely.geometry import shape from numpy import count_nonzero # transform AOI transformed_aoi = rio.warp.transform_geom(src_crs='EPSG:4326', dst_crs='EPSG:3857', geom=new_aoi, precision=6) pprint(transformed_aoi) """ Explanation: Note: Coordinate Reference Systems Leaflet Maps and Planet's Tile Services/ Quads use a different coordinate reference system. So, we'll need to transform the above geometry to properly align our AOI and imagery. Leaflet CRS: EPSG 4326 Planet CRS: EPSG 3857 End of explanation """ def get_download_url(quad_url, auth): """ utility function to get the target-quad download url This enables reading Planet Quads using Rasterio without local download! """ resp = requests.get(quad_url, auth=auth, allow_redirects=False) assert resp.status_code == 302 return resp.headers['Location'] def get_geometry_bounds(aoi): """ Converts GeoJSON-like feature to Shapely geometry object. Returns bounds of object """ geo_to_shape = shape(aoi) xmin, ymin, xmax, ymax = geo_to_shape.bounds return xmin, ymin, xmax, ymax def read_window(dataset, xmin, ymin, xmax, ymax): """ Performs a windowed read of a GeoTiff using the geometry bounds of an AOI Returns the window as a 1 dimensional numpy array. """ windarray = dataset.read( indexes=1, # only reads the binary segmentation mask band window=rio.windows.from_bounds( xmin, ymin, xmax, ymax, transform=dataset.transform)) return windarray def get_pixel_counts(windarray): """ Calculates the sum of non-zero (e.g. roads, buildings) pixels in an array """ return count_nonzero(windarray) """ Explanation: Helper functions End of explanation """ first_quad_url = get_download_url(quads.loc[0, 'roads_quad'], auth=BASIC_AUTH) ex_quad = rio.open(first_quad_url) print(ex_quad.read().shape) """ Explanation: As a proof of concept, let's read in one Quad to get a sense of it's shape. End of explanation """ from tqdm import tqdm pixel_counts = {time: {'building_px': 0, 'road_px':0, 'total_px':0} for time in times} # Get bounds of our AOI xmin, ymin, xmax, ymax = get_geometry_bounds(transformed_aoi) # this will take a little while... for time in tqdm(times): building_pixels = 0 roads_pixels = 0 for idx, row in quads.loc[quads['date'] == time, :].iterrows(): # get quad items urls buildings_quad = row['buildings_quad'] roads_quad = row['roads_quad'] # get quad download urls buildings_url = get_download_url(buildings_quad, auth=BASIC_AUTH) roads_url = get_download_url(roads_quad, auth=BASIC_AUTH) # read quad segmentation mask band buildings_data = rio.open(buildings_url) roads_data = rio.open(roads_url) buildings_window = read_window(buildings_data, xmin, ymin, xmax, ymax) roads_window = read_window(roads_data, xmin, ymin, xmax, ymax) # count buildings and road pixels building_count = count_nonzero(buildings_window) roads_count = count_nonzero(roads_window) # store in dictionary under time key building_pixels += building_count roads_pixels += roads_count pixel_counts[time]['building_px'] = building_pixels pixel_counts[time]['road_px'] = roads_pixels pixel_counts[time]['total_px'] = building_pixels + roads_pixels """ Explanation: As per the docs, raster results are in the form of a two band GEOTIFF. The first band contains the binary mask data (this is what we are interested in!). 
The second band represents valid or invalid source imagery pixels. Now, let's read all of our Quads and store their pixel counts in a pixel_counts dictionary. Our AOI intersects two source quads, and the segmentation results overlay those quads for each of our time points. So, we'll be reading data from four quads for every point in our time series. End of explanation """ pixel_counts = pd.DataFrame(data=pixel_counts.values(), index=pixel_counts.keys()) pixel_counts.head() """ Explanation: Converting pixel_counts to a dataframe will make plotting a breeze! End of explanation """ import matplotlib.pyplot as plt # plot building pixels over time plt.figure(figsize=(10, 5)) plt.title('Building Pixels over Time', fontdict={'size':18}) plt.vlines(x=[idx for idx, time in enumerate(times) if time == '2017-09-01'][0], ymin=pixel_counts['building_px'].min(), ymax=pixel_counts['building_px'].max(), colors='r', linestyles='dashed', alpha=0.5) plt.vlines(x=[idx for idx, time in enumerate(times) if time == '2018-11-01'][0], ymin=pixel_counts['building_px'].min(), ymax=pixel_counts['building_px'].max(), colors='r', linestyles='dashed', alpha=0.5) pixel_counts['building_px'].plot(); """ Explanation: 8. Line Charts End of explanation """ plt.figure(figsize=(10, 5)) plt.title('Road Pixels over Time', fontdict={'size':18}) plt.vlines(x=[idx for idx, time in enumerate(times) if time == '2017-09-01'][0], ymin=pixel_counts['road_px'].min(), ymax=pixel_counts['road_px'].max(), colors='r', linestyles='dashed', alpha=0.5) plt.vlines(x=[idx for idx, time in enumerate(times) if time == '2018-11-01'][0], ymin=pixel_counts['road_px'].min(), ymax=pixel_counts['road_px'].max(), colors='r', linestyles='dashed', alpha=0.5) pixel_counts['road_px'].plot(); """ Explanation: Looking at the above plot for building pixels, we can broadly quantify the change in buildings in our AOI over the time of our subscription. As indicated on the plot, we see a sudden spike in buildings from September to Novemeber 2017, as the IDP camp settles on the same penninsula. Then - despite some noise in the pixel counts - we can detect the drop in buildings at the end of 2018 as severe flooding alters the camp and surrounding landscape. Next, let's look at the pattern for roads. End of explanation """ plt.figure(figsize=(10, 5)) plt.title('Roads & Buildings Pixels over Time', fontdict={'size':18}) plt.vlines(x=[idx for idx, time in enumerate(times) if time == '2017-09-01'][0], ymin=pixel_counts['total_px'].min(), ymax=pixel_counts['total_px'].max(), colors='r', linestyles='dashed', alpha=0.5) plt.vlines(x=[idx for idx, time in enumerate(times) if time == '2018-11-01'][0], ymin=pixel_counts['total_px'].min(), ymax=pixel_counts['total_px'].max(), colors='r', linestyles='dashed', alpha=0.5) pixel_counts['total_px'].plot(); """ Explanation: Our plot of road pixels over time shows a different trend. While also prone to some month-to-month noise, we can see early that the high number of road pixels is consistent with the construction of the roads scaffolding as early as August, 2017. After the flooding, we see a slight change in road pixels, too. However, this coincides with new road construction in an adjacent camp in our AOI. Finally, we can visualize the count of combined roads and buildings pixels. End of explanation """
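# Optional smoothing sketch (an addition, not part of the original case study):
# a short centered rolling mean reduces the month-to-month noise noted above and
# makes the camp build-up and the post-flood drop easier to see. The 3-month
# window is an arbitrary choice.
smoothed = pixel_counts.rolling(window=3, center=True, min_periods=1).mean()
plt.figure(figsize=(10, 5))
plt.title('Smoothed Roads & Buildings Pixels (3-month rolling mean)', fontdict={'size': 18})
smoothed[['building_px', 'road_px']].plot(ax=plt.gca())
plt.ylabel('pixel count');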
probml/pyprobml
notebooks/book1/14/densenet_jax.ipynb
mit
import jax import jax.numpy as jnp # JAX NumPy from jax import lax import matplotlib.pyplot as plt import math from IPython import display try: from flax import linen as nn # The Linen API except ModuleNotFoundError: %pip install -qq flax from flax import linen as nn # The Linen API from flax.training import train_state # Useful dataclass to keep train state import numpy as np # Ordinary NumPy try: import optax # Optimizers except ModuleNotFoundError: %pip install -qq optax import optax # Optimizers try: import torchvision except ModuleNotFoundError: %pip install -qq torchvision import torchvision try: import torch except ModuleNotFoundError: %pip install -qq torch import torch from torch.utils import data from torchvision import transforms try: import tensorflow_datasets as tfds # TFDS for MNIST except ModuleNotFoundError: %pip install -qq tensorflow tensorflow_datasets import tensorflow_datasets as tfds # TFDS for MNIST import random import os import time from typing import Any, Callable, Sequence, Tuple from functools import partial rng = jax.random.PRNGKey(0) !mkdir figures # for saving plots ModuleDef = Any """ Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/densenet_torch.ipynb <a href="https://colab.research.google.com/github/codeboy5/probml-notebooks/blob/add-densenet-jax/notebooks-d2l/densenet_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Dense networks We implement DenseNet. Based on 7.7 of http://d2l.ai/chapter_convolutional-modern/densenet.html End of explanation """ class ConvBlock(nn.Module): filters: int norm: ModuleDef @nn.compact def __call__(self, x): x = self.norm()(x) x = nn.relu(x) x = nn.Conv(self.filters, (3, 3), padding=[(1, 1), (1, 1)], dtype=jnp.float32)(x) return x """ Explanation: Dense blocks A conv block uses BN-activation-conv in order. End of explanation """ class DenseBlock(nn.Module): filters: int num_convs: int norm: ModuleDef @nn.compact def __call__(self, x): for _ in range(self.num_convs): y = ConvBlock(self.filters, self.norm)(x) # Concatenate the input and output of each block on the channel dimension. x = jnp.concatenate(arrays=[x, y], axis=-1) return x """ Explanation: A DenseBlock is a sequence of conv-blocks, each consuming as input all previous outputs. End of explanation """ train = False norm = partial(nn.BatchNorm, use_running_average=not train, momentum=0.9, epsilon=1e-5, dtype=jnp.float32) model = DenseBlock(10, 2, norm) batch = jnp.ones((4, 8, 8, 3)) # (N, H, W, C) format variables = model.init(jax.random.PRNGKey(0), batch) output = model.apply(variables, batch) output.shape """ Explanation: Example: we start with 3 channels, make a DenseBlock with 2 conv-blocks each with 10 channels, to get an output with 23 channels. End of explanation """ class TransitionBlock(nn.Module): filters: int norm: ModuleDef @nn.compact def __call__(self, x): x = self.norm()(x) x = nn.relu(x) x = nn.Conv(self.filters, (1, 1), padding=[(0, 0), (0, 0)], dtype=jnp.float32)(x) x = nn.avg_pool(x, (2, 2), (2, 2), padding=[(0, 0), (0, 0)]) return x """ Explanation: Transition layers To prevent the number of channels exploding, we can add a transition layer, that uses 1x1 convolution. We can also reduce the spatial resolution using stride 2 average pooling. 
End of explanation """ transition_model = TransitionBlock(10, norm) batch = jnp.ones((4, 8, 8, 23)) # (N, H, W, C) format variables = transition_model.init(jax.random.PRNGKey(0), batch) output = transition_model.apply(variables, batch) output.shape """ Explanation: Below we show an example where we map the 23 channels back down to 10, and halve the spatial dimensions. End of explanation """ class DenseNet(nn.Module): @nn.compact def __call__(self, x, train: bool = True): norm = partial(nn.BatchNorm, use_running_average=not train, momentum=0.9, epsilon=1e-5, dtype=jnp.float32) # The first part of the model is similar to resnet. x = nn.Conv(64, (7, 7), (2, 2), [(3, 3), (3, 3)], dtype=jnp.float32)(x) x = nn.BatchNorm(use_running_average=not train, momentum=0.9, epsilon=1e-5, dtype=jnp.float32)(x) x = nn.relu(x) x = nn.max_pool(x, (3, 3), (2, 2), [(1, 1), (1, 1)]) num_channels = 64 growth_rate = 32 num_convs_in_dense_blocks = [4, 4, 4, 4] for i, num_convs in enumerate(num_convs_in_dense_blocks): x = DenseBlock(growth_rate, num_convs, norm)(x) # This is the number of output channels in the previous dense block num_channels += num_convs * growth_rate # A transition layer that halves the number of channels is added between the dense blocks. if i != len(num_convs_in_dense_blocks) - 1: x = TransitionBlock(num_channels // 2, norm)(x) num_channels = num_channels // 2 x = norm()(x) x = nn.relu(x) x = jnp.mean(x, axis=(1, 2)) # Works as adaptive avg pooling x = nn.Dense(10, dtype=jnp.float32)(x) x = jnp.asarray(x, np.float32) return x model = DenseNet() batch = jnp.ones((1, 224, 224, 1)) # (N, H, W, C) format variables = model.init(jax.random.PRNGKey(0), batch) output = model.apply(variables, batch, False) output.shape model = DenseNet() batch = jnp.ones((1, 96, 96, 1)) # (N, H, W, C) format variables = model.init(jax.random.PRNGKey(0), batch) output = model.apply(variables, batch, False) output.shape """ Explanation: Full model End of explanation """ def load_data_fashion_mnist(batch_size, resize=None): """Download the Fashion-MNIST dataset and then load it into memory.""" trans = [transforms.ToTensor()] if resize: trans.insert(0, transforms.Resize(resize)) trans = transforms.Compose(trans) mnist_train = torchvision.datasets.FashionMNIST(root="../data", train=True, transform=trans, download=True) mnist_test = torchvision.datasets.FashionMNIST(root="../data", train=False, transform=trans, download=True) return ( data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=2), data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=2), ) """ Explanation: Training We fit the model to Fashion-MNIST. We rescale images from 28x28 to 96x96, so that the input to the final average pooling layer has size 3x3. We notice that the training speed is much less than for ResNet. 
End of explanation """ class TrainState(train_state.TrainState): batch_stats: Any def create_train_state(rng, learning_rate, momentum): cnn = DenseNet() variables = cnn.init(rng, jnp.ones([1, 96, 96, 1], jnp.float32)) params, batch_stats = variables["params"], variables["batch_stats"] tx = optax.sgd(learning_rate, momentum) state = TrainState.create(apply_fn=cnn.apply, params=params, tx=tx, batch_stats=batch_stats) return state """ Explanation: Create train state End of explanation """ def compute_metrics(*, logits, labels): one_hot = jax.nn.one_hot(labels, num_classes=10) loss = jnp.mean(optax.softmax_cross_entropy(logits=logits, labels=one_hot)) accuracy = jnp.mean(jnp.argmax(logits, -1) == labels) numcorrect = jnp.sum(jnp.argmax(logits, -1) == labels, dtype=jnp.float32) metrics = {"loss": loss, "accuracy": accuracy, "numcorrect": numcorrect} return metrics """ Explanation: Metric computation End of explanation """ @jax.jit def train_step(state, batch): """Train for a single step.""" def loss_fn(params): logits, new_model_state = state.apply_fn( {"params": params, "batch_stats": state.batch_stats}, batch["image"], mutable=["batch_stats"] ) one_hot = jax.nn.one_hot(batch["label"], num_classes=10) loss = jnp.mean(optax.softmax_cross_entropy(logits=logits, labels=one_hot)) return loss, (new_model_state, logits) grad_fn = jax.value_and_grad(loss_fn, has_aux=True) aux, grads = grad_fn(state.params) # grads = lax.pmean(grads, axis_name='batch') new_model_state, logits = aux[1] metrics = compute_metrics(logits=logits, labels=batch["label"]) new_state = state.apply_gradients(grads=grads, batch_stats=new_model_state["batch_stats"]) return new_state, metrics def eval_step(state, batch): variables = {"params": state.params, "batch_stats": state.batch_stats} logits = state.apply_fn(variables, batch["image"], train=False, mutable=False) return compute_metrics(logits=logits, labels=batch["label"]) def eval_model(state, test_iter): batch_metrics = [] for i, (X, y) in enumerate(test_iter): batch = {} batch["image"] = jnp.reshape(jnp.float32(X), (-1, 96, 96, 1)) batch["label"] = jnp.float32(y) metrics = eval_step(state, batch) batch_metrics.append(metrics) # compute mean of metrics across each batch in epoch. 
batch_metrics_np = jax.device_get(batch_metrics) epoch_metrics_np = {k: np.mean([metrics[k] for metrics in batch_metrics_np]) for k in batch_metrics_np[0]} return epoch_metrics_np["accuracy"] """ Explanation: Training step End of explanation """ class Animator: """For plotting data in animation.""" def __init__( self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None, xscale="linear", yscale="linear", fmts=("-", "m--", "g-.", "r:"), nrows=1, ncols=1, figsize=(3.5, 2.5), ): # Incrementally plot multiple lines if legend is None: legend = [] display.set_matplotlib_formats("svg") self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize) if nrows * ncols == 1: self.axes = [ self.axes, ] # Use a lambda function to capture arguments self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend) self.X, self.Y, self.fmts = None, None, fmts def add(self, x, y): # Add multiple data points into the figure if not hasattr(y, "__len__"): y = [y] n = len(y) if not hasattr(x, "__len__"): x = [x] * n if not self.X: self.X = [[] for _ in range(n)] if not self.Y: self.Y = [[] for _ in range(n)] for i, (a, b) in enumerate(zip(x, y)): if a is not None and b is not None: self.X[i].append(a) self.Y[i].append(b) self.axes[0].cla() for x, y, fmt in zip(self.X, self.Y, self.fmts): self.axes[0].plot(x, y, fmt) self.config_axes() display.display(self.fig) display.clear_output(wait=True) class Timer: """Record multiple running times.""" def __init__(self): self.times = [] self.start() def start(self): """Start the timer.""" self.tik = time.time() def stop(self): """Stop the timer and record the time in a list.""" self.times.append(time.time() - self.tik) return self.times[-1] def avg(self): """Return the average time.""" return sum(self.times) / len(self.times) def sum(self): """Return the sum of time.""" return sum(self.times) def cumsum(self): """Return the accumulated time.""" return np.array(self.times).cumsum().tolist() class Accumulator: """For accumulating sums over `n` variables.""" def __init__(self, n): self.data = [0.0] * n def add(self, *args): self.data = [a + float(b) for a, b in zip(self.data, args)] def reset(self): self.data = [0.0] * len(self.data) def __getitem__(self, idx): return self.data[idx] def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend): """Set the axes for matplotlib.""" axes.set_xlabel(xlabel) axes.set_ylabel(ylabel) axes.set_xscale(xscale) axes.set_yscale(yscale) axes.set_xlim(xlim) axes.set_ylim(ylim) if legend: axes.legend(legend) axes.grid() """ Explanation: Animator and Timer End of explanation """ train_iter, test_iter = load_data_fashion_mnist(512, resize=96) rng, init_rng = jax.random.split(rng) learning_rate = 0.1 momentum = 0.9 state = create_train_state(init_rng, learning_rate, momentum) del init_rng # Must not be used anymore. num_epochs = 10 timer = Timer() animator = Animator(xlabel="epoch", xlim=[1, num_epochs], legend=["train loss", "train acc", "test acc"]) num_batches = len(train_iter) device = torch.device(f"cuda:{0}") for epoch in range(num_epochs): # Sum of training loss, sum of training accuracy, no. 
of examples metric = Accumulator(3) for i, (X, y) in enumerate(train_iter): timer.start() batch = {} batch["image"] = jnp.reshape(jnp.float32(X), (-1, 96, 96, 1)) batch["label"] = jnp.float32(y) state, metrics = train_step(state, batch) metric.add(metrics["loss"] * X.shape[0], metrics["numcorrect"], X.shape[0]) timer.stop() train_l = metric[0] / metric[2] train_acc = metric[1] / metric[2] if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1: animator.add(epoch + (i + 1) / num_batches, (train_l, train_acc, None)) test_acc = eval_model(state, test_iter) animator.add(epoch + 1, (None, None, test_acc)) print(f"{metric[2] * num_epochs / timer.sum():.1f} examples/sec " f"on {str(device)}") print(f"loss {train_l:.3f}, train acc {train_acc:.3f}, " f"test acc {test_acc:.3f}") """ Explanation: Training and Evaluating the Model End of explanation """
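"""
Explanation: Added sketch (not part of the original notebook): once training has finished, the same TrainState can be used for inference. The helper below mirrors eval_step but returns predicted class labels; it assumes the state and test_iter objects defined in the cells above.
End of explanation
"""
# added sketch: predict labels for one test batch with the trained state
@jax.jit
def predict(state, images):
    variables = {"params": state.params, "batch_stats": state.batch_stats}
    logits = state.apply_fn(variables, images, train=False, mutable=False)
    return jnp.argmax(logits, axis=-1)


X, y = next(iter(test_iter))
images = jnp.reshape(jnp.float32(X), (-1, 96, 96, 1))
preds = predict(state, images)
print("predicted:", preds[:10])
print("actual:   ", jnp.float32(y)[:10])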
li-xirong/jingwei
samples/tag-assignment-by-tagvote.ipynb
mit
from instance_based.tagvote import TagVoteTagger trainCollection = 'train10k' annotationName = 'concepts130.txt' feature = 'vgg-verydeep-16-fc7relu' tagger = TagVoteTagger(collection=trainCollection, annotationName=annotationName, feature=feature, distance='cosine') """ Explanation: Image tag assignment by TagVote An example showing how to do tag assignment by the TagVote method, using train10k as training data and mirflickr08 as test data. Prepare Download vggnet features cd $HOME/VisualSearch wget http://lixirong.net/data/csur2016/train10k-vggnet16-fc7relu.tar.gz wget http://lixirong.net/data/csur2016/mirflickr08-vggnet16-fc7relu.tar.gz Download tag data of train10k wget http://lixirong.net/data/csur2016/train10k-tag.tar.gz Download annotation files of mirflickr08 wget http://lixirong.net/data/csur2016/mirflickr08-anno.tar.gz Code Create a TagVote instance End of explanation """ from basic.constant import ROOT_PATH from util.simpleknn.bigfile import BigFile import os rootpath = ROOT_PATH testCollection = 'mirflickr08' feat_dir = os.path.join(rootpath, testCollection, 'FeatureData', feature) feat_file = BigFile(feat_dir) """ Explanation: Open feature file of mirflickr08 End of explanation """ # load image ids of mirflickr08 from basic.util import readImageSet testimset = readImageSet(testCollection) # load a subset of 200 images for test import random testimset = random.sample(testimset, 200) renamed, vectors = feat_file.read(testimset) """ Explanation: Load image ids of mirflickr08 End of explanation """ import time s_time = time.time() results = [tagger.predict(vec) for vec in vectors] timespan = time.time() - s_time print ('processing %d images took %g seconds' % (len(renamed), timespan)) """ Explanation: Perform tag relevance learning on the test set End of explanation """ from basic.annotationtable import readConcepts, readAnnotationsFrom testAnnotationName = 'conceptsmir14.txt' concepts = readConcepts(testCollection, testAnnotationName) nr_of_concepts = len(concepts) label2imset = {} im2labelset = {} for i,concept in enumerate(concepts): names,labels = readAnnotationsFrom(testCollection, testAnnotationName, concept) pos_set = [x[0] for x in zip(names,labels) if x[1]>0] print ('%s has %d positives' % (concept, len(pos_set))) for im in pos_set: label2imset.setdefault(concept, set()).add(im) im2labelset.setdefault(im, set()).add(concept) """ Explanation: Evaluation First, we need to load ground-truth of mirflickr08, which is provided at the folder $HOME/VisualSearch/mirflickr08/Annotations: End of explanation """ # sort images to compute AP scores per concept ranklists = {} for _id, res in zip(renamed,results): for tag,score in res: ranklists.setdefault(tag, []).append((_id, score)) from basic.metric import getScorer scorer = getScorer('AP') mean_ap = 0.0 for i,concept in enumerate(concepts): pos_set = label2imset[concept] ranklist = ranklists[concept] ranklist.sort(key=lambda v:(v[1], v[0]), reverse=True) # sort images by scores in descending order sorted_labels = [2*int(x[0] in pos_set)-1 for x in ranklist] perf = scorer.score(sorted_labels) print ('%s %.3f' % (concept, perf)) mean_ap += perf mean_ap /= len(concepts) print ('meanAP %.3f' % mean_ap) """ Explanation: Compute map based on image ranking results For each test concept, sort the test images in descending order according to their relevance scores with respect to the concept Compute Average Precision of the concept Compute mean Average Precision, by averaging AP scores of the concepts. 
End of explanation """ # compute iAP per image miap = 0.0 for _id, res in zip(renamed,results): pos_set = im2labelset.get(_id, set()) # some images might be negatives to all the 14 concepts ranklist = [x for x in res if x[0] in label2imset] # evaluate only concepts with ground truth sorted_labels = [2*int(x[0] in pos_set)-1 for x in ranklist] perf = scorer.score(sorted_labels) miap += perf miap /= len(renamed) print ('miap %.3f' % miap) """ Explanation: Compute miap based on tag ranking results End of explanation """
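"""
Explanation: Added sketch (not part of the original notebook): besides computing AP/iAP, it is often handy to dump the highest-scoring tags per image for manual inspection. The snippet below assumes the renamed and results lists produced by the prediction step above; the output filename is arbitrary.
End of explanation
"""
# added sketch: write the top-5 predicted tags (tag:score) per test image to a text file
topk = 5
with open('tagvote_top%d_results.txt' % topk, 'w') as fout:
    for _id, res in zip(renamed, results):
        ranked = sorted(res, key=lambda v: v[1], reverse=True)[:topk]
        line = ' '.join(['%s:%.4f' % (tag, score) for tag, score in ranked])
        fout.write('%s %s\n' % (_id, line))
print ('wrote top-%d tags for %d images' % (topk, len(renamed)))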
mrustl/flopy
examples/Notebooks/flopy3_external_file_handling.ipynb
bsd-3-clause
import os import shutil import flopy import numpy as np # make a model nlay,nrow,ncol = 10,20,5 model_ws = os.path.join("data","external_demo") if os.path.exists(model_ws): shutil.rmtree(model_ws) # the place for all of your hand made and costly model inputs array_dir = os.path.join("data","array_dir") if os.path.exists(array_dir): shutil.rmtree(array_dir) os.mkdir(array_dir) ml = flopy.modflow.Modflow(model_ws=model_ws) dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2) """ Explanation: FloPy Quick demo on how FloPy handles external files for arrays End of explanation """ hk = np.zeros((nlay,nrow,ncol)) + 5.0 vka = np.zeros_like(hk) fnames = [] for i,h in enumerate(hk): fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1)) fnames.append(fname) np.savetxt(fname,h) vka[i] = i+1 lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka) """ Explanation: make an hk and vka array. We'll save hk to files - pretent that you spent months making this important model property. Then make an lpf End of explanation """ warmup_recharge = np.ones((nrow,ncol)) important_recharge = np.random.random((nrow,ncol)) fname = os.path.join(array_dir,"important_recharge.ref") np.savetxt(fname,important_recharge) rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname}) ml.write_input() """ Explanation: Let's also have some recharge with mixed args as well. Pretend the recharge in the second stress period is very important and precise End of explanation """ print("model_ws:",ml.model_ws) print('\n'.join(os.listdir(ml.model_ws))) """ Explanation: Let's look at the files that were created End of explanation """ open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20] """ Explanation: We see that a copy of the hk files as well as the important recharge file were made in the model_ws.Let's looks at the lpf file End of explanation """ ml.array_free_format """ Explanation: We see that the open/close approach was used - this is because ml.array_free_format is True. 
Notice that vka is written internally End of explanation """ print(ml.model_ws) ml.model_ws = os.path.join("data","new_external_demo_dir") """ Explanation: Now change model_ws End of explanation """ ml.write_input() # list the files in model_ws that have 'hk' in the name print('\n'.join([name for name in os.listdir(ml.model_ws) if "hk" in name or "impor" in name])) """ Explanation: Now when we call write_input(), a copy of external files are made in the current model_ws End of explanation """ # make a model - same code as before except for the model constructor nlay,nrow,ncol = 10,20,5 model_ws = os.path.join("data","external_demo") if os.path.exists(model_ws): shutil.rmtree(model_ws) # the place for all of your hand made and costly model inputs array_dir = os.path.join("data","array_dir") if os.path.exists(array_dir): shutil.rmtree(array_dir) os.mkdir(array_dir) # lets make an external path relative to the model_ws ml = flopy.modflow.Modflow(model_ws=model_ws, external_path="ref") dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2) hk = np.zeros((nlay,nrow,ncol)) + 5.0 vka = np.zeros_like(hk) fnames = [] for i,h in enumerate(hk): fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1)) fnames.append(fname) np.savetxt(fname,h) vka[i] = i+1 lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka) warmup_recharge = np.ones((nrow,ncol)) important_recharge = np.random.random((nrow,ncol)) fname = os.path.join(array_dir,"important_recharge.ref") np.savetxt(fname,important_recharge) rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname}) """ Explanation: Now we see that the external files were copied to the new model_ws Using external_path It is sometimes useful when first building a model to write the model arrays as external files for processing and parameter estimation. The model attribute external_path triggers this behavior End of explanation """ os.listdir(ml.model_ws) """ Explanation: We can see that the model constructor created both model_ws and external_path which is relative to the model_ws End of explanation """ ml.write_input() open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20] """ Explanation: Now, when we call write_input(), any array properties that were specified as np.ndarray will be written externally. If a scalar was passed as the argument, the value remains internal to the model input files End of explanation """ ml.lpf.ss.how = "internal" ml.write_input() open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20] print('\n'.join(os.listdir(os.path.join(ml.model_ws,ml.external_path)))) """ Explanation: Now, vka was also written externally, but not the storage properties.Let's verify the contents of the external path directory. We see our hard-fought hk and important_recharge arrays, as well as the vka arrays. End of explanation """ # make a model - same code as before except for the model constructor nlay,nrow,ncol = 10,20,5 model_ws = os.path.join("data","external_demo") if os.path.exists(model_ws): shutil.rmtree(model_ws) # the place for all of your hand made and costly model inputs array_dir = os.path.join("data","array_dir") if os.path.exists(array_dir): shutil.rmtree(array_dir) os.mkdir(array_dir) # lets make an external path relative to the model_ws ml = flopy.modflow.Modflow(model_ws=model_ws, external_path="ref") # explicitly reset the free_format flag BEFORE ANY PACKAGES ARE MADE!!! 
ml.array_free_format = False dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2) hk = np.zeros((nlay,nrow,ncol)) + 5.0 vka = np.zeros_like(hk) fnames = [] for i,h in enumerate(hk): fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1)) fnames.append(fname) np.savetxt(fname,h) vka[i] = i+1 lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka) ml.lpf.ss.how = "internal" warmup_recharge = np.ones((nrow,ncol)) important_recharge = np.random.random((nrow,ncol)) fname = os.path.join(array_dir,"important_recharge.ref") np.savetxt(fname,important_recharge) rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname}) ml.write_input() """ Explanation: Fixed format All of this behavior also works for fixed-format type models (really, really old models - I mean OLD!) End of explanation """ open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines() """ Explanation: We see that now the external arrays are being handled through the name file. Let's look at the name file End of explanation """ ml.dis.botm[0].format.binary = True ml.write_input() open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines() open(os.path.join(ml.model_ws,ml.name+".dis"),'r').readlines() """ Explanation: "free" and "binary" format End of explanation """ ml.lpf.hk[0].how """ Explanation: The .how attribute Util2d includes a .how attribute that gives finer grained control of how arrays will written End of explanation """ ml.lpf.hk[0].how = "openclose" ml.lpf.hk[0].how ml.write_input() """ Explanation: This will raise an error since our model does not support free format... End of explanation """ ml.lpf.hk[0].how = "external" ml.lpf.hk[0].how ml.dis.top.how = "external" ml.write_input() open(os.path.join(ml.model_ws,ml.name+".dis"),'r').readlines() open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines() open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines() """ Explanation: So let's reset hk layer 1 back to external... End of explanation """
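"""
Explanation: Added sketch (not part of the original notebook): a quick round-trip check that the fixed-format model we just wrote, external files included, can be loaded back by flopy. The Modflow.load call and attribute access are standard flopy, but treat this as a sanity sketch rather than part of the original workflow.
End of explanation
"""
# added sketch: reload the model and confirm the hand-made hk array survived the write/read cycle
ml2 = flopy.modflow.Modflow.load(ml.name + ".nam", model_ws=ml.model_ws, check=False)
print(ml2.get_package_list())
print(np.allclose(ml2.lpf.hk.array, ml.lpf.hk.array))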
astarostin/MachineLearningSpecializationCoursera
course6/week5/PageParsing.ipynb
apache-2.0
import requests req = requests.get('http://zadolba.li/20160417') print req print type(req) print req.text """ Explanation: Пример парсинга страницы сайта Requests Для того, чтобы получить html-код страницы нам потребуется библиотека requests: End of explanation """ import bs4 """ Explanation: Beautiful Soup Теперь нам нужно как-то обрабатывать этот html-код. Для этого подойдет библиотека Beautiful Soup 4: End of explanation """ parser = bs4.BeautifulSoup(req.text, 'lxml') print type(parser) print parser """ Explanation: У bs4 весьма несложный интерфейс, хотя обращаться к документации на первых порах все же придется. End of explanation """ print parser.find('div', attrs={'class':'text'}) x = parser.find('div', attrs={'class':'text'}) print type(x) print x.text """ Explanation: Выделим первый тег div, атрибут class у которого имеет значение 'text': End of explanation """ y = parser.findAll('div', attrs={'class':'text'}) print type(y) for result in y: print result.text print "\n------\n" """ Explanation: Выделим тексты всех историй со страницы: End of explanation """ %%writefile parse_zadolbali.py import requests import bs4 from multiprocessing import Pool import codecs def parse_page(url): text = requests.get(url).text parser = bs4.BeautifulSoup(text, 'lxml') x = parser.findAll('div', attrs={'class':'text'}) return [res.text for res in x] p = Pool(10) url_list = ['http://zadolba.li/201604' + '0' * int(n < 10) + str(n) for n in range(1, 18)] if __name__ == '__main__': map_results = p.map(parse_page, url_list) reduce_results = reduce(lambda x,y: x + y, map_results) with codecs.open('parsing_results.txt', 'w', 'utf-8') as output_file: print >> output_file, u'\n'.join(reduce_results) """ Explanation: Multiprocessing Уже рассмотренных простых действий достаточно для того, чтобы кое-как парсить сайт с известной вам структурой. Но если вы попробуете таким образом распарсить более одной страницы, скорее всего заметите, что это происходит очень медленно. Можно существенно ускориться, воспользовавшись библиотекой multiprocessing, чтобы параллельно парсить несколько страниц. Ниже приводится пример такого кода: End of explanation """
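"""
Explanation: Added sketch (not part of the original notebook): a couple of quick sanity checks on the output of parse_zadolbali.py, assuming the script has been run and produced parsing_results.txt with one story per non-empty line (stories containing internal line breaks would be split).
End of explanation
"""
# added sketch: basic statistics over the parsed stories
import codecs

with codecs.open('parsing_results.txt', 'r', 'utf-8') as f:
    stories = [line.strip() for line in f if line.strip()]

print 'stories collected:', len(stories)
print 'average length (chars):', sum(len(s) for s in stories) / float(len(stories))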
iRipVanWinkle/ml
mlcourse_open[solutions]/practice/lesson1_practice_pandas_titanic.ipynb
mit
import numpy as np import pandas as pd %matplotlib inline """ Explanation: <center> <img src="../../img/ods_stickers.jpg"> Открытый курс по машинному обучению. Сессия № 2 </center> Автор материала: программист-исследователь Mail.ru Group, старший преподаватель Факультета Компьютерных Наук ВШЭ Юрий Кашницкий. Материал распространяется на условиях лицензии Creative Commons CC BY-NC-SA 4.0. Можно использовать в любых целях (редактировать, поправлять и брать за основу), кроме коммерческих, но с обязательным упоминанием автора материала. <center>Тема 1. Первичный анализ данных с Pandas</center> <center>Практическое задание. Анализ данных пассажиров "Титаника"</center> Заполните код в клетках (где написано "Ваш код здесь") и ответьте на вопросы в веб-форме. End of explanation """ data = pd.read_csv('../../data/titanic_train.csv', index_col='PassengerId') """ Explanation: Считаем данные из файла в память в виде объекта Pandas.DataFrame End of explanation """ data.head(5) data.describe() """ Explanation: Данные представлены в виде таблицы. Посмотрим на первые 5 строк: End of explanation """ data[(data['Embarked'] == 'C') & (data.Fare > 200)].head() """ Explanation: Для примера отберем пассажиров, которые сели в Cherbourg (Embarked=C) и заплатили более 200 у.е. за билет (fare > 200). Убедитесь, что Вы понимаете, как эта конструкция работает. <br> Если нет – посмотрите, как вычисляется выражение в квадратных в скобках. End of explanation """ data[(data['Embarked'] == 'C') & (data['Fare'] > 200)].sort_values(by='Fare', ascending=False).head() """ Explanation: Можно отсортировать этих людей по убыванию платы за билет. End of explanation """ def age_category(age): ''' < 30 -> 1 >= 30, <55 -> 2 >= 55 -> 3 ''' if age < 30: return 1 elif age < 55: return 2 else: return 3 age_categories = [age_category(age) for age in data.Age] data['Age_category'] = age_categories """ Explanation: Пример создания признака. End of explanation """ data['Age_category'] = data['Age'].apply(age_category) """ Explanation: Другой способ – через apply. End of explanation """ data['Sex'].value_counts() """ Explanation: 1. Сколько мужчин / женщин находилось на борту? - 412 мужчин и 479 женщин - 314 мужчин и 577 женщин - 479 мужчин и 412 женщин - 577 мужчин и 314 женщин End of explanation """ data[data['Sex'] == 'male']['Pclass'].value_counts() """ Explanation: 2. Выведите распределение переменной Pclass (социально-экономический статус) и это же распределение, только для мужчин / женщин по отдельности. Сколько было мужчин 2-го класса? - 104 - 108 - 112 - 125 End of explanation """ print( "Медиана – {0}, стандартное отклонение – {1}".format( round(data['Fare'].median(), 2), round(data['Fare'].std(), 2) ) ) """ Explanation: 3. Каковы медиана и стандартное отклонение платежей (Fare)? Округлите до 2 десятичных знаков. - Медиана – 14.45, стандартное отклонение – 49.69 - Медиана – 15.1, стандартное отклонение – 12.15 - Медиана – 13.15, стандартное отклонение – 35.3 - Медиана – 17.43, стандартное отклонение – 39.1 End of explanation """ yang = data[data['Age'] < 30] old = data[data['Age'] > 60] old['Survived'].value_counts(normalize=True) """ Explanation: 4. Правда ли, что люди моложе 30 лет выживали чаще, чем люди старше 60 лет? Каковы доли выживших в обеих группах? 
- 22.7% среди молодых и 40.6% среди старых - 40.6% среди молодых и 22.7% среди старых - 35.3% среди молодых и 27.4% среди старых - 27.4% среди молодых и 35.3% среди старых End of explanation """ female = data[data['Sex'] == 'female'] male = data[data['Sex'] == 'male'] male['Survived'].value_counts(normalize=True) """ Explanation: 5. Правда ли, что женщины выживали чаще мужчин? Каковы доли выживших в обеих группах? - 30.2% среди мужчин и 46.2% среди женщин - 35.7% среди мужчин и 74.2% среди женщин - 21.1% среди мужчин и 46.2% среди женщин - 18.9% среди мужчин и 74.2% среди женщин End of explanation """ # Ваш код здесь """ Explanation: 6. Найдите самое популярное имя среди пассажиров Титаника мужского пола? - Charles - Thomas - William - John 7. Сравните графически распределение стоимости билетов и возраста у спасенных и у погибших. Средний возраст погибших выше, верно? - Да - Нет End of explanation """ # Ваш код здесь """ Explanation: 8. Как отличается средний возраст мужчин / женщин в зависимости от класса обслуживания? Выберите верные утверждения: - В среднем мужчины 1-го класса старше 40 лет - В среднем женщины 1-го класса старше 40 лет - Мужчины всех классов в среднем старше женщин того же класса - В среднем люди в 1 классе старше, чем во 2-ом, а те старше представителей 3-го класса End of explanation """
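"""
Explanation: Added sketch (not part of the original assignment sheet): one possible way to approach question 6 is to pull the first given name out of the Name column for male passengers. The split pattern assumes the usual "Surname, Title. Given names" layout of the Titanic Name field.
End of explanation
"""
# added sketch: most frequent first names among male passengers (question 6)
first_names = (data[data['Sex'] == 'male']['Name']
               .str.split('. ', expand=True)[1]
               .str.split(' ', expand=True)[0])
print(first_names.value_counts().head())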
Upward-Spiral-Science/team1
code/data_modeling.ipynb
apache-2.0
import matplotlib.pyplot as plt %matplotlib inline import numpy as np import urllib2 np.random.seed(1) url = ('https://raw.githubusercontent.com/Upward-Spiral-Science' '/data/master/syn-density/output.csv') data = urllib2.urlopen(url) csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels) # chopping data based on thresholds on x and y coordinates x_bounds = (409, 3529) y_bounds = (1564, 3124) def check_in_bounds(row, x_bounds, y_bounds): if row[0] < x_bounds[0] or row[0] > x_bounds[1]: return False if row[1] < y_bounds[0] or row[1] > y_bounds[1]: return False if row[3] == 0: return False return True indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds)) data_thresholded = csv[indices_in_bound] n = data_thresholded.shape[0] true_data = data_thresholded """ Explanation: preliminary setup End of explanation """ # simulates data under a model derived from true_data # made function flexible with different distributions # since binomial, poisson, and gaussian are all plausible def run_simulation(true_data, distrib, **kwargs): sim_data = np.copy(true_data) kwargs['size'] = true_data[:,4].shape new_synapse_vals = distrib.rvs(**kwargs) sim_data[:, 4] = new_synapse_vals return sim_data # to make sure function works and that poisson is reasonable... from scipy.stats import poisson # lambda = np when approximating a binomial # E[Bin(n, p)] = np, where n=unmasked voxels, and p is the probability lambdas = true_data[:, 4] sim_data = run_simulation(true_data, poisson, mu=lambdas) print sim_data print true_data print np.average(sim_data[:, 4]), np.average(true_data[:, 4]) print np.median(sim_data[:, 4]), np.median(true_data[:, 4]) print np.max(sim_data[:, 4]), np.max(true_data[:, 4]) """ Explanation: 1) Write code to generate simulated data using a probability distribution model of our data. The model will be as follows: for each block of space (that is row of data) the number of synapses follows a binomial distribution with parameters p=synapses/unmasked and n=unmasked, so data generated by this model will have the same set of coordinates and unmasked values as the true data but number of synapses will differ. Since the number of 'trials' (unmasked voxels) is so high, it may be necessary to approximate the binomial distributions with Gaussians or Poissons, since I'd think that computing binomial random variables would be much more computationally expensive than Gaussian or Poisson random variates. Furthermore, the Poisson distribution seems especially fitting here since all probabilities are extremely small (even the largest being less than 1%.) End of explanation """ from scipy.stats import norm, binom n = true_data[:, 3] p = np.apply_along_axis(lambda row : row[4]/row[3], 1, true_data) binom_var = lambdas*(np.ones(p.shape)-p) sim_data = run_simulation(true_data, norm, loc=lambdas, scale=np.sqrt(binom_var)) print sim_data print np.average(sim_data[:, 4]), np.average(true_data[:, 4]) print np.median(sim_data[:, 4]), np.median(true_data[:, 4]) print np.max(sim_data[:, 4]), np.max(true_data[:, 4]) sim_data = run_simulation(true_data, binom, n=n.astype('int'), p=p) print sim_data print np.average(sim_data[:, 4]), np.average(true_data[:, 4]) print np.median(sim_data[:, 4]), np.median(true_data[:, 4]) print np.max(sim_data[:, 4]), np.max(true_data[:, 4]) """ Explanation: As expected, Poisson simulated data seems to be pretty accurate. 
Out of curiosity, lets try with gaussians and binomials End of explanation """ observed_mean = 0.00115002980202 observed_median = 0.00119726911912 alpha = .01 n = 10000 # samples of simulated data to obtain simulations = np.empty((n, true_data.shape[0], true_data.shape[1])) sims_density = np.empty((n, true_data.shape[0])) # this will take a long time to run for i in xrange(n): sim_data = run_simulation(true_data, poisson, mu=lambdas) simulations[i,:,:]=sim_data density_vector = np.apply_along_axis(lambda x:x[4]/x[3], 1, sim_data) sims_density[i, :] = density_vector print simulations[-1, :, :] print sims_density[-1, :] avg_density_per_sim = np.empty((n)) median_per_sim = np.empty((n)) for i, dens_vec in enumerate(sims_density[:,]): avg_density_per_sim[i] = np.average(dens_vec) median_per_sim[i] = np.median(dens_vec) deltas_mean = avg_density_per_sim - np.ones(avg_density_per_sim.shape)*observed_mean deltas_mean = np.sort(deltas_mean) deltas_median = median_per_sim - np.ones(median_per_sim.shape)*observed_median deltas_median = np.sort(deltas_median) """ Explanation: Based on these preliminary simulations, it seems that all 3 distributions behave pretty similarly (as expected). For now I'm going to stick with using Poisson distributions because they're nicer mathematically than the binomial and a Poisson is discrete, unlike a Gaussian. Since we are simulating discrete data, i.e. number of synapses, Poisson seems more appropriate. 2) Run the simulation many times and then use the results to construct confidence interval for the mean and median synaptic density. From original data, we have that the mean synaptic density is 0.00115002980202, the median is 0.00119726911912. Let the signifigance level, alpha, be .01. End of explanation """ critical_995 = deltas_mean[4] critical_005 = deltas_mean[-5] ci_mean = (observed_mean-critical_005, observed_mean-critical_995) critical_995 = deltas_median[4] critical_005 = deltas_median[-5] ci_median = (observed_median-critical_005, observed_median-critical_995) print "observed mean:", observed_mean print "confidence interval: ", ci_mean print "observed median:", observed_median print "confidence interval: ", ci_median """ Explanation: Now we have a sorted list of delta values. At alpha=.01, the fifth and fifth to last in the list are our relevant critical values (b/c (10,000)(alpha/2)=5), thus a confidence interval can be constructed. End of explanation """ print np.any(median_per_sim < observed_median) """ Explanation: Confidence interval for the mean looks good, but note that the observed median falls outside of the confidence interval. This means that all the delta values for the median were negative. In other words for all simulations, the median density was less than the one observed from real data. Let's quickly verify this. End of explanation """ std_per_sim = np.empty(sims_density.shape) for i, sim in enumerate(sims_density[:,]): std_per_sim[i] = np.std(sim) print np.average(std_per_sim) print np.median(std_per_sim) print np.min(std_per_sim), np.max(std_per_sim) """ Explanation: This means that our theoretical distribution has less left-skewness than the real data. Recall the 'spike' in the histogram of synaptic density on the real data. This spike is at a higher value than the mean, so the real data has a clearly observable left skew; perhaps our the theoretical model was unable to capture that 'spike'? 3) Use data from previous problem to investigate variance. 
First let's consider the standard deviation of density that we estimated from the true data. It is 0.000406563246763. Let's compute this for each simulated sample. End of explanation """ sample_std_dev = 0.000406563246763 # the std deviation computed from true data std_dev_of_mean = sample_std_dev/np.sqrt(true_data.shape[0]) print std_dev_of_mean """ Explanation: While the standard deviations for the simulated data are close to the observed, they are all slightly larger. One possible explanation for this could be due to using Poisson distributions. A Poisson RV has variance equal to $\lambda=np$, while a binomial RV has variance $np(1-p)$. Since $p$ is quite small, the difference is slight, but we can see that the Poisson approximation theoretically has more variance. We can also use this data to investigate the variance of the observed mean. One way to estimate the std dev of the observed mean is as follows: End of explanation """ var_theoretical = true_data[:, 4]/np.square(true_data[:, 3]) var_mean_theoretical = np.sum(var_theoretical)/np.square(true_data.shape[0]) print var_mean_theoretical print np.sqrt(var_mean_theoretical) """ Explanation: Under our theoretical model, the variances are known, so using an estimate is not needed. That is, we have that $$ Var(\bar x)=Var(\frac{1}{N} \sum x) $$ $$ =\frac{1}{N^2}\sum Var(x) $$ we know the theoretical variance of all of our random variables, since each $S_i \sim \textrm{Pois}(s_i)$, where $S_i$ corresponds to synapses and $s_i$ observed synapses. Thus the variance of the density will be $s_i/(u_i)^2$ where $u_i$ is unmasked. End of explanation """ from sklearn.cluster import KMeans scale_factor = 10**9 data_scaled = np.copy(true_data) data_scaled[:, 3] = (data_scaled[:, 4]/data_scaled[:, 3])*scale_factor data_scaled = data_scaled[:, [0, 1, 2, 3]] kmeans = KMeans(4) labels = kmeans.fit_predict(data_scaled) clusters = [[] for i in range(4)] for data, label in zip(data_scaled[:, ], labels): clusters[label].append(data) for cluster in clusters: cluster = np.array(cluster) cluster[:, -1] = cluster[:, -1]/scale_factor print cluster.shape print np.mean(cluster[:, -1]) print np.std(cluster[:, -1]) print np.min(cluster[:, -1]), np.max(cluster[:, -1]) """ Explanation: 4) Constructing a simpler model The originally proposed model is obviously quite complicated, since we treat each bin as a seperate distribution. Now let's try to make a simpler, yet hopefully still accurate, model. We'll try k-means clustering on the data, but scale the densities up so that they impact the clustering more. End of explanation """ clusters = [[] for i in range(4)] for data, label in zip(true_data[:, ], labels): clusters[label].append(data) # estimate lambda for each cluster by averaging synapses syn_avgs = [] for cluster in clusters: cluster = np.array(cluster) syn_avgs.append(np.average(cluster[:, -1])) lambdas = np.empty((true_data.shape[0])) for i, label in enumerate(labels): lambdas[i] = syn_avgs[label] sim_data = run_simulation(true_data, poisson, mu=lambdas) print np.average(sim_data[:, 4]), np.average(true_data[:, 4]) print np.median(sim_data[:, 4]), np.median(true_data[:, 4]) print np.max(sim_data[:, 4]), np.max(true_data[:, 4]) print np.std(sim_data[:, 4]), np.std(true_data[:, 4]) """ Explanation: Looking at the max and min values for the clusters of the density, we can see that the clusters formed a pretty solid partition of the densities ranges. The first cluster occupies the upper range from 0.0014.... 
to 0.0033..., the third cluster occupies the lowest range, from 0 to 0.0006..., the second goes from approximately where the third's range ended, 0.0006..., to 0.0011..., and similarly the 4th cluster picks up at the end of the third clusters range and ends at the beginning of the first's. Let's try and use 4 poissons to model the data, based on this clustering. End of explanation """ from scipy.stats import chisquare #true_data[:, 4] += 5 #sim_data[:, 4] += 5 # sim_data1[:, 4] += 5 #print chisquare(true_data[:, 4], sim_data[:, 4]) #print chisquare(true_data[:, 4], sim_data1[:, 4]) a = np.where(true_data[:, 3] >= np.average(true_data[:, 3])) true_data = true_data[a] sim_data = sim_data[a] sim_data1 = sim_data1[a] print chisquare(true_data[:, 4], sim_data[:, 4]) """ Explanation: Mean, median, and std dev of the simulated data are quite close to the true data, although the max value is not. This not terrible, necessarily though, because the max value in the true data is likely to be an outlier. 5) Tests Comparing Models and True Data Now we'll compare the two models with the true data as well as with themselves using a chisquared test. End of explanation """
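"""
Explanation: Added sketch (not part of the original notebook): a quick visual check of how well the simpler clustered 4-Poisson model reproduces the observed synapse-count distribution, using the (already thresholded) true_data and the clustered-Poisson sim_data from the cells above.
End of explanation
"""
# added sketch: overlay histograms of observed vs simulated synapse counts
bins = np.linspace(0, np.max(true_data[:, 4]), 50)
plt.figure()
plt.hist(true_data[:, 4], bins=bins, histtype='step', label='observed')
plt.hist(sim_data[:, 4], bins=bins, histtype='step', label='4-Poisson model')
plt.xlabel('synapses per bin')
plt.ylabel('count')
plt.legend(loc='best')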
mbuchove/analysis-tools-m
pyROOT/CalcCombUL-GrMethod-v2.ipynb
mit
if plot: fig = plt.figure(figsize=(12, 12)) fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,1), hdu=5) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,2), hdu=7) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,3), hdu=9) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,4), hdu=11) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,5), hdu=13) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,6), hdu=15) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") """ Explanation: Plot RBM maps to start, just plot the RBM maps, shows what is going on here End of explanation """ onCounts = 0 onAcc = 0 nbinsOn = 0 onC = [] onA = [] totCounts = 0 grAcc = [] ptAcc = np.zeros([6, nTestReg]) for group in range(6): #counts setup extname = 'RawOnMap'+str(group) onData, onHeader = fits.getdata(fitsF, header=True, extname=extname) wcs_transformation_on = wcs.WCS(onHeader) yOn, xOn = np.mgrid[:onData.shape[0], :onData.shape[1]] raOn, decOn = wcs_transformation_on.all_pix2world(xOn, yOn, 0) onPos = SkyCoord(raOn, decOn, unit='deg', frame='icrs') totCounts += np.nansum(onData) #acceptance setup accData = fits.getdata(fitsF, header=False, extname='AcceptanceMap'+str(group)) #bin area reading (getting this from the root file as easier) fName = "rootdir/St6/All" + cuts + theta + clean + "s6.root" f = TFile(fName, "read") RBM = f.Get("RingBackgroundModelAnalysis/SkyMapOn") for i in range(accData.shape[0]): for j in range(accData.shape[1]): accData[i][j] = accData[i][j] * RBM.GetBinArea(i,j) gAcc = 0 for i in range(nTestReg): Pt = SkyCoord(ULpos['col1'][i], ULpos['col2'][i], unit='deg', frame='icrs') onSep = Pt.separation(onPos) cnts = np.nansum(onData[onSep.deg<sepDist]) accSep = Pt.separation(onPos) acc = np.nansum(accData[onSep.deg<sepDist])*np.nansum(onData) gAcc += acc ptAcc[group, i] = acc if group == 0: onC.append(cnts) onA.append(acc) else: onC[i] += cnts onA[i] += acc grAcc.append(gAcc) #print i+1, ULpos['col1'][i], ULpos['col2'][i], counts, acc print onC print onA onCounts = np.sum(onC) onAcc = np.sum(onA) print "Total", onCounts, onAcc, totCounts print grAcc """ Explanation: Calculate On Counts and Acceptance This is done by reading in the on counts and integral acceptance maps and using them to determin the total counts and acceptance within each of the test regions. 
Since we are dealing with circular regions we can simply use the standard caclulation for separation from astropy End of explanation """ onCountsE = 0 onAccE = 0 nbinsOnE = 0 onCE = 0 onAE = 0 totCountsE = 0 grAccE = 0 inElcent1E = SkyCoord(11.5, 42.00, unit='deg', frame='icrs') inElcent2E = SkyCoord(10.0, 40.55, unit='deg', frame='icrs') inElDistE = 1.8 for group in range(6): #counts setup extname = 'RawOnMap' + str(group) onData, onHeader = fits.getdata(fitsF, header=True, extname=extname) wcs_transformation_on = wcs.WCS(onHeader) yOn, xOn = np.mgrid[:onData.shape[0], :onData.shape[1]] raOn, decOn = wcs_transformation_on.all_pix2world(xOn, yOn, 0) onPos = SkyCoord(raOn, decOn, unit='deg', frame='icrs') totCountsE += np.nansum(onData) #acceptance setup accData = fits.getdata(fitsF, header=False, extname='AcceptanceMap'+str(group)) #bin area reading (getting this from the root file as easier) fName = "rootdir/St6/All" + cuts + theta + clean + "s6.root" f = TFile(fName, "read") RBM = f.Get("RingBackgroundModelAnalysis/SkyMapOn") for i in range(accData.shape[0]): for j in range(accData.shape[1]): accData[i][j] = accData[i][j] * RBM.GetBinArea(i,j) gAccE = 0 onSep = ((inElcent1E.separation(onPos).deg + inElcent2E.separation(onPos).deg) > inElDistE) onCE += np.nansum(onData[onSep < inElDistE]) accSep = Pt.separation(onPos) onAE += np.nansum(accData[onSep < inElDistE])*np.nansum(onData) gAccE += acc print onCE print onAE print grAcc """ Explanation: Just because I can, I am going to see what the integrated counts are within an ellipse to see if the stats are there to say anything End of explanation """ inElcent1 = SkyCoord(11.5, 42.00, unit='deg', frame='icrs') inElcent2 = SkyCoord(10.0, 40.55, unit='deg', frame='icrs') outElcent1 = inElcent1 outElcent2 = inElcent2 inElDist = 2.2 outElDist = 3.9 inEl = ((inElcent1.separation(onPos).deg + inElcent2.separation(onPos).deg) > inElDist) outEl = ((outElcent1.separation(onPos).deg + outElcent2.separation(onPos).deg) < outElDist) El = np.logical_and(inEl, outEl) nuAndrom = SkyCoord(12.4535, 41.0790, unit='deg', frame='icrs') sepnuAn = (nuAndrom.separation(onPos).deg > 0.4) El = np.logical_and(El, sepnuAn) ! 
rm BGReg.fits hdu = fits.PrimaryHDU(El*1., header=onHeader) hdu.writeto('BGReg.fits') if plot: fig = plt.figure(figsize=(8, 8)) fig1 = aplpy.FITSFigure(fitsF, figure=fig, hdu=1) fig1.show_colorscale(vmin=-5,vmax=5,cmap=cx1) standard_setup(fig1) fig1.set_title("Significance") fig1.show_contour("BGReg.fits", colors='k') for i in range(nTestReg): fig1.show_circles(ULpos['col1'][i], ULpos['col2'][i], sepDist, color='purple', linewidth=2, zorder=5) fig1.add_label(ULpos['col1'][i], ULpos['col2'][i], ULpos['col3'][i], size=16, weight='bold', color='purple') if plot: fig = plt.figure(figsize=(10, 10)) fig1 = aplpy.FITSFigure("M31_IRIS_smoothed.fits", figure=fig) fig1.show_colorscale(cmap='Blues',vmin=0, vmax=7e3) fig1.recenter(10.6847, 41.2687, width=4, height=4) fig1.ticks.show() fig1.ticks.set_color('black') fig1.tick_labels.set_xformat('dd.dd') fig1.tick_labels.set_yformat('dd.dd') fig1.ticks.set_xspacing(1) # degrees fig1.set_frame_color('black') fig1.set_tick_labels_font(size='14') fig1.set_axis_labels_font(size='16') fig1.show_grid() fig1.set_grid_color('k') fig1.add_label(12.4535, 41.09, r'$\nu$' + '-Andromedae', size=10, weight='demi', color='black') #fig1.show_contour("BGReg.fits", colors='k') fig1.show_contour("BGReg.fits", lw=0.5, filled=True, hatches=[None,'/'], colors='none') fig1.show_contour("BGReg.fits", linewidths=1., filled=False, colors='k', levels=1) plt.savefig("Plots/M31BgReg.pdf") """ Explanation: Calculate Background Counts and Acceptance The background counts are harder, we are trying to use an ellipse in camera coordinates (that is a flat coord scheme) whereas the data is saved in a spherical coord scheme, to overcome this we cheat a bit by defining the eclipses and then saving them as fits files with a header copied from the data, this means that the projections etc are correct. We can now check this ellipse is okay by drawing it (as a contour) over the skymap and checking that the correct regions are included/excluded. 
The background will be integrated between the two ellipses (less the bit cut out for nuAndromadae) End of explanation """ bgC = [] bgA = [] ptAlpha = np.empty([6, nTestReg]) for group in range(6): #counts setup extname = 'RawOnMap'+str(group) onData, onHeader = fits.getdata(fitsF, header=True, extname=extname) wcs_transformation_on = wcs.WCS(onHeader) yOn, xOn = np.mgrid[:onData.shape[0], :onData.shape[1]] raOn, decOn = wcs_transformation_on.all_pix2world(xOn, yOn, 0) onPos = SkyCoord(raOn, decOn, unit='deg', frame='icrs') #acceptance setup accData = fits.getdata(fitsF, header=False, extname='AcceptanceMap'+str(group)) #bin area reading (getting this from the root file as easier) fName = "rootdir/St6/All" + cuts + theta + clean + "s6.root" f = TFile(fName, "read") RBM = f.Get("RingBackgroundModelAnalysis/SkyMapOn") for i in range(accData.shape[0]): for j in range(accData.shape[1]): accData[i][j] = accData[i][j] * RBM.GetBinArea(i,j) bgC.append(np.nansum(onData[El])) bgA.append(np.nansum(accData[El]))#*np.nansum(onData)) ptAlpha[group,] = ptAcc[group,] / np.nansum(accData[El])/ totCounts #ptAlpha = ptAcc[group, :] / np.nansum(accData[El])/ totCounts grAlpha = np.array(grAcc) / np.array(bgA)/ totCounts ptAlpha = np.sum(ptAlpha, axis=0) alpha = np.sum(grAlpha) bgCounts = np.sum(bgC) excess = onCounts - bgCounts * alpha print "Total:", onCounts, bgCounts, excess, alpha print "Point Alpha:", ptAlpha """ Explanation: Sum Background counts and acceptance Note: - I have corrected for the varying bin size my multiplying the acc by the bin area End of explanation """ stats.significance_on_off(onCE, bgCounts, ) if plot: bins = np.linspace(-4.5, 4.5, 100) sigData, sigHeader = fits.getdata(fitsF, header=True, extname="SignificanceMap") fig = plt.figure(figsize=(3, 3)) fig, ax = plt.subplots(1) hist = plt.hist(sigData[(~np.isnan(sigData)) & El], bins=bins, histtype="step") plt.semilogy() hist, bins2 = np.histogram(sigData[(~np.isnan(sigData)) & El], bins = bins) (xf, yf), params, err, chi = fit.fit(fit.gaus, (bins2[0:-1] + bins2[1:])/2, hist) plt.plot(xf, yf, 'r-', label='Fit') textstr1 = '$\mu = %.2f $' % (params[1]) textstr2 = '$ %.3f$\n$\sigma = %.2f$' % (err[1], params[2]) textstr3 = '$ %.3f$' % (err[2]) textstr = textstr1 + u"\u00B1" + textstr2 + u"\u00B1" + textstr3 #textstr = textstr1 + textstr2 + textstr3 props = dict(boxstyle='square', alpha=0.5, fc="white") ax.text(0.95, 0.95, textstr, transform=ax.transAxes, fontsize=14, verticalalignment='top', horizontalalignment='right', bbox=props) plt.ylim(ymin=1e0) """ Explanation: Again, just for fun, checking counts within elipse End of explanation """ def RUL(on, off, alpha): rolke = TRolke(rolkeUL) rolke.SetBounding(True) rolke.SetPoissonBkgKnownEff(int(on), int(off), 1./(alpha), 1.) 
return rolke.GetUpperLimit() def FCUL(on, off, alpha): fc = TFeldmanCousins(rolkeUL) return fc.CalculateUpperLimit((on), (off) * alpha) if False: for i in range(nTestReg): ULCounts = RUL((onC[i]), (bgCounts), ptAlpha[i]) excess = onC[i] - bgCounts * ptAlpha[i] print "Point", i print 'On = {0:.0f}, Off = {1:.0f}, alpha = {2:.4f}'.format(onC[i], bgCounts, ptAlpha[i]) print 'Excess = {0:.2f}'.format(excess) print 'Signif = {0:.3f}'.format(gammapy.stats.significance_on_off(onC[i], bgCounts, ptAlpha[i])) print 'ULCount = {0:0.3f}'.format(ULCounts) print '' excess = onCounts - bgCounts * alpha ULCounts = RUL((onCounts), (bgCounts), (alpha)) print 'On = {0:.0f}, Off = {1:.0f}, alpha = {2:.4f}'.format(onCounts, bgCounts, alpha) print 'Excess = {0:.2f}'.format(excess) print 'Signif = {0:.3f}'.format(gammapy.stats.significance_on_off(onCounts, bgCounts, alpha)) print 'ULCount = {0:0.3f}'.format(ULCounts) print '' """ Explanation: Calculate the UL on Counts This is done using TRolke, since it is not in python we have to import it from Root, fortunately that is easy enough End of explanation """ pointData = np.copy(onData) pointData.fill(0) pointData[pointData.shape[0]/2., pointData.shape[1]/2.] = 1000 pointData1 = ndimage.gaussian_filter(pointData, sigma=(-sigma/onHeader['CDELT1'], sigma/onHeader['CDELT2']), order=0) wcs_transformation = wcs.WCS(onHeader) initPos = wcs_transformation.wcs_pix2world(pointData.shape[0]/2., pointData.shape[1]/2., 0) pointSourceCor = sumInRegion(pointData1, onHeader, initPos[0], initPos[1], sepDist)/np.sum(pointData1) IRISdata, IRISheader = fits.getdata("M31_IRIS_cropped_ds9.fits", header=True) IRISdata2 = ndimage.gaussian_filter(IRISdata, sigma=(-sigma/IRISheader['CDELT1'], sigma/IRISheader['CDELT2']), order=0) M31total = np.sum(IRISdata2) """ Explanation: Effective Area Needto sum all of the Effective Areas from each of the test positions and then work out the expected flux given a test spectrum Issue:- This is using a point source EA, we dont have a point source. What we need to correct each EA by the difference in the flux distribtuion in its test region. To do this we use the following relation: $\frac{Frac\; Region\; Flux\; in\; thetaSq}{Frac\; Point\; Source\; Flux\; in\; thetaSq}$ For the bottom bit I take the point source and convolve with PSF, work out the fraction of counts that remain within the thetaSq For the top bit, I take the model, convolve with the PSF and work out the fraction of counts before to after Logic: think, not all counts fall within thetaSq, thus the EA is slightly under estimate, since Flux * EA = counts. Thus we need to undo this for the point source and redo this for the extended source. The smoothing factor is put in such that 68\% of the flux falls within a 0.1deg region (this is the standard quoted number) for a point source. 
I would like to check this with hard cuts etc for sims, but that should be a secondary effect End of explanation """ %%rootprint nPts = 100 En = np.linspace(-1, 2, num=nPts) Sp1 = (10**En)**index EA = np.empty([nPts]) EA1 = np.empty([nPts]) minSafeE = 0 #this is the minimum safe energy, I will quote the spectrum here decorE = 0 EstCounts1 = 0 for j in range(nTestReg): fName = "rootdir/St6/All" + cuts + theta + clean + str(j+1) + "s6.root" f = TFile(fName, "read") UL = f.Get("UpperLimit/VAUpperLimit") g = UL.GetEffectiveArea() if UL.GetEnergy() > minSafeE: minSafeE = UL.GetEnergy() if UL.GetEdecorr() > decorE: decorE = UL.GetEdecorr() # Weight EAs by expected flux from that region irisReg = sumInRegion(IRISdata, IRISheader, ULpos['col1'][j], ULpos['col2'][j], sepDist) regW = irisReg / M31total # Correct for PSF effects M31RegCor = sumInRegion(IRISdata2, IRISheader, ULpos['col1'][j], ULpos['col2'][j], sepDist) / irisReg print j, regW, M31RegCor for i, xval in np.ndenumerate(En): EA[i] = g.Eval(xval) / pointSourceCor * M31RegCor * regW EA1[i] += g.Eval(xval) / pointSourceCor * M31RegCor * regW Fl1 = Sp1 * EA EstCounts1 += np.trapz(Fl1, 10**En)*livetime FluxULReg1 = ULCounts / EstCounts1 print FluxULReg1 """ Explanation: We will do the calculation of the spectrum $\times$ the EA within the loop, not sure if it makes any difference but better safe than sorry Remeber, EA is in $m^2$, spectrum is in TeV and live time is in seconds - so flux will be $m^{-2} s^{-1} TeV^{-1}$ End of explanation """ FluxULM31 = FluxULReg1 FluxULM31_eMin = FluxULM31 * minSafeE **index FluxULM31_eDec = FluxULM31 * decorE **index intULM31_eMin = gammapy.spectrum.powerlaw.power_law_integral_flux(FluxULM31, index, 1, minSafeE, 30) intULM31_eDec = gammapy.spectrum.powerlaw.power_law_integral_flux(FluxULM31, index, 1, decorE, 30) intULM31_eMin_pcCrab = intULM31_eMin /(gammapy.spectrum.crab_integral_flux(minSafeE, 30, 'hess_pl')[0] *1e2) intULM31_eDec_pcCrab = intULM31_eDec /(gammapy.spectrum.crab_integral_flux(decorE, 30, 'hess_pl')[0] *1e2) print 'On = {0:.0f}, Off = {1:.0f}, alpha = {2:.4f}'.format(onCounts, bgCounts, alpha) print 'Excess = {0:.2f}'.format(excess) print 'Signif = {0:.3f}'.format(gammapy.stats.significance_on_off(onCounts, bgCounts, alpha)) print 'ULCount = {0:0.3f}'.format(ULCounts) print '' print "Differential UL @ 1TeV = {0:.3e}".format(FluxULM31) print "Differential UL @ min safe E ({0:.1f}GeV) = {1:.3e}".format(minSafeE*1e3, FluxULM31_eMin) print "Differential UL @ decorrelation ({0:.1f}GeV) = {1:.3e}".format(decorE*1e3, FluxULM31_eDec) print "Differential UL units = TeV-1 m-2 s-1" print "" print "Integral UL between min safe energy and 30TeV = {0:.3e}".format(intULM31_eMin) print "Integral UL between decorrel energy and 30TeV = {0:.3e}".format(intULM31_eDec) print "Integral UL units = m-2 s-1" print "" print "Integral UL between min safe energy and 30TeV = {0:.3f} %Crab".format(intULM31_eMin_pcCrab) print "Integral UL between decorrel energy and 30TeV = {0:.3f} %Crab".format(intULM31_eDec_pcCrab) print 244. / 5.4, 244. / 0.67, 244. / 17.3 print 382. / 7.9, 382. / 3.90, 382. / 25.1 print 65. / 2.4, 65. / 0.30, 65. / 7.75 print 137. / 6.0, 137. / 3.00, 137. / 19.1 """ Explanation: Flux UL from M31 Total UL End of explanation """
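"""
Explanation: Added sketch (not part of the original notebook): a rough look at how the integral UL above the decorrelation energy shifts with the assumed spectral index. This keeps the counts UL and the differential normalisation fixed and only changes the spectral shape inside the integral, so it is only an approximation -- a full treatment would recompute the EA-weighted expected counts for each index.
End of explanation
"""
# added sketch: index sensitivity of the integral UL (normalisation held fixed)
for test_index in [index - 0.5, index, index + 0.5]:
    intUL = gammapy.spectrum.powerlaw.power_law_integral_flux(FluxULM31, test_index, 1, decorE, 30)
    print 'index {0:.2f}: integral UL above {1:.2f} TeV = {2:.3e} m-2 s-1'.format(test_index, decorE, intUL)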
VirtualWatershed/vw-py
examples/isnobal_netcdf/generate_isnobal_nc.ipynb
bsd-2-clause
# first, define our isnobal spatiotemporal parameters isnobal_params = dict( # generate a 10x8x(n_timesteps) grid for each variable nlines=10, nsamps=8, # with a resolution of 1.0m each; samp is north-south, so it's negative dline=1.0, dsamp=-1.0, # set base fake origin (easting, northing) = (442, 88) bline=442, bsamp=88, # enter start time and timestep; janky, but need to use '01' and '00' # to get proper ISO 8601 formatting if < 10 year=2008, month=10, day='22', hour='05', dt='hours' ) # now generate our netcdf import sys, os sys.path.append('../../') if os.path.exists('test.nc'): os.remove('test.nc') from vwpy.netcdf import ncgen_from_template # don't need path to the template, that's already set to be 'vwpy/cdl' nc = ncgen_from_template('ipw_in_template.cdl', 'test.nc', **isnobal_params) print isnobal_params """ Explanation: Initialize an iSNOBAL- and VW-ready dataset We'll use vw-py's utilities to initialize a new isnobal- and vw-ready dataset. In order to build our dataset we need some information: Start/end date The extreme northing and easting values (samps and bands in IPW speak) The resolution in northing and easting directions (dsamp/dline) The number of cells in both northing and easting driections The example below is adapted from the vw-py unit tests. We'll use the function ncgen_from_template which can be found in the vwpy.netcdf module. Briefly, this works by first building a CDL file from a Jinja2 template. See CDL Syntax and the ncgen man page for more CDL info. Then ncgen_from_template calls the command line ncgen function and loads the newly-generated dataset into Python. End of explanation """ # first, let's inspect what variables are available to us nc.variables # we can get metadata on any variable by inspecting the ncattrs time = nc.variables['time'] time.ncattrs() # this is where the date went time.getncattr('standard_name') """ Explanation: Next: populate the netCDF with data We'll insert data into just a few variables for an example. We can see what variables are now available either by inspecting the CDL template we just used, or we can inspect the Dataset's variable attribute, as shown below. One of the powerful features of netCDF is to transparently store data of any dimension. Some of our variables (time, easting, northing) are 1D. Others, including z, or altitude, are 2D variables; that is, they are spatially dependent, but time-independent. Finally, the climate variables that vary every hour and are spatially distributed are 3D arrays. End of explanation """ # let's create a fake DEM with some random data import numpy as np dem = abs(np.random.rand(isnobal_params['nlines'], isnobal_params['nsamps'])) %matplotlib inline import matplotlib.pyplot as plt plt.matshow(dem) np.shape(dem) np.shape(nc.variables['z']) # use [:] to unpack and assign te values from the dem nc.variables['z'][:] = dem z = nc.variables['z'][:] plt.matshow(z) """ Explanation: The standard name above refers to the CF Conventions standard name. By using this, other netCDF software tools can interpret the time variable, which unfortunately can only be represented as an integer index. Moving on, we'll now create fake elevation and atmospheric temperature data and insert it into our netcdf. 
End of explanation
"""
fake_ta = abs(np.random.rand(5, isnobal_params['nlines'], isnobal_params['nsamps']))

ta = nc.variables['T_a']
# with nlines=10 and nsamps=8 above, np.shape(ta) == (0, 10, 8) at this point

# ta is a reference to the actual netCDF variable, so we can assign to it
ta[:] = fake_ta
# now np.shape(ta) == (5, 10, 8)

# double-click the image below to enlarge and check that the panels are different
f, axs = plt.subplots(1, 5, figsize=(15, 10))
for idx, ax in enumerate(axs):
    ax.matshow(ta[idx])
    ax.set_title('t = ' + str(idx))
"""
Explanation: We see that our fake DEM is now contained in the z variable of the netCDF file. You can optionally also insert the elevation data into the alt variable, but the alt variable mostly remains unused and is not required for iSNOBAL.
Now we'll create a series of 5 (10, 8) random arrays, one per timestep, to simulate five timesteps' worth of data. We'll assign this to be fake atmospheric temperature.
End of explanation
"""
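# Optional round-trip check (an added sketch, not part of the original walkthrough).
# It assumes the object returned by ncgen_from_template is a netCDF4.Dataset, so
# sync(), close() and re-opening the file behave as usual for that library.
import netCDF4

nc.sync()
nc.close()
nc_check = netCDF4.Dataset('test.nc', 'r')
# with nlines=10 and nsamps=8 and time as the leading dimension, expect (5, 10, 8)
print(nc_check.variables['T_a'].shape)
nc_check.close()
"""
Explanation: As a final check (added sketch), the cell below flushes the data to disk and re-opens test.nc to confirm that the elevation and temperature grids really made it into the file. This assumes ncgen_from_template handed us a plain netCDF4.Dataset; if vw-py wraps the dataset differently, adjust the sync/close calls accordingly.
End of explanation
"""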
Dans-labs/dariah
static/tools/country_compose/.ipynb_checkpoints/countries-checkpoint.ipynb
mit
EU_FILE = 'europe_countries.csv' GEO_DIR = 'geojson' COUNTRIES = 'all_countries.json' OUTFILE = '../../../client/src/js/helpers/europe.geo.js' CENTER_PRECISION = 1 import sys, collections, json """ Explanation: Building the country information files The DARIAH app contains a visualization of the number of member country contribution on a map. We show the map using Leaflet, which loads files containing the boundaries. These files are in geojson format. Here we bundle all the necessary information of all European countries in one file. Per country that is: country code (ISO 2 letter) latitude and longitude (the place where to put markers or other features) geojson polygons, representing the boundaries We have obtained data from the github repo mledoze/countries. We use these files: dist/countries_unescaped.json data/ccc.geo.json (where ccc is the three letter code of a country) We have compiled manually a selection of European countries from dist/countries.csv and transformed it to the file europe_countries.csv (with only the name, the 2 letter and 3 letter codes of the country) The bundle we are producing will be a geojson file with as little information as needed. We also will round the coordinates and weed out duplicate points, in order to reduce the file size. NB: For Kosovo we have made manual adjustments: We downloaded a geojson file from elsewhere used KOS as a temporary three letter code End of explanation """ eu_countries = {} with open(EU_FILE) as f: for line in f: if line[0] == '#': continue fields = line.strip().split(';') if len(fields) == 3: (name, iso2, iso3) = fields eu_countries[iso2] = dict(iso3=iso3, name=name) for (i, (iso2, info)) in enumerate(sorted(eu_countries.items())): print('{:>2} {} {} {}'.format(i+1, iso2, info['iso3'], info['name'])) """ Explanation: Read the list of European countries End of explanation """ with open(COUNTRIES) as f: countries = json.load(f) print('Total number of countries: {}'.format(len(countries))) i = 0 coord_fmt = '{{:>{}.{}f}}'.format(4+CENTER_PRECISION, CENTER_PRECISION) pair_fmt = '({}, {})'.format(coord_fmt, coord_fmt) line_fmt = '{{:>2}} {{}} {} {{}}'.format(pair_fmt) for country in countries: iso2 = country['cca2'] if iso2 in eu_countries: i += 1 (lat, lng) = country['latlng'] info = eu_countries[iso2] info['lat'] = round(lat, CENTER_PRECISION) info['lng'] = round(lng, CENTER_PRECISION) print('Found info for {} European countries'.format(i)) for (i, (iso2, info)) in enumerate(sorted(eu_countries.items())): print(line_fmt.format( i+1, iso2, info['lat'], info['lng'], info['name'], )) """ Explanation: Read and filter the country file End of explanation """ def n_points(tp, data): if tp == 'll': return len(data) if tp == 'Polygon': return sum(len(ll) for ll in data) if tp == 'MultiPolygon': return sum(sum(len(ll) for ll in poly) for poly in data) return -1 def n_ll(tp, data): if tp == 'Polygon': return len(data) if tp == 'MultiPolygon': return sum(len(poly) for poly in data) return -1 for iso2 in eu_countries: info = eu_countries[iso2] with open('{}/{}.geo.json'.format(GEO_DIR, info['iso3'])) as f: geoinfo = json.load(f) geometry = geoinfo['features'][0]['geometry'] info['geometry'] = geometry total_ng = 0 total_nl = 0 total_np = 0 for (i, (iso2, info)) in enumerate(sorted(eu_countries.items())): geo = info['geometry'] shape = geo['type'] data = geo['coordinates'] ng = 1 if shape == 'Polygon' else len(data) np = n_points(shape, data) nl = n_ll(shape, data) total_ng += ng total_nl += nl total_np += np print('{:>2} {} {:<25} {:<15} {:>2} 
poly, {:>3} linear ring, {:>5} point'.format( i+1, iso2, info['name'], shape, ng, nl, np, )) print('{:<47}{:>2} poly, {:>3} linear ring, {:>5} point'.format( 'TOTAL', total_ng, total_nl, total_np, )) """ Explanation: Gather the boundary information End of explanation """ # maximal GEO_PRECISION = 3 # number of digits in coordinates of shapes MIN_POINTS = 1 # minimum number of points in a linear ring MAX_POINTS = 500 # maximum number of points in a linear ring MAX_POLY = 100 # maximum number of polygons in a multipolygon # minimal GEO_PRECISION = 1 # number of digits in coordinates of shapes MIN_POINTS = 10 # minimum number of points in a linear ring MAX_POINTS = 12 # maximum number of points in a linear ring MAX_POLY = 5 # maximum number of polygons in a multipolygon # medium GEO_PRECISION = 1 # number of digits in coordinates of shapes MIN_POINTS = 15 # minimum number of points in a linear ring MAX_POINTS = 60 # maximum number of points in a linear ring MAX_POLY = 7 # maximum number of polygons in a multipolygon def weed_ll(ll): new_ll = tuple(collections.OrderedDict( ((round(lng, GEO_PRECISION), round(lat, GEO_PRECISION)), None) for (lng, lat) in ll ).keys()) if len(new_ll) > MAX_POINTS: new_ll = new_ll[::(int(len(new_ll) / MAX_POINTS) + 1)] return new_ll + (new_ll[0],) def weed_poly(poly): new_poly = tuple(weed_ll(ll) for ll in poly) return tuple(ll for ll in new_poly if len(ll) >= MIN_POINTS) def weed_multi(multi): new_multi = tuple(weed_poly(poly) for poly in multi) return tuple(sorted(new_multi, key=lambda poly: -n_points('Polygon', poly))[0:MAX_POLY]) def weed(tp, data): if tp == 'll': return weed_ll(data) if tp == 'Polygon': return weed_poly(data) if tp == 'MultiPolygon': return weed_multi(data) ll = [ [8.710255,47.696808], [8.709721,47.70694], [8.708332,47.710548], [8.705,47.713051], [8.698889,47.713608], [8.675278,47.712494], [8.670555,47.711105], [8.670277,47.707497], [8.673298,47.701771], [8.675554,47.697495], [8.678595,47.693344], [8.710255,47.696808], ] ll2 = [ [8.710255,47.696808], [9.709721,47.70694], [10.708332,47.710548], [11.705,47.713051], [12.698889,47.713608], [13.675278,47.712494], [14.670555,47.711105], [15.670277,47.707497], [16.673298,47.701771], [17.675554,47.697495], [18.678595,47.693344], [19.710255,47.696808], [20.710255,47.696808], [8.710255,47.696808], ] poly = [ll, ll2] print(weed_ll(ll)) print('=====') print(weed_ll(ll2)) print('=====') print(weed_poly(poly)) wtotal_ng = 0 wtotal_nl = 0 wtotal_np = 0 for (i, (iso2, info)) in enumerate(sorted(eu_countries.items())): geo = info['geometry'] shape = geo['type'] data = geo['coordinates'] new_data = weed(shape, data) geo['coordinates'] = new_data data = new_data ng = 1 if shape == 'Polygon' else len(data) np = n_points(shape, data) nl = n_ll(shape, data) wtotal_ng += ng wtotal_nl += nl wtotal_np += np print('{:>2} {} {:<25} {:<15} {:>2} poly, {:>3} linear ring, {:>5} point'.format( i+1, iso2, info['name'], shape, ng, nl, np, )) print('{:<47}{:>2} poly, {:>3} linear ring, {:>5} point'.format( 'TOTAL after weeding', wtotal_ng, wtotal_nl, wtotal_np, )) print('{:<47}{:>2} poly, {:>3} linear ring, {:>5} point'.format( 'TOTAL', total_ng, total_nl, total_np, )) print('{:<47}{:>2} poly, {:>3} linear ring, {:>5} point'.format( 'IMPROVEMENT', total_ng - wtotal_ng, total_nl - wtotal_nl, total_np - wtotal_np, )) """ Explanation: Condense coordinates We are going to reduce the information in the boundaries in a number of ways. 
A shape is organized as follows:
Multipolygon: a set of Polygons
Polygon: a set of linear rings
Linear rings: a list of coordinates, of which the last is equal to the first
Coordinate: a longitude and a latitude
GEO_PRECISION
For coordinates we use a resolution of GEO_PRECISION digits behind the decimal point. We round the coordinates. This may cause repetition of identical points in a shape. We weed those out. We must take care that we do not weed out the first and last points.
MIN_POINTS
If a linear ring has too few points, we just ignore it. That is, a linear ring must have at least MIN_POINTS in order to pass.
MAX_POINTS
If a linear ring has too many points, we weed them out, until there are MAX_POINTS left.
MAX_POLY
If a multipolygon has too many polygons, we retain only MAX_POLY of them. We order the polygons by the number of points they contain, and we retain the richest ones.
End of explanation
"""
features = dict(
    type='FeatureCollection',
    features=[],
)
for (iso2, info) in sorted(eu_countries.items()):
    feature = dict(
        type='Feature',
        properties=dict(
            iso2=iso2,
            lng=info['lng'],
            lat=info['lat'],
        ),
        geometry=info['geometry'],
    )
    features['features'].append(feature)
with open(OUTFILE, 'w') as f:
    f.write('''
/**
 * European country borders
 *
 * @module europe_geo_js
 */

/**
 * Contains low resolution geographical coordinates of borders of European countries.
 * These coordinates can be drawn on a map, e.g. by [Leaflet](http://leafletjs.com).
 *
 * More information, and the computation itself is in
 * [countries.ipynb](/api/file/tools/country_compose/countries.html)
 * a Jupyter notebook that you can run for yourself, if you want to tweak the
 * resolution and precision of the border coordinates.
 */
''')
    f.write('export const countryBorders = ')
    json.dump(features, f)
"""
Explanation: Produce geojson file
End of explanation
"""
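# Quick size report (an added sketch): how many countries ended up in the bundle and how
# large the weeded GeoJSON payload is. Useful when tuning GEO_PRECISION, MIN_POINTS,
# MAX_POINTS and MAX_POLY above; it only reuses the features dict and the json module.
payload = json.dumps(features)
print('{} countries, {:.1f} kB of GeoJSON after weeding'.format(
    len(features['features']), len(payload) / 1024.0))
"""
Explanation: One extra cell (added sketch): report the number of countries in the bundle and the size of the weeded GeoJSON payload, since shrinking this payload is the whole point of the weeding parameters. Nothing here goes beyond the features dict and json module already used above.
End of explanation
"""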
MIT-LCP/mimic-workshop
temp/02-example-patient-sepsis.ipynb
mit
import numpy as np import pandas as pd import matplotlib.pyplot as plt import sqlite3 %matplotlib inline """ Explanation: Exploring the trajectory of a single patient Import Python libraries We first need to import some tools for working with data in Python. - NumPy is for working with numbers - Pandas is for analysing data - MatPlotLib is for making plots - Sqlite3 to connect to the database End of explanation """ # Connect to the MIMIC database conn = sqlite3.connect('data/mimicdata.sqlite') # Create our test query test_query = """ SELECT subject_id, hadm_id, admittime, dischtime, admission_type, diagnosis FROM admissions LIMIT 10; """ # Run the query and assign the results to a variable test = pd.read_sql_query(test_query,conn) # Display the first few rows test.head() """ Explanation: Connect to the database We can use the sqlite3 library to connect to the MIMIC database Once the connection is established, we'll run a simple SQL query. End of explanation """ query = """ SELECT de.icustay_id , (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS , di.label , de.value , de.valuenum , de.uom FROM chartevents de INNER join d_items di ON de.itemid = di.itemid INNER join icustays ie ON de.icustay_id = ie.icustay_id WHERE de.subject_id = 40036 ORDER BY charttime; """ ce = pd.read_sql_query(query,conn) # Preview the data # Use 'head' to limit the number of rows returned ce.head() """ Explanation: Load the chartevents data The chartevents table contains data charted at the patient bedside. It includes variables such as heart rate, respiratory rate, temperature, and so on. We'll begin by loading the chartevents data for a single patient. End of explanation """ # Select a single column ce['LABEL'] """ Explanation: Review the patient's heart rate We can select individual columns using the column name. For example, if we want to select just the label column, we write ce.LABEL or alternatively ce['LABEL'] End of explanation """ # Select just the heart rate rows using an index ce[ce.LABEL=='Heart Rate'] """ Explanation: In a similar way, we can select rows from data using indexes. For example, to select rows where the label is equal to 'Heart Rate', we would create an index using [ce.LABEL=='Heart Rate'] End of explanation """ # Which time stamps have a corresponding heart rate measurement? print ce.index[ce.LABEL=='Heart Rate'] # Set x equal to the times x_hr = ce.HOURS[ce.LABEL=='Heart Rate'] # Set y equal to the heart rates y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate'] # Plot time against heart rate plt.figure(figsize=(14, 6)) plt.plot(x_hr,y_hr) plt.xlabel('Time',fontsize=16) plt.ylabel('Heart rate',fontsize=16) plt.title('Heart rate over time from admission to the intensive care unit') """ Explanation: Plot 1: How did the patients heart rate change over time? Using the methods described above to select our data of interest, we can create our x and y axis values to create a time series plot of heart rate. End of explanation """ # Exercise 1 here """ Explanation: Task 1 What is happening to this patient's heart rate? Plot respiratory rate over time for the patient. Is there anything unusual about the patient's respiratory rate? 
End of explanation """ plt.figure(figsize=(14, 6)) plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'], ce.VALUENUM[ce.LABEL=='Respiratory Rate'], 'k+', markersize=10, linewidth=4) plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - High'], ce.VALUENUM[ce.LABEL=='Resp Alarm - High'], 'm--') plt.plot(ce.HOURS[ce.LABEL=='Resp Alarm - Low'], ce.VALUENUM[ce.LABEL=='Resp Alarm - Low'], 'm--') plt.xlabel('Time',fontsize=16) plt.ylabel('Respiratory rate',fontsize=16) plt.title('Respiratory rate over time from admission, with upper and lower alarm thresholds') plt.ylim(0,55) """ Explanation: Plot 2: Did the patient's vital signs breach any alarm thresholds? Alarm systems in the intensive care unit are commonly based on high and low thresholds defined by the carer. False alarms are often a problem and so thresholds may be set arbitrarily to reduce alarms. As a result, alarm settings carry limited information. End of explanation """ # Display the first few rows of the GCS eye response data ce[ce.LABEL=='GCS - Eye Opening'].head() # Prepare the size of the figure plt.figure(figsize=(14, 10)) # Set x equal to the times x_hr = ce.HOURS[ce.LABEL=='Heart Rate'] # Set y equal to the heart rates y_hr = ce.VALUENUM[ce.LABEL=='Heart Rate'] plt.plot(x_hr,y_hr) plt.plot(ce.HOURS[ce.LABEL=='Respiratory Rate'], ce.VALUENUM[ce.LABEL=='Respiratory Rate'], 'k', markersize=6) # Add a text label to the y-axis plt.text(-4,155,'GCS - Eye Opening',fontsize=14) plt.text(-4,150,'GCS - Motor Response',fontsize=14) plt.text(-4,145,'GCS - Verbal Response',fontsize=14) # Iterate over list of GCS labels, plotting around 1 in 10 to avoid overlap for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Eye Opening'].values): if np.mod(i,6)==0 and i < 65: plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Eye Opening'].values[i],155),fontsize=14) for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Motor Response'].values): if np.mod(i,6)==0 and i < 65: plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Motor Response'].values[i],150),fontsize=14) for i, txt in enumerate(ce.VALUE[ce.LABEL=='GCS - Verbal Response'].values): if np.mod(i,6)==0 and i < 65: plt.annotate(txt, (ce.HOURS[ce.LABEL=='GCS - Verbal Response'].values[i],145),fontsize=14) plt.title('Vital signs and Glasgow Coma Scale over time from admission',fontsize=16) plt.xlabel('Time (hours)',fontsize=16) plt.ylabel('Heart rate or GCS',fontsize=16) plt.ylim(10,165) """ Explanation: Task 2 Based on the data, does it look like the alarms would have triggered for this patient? Plot 3: What is patient's level of consciousness? Glasgow Coma Scale (GCS) is a measure of consciousness. It is commonly used for monitoring patients in the intensive care unit. It consists of three components: eye response; verbal response; motor response. End of explanation """ # OPTION 1: load outputs from the patient query = """ select de.icustay_id , (strftime('%s',de.charttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS , di.label , de.value , de.valueuom from outputevents de inner join icustays ie on de.icustay_id = ie.icustay_id inner join d_items di on de.itemid = di.itemid where de.subject_id = 40036 order by charttime; """ oe = pd.read_sql_query(query,conn) oe.head() # Prepare the size of the figure plt.figure(figsize=(14, 10)) plt.title('Fluid output over time') plt.plot(oe.HOURS, oe.VALUE.cumsum()/1000, 'ro', markersize=8, label='Output volume, L') plt.xlim(0,20) plt.ylim(0,2) plt.legend() """ Explanation: Task 3 How is the patient's consciousness changing over time? Stop here... 
Plot 2: What other data do we have on the patient? Using Pandas 'read_csv function' again, we'll now load the patient outputs data (for example, urine output, drains, dialysis). This data is contained in the outputevents data table. End of explanation """ # Load inputs given to the patient (usually intravenously) using the database connection query = """ select de.icustay_id , (strftime('%s',de.starttime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_START , (strftime('%s',de.endtime)-strftime('%s',ie.intime))/60.0/60.0 as HOURS_END , de.linkorderid , di.label , de.amount , de.amountuom , de.rate , de.rateuom from inputevents_mv de inner join icustays ie on de.icustay_id = ie.icustay_id inner join d_items di on de.itemid = di.itemid where de.subject_id = 40036 order by endtime; """ ie = pd.read_sql_query(query,conn) ie.head() """ Explanation: To provide context for this plot, it would help to include patient input data. This helps to determine the patient's fluid balance, a key indicator in patient health. End of explanation """ ie['LABEL'].unique() # Prepare the size of the figure plt.figure(figsize=(14, 10)) # Plot the cumulative input against the cumulative output plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'], ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000, 'go', markersize=8, label='Intake volume, L') plt.plot(oe.HOURS, oe.VALUE.cumsum()/1000, 'ro', markersize=8, label='Output volume, L') plt.title('Fluid balance over time',fontsize=16) plt.xlabel('Hours',fontsize=16) plt.ylabel('Volume (litres)',fontsize=16) # plt.ylim(0,38) plt.legend() """ Explanation: Note that the column headers are different: we have "HOURS_START" and "HOURS_END". This is because inputs are administered over a fixed period of time. End of explanation """ plt.figure(figsize=(14, 10)) # Plot the cumulative input against the cumulative output plt.plot(ie.HOURS_END[ie.AMOUNTUOM=='mL'], ie.AMOUNT[ie.AMOUNTUOM=='mL'].cumsum()/1000, 'go', markersize=8, label='Intake volume, L') plt.plot(oe.HOURS, oe.VALUE.cumsum()/1000, 'ro', markersize=8, label='Output volume, L') # example on getting two columns from a dataframe: ie[['HOURS_START','HOURS_END']].head() for i, idx in enumerate(ie.index[ie.LABEL=='Furosemide (Lasix)']): plt.plot([ie.HOURS_START[ie.LABEL=='Furosemide (Lasix)'][idx], ie.HOURS_END[ie.LABEL=='Furosemide (Lasix)'][idx]], [ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx], ie.RATE[ie.LABEL=='Furosemide (Lasix)'][idx]], 'b-',linewidth=4) plt.title('Fluid balance over time',fontsize=16) plt.xlabel('Hours',fontsize=16) plt.ylabel('Volume (litres)',fontsize=16) # plt.ylim(0,38) plt.legend() ie['LABEL'].unique() """ Explanation: As the plot shows, the patient's intake tends to be above their output. There are however periods where input and output are almost one to one. One of the biggest challenges of working with ICU data is that context is everything, so let's look at a treatment (Furosemide/Lasix) which we know will affect this graph. End of explanation """ # Exercise 2 here """ Explanation: Exercise 2 Plot the alarms for the mean arterial pressure ('Arterial Blood Pressure mean') HINT: you can use ce.LABEL.unique() to find a list of variable names Were the alarm thresholds breached? 
End of explanation """ plt.figure(figsize=(14, 10)) plt.plot(ce.index[ce.LABEL=='Heart Rate'], ce.VALUENUM[ce.LABEL=='Heart Rate'], 'rx', markersize=8, label='HR') plt.plot(ce.index[ce.LABEL=='O2 saturation pulseoxymetry'], ce.VALUENUM[ce.LABEL=='O2 saturation pulseoxymetry'], 'g.', markersize=8, label='O2') plt.plot(ce.index[ce.LABEL=='Arterial Blood Pressure mean'], ce.VALUENUM[ce.LABEL=='Arterial Blood Pressure mean'], 'bv', markersize=8, label='MAP') plt.plot(ce.index[ce.LABEL=='Respiratory Rate'], ce.VALUENUM[ce.LABEL=='Respiratory Rate'], 'k+', markersize=8, label='RR') plt.title('Vital signs over time from admission') plt.ylim(0,130) plt.legend() """ Explanation: Plot 3: Were the patient's other vital signs stable? End of explanation """ # OPTION 1: load labevents data using the database connection query = """ SELECT de.subject_id , de.charttime , di.label, de.value, de.valuenum , de.uom FROM labevents de INNER JOIN d_labitems di ON de.itemid = di.itemid where de.subject_id = 40036 """ le = pd.read_sql_query(query,conn) # preview the labevents data le.head() # preview the ioevents data le[le.LABEL=='HEMOGLOBIN'] plt.figure(figsize=(14, 10)) plt.plot(le.index[le.LABEL=='HEMATOCRIT'], le.VALUENUM[le.LABEL=='HEMATOCRIT'], 'go', markersize=6, label='Haematocrit') plt.plot(le.index[le.LABEL=='HEMOGLOBIN'], le.VALUENUM[le.LABEL=='HEMOGLOBIN'], 'bv', markersize=8, label='Hemoglobin') plt.title('Laboratory measurements over time from admission') plt.ylim(0,38) plt.legend() """ Explanation: Plot 5: Laboratory measurements Using Pandas 'read_csv function' again, we'll now load the labevents data. This data corresponds to measurements made in a laboratory - usually on a sample of patient blood. End of explanation """ # load ioevents ioe = pd.read_csv('data/example_ioevents.csv',index_col='HOURSSINCEADMISSION_START') ioe.head() plt.figure(figsize=(14, 10)) plt.plot(ioe.index[ioe.LABEL=='Midazolam (Versed)'], ioe.RATE[ioe.LABEL=='Midazolam (Versed)'], 'go', markersize=6, label='Midazolam (Versed)') plt.plot(ioe.index[ioe.LABEL=='Propofol'], ioe.RATE[ioe.LABEL=='Propofol'], 'bv', markersize=8, label='Propofol') plt.plot(ioe.index[ioe.LABEL=='Fentanyl'], ioe.RATE[ioe.LABEL=='Fentanyl'], 'k+', markersize=8, label='Fentanyl') plt.title('IOevents over time from admission') plt.ylim(0,380) plt.legend() """ Explanation: Plot 5: intravenous medications Using the Pandas 'read_csv function' again, we'll now load the the ioevents dataset End of explanation """ plt.figure(figsize=(14, 10)) plt.plot(ioe.index[ioe.LABEL=='OR Cryoprecipitate Intake'], ioe.VALUENUM[ioe.LABEL=='OR Cryoprecipitate Intake'], 'go', markersize=6, label='OR Cryoprecipitate Intake') plt.plot(ioe.index[ioe.LABEL=='OR Crystalloid Intake'], ioe.VALUENUM[ioe.LABEL=='OR Crystalloid Intake'], 'bv', markersize=8, label='OR Crystalloid Intake') plt.plot(ioe.index[ioe.LABEL=='OR FFP Intake'], ioe.VALUENUM[ioe.LABEL=='OR FFP Intake'], 'k+', markersize=8, label='OR FFP Intake') plt.plot(ioe.index[ioe.LABEL=='OR Packed RBC Intake'], ioe.VALUENUM[ioe.LABEL=='OR Packed RBC Intake'], 'k+', markersize=8, label='OR Packed RBC Intake') plt.plot(ioe.index[ioe.LABEL=='OR Platelet Intake'], ioe.VALUENUM[ioe.LABEL=='OR Platelet Intake'], 'k+', markersize=8, label='OR Platelet Intake') plt.title('Blood products administered over time from admission') plt.legend() """ Explanation: Plot 6: blood products Using Pandas 'read_csv function' again, we'll now load the blood products data End of explanation """ # insert discharge summary here... 
""" Explanation: Discharge summary End of explanation """
johnpfay/environ859
06_WebGIS/Notebooks/Bird-Demo-Reuben.ipynb
gpl-3.0
#Import modules
import requests
from bs4 import BeautifulSoup

#Example URL
theURL = "https://www.hbw.com/species/brown-wood-owl-strix-leptogrammica"

#Get content of the species web page
response = requests.get(theURL)

#Convert to a "soup" object, which BS4 is designed to work with
soup = BeautifulSoup(response.text, 'lxml')
"""
Explanation: Applied example of scraping the Handbook of Birds of the World to get a list of subspecies for a given bird species.
End of explanation
"""
#Find all sections with the CSS class 'ds-ssp_comp' and get the first (only) item found
div = soup.find_all('div', class_='ds-ssp_comp')
section = div[0]
"""
Explanation: Introspection of the source HTML of the species web page reveals that the sub-species listings fall within a section (div in HTML lingo) labeled `<div class="ds-ssp_comp">` in the HTML. So we'll search the 'soup' for this section, which returns a list with one item, then we extract that one item to a variable named section.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
End of explanation
"""
#Find all entries in the section with the tag 'em'
subSpecies = section.find_all('em')
"""
Explanation: All the entries with the `<em>` tag are the subspecies entries.
End of explanation
"""
#Loop over the entries and print each subspecies name
for subSpp in subSpecies:
    print(subSpp.get_text())
"""
Explanation: We can loop through each subspecies found and print its name.
End of explanation
"""
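#Added sketch: gather the subspecies names into a plain Python list so they can be
#reused (for example written to a file or compared across species) rather than only printed
subSppNames = [subSpp.get_text() for subSpp in subSpecies]
print('{0} subspecies found: {1}'.format(len(subSppNames), subSppNames))
"""
Explanation: Added sketch: the same loop result gathered into a list, which is usually more convenient than printing if the names are going to be saved or compared across species. It reuses only the subSpecies result from above.
End of explanation
"""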
LimeeZ/phys292-2015-work
days/day08/Display.ipynb
mit
class Ball(object): pass b = Ball() b.__repr__() print(b) """ Explanation: Display of Rich Output In Python, objects can declare their textual representation using the __repr__ method. End of explanation """ class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) """ Explanation: Overriding the __repr__ method: End of explanation """ from IPython.display import display """ Explanation: IPython expands on this idea and allows objects to declare other, rich representations including: HTML JSON PNG JPEG SVG LaTeX A single object can declare some or all of these representations; all of them are handled by IPython's display system. . Basic display imports The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations. End of explanation """ from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) """ Explanation: A few points: Calling display on an object will send all possible representations to the Notebook. These representations are stored in the Notebook document. In general the Notebook will use the richest available representation. If you want to display a particular representation, there are specific functions for that: End of explanation """ from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) """ Explanation: Images To work with images (JPEG, PNG) use the Image class. End of explanation """ i """ Explanation: Returning an Image object from an expression will automatically display it: End of explanation """ Image(url='http://python.org/images/python-logo.gif') """ Explanation: An image can also be displayed from raw data or a URL. End of explanation """ from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) """ Explanation: HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class. End of explanation """ %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> """ Explanation: You can also use the %%html cell magic to accomplish the same thing. End of explanation """ from IPython.display import Javascript """ Explanation: You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output. End of explanation """ js = Javascript('alert("hi")'); display(js) """ Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it. 
End of explanation """ %%javascript alert("hi"); """ Explanation: The same thing can be accomplished using the %%javascript cell magic: End of explanation """ Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); """ Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples. End of explanation """ from IPython.display import Audio Audio("./scrubjay.mp3") """ Explanation: Audio IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. End of explanation """ import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) """ Explanation: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as beats occur: End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') """ Explanation: Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: End of explanation """ from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) """ Explanation: External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: End of explanation """ from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') """ Explanation: Links to local files IPython provides builtin display classes for generating links to local files. 
Create a link to a single file using the FileLink object: End of explanation """ FileLinks('./') """ Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well. End of explanation """
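# Added sketch: our own classes can opt into the rich display system by implementing
# a _repr_html_ method alongside __repr__; the Notebook picks the richest available
# representation and other frontends fall back to the plain text one.
class ColoredBall(object):
    def __init__(self, color):
        self.color = color
    def __repr__(self):
        return 'ColoredBall({0!r})'.format(self.color)
    def _repr_html_(self):
        return '<b style="color: {0};">ColoredBall</b>'.format(self.color)

display(ColoredBall('green'))
"""
Explanation: To close the loop with the __repr__ example that opened this notebook, here is an added sketch of how a user-defined class joins the rich display system: implement _repr_html_ (or _repr_png_, _repr_json_, and so on) and display() will use the richest representation the frontend supports, falling back to __repr__ elsewhere.
End of explanation
"""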
prakhar2b/Weekend-Projects
gensim/#691.ipynb
mit
index.output_prefix """ Explanation: '/home/prakhar/Documents/khg' -- we want user to provide a location inside which we will create a directory named "shard" in which everything will happen. So, just to demonstrate that the code will detect parent directory and create shard directory in it, we are using "../khg" use something like this - '/home/prakhar/Documents/khg' or '/home/prakhar/Documents/' . Trailing slash is important otherwise "shard" directory will be created here - '/home/prakhar/shard/' instead of '/home/prakhar/Documents/shard/' End of explanation """ index.save('prakhar') index[vec_lsi] index.output_prefix """ Explanation: Now, user needs to provide only the file name, as file will now be saved inside "shard" directory. End of explanation """ index2 = similarities.Similarity.load('/home/prakhar/Documents/test/shard/prakhar') index2.output_prefix index2[vec_lsi] #index2.output_prefix #index2.output_prefix = '/home/prakhar/Documents/gentestOLD/prakhar.x' #index2.check_moved() #index[vec_lsi] """ Explanation: Now, for portability the user needs to use the "shard" folder End of explanation """
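# Added self-contained sketch (the demo_* names and the demo path are new, not from the
# cells above) of how a Similarity index with an output prefix is normally built: gensim
# writes its shard files next to whatever prefix is passed in, which is exactly the path
# handling at issue here.
from gensim import corpora, similarities

demo_docs = [['human', 'computer', 'interface'],
             ['graph', 'minors', 'survey']]
demo_dictionary = corpora.Dictionary(demo_docs)
demo_corpus = [demo_dictionary.doc2bow(doc) for doc in demo_docs]
demo_index = similarities.Similarity('/home/prakhar/Documents/test/shard/demo',
                                     demo_corpus,
                                     num_features=len(demo_dictionary))
demo_index.output_prefix
"""
Explanation: For context, an added self-contained sketch (the demo_* names and the demo path are new, not from the notebook above) of how a Similarity index is normally constructed: the first argument is the output prefix, and the shard files end up next to it, which is exactly the path handling this change is about.
End of explanation
"""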
UWPreMAP/PreMAP2015
Lessons/Python_Plotting.ipynb
mit
# we use matplotlib and specifically pyplot for basic plotting purposes # the convention is to import this as "plt" # also import things that will help use read in our data and perform other operations on it import matplotlib.pyplot as plt from astropy.io import ascii import numpy as np # I'm also using this "magic" function to make my plots appear in this notebook # you DO NOT need to do this in your code. only do this when working with notebooks # if you want plots to appear in the below the cells where you are running code %matplotlib inline x = np.arange(10) y = np.arange(10, 20) plt.plot(x, y) # basic plotting command; it takes x,y values as well as other arguments to customize the plot plt.show() # you can change the plot symbol, size, color, etc. plt.plot(x,y,'.',markersize=20, color='orange') plt.show() # instead of creating arrays you can plot a function, like a sine curve x = np.linspace(0,4*np.pi,50) y = np.sin(x) plt.plot(x,y, '--', linewidth=5) # add an argument to change the width of the plotted line, and linestyle plt.xlabel('Xlabel') # add labels to the axes plt.ylabel('Ylable') plt.title('Sine Curve') # set the plot title plt.show() x = np.arange(10) y = x**3 xerr_values = 0.2*np.sqrt(x) yerr_values = 5*np.sqrt(y) plt.errorbar(x,y, xerr=xerr_values, yerr=yerr_values) # adding errorbars to a plot plt.show() # for log plots there is the option of plt.loglog(), plt.semilogx(), plt.semilogy() x = np.linspace(0,20) y = np.exp(x) plt.semilogy(x,y) plt.show() # it is also simple to add legends to your plots in matplotlib # you simply need to include the "label" argument in your plot command # and then add "plt.legend()" xred = np.random.rand(100) yred = np.random.rand(100) xblue = np.random.rand(20) yblue = np.random.rand(20) plt.plot(xred, yred, '^', color='red', markersize=8, label='Red Points') plt.plot(xblue, yblue, '+', color='blue', markersize=12, markeredgewidth=3, label='Blue Points') plt.xlabel('Xaxis') plt.ylabel('Yaxis') plt.legend() # this has the optional argument "loc" to tell the legend where to go plt.show() # to save figures in python you just use "plt.savefig()" plt.plot(np.sin(np.linspace(0, 10))) plt.title('SINE CURVE') plt.xlabel('Xaxis') plt.ylabel('Yaxis') plt.savefig('sineplot.png') # just feed savefig the file name, or path to file name that you want to write plt.show() # plot of kepler's law a_AU = np.array([0.387, 0.723, 1. , 1.524, 5.203, 9.537, 19.191, 30.069, 39.482]) T_yr = np.array([0.24,0.62, 1., 1.88,11.86, 29.46, 84.01, 164.8, 247.7]) a_cm = a_AU*1.496e+13 T_s = T_yr*3.154e+7 G = 6.67e-8 Msun = 1.99e+33 plt.loglog(a_AU, T_yr, 'o') plt.xlabel('Semi-Major Axis [AU]') plt.ylabel('Period [yrs]') plt.show() # now plot a function over the data # as you work more in python you will learn how to actually fit models to your data def keplers_third_law(a,M): return np.sqrt((4*np.pi**2*a**3) / (G*M)) plt.loglog(a_cm, T_s, 'o') plt.loglog(a_cm, keplers_third_law(a_cm,Msun), '--', label='Keplers Third Law') # try swapping out Msun with something else and see what it looks like plt.xlabel('Semi-Major Axis [cm]') plt.ylabel('Period [s]') plt.legend(loc=2) plt.show() """ Explanation: Plotting in Python examples in this notebook are based on Nicholas Hunt-Walker's plotting tutorial and Jake VanderPlas' matplotlib tutorial Thus far we have learned how to do basic mathematical operations in python, read and write data files, write/use functions and modules, and write loops. What we have not learned is how to visualize our data. 
Making professional plots in python is actually relatively straightfoward. In this lesson we will cover some of the most basic plotting capabilities of python, but note that there is much, much more that you can do. For some examples check out the matplotlib gallery, seaborn gallery, or plotly for examples of how to make interactive plots. You can even make xkcd style plots with relative ease! In this notebook we will learn how to make basic plots like scatter plots, histograms and line plots in using matplotlib in python. Basic Plot Commands Some of the basic plotting commands include python plt.plot() plt.errorbar() plt.loglog(), plt.semilogx(), plt.semilogy() End of explanation """ # first let's read in some data to use for plotting galaxy_table = ascii.read('data/mygalaxy.dat') galaxy_table[:5] # simple scatter plot plt.scatter(galaxy_table['col1'], galaxy_table['col2']) plt.show() """ Explanation: Scatter Plots End of explanation """ plt.scatter(galaxy_table['col1'], galaxy_table['col2'], color='blue', s=1, edgecolor='None', marker='o') plt.show() # here would be the equivalent statement using plt.plot(), note that the syntax is a little different plt.plot(galaxy_table['col1'], galaxy_table['col2'], 'o', color='blue', markersize=1, markeredgecolor='None') plt.show() """ Explanation: SIDE NOTE: If you are running things in the IPython environment or from a script you would want to do something like the following to get your plots to show up in a new window: python plt.scatter(galaxy_table['col1'], galaxy_table['col2']) plt.show() In an IPython Notebook, you will see the plot outputs whether or not you call plt.show() because we've used the %matplotlib inline magic function. Let's break down these basic examples: - We are running functions called "plot" or "scatter" that take specific arguments. - The most basic arguments that these functions take are in the form of (x,y) values for the plot, and we get these from a data table. - We can use more specific arugments like 'o' to customize things like the plot symbol (marker) that we are using. With plt.scatter() you can change things like point color, point size, point edge color and point type. The argument syntax for adding these options are as follows: color = 'colorname'; could be 'b' for blue, 'k' for black, 'r' for red s = number; changes marker size markeredgecolor = None or 'colorname' marker = 'symbolname', i.e. 's' for square, 'o' for circle, '+' for cross, 'x' for x, '*' for star, '^' for triangle, etc. Let's do an example: End of explanation """ plt.scatter(galaxy_table['col1'], galaxy_table['col2'], color='blue', s=1, edgecolor='None', marker='o') plt.xlabel('Galactic Longitude (degrees)', fontweight='bold', size=16) plt.ylabel('Galactic Latitude (degrees)', fontweight='bold', size=16) plt.show() """ Explanation: The plot is starting to look better, but there is one really important thing that is missing: axis labels. These are very easy to put in in matplotlib using plt.xlabel() and plt.ylabel(). 
These functions take strings as their arguments for the labels, but can also take other arguments that case the text format: End of explanation """ plt.scatter(galaxy_table['col1'], galaxy_table['col2'], color='blue', s=1, edgecolor='None', marker='o') plt.xlabel('Galactic Longitude (degrees)', fontweight='bold', size=16) plt.ylabel('Galactic Latitude (degrees)', fontweight='bold', size=16) plt.xlim([-180,180]) plt.ylim([-90,90]) plt.show() """ Explanation: We can also change things like the axis limits with plt.xlim() and plt.ylim(). For these we just want to feed it a range of values for each axis: End of explanation """ plt.scatter(galaxy_table['col1'], galaxy_table['col2'], color='blue', s=1, edgecolor='None', marker='o') # Labels plt.xlabel('Galactic Longitude (degrees)', fontweight='bold', size=16) plt.ylabel('Galactic Latitude (degrees)', fontweight='bold', size=16) # Set limits plt.xlim([-180,180]) plt.ylim([-90,90]) # Choose axis ticks plt.xticks(range(-180,210,60), fontsize=16, fontweight='bold') # change tick spacing, font size and bold plt.yticks(range(-90,120,30), fontsize=16, fontweight='bold') # turn on minor tick marks plt.minorticks_on() plt.grid() # turn on a background grip to guide the eye plt.show() """ Explanation: The axis labels are easy to read, but the numbers and tick marks on the axis are pretty small. We can tweak lots of little things about how the tick marks look, how they are spaced, and if we want to have a grid to guide the reader's eyes. I will give just a couple of examples here: End of explanation """ plt.figure(figsize=(10,4)) # change figure size plt.scatter(galaxy_table['col1'], galaxy_table['col2'], color='blue', s=1, edgecolor='None', marker='o') # Labels plt.xlabel('Galactic Longitude (degrees)', fontweight='bold', size=16) plt.ylabel('Galactic Latitude (degrees)', fontweight='bold', size=16) # Set limits plt.xlim([-180,180]) plt.ylim([-90,90]) # Choose axis ticks plt.xticks(range(-180,210,60), fontsize=16, fontweight='bold') # change tick spacing, font size and bold plt.yticks(range(-90,120,30), fontsize=16, fontweight='bold') # turn on minor tick marks plt.minorticks_on() plt.grid() # turn on a background grip to guide the eye plt.show() """ Explanation: By default the figure is square, but maybe this is not the best way to represent our data. If this is the case we can change the size of the figure: End of explanation """ plt.figure(figsize=(10,4)) # change figure size plt.scatter(galaxy_table['col1'], galaxy_table['col2'], color='blue', s=1, edgecolor='None', marker='o') # the next three lines put text on the figure at the specified coordinates plt.text(-90, -50, 'LMC', fontsize=20) plt.text(-60, -60, 'SMC', fontsize=20) plt.text(0, -30, 'MW Bulge', fontsize=20) plt.xlabel('Galactic Longitude (degrees)', fontweight='bold', size=16) plt.ylabel('Galactic Latitude (degrees)', fontweight='bold', size=16) plt.xlim([-180,180]) plt.ylim([-90,90]) plt.xticks(range(-180,210,60), fontsize=16, fontweight='bold') # change tick spacing, font size and bold plt.yticks(range(-90,120,30), fontsize=16, fontweight='bold') plt.minorticks_on() # turn on minor tick marks plt.grid() # turn on a background grip to guide the eye plt.show() """ Explanation: The last thing I'll mention here is how to put text on plots. This too is simple as long as you specify (x,y) coordinates for the text. 
End of explanation """ # plots histogram where the y-axis is counts x = np.random.randn(10000) num, bins, patches = plt.hist(x, bins=50) plt.xlabel('Bins') plt.ylabel('Counts') plt.show() # plots histogram where the y-axis is a probability distribution plt.hist(x, bins=50, normed=True) plt.xlabel('Bins') plt.ylabel('Probability') plt.show() # plots a histogram where the y-axis is a fraction of the total weights = np.ones_like(x)/len(x) plt.hist(x, bins=50, weights=weights) plt.ylabel('Fraction') plt.xlabel('Bins') plt.show() # print out num and bins and see what they look like! what size is each array? # how would you plot this histogram using plt.plot? what is the x value and what is the y value? """ Explanation: Histograms Histograms can be a great way to visualize data, and they are (surprise) easy to make it python! The basic command is python num, bins, patches = plt.hist(array, bins=number) Num refers to the number of elements in each bin, and bins refers to each bin on the x-axis. Note that bins actually gives you bin EDGES, so there will always be num+1 number of bins. We can ignore patches for now. As arguments plt.hist() takes an array and the number of bins you would like (default is bins=10). Some other optional arguments for plt.hist are: range: lower and upper range of bins normed: set to 'True' or 'False.' If true it will return a normalized probability distribution instead of just raw number counts for the y-axis. histtype: can be step to something like 'step', 'stepfilled', or 'bar' for the histogram style. weights: an array of values that must be of the same size as the number of bins. It controls the factor by which the number counts are weighted, i.e. it makes your number counts into number_counts*weight. End of explanation """ # make two side by side plots x1 = np.linspace(0.0, 5.0) x2 = np.linspace(0.0, 2.0) y1 = np.cos(2 * np.pi * x1) * np.exp(-x1) y2 = np.cos(2 * np.pi * x2) plt.figure(figsize=[15,3]) plt.subplot(1,2,1) # 1 row, 2 columns, 1st figure plt.plot(x1,y1) plt.xlabel('Xlabel') plt.ylabel('Ylabel') plt.subplot(1,2,2) # 1 row, 2 columsn, 2nd figure plt.plot(x2,y2) plt.xlabel('Xlabel') plt.ylabel('Ylabel') plt.show() # stack two plots on top of one another plt.subplot(2,1,1) # 1 row, 2 columns, 1st figure plt.plot(x1,y1) plt.xlabel('Xlabel') plt.ylabel('Ylabel') plt.subplot(2,1,2) # 1 row, 2 columsn, 2nd figure plt.plot(x2,y2) plt.xlabel('Xlabel') plt.ylabel('Ylabel') plt.show() """ Explanation: Subplots Subplots are a way put multiple plots in what amounts to the same figure; think of subplots like an array of plots! 
The following picture is helpful for understanding how matplotlib places subplots based on row, column, and figure number: <img src="images/subplot-grid.png"> End of explanation """ # don't worry about this way to read in files right now import pandas as pd exoplanets = pd.read_csv('data/exoplanet.eu_catalog_1022.csv') # get rid of some rows with missing values to be safe exoplanets = exoplanets[np.isfinite(exoplanets['orbital_period'])] # let's see what the data table looks like exoplanets.head() # plot distance from host star versus mass (in jupiter masses) for each exoplanet plt.loglog(exoplanets['semi_major_axis'], exoplanets['mass'],'.') plt.annotate("Earth", xy=(1,1/317.), size=12) plt.annotate("Jupiter", xy=(5,1), size=12) plt.xlabel('Semi-Major Axis [AU]',size=20) plt.ylabel('Mass [M$_{Jup}$]', size=20) # let's try to find out if the blobs above separate out by detection type import seaborn as sns; sns.set() transits = exoplanets[exoplanets['detection_type'] == 'Primary Transit'] radial_vel = exoplanets[exoplanets['detection_type'] == 'Radial Velocity'] imaging = exoplanets[exoplanets['detection_type'] == 'Imaging'] ttv = exoplanets[exoplanets['detection_type'] == 'TTV'] plt.loglog(transits['semi_major_axis'], transits['mass'], '.', label='Transit',markersize=12) plt.loglog(radial_vel['semi_major_axis'], radial_vel['mass'], '.', label='Radial Vel', markersize=12) plt.loglog(imaging['semi_major_axis'], imaging['mass'], '.', label='Direct Imaging', markersize=16) plt.loglog(ttv['semi_major_axis'], ttv['mass'], '.', label='TTV', color='cyan', markersize=16) plt.annotate("Earth", xy=(1,1/317.), size=12) plt.annotate("Jupiter", xy=(5,1), size=12) plt.xlabel('Semi-Major Axis [AU]', size=20) plt.ylabel('Mass [M$_{Jup}$]', size=20) plt.legend(loc=4, prop={'size':16}) # and now just for fun an xkcd style plot! plt.xkcd() plt.scatter(exoplanets['discovered'], exoplanets['radius']*11) plt.xlabel('Year Discovered') plt.ylabel('Radius [R_Earth]') """ Explanation: You can do fancier things with subplots like have different plots share the same axis, put smaller plots as insets to larger plots, etc. Again, take a look at things like the matplotlib library for examples of different plots. Plotting Exoplanets Let's try to make some plots with a new dataset. The file that we'll use is taken from exoplanets.eu. End of explanation """
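# Added sketch: one more quick look at the exoplanet table, reusing the plt.hist() pattern
# from the histogram section. plt.rcdefaults() resets the xkcd (and seaborn) styling that
# the previous cells turned on.
plt.rcdefaults()
plt.figure(figsize=(10, 6))
plt.hist(exoplanets['discovered'].dropna(), bins=25)
plt.xlabel('Year Discovered')
plt.ylabel('Number of Planets')
plt.title('Exoplanet discoveries over time')
plt.show()
"""
Explanation: As a final added example, the cell below reuses the histogram commands from earlier in this lesson on the exoplanet table to show how many planets were discovered each year. plt.rcdefaults() is called first to reset the xkcd (and seaborn) styling switched on above; nothing else goes beyond the plotting commands already introduced.
End of explanation
"""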
blua/deep-learning
weight-initialization/weight_initialization.ipynb
mit
%matplotlib inline import tensorflow as tf import helper from tensorflow.examples.tutorials.mnist import input_data print('Getting MNIST Dataset...') mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) print('Data Extracted.') """ Explanation: Weight Initialization In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker. Testing Weights Dataset To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network. We'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset. End of explanation """ # Save the shapes of weights for each layer layer_1_weight_shape = (mnist.train.images.shape[1], 256) layer_2_weight_shape = (256, 128) layer_3_weight_shape = (128, mnist.train.labels.shape[1]) """ Explanation: Neural Network <img style="float: left" src="images/neural_network.png"/> For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers. End of explanation """ all_zero_weights = [ tf.Variable(tf.zeros(layer_1_weight_shape)), tf.Variable(tf.zeros(layer_2_weight_shape)), tf.Variable(tf.zeros(layer_3_weight_shape)) ] all_one_weights = [ tf.Variable(tf.ones(layer_1_weight_shape)), tf.Variable(tf.ones(layer_2_weight_shape)), tf.Variable(tf.ones(layer_3_weight_shape)) ] helper.compare_init_weights( mnist, 'All Zeros vs All Ones', [ (all_zero_weights, 'All Zeros'), (all_one_weights, 'All Ones')]) """ Explanation: Initialize Weights Let's start looking at some initial weights. All Zeros or Ones If you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case. With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust. Let's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start. Run the cell below to see the difference between weights of all zeros against all ones. End of explanation """ helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3)) """ Explanation: As you can see the accuracy is close to guessing for both zeros and ones, around 10%. The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run. A good solution for getting these random weights is to sample from a uniform distribution. 
Uniform Distribution A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution. tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None) Outputs random values from a uniform distribution. The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded. shape: A 1-D integer Tensor or Python array. The shape of the output tensor. minval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0. maxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point. dtype: The type of the output: float32, float64, int32, or int64. seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior. name: A name for the operation (optional). We can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3. End of explanation """ # Default for tf.random_uniform is minval=0 and maxval=1 basline_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape)), tf.Variable(tf.random_uniform(layer_2_weight_shape)), tf.Variable(tf.random_uniform(layer_3_weight_shape)) ] helper.compare_init_weights( mnist, 'Baseline', [(basline_weights, 'tf.random_uniform [0, 1)')]) """ Explanation: The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2. Now that you understand the tf.random_uniform function, let's apply it to some initial weights. Baseline Let's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0. End of explanation """ uniform_neg1to1_weights = [ tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)), tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)), tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1)) ] helper.compare_init_weights( mnist, '[0, 1) vs [-1, 1)', [ (basline_weights, 'tf.random_uniform [0, 1)'), (uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')]) """ Explanation: The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction. General rule for setting weights The general rule for setting the weights in a neural network is to be close to zero without being too small. A good pracitce is to start your weights in the range of $[-y, y]$ where $y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron). Let's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1). 
End of explanation
"""
uniform_neg01to01_weights = [
    tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
    tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
    tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]

uniform_neg001to001_weights = [
    tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
    tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
    tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]

uniform_neg0001to0001_weights = [
    tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
    tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
    tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]

helper.compare_init_weights(
    mnist,
    '[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
    [
        (uniform_neg1to1_weights, '[-1, 1)'),
        (uniform_neg01to01_weights, '[-0.1, 0.1)'),
        (uniform_neg001to001_weights, '[-0.01, 0.01)'),
        (uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
    plot_n_batches=None)
"""
Explanation: We're going in the right direction; the accuracy and loss are better with [-1, 1). We still want smaller weights. How far can we go before it's too small?
Too small
Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.
End of explanation
"""
import numpy as np

general_rule_weights = [
    tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
    tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
    tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]

helper.compare_init_weights(
    mnist,
    '[-0.1, 0.1) vs General Rule',
    [
        (uniform_neg01to01_weights, '[-0.1, 0.1)'),
        (general_rule_weights, 'General Rule')],
    plot_n_batches=None)
"""
Explanation: Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
End of explanation
"""
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))
"""
Explanation: The range we found and $y=1/\sqrt{n}$ are really close.
Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution.
Normal Distribution
Unlike the uniform distribution, the normal distribution has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a normal distribution.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
normal_01_weights = [
    tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
    tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
    tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]

helper.compare_init_weights(
    mnist,
    'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
    [
        (uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
        (normal_01_weights, 'Normal stddev 0.1')])
"""
Explanation: Let's compare the normal distribution against the previous uniform distribution.
End of explanation
"""
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
"""
Explanation: The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop any picked values that fall more than a couple of standard deviations away from the mean. This distribution is called the Truncated Normal Distribution.
Truncated Normal Distribution
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
name: A name for the operation (optional).
End of explanation
"""
trunc_normal_01_weights = [
    tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
    tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
    tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]

helper.compare_init_weights(
    mnist,
    'Normal vs Truncated Normal',
    [
        (normal_01_weights, 'Normal'),
        (trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: Again, let's compare these new results against the previous (normal) distribution.
End of explanation
"""
helper.compare_init_weights(
    mnist,
    'Baseline vs Truncated Normal',
    [
        (basline_weights, 'Baseline'),
        (trunc_normal_01_weights, 'Truncated Normal')])
"""
Explanation: There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood that some of its draws fall more than 2 standard deviations from the mean.
We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
End of explanation
"""
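One way to tie the two ideas together is to scale a truncated normal by the general rule $y=1/\sqrt{n}$. The sketch below is not part of the original comparisons; it simply combines the pieces already defined above (the layer shapes, np, and the same TF1-style API).

def general_rule_trunc_normal(shape):
    # stddev follows the 1/sqrt(n_inputs) rule; draws beyond 2 stddev are re-picked
    return tf.Variable(tf.truncated_normal(shape, stddev=1.0 / np.sqrt(shape[0])))

general_rule_trunc_weights = [
    general_rule_trunc_normal(layer_1_weight_shape),
    general_rule_trunc_normal(layer_2_weight_shape),
    general_rule_trunc_normal(layer_3_weight_shape)
]

These weights can be passed to helper.compare_init_weights in the same way as the weight lists above.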
tpin3694/tpin3694.github.io
sql/merge_tables.ipynb
mit
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
"""
Explanation: Title: Merge Tables
Slug: merge_tables
Summary: Merge tables in SQL.
Date: 2016-05-01 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
"""
%%sql

-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);

-- Create a table of crimes
CREATE TABLE crimes (cid, crime, city, pid_arrested, cash_stolen);
INSERT INTO crimes VALUES (1, 'fraud', 'Santa Rosa', 412, 40000);
INSERT INTO crimes VALUES (2, 'burglary', 'Petaluma', 234, 2000);
INSERT INTO crimes VALUES (3, 'burglary', 'Santa Rosa', 632, 2000);
INSERT INTO crimes VALUES (4, NULL, NULL, 621, 3500);
INSERT INTO crimes VALUES (5, 'burglary', 'Santa Rosa', 162, 1000);
INSERT INTO crimes VALUES (6, NULL, 'Petaluma', 901, 50000);
INSERT INTO crimes VALUES (7, 'fraud', 'San Francisco', 412, 60000);
INSERT INTO crimes VALUES (8, 'burglary', 'Santa Rosa', 512, 7000);
INSERT INTO crimes VALUES (9, 'burglary', 'San Francisco', 411, 3000);
INSERT INTO crimes VALUES (10, 'robbery', 'Santa Rosa', 632, 2500);
INSERT INTO crimes VALUES (11, 'robbery', 'Santa Rosa', 512, 3000);
"""
Explanation: Create Two Tables, Criminals And Crimes
End of explanation
"""
%%sql

-- Select everything
SELECT *

-- Left table
FROM criminals

-- Right table
INNER JOIN crimes

-- Merged on `pid` in the criminals table and `pid_arrested` in the crimes table
ON criminals.pid=crimes.pid_arrested;
"""
Explanation: Inner Join
Returns all rows whose merge-on id appears in both tables.
End of explanation
"""
%%sql

-- Select everything
SELECT *

-- Left table
FROM criminals

-- Right table
LEFT JOIN crimes

-- Merged on `pid` in the criminals table and `pid_arrested` in the crimes table
ON criminals.pid=crimes.pid_arrested;
"""
Explanation: Left Join
Returns all rows from the left table but only the rows from the right table that match the left table.
End of explanation
"""
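A small extension of the joins above (a sketch in the same %%sql style, not part of the original post): a LEFT JOIN filtered on a NULL key returns the rows of the left table that have no match in the right table, i.e. criminals with no recorded crime.

%%sql

-- Criminals with no matching row in the crimes table
SELECT criminals.*
FROM criminals
LEFT JOIN crimes
ON criminals.pid = crimes.pid_arrested
WHERE crimes.cid IS NULL;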
ffpenaloza/AstroExp
tarea2-2/.ipynb_checkpoints/tarea2.2-checkpoint.ipynb
gpl-3.0
import numpy as np
from scipy.signal import medfilt
import matplotlib.pyplot as plt
import kplr
%matplotlib inline

client = kplr.API()
koi = client.koi(1274.01)
lcs = koi.get_light_curves(short_cadence=True)
p = 704.2

time, flux, ferr, med = [], [], [], []
for lc in lcs:
    with lc.open() as f:
        # The lightcurve data are in the first FITS HDU.
        hdu_data = f[1].data
        time.append(hdu_data["time"][~np.isnan(hdu_data["pdcsap_flux"])])
        flux.append(hdu_data["pdcsap_flux"][~np.isnan(hdu_data["pdcsap_flux"])])
        ferr.append(hdu_data["pdcsap_flux_err"][~np.isnan(hdu_data["pdcsap_flux"])])
        # Ignore the NaNs when appending

normFlux, normFerr, phase = flux, ferr, time

for i in range(0,len(flux)):
    med.append(np.median(flux[i]))
prom = np.mean(med)

for i in range(0,len(flux)):
    normFlux[i] = normFlux[i] - (med[i] - prom)
    normFlux[i] = medfilt(normFlux[i], 11)

fig, ax = plt.subplots(2,1,figsize=(15,20))
for i in range(0,len(ax)):
    ax[i].set_ylim(0.996,1.0007)
    ax[i].set_title('KOI-1274.01')
    ax[i].set_xlabel('Phase',size=16)
    ax[i].set_ylabel('Normalized flux',size=14)

for i in range(0,len(normFlux)):
    normFlux[i] = normFlux[i]/prom
    normFerr[i] = normFerr[i]/prom
    phase[i] = time[i]/p %1
    ax[0].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
    ax[0].plot(phase[i], normFlux[i],'k.')
    ax[1].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
    ax[1].plot(phase[i], normFlux[i],'k--', alpha=.2)

ax[1].set_xlim(0.699,0.7005)
plt.show()
plt.close()
"""
Explanation: Assignment 2, part 2
<hr>
Question 1
According to <a href="https://ui.adsabs.harvard.edu/#abs/2003ApJ...585.1038S/abstract">Seager & Mallén-Ornelas (2003)</a>, the geometry of the transit can be described by (figures from the Seager & Mallén-Ornelas paper are shown):
<img src="transitshape.png">
and the total duration by:
<img src="transittotaltime.png">
<img src="fig1.png">
<em>a</em> is the semi-major axis. From the characteristics of a transit light curve, taking into account the geometry of the event and Kepler's Third Law, four parameters can be obtained from observables of the system:
<ol>
<li><strong>The ratio of the planetary to the stellar radius</strong> $$R_P/R_* = \sqrt{\Delta F}$$ directly from defining $\Delta F \equiv (F_{no transit}-F_{transit})/F_{no transit} = (R_P/R_*)^2$</li>
<li><strong>The impact parameter</strong> <img src="b.png"> defined as the projected distance between the centers of the planet and the star at mid-transit, in units of $R_*$. It follows directly from the previous equation and from the transit-shape equation.</li>
<li><strong>The ratio of the orbital semi-major axis to the stellar radius</strong> <img src="ar.png"> from the transit-duration equation.</li>
<li><strong>The stellar density</strong> <img src="rho.png"> from the previous equation and Kepler's Third Law when $M_P \ll M_*$. It depends on the impact parameter, since $b$ affects the transit duration.</li>
</ol>
If a mass-radius relation for the star, $$R_* = kM_*^x$$ for some constants $k,x$, is also considered, a system of five equations and five unknowns is obtained and the physical quantities can be derived one by one.
<hr>
Question 2
In this step the quarters are standardized so that they sit around the mean of the medians. Then, two passes of a <em>median filter</em> with an 11-point window are applied and the flux is normalized.
For the suggested period (704.2 days), the flux is plotted as a function of phase. The error bars associated with the flux are shown in green.
End of explanation
"""
df = 0.003
tt = 0.7
tf = 0.4

sintf = np.sin(tf*np.pi/p)**2   # a couple of auxiliary variables
sintt = np.sin(tt*np.pi/p)**2

ratio = np.sqrt(df)   #Rp/R*

b = np.sqrt( ((1-ratio)**2 - (sintf)/(sintt) *(1+ratio)**2) /(1-(sintf/sintt)) )

aR = np.sqrt( ((1+ratio)**2 - b**2 *(1-sintt)) /sintt )

i = np.arccos(b/aR)
i = np.degrees(i)

rho = aR**3 * 365.25**2 / 215**3 / p**2

print('Rp/R* \t = \t' + repr(ratio))
print('b \t = \t' + repr(b))
print('a/R* \t = \t' + repr(aR))
print('i \t = \t' + repr(i))
print('rho \t = \t' + repr(rho) + ' solar densities')
"""
Explanation: <hr>
Question 3
By eye, approximately, we have
$T_T = 0.7$ d
$T_F = 0.4$ d
$\Delta F = 0.003$
With these values, the four parameters are computed from observables (without considering, at this point, a power law relating mass and radius). Since the equations are those of the Seager & Mallén-Ornelas paper, the assumptions needed to determine the quantities are the ones made there.
End of explanation
"""
from scipy.optimize import leastsq
from scipy.interpolate import UnivariateSpline
import scipy.integrate as integrate

w, r = np.loadtxt('kepler_response_hires1.txt', unpack=True)
w = 10*w
S = UnivariateSpline(w,r,s=0,k=1)
min_w = min(w)
max_w = max(w)
idx = np.where((w>min_w)&(w<max_w))[0]
S_wav = np.append(np.append(min_w,w[idx]),max_w)
S_res = np.append(np.append(S(min_w),r[idx]),S(max_w))

I = np.array([])
wavelengths = np.array([])
f = open('grav_4.5_lh_1.25.dat','r')
counter = 0
while(True):
    l = f.readline()
    if(l==''):
        break
    # If no jump of line or comment, save the intensities:
    if(l[0]!='#' and l[:3]!='\n'):
        splitted = l.split('\t')
        if(len(splitted)==18):
            splitted[-1] = (splitted[-1])[:-1]   # The last one always has a jump of line (\n), so erase it.
            wavelength = np.double(splitted[0])*10   # Convert wavelengths, which are in nanometers, to angstroms.
            intensities = np.double(np.array(splitted[1:]))   # Get the intensities.
            ndigits = len(str(int(intensities[1])))
            # Only if I(1) is different from zero, fit the LDs:
            if(intensities[0]!=0.0):
                intensities[1:] = intensities[1:]/1e5   # Kurucz doesn't put points on his files (e.g.: 0.8013 is 8013).
                intensities[1:] = intensities[1:]*intensities[0]   # All the rest of the intensities are normalized w/r to the center one.
                if(counter == 0):
                    I = intensities
                else:
                    I = np.vstack((I,intensities))
                wavelengths = np.append(wavelengths,wavelength)
                counter = counter + 1
f.close()

mu = np.array([1.0,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.25,0.2,0.15,0.125,0.1,0.075,0.05,0.025,0.01])
# Define the number of mu angles at which we will perform the integrations:
nmus = len(mu)

# Now integrate intensity through each angle:
I_l = np.array([])
for i in range(nmus):
    # Interpolate the intensities:
    Ifunc = UnivariateSpline(wavelengths,I[:,i],s=0,k=1)
    integrand = S_res*Ifunc(S_wav)
    integration_results = np.trapz(integrand, x=S_wav)
    I_l = np.append(I_l,integration_results)

I0 = I_l/(I_l[0])   # Normalize profile with respect to I(mu = 1):

# Define A matrix for the linear system:
A = np.zeros([2,2])
# Define b vector for the linear system:
b = np.zeros(2)
# Obtain the alpha_n_k and beta_k that fill the A matrix and b vector:
for n in range(1,3,1):
    for k in range(1,3,1):
        A[n-1,k-1] = sum(((1.0-mu)**n)*((1.0-mu)**k))
    b[n-1] = sum(((1.0-mu)**n)*(1.0-I0))

u = list(np.linalg.solve(A,b))
print(u)
"""
Explanation: <hr>
Question 4
The limb-darkening coefficients are computed as detailed in <a href="https://arxiv.org/pdf/1503.07020v3.pdf">Espinoza & Jordán 2015</a>, using a modified version of the code available at <a href="https://github.com/nespinoza/limb-darkening">https://github.com/nespinoza/limb-darkening</a>.
End of explanation
"""
import batman

params = batman.TransitParams()       #object to store transit parameters
params.t0 = 0.                        #time of inferior conjunction
params.per = p                        #orbital period
params.rp = ratio                     #planet radius (in units of stellar radii)
params.a = aR                         #semi-major axis (in units of stellar radii)
params.inc = i                        #orbital inclination (in degrees)
params.ecc = 0.                       #eccentricity
params.w = 90.                        #longitude of periastron (in degrees)
params.limb_dark = "quadratic"        #limb darkening model
params.u = u                          #limb darkening coefficients

t = np.linspace(-0.025, 0.025, 100)   #times at which to calculate light curve
m = batman.TransitModel(params, t)    #initializes model

fluxBatman = m.light_curve(params)    #calculates light curve

plt.plot(t, fluxBatman)
plt.xlabel("Time from central transit")
plt.ylabel("Relative flux")
plt.show()

##############
#oc
"""
Explanation: batman is run as explained in its documentation, passing as parameters the values obtained throughout this work.
End of explanation
"""
koi = client.koi(7016.01)
lcs = koi.get_light_curves(short_cadence=True)
p = koi.koi_period

time, flux, ferr, med = [], [], [], []
for lc in lcs:
    with lc.open() as f:
        # The lightcurve data are in the first FITS HDU.
        hdu_data = f[1].data
        time.append(hdu_data["time"][~np.isnan(hdu_data["pdcsap_flux"])])
        flux.append(hdu_data["pdcsap_flux"][~np.isnan(hdu_data["pdcsap_flux"])])
        ferr.append(hdu_data["pdcsap_flux_err"][~np.isnan(hdu_data["pdcsap_flux"])])
        # Ignore the NaNs when appending

normFlux, normFerr, phase = flux, ferr, time

for i in range(0,len(flux)):
    med.append(np.median(flux[i]))
prom = np.mean(med)

for i in range(0,len(flux)):
    normFlux[i] = normFlux[i] - (med[i] - prom)
    normFlux[i] = medfilt(normFlux[i], 25)

fig, ax = plt.subplots(2,1,figsize=(15,20))
for i in range(0,len(ax)):
    ax[i].set_ylim(0.996,1.0007)
    ax[i].set_title('KOI-7016.01')
    ax[i].set_xlabel('Phase',size=16)
    ax[i].set_ylabel('Normalized flux',size=14)

for i in range(0,len(normFlux)):
    normFlux[i] = normFlux[i]/prom
    normFerr[i] = normFerr[i]/prom
    phase[i] = time[i]/p %1
    ax[0].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
    ax[0].plot(phase[i], normFlux[i],'k.')
    ax[1].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
    ax[1].plot(phase[i], normFlux[i],'k.', alpha=.2)

ax[1].set_xlim(0.762,0.782)
ax[1].set_ylim(0.9985,1.001)
plt.show()
plt.close()
"""
Explanation: <hr>
Bonus Question
The steps used for the previous planet are repeated. Since the flux values are more scattered, the median filter step uses a 25-point window and the plot is centered on a possible transit, which corresponds to the region around the flux minimum (ignoring one point, most likely spurious, at phase ~ 0.35).
If the planet had the radius of the Earth and its star the radius of the Sun, then
$$R_p/R_* = 0.009 \rightarrow \Delta F = 0.000081$$
This is close to what the plot shows.
End of explanation
"""
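As a quick sanity check of that last estimate (a sketch; the Earth and Sun radii below are approximate reference values and are not taken from the notebook):

R_earth_km = 6371.0
R_sun_km = 695700.0
ratio_es = R_earth_km / R_sun_km
print(ratio_es, ratio_es**2)   # ~0.00916 and ~8.4e-5, consistent with the Delta F ~ 0.000081 quoted above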
antoinecarme/sklearn_explain
doc/sklearn_reason_codes_RandomForest.ipynb
bsd-3-clause
from sklearn import datasets
import pandas as pd
%matplotlib inline

ds = datasets.load_breast_cancer();
NC = 4
lFeatures = ds.feature_names[0:NC]

df_orig = pd.DataFrame(ds.data[:,0:NC] , columns=lFeatures)
df_orig['TGT'] = ds.target
df_orig.sample(6, random_state=1960)
"""
Explanation: Model Explanation for Classification Models
This document describes the usage of a classification model to provide an explanation for a given prediction.
Model explanation provides the ability to interpret the effect of the predictors on the composition of an individual score. These predictors can then be ranked according to their contribution in the final score (leading to a positive or negative decision).
Model explanation has always been used in credit risk applications in the presence of regulatory settings. The credit company is expected to give the customer the main (top n) reasons why the credit application was rejected (also known as reason codes).
Model explanation was also recently introduced by the European Union's new General Data Protection Regulation (GDPR, https://arxiv.org/pdf/1606.08813.pdf) to add the possibility to control the increasing use of machine learning algorithms in routine decision-making processes. The law will also effectively create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them.
The process we will use here is similar to LIME. The main difference is that LIME uses data sampling around the score value locally, while here we perform a full cross-statistics computation between the predictors and the score and use a local piece-wise linear approximation.
Sample scikit-learn Classification Model
Here, we will use a scikit-learn classification model on a standard dataset (breast cancer detection model).
The dataset used contains 30 predictor variables (numerical features) and one binary target (dependent variable). For practical reasons, we will restrict our study to the first 4 predictors in this document.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=120, random_state = 1960)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_orig[lFeatures].values, df_orig['TGT'].values, test_size=0.2, random_state=1960)

df_train = pd.DataFrame(X_train , columns=lFeatures)
df_train['TGT'] = y_train
df_test = pd.DataFrame(X_test , columns=lFeatures)
df_test['TGT'] = y_test

clf.fit(X_train , y_train)

# clf.predict_proba(df[lFeatures])[:,1]
"""
Explanation: For the classification task, we will build a random forest model and train it on a part of the full dataset.
End of explanation
"""
End of explanation """ from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=120, random_state = 1960) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df_orig[lFeatures].values, df_orig['TGT'].values, test_size=0.2, random_state=1960) df_train = pd.DataFrame(X_train , columns=lFeatures) df_train['TGT'] = y_train df_test = pd.DataFrame(X_test , columns=lFeatures) df_test['TGT'] = y_test clf.fit(X_train , y_train) # clf.predict_proba(df[lFeatures])[:,1] """ Explanation: For the classification task, we will build a ridge regression model, and train it on a part of the full dataset End of explanation """ from sklearn.linear_model import * def create_score_stats(df, feature_bins = 4 , score_bins=30): df_binned = df.copy() df_binned['Score'] = clf.predict_proba(df[lFeatures].values)[:,0] df_binned['Score_bin'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=False, duplicates='drop') df_binned['Score_bin_labels'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=None, duplicates='drop') for col in lFeatures: df_binned[col + '_bin'] = pd.qcut(df[col] , feature_bins, labels=False, duplicates='drop') binned_features = [col + '_bin' for col in lFeatures] lInterpolated_Score= pd.Series(index=df_binned.index) bin_classifiers = {} coefficients = {} intercepts = {} for b in range(score_bins): bin_clf = Ridge(random_state = 1960) bin_indices = (df_binned['Score_bin'] == b) # print("PER_BIN_INDICES" , b , bin_indexes) bin_data = df_binned[bin_indices] bin_X = bin_data[binned_features] bin_y = bin_data['Score'] if(bin_y.shape[0] > 0): bin_clf.fit(bin_X , bin_y) bin_classifiers[b] = bin_clf bin_coefficients = dict(zip(lFeatures, [bin_clf.coef_.ravel()[i] for i in range(len(lFeatures))])) # print("PER_BIN_COEFFICIENTS" , b , bin_coefficients) coefficients[b] = bin_coefficients intercepts[b] = bin_clf.intercept_ predicted = bin_clf.predict(bin_X) lInterpolated_Score[bin_indices] = predicted df_binned['Score_interp'] = lInterpolated_Score return (df_binned , bin_classifiers , coefficients, intercepts) """ Explanation: Model Explanation The goal here is to be able, for a given individual, the impact of each predictor on the final score. For our model, we will do this by analyzing cross statistics between (binned) predictors and the (binned) final score. For each score bin, we fit a linear model locally and use it to explain the score. This is generalization of the linear case, based on the fact that any model can be approximated well enough locally be a linear function (inside each score_bin). The more score bins we use, the more data we have, the better the approximation is. For a random forest , the score can be seen as the probability of the positive class. End of explanation """ (df_cross_stats , per_bin_classifiers , per_bin_coefficients, per_bin_intercepts) = create_score_stats(df_train , feature_bins=5 , score_bins=10) def debrief_score_bin_classifiers(bin_classifiers): binned_features = [col + '_bin' for col in lFeatures] score_classifiers_df = pd.DataFrame(index=(['intercept'] + list(binned_features))) for (b, bin_clf) in per_bin_classifiers.items(): bin score_classifiers_df['score_bin_' + str(b) + "_model"] = [bin_clf.intercept_] + list(bin_clf.coef_.ravel()) return score_classifiers_df df = debrief_score_bin_classifiers(per_bin_classifiers) df.head(10) """ Explanation: For simplicity, to describe our method, we use 5 score bins and 5 predictor bins. 
We fit our local models on the training dataset, each model is fit on the values inside its score bin. End of explanation """ for col in lFeatures: lcoef = df_cross_stats['Score_bin'].apply(lambda x : per_bin_coefficients.get(x).get(col)) lintercept = df_cross_stats['Score_bin'].apply(lambda x : per_bin_intercepts.get(x)) lContrib = lcoef * df_cross_stats[col + '_bin'] + lintercept/len(lFeatures) df1 = pd.DataFrame(); df1['contrib'] = lContrib df1['Score_bin'] = df_cross_stats['Score_bin'] lContribMeanDict = df1.groupby(['Score_bin'])['contrib'].mean().to_dict() lContribMean = df1['Score_bin'].apply(lambda x : lContribMeanDict.get(x)) # print("CONTRIB_MEAN" , col, lContribMean) df_cross_stats[col + '_Effect'] = lContrib - lContribMean df_cross_stats.sample(6, random_state=1960) """ Explanation: From the table above, we see that lower score values (score_bin_0) are all around zero probability and are not impacted by the predictor values, higher score values (score_bin_5) are all around 1 and are also not impacted. This is what one expects from a good classification model. in the score bin 3, the score values increase significantly with mean area_bin and decrease with mean radius_bin values. Predictor Effects Predictor effects describe the impact of specific predictor values on the final score. For example, some values of a predictor can increase or decrease the score locally by 0.10 or more points and change the negative decision to a positive one. The predictor effect reflects how a specific predictor increases the score (above or below the mean local contribtution of this variable). End of explanation """ import numpy as np reason_codes = np.argsort(df_cross_stats[[col + '_Effect' for col in lFeatures]].values, axis=1) df_rc = pd.DataFrame(reason_codes, columns=['reason_idx_' + str(NC-c) for c in range(NC)]) df_rc = df_rc[list(reversed(df_rc.columns))] df_rc = pd.concat([df_cross_stats , df_rc] , axis=1) for c in range(NC): reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x]) df_rc['reason_' + str(c+1)] = reason # detailed_reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x] + "_bin") # df_rc['detailed_reason_' + str(c+1)] = df_rc[['reason_' + str(c+1) , ]] df_rc.sample(6, random_state=1960) df_rc[['reason_' + str(NC-c) for c in range(NC)]].describe() """ Explanation: The previous sample, shows that the first individual lost 0.000000 score points due to the feature $X_1$, gained 0.003994 with the feature $X_2$, etc Reason Codes The reason codes are a user-oriented representation of the decision making process. These are the predictors ranked by their effects. End of explanation """
cbare/Etudes
notebooks/fractional-approximations-of-pi.ipynb
apache-2.0
from math import pi

pi
"""
Explanation: Rational approximations of 𝝿
The fractions 22/7 and 355/113 are good approximations of pi. Let's find more.
End of explanation
"""
pi.as_integer_ratio()

f"{884279719003555/281474976710656:0.48f}"
"""
Explanation: Spoiler alert: Who knew that Python floats have this handy method? We won't do better than this.
End of explanation
"""
22/7

355/113
"""
Explanation: We'll need to go to larger and larger denominators to get more accuracy. Let's define success as finding an approximation that gets more digits right than preceding approximations.
22/7 gets 3 digits correct. 355/113 gets 7, which is more than enough for most practical purposes.
End of explanation
"""
def digits_match(a,b):
    d = abs(b-a)
    if d==0.0:
        return len(str(b))-1
    i = 0
    p=1
    while d < p:
        i += 1
        p /= 10
    return i

digits_match(22/7, pi)

digits_match(355/113, pi)

%%time
best_so_far = 1
for den in range(7,26_000_000):
    # numerator that is closest to but less than pi
    # or maybe the next higher numerator is better?
    for i in range(0,2):
        num = int(den*pi) + i
        pi_approx = num/den
        digits = digits_match(pi_approx, pi)
        if digits > best_so_far:
            best_so_far = digits
            frac = f"{num}/{den}"
            print(f"{frac:>24} = {pi_approx:<25} {digits:>3} err={abs(pi_approx-pi):0.25f}")
"""
Explanation: So, let's make it easy to count how many digits match.
End of explanation
"""
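As a cross-check on the brute-force search above (a sketch using only the standard library, not how the search itself works), Fraction.limit_denominator returns the best rational approximation whose denominator stays under a given bound:

from fractions import Fraction
from math import pi

# Each bound should reproduce one of the approximations found above (e.g. 22/7, 355/113).
for bound in (10, 120, 35_000, 110_000):
    frac = Fraction(pi).limit_denominator(bound)
    print(bound, frac, float(frac), abs(float(frac) - pi))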
sergivalverde/MRI_intensity_normalization
Intensity normalization test.ipynb
gpl-3.0
import os
import numpy as np
import nibabel as nib
from nyul import nyul_train_standard_scale

DATA_DIR = 'data_examples'
T1_name = 'T1.nii.gz'
MASK_name = 'brainmask.nii.gz'

# generate training scans
train_scans = [os.path.join(DATA_DIR, folder, T1_name) for folder in os.listdir(DATA_DIR)]
mask_scans = [os.path.join(DATA_DIR, folder, MASK_name) for folder in os.listdir(DATA_DIR)]
"""
Explanation: MRI intensity normalization
Intensity normalization of multi-channel MRI images using the method proposed by Nyul et al. 2000. In the original paper, the authors suggest a method where a set of standard histogram landmarks is learned from a set of MRI images. These landmarks are then used to equalize the histograms of the images being normalized. In both learning and transformation, the histograms are used to find the intensity landmarks.
Acknowledgements: The Python implementation is based on the awesome implementation available here (Reinhold et al. 2019).
For this particular tutorial, we use a very small subset of the Calgary-Campinas dataset.
Train the standard histogram:
To train the standard histogram, we just have to create a list of the input images to process. Optionally, we can also provide the brainmasks:
End of explanation
"""
standard_scale, perc = nyul_train_standard_scale(train_scans, mask_scans)
"""
Explanation: Then, train the standard histogram. By default, the parameters are set as follows:
* Minimum percentile to consider i_min=1
* Maximum percentile to consider i_max=99
* Minimum percentile on the standard histogram i_s_min=1
* Maximum percentile on the standard histogram i_s_max=100
* Middle percentile lower bound l_percentile=10
* Middle percentile upper bound u_percentile=90
* Number of deciles step=10
End of explanation
"""
standard_path = 'histograms/standard_test.npy'
np.save(standard_path, [standard_scale, perc])
"""
Explanation: Save the standard histogram:
Save the histogram to apply it to unseen images afterwards:
End of explanation
"""
from nyul import nyul_apply_standard_scale
import matplotlib.pyplot as plt

image_1 = nib.load(train_scans[0]).get_data()
mask_1 = nib.load(mask_scans[0]).get_data()
image_1_norm = nyul_apply_standard_scale(image_1, standard_path, input_mask=mask_1)

fig, axs = plt.subplots(2, 1, constrained_layout=True)
f1 = axs[0].hist(image_1.flatten(), bins=64, range=(-10,600))
f2 = axs[1].hist(image_1_norm.flatten(), bins=64, range=(-10,200))
axs[0].set_title('Image 1 Original')
axs[1].set_title('Image 1 Normalized')
"""
Explanation: Apply intensity normalization to new images:
Finally, the learned histogram can be applied to new images. Here, we just use the same image before and after normalization as an example.
End of explanation
"""
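To keep the normalized volume, it can be written back with nibabel. This is a minimal sketch (the output filename is hypothetical, and the affine is simply reused from the loaded input image):

img_1_nifti = nib.load(train_scans[0])
# Wrap the normalized array in a NIfTI image with the original affine and save it.
out_nifti = nib.Nifti1Image(image_1_norm.astype('float32'), img_1_nifti.affine)
nib.save(out_nifti, 'T1_normalized.nii.gz')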
ES-DOC/esdoc-jupyterhub
notebooks/csiro-bom/cmip6/models/access-1-0/seaice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csiro-bom', 'access-1-0', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: CSIRO-BOM Source ID: ACCESS-1-0 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:55 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12.2. 
Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. 
Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. 
Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
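For reference, a filled-in cell (a purely illustrative example, not part of the generated template) follows exactly the same pattern as the cells above; here the 23.1 surface albedo property is set to one of its listed valid choices. The choice "Delta-Eddington" is only an example pick, not a statement about any particular model.

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')

# PROPERTY VALUE:
# Example value only -- use the valid choice that matches the documented model.
DOC.set_value("Delta-Eddington")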
mrcinv/matpy
03d_iteracija.ipynb
gpl-2.0
g = lambda x: 2**(-x)
xp = 1 # initial approximation
for i in range(15):
    xp = g(xp)
    print(xp)
print("The difference between the right- and left-hand side of the equation is", xp-2**(-xp))
"""
Explanation: ^ up: Introduction
Solving equations with simple (fixed-point) iteration
When studying recursive sequences we saw that, for a sequence satisfying the recursive formula
$$x_{n+1}= g(x_n)$$
the limit of the sequence $x_n$ is always a solution of the equation
$$x=g(x).$$
Here we assumed that the limit exists at all and that $g$ is a continuous function.
The claim can be turned around. A zero of the function $f(x)$ can be found with the recursive sequence of approximations
$$x_{n+1} = g(x_n),$$
if we transform the equation $$f(x) = 0$$ into an equivalent equation of the form
$$ x = g(x).$$
Unfortunately, not every function $g$ will do, since we must make sure that the sequence $x_n$ actually converges.
Example
Solve the equation
$$x=2^{-x}.$$
Solution
The recursive formula suggests itself:
$$x_{n+1} = 2^{-x_n}.$$
Of course, nothing guarantees that the sequence $x_n$ is actually convergent. But there is no harm in trying, as they say.
End of explanation
"""
import sympy as sym
from IPython.display import display
sym.init_printing(use_latex=True)
x = sym.Symbol('x')
dg = sym.diff(g(x),x)
print("Derivative of the iteration function: ")
display(dg)
print("Derivative of g(x) at the solution", dg.subs(x,xp).evalf())
"""
Explanation: It appears that the sequence converges, and that it converges exactly to the solution of the equation. We also observe that roughly 3 steps of the recursion are needed for each correct decimal digit.
Convergence
The sequence of approximations seems to converge, but a few computed terms are not enough to be completely sure of convergence. Fortunately, there is a theorem that guarantees convergence for recursive sequences:
Theorem on the convergence of the iteration
Let $x_n$ be the sequence given by the initial term $x_0$ and the recursive formula $x_{n+1}=g(x_n)$. Let $x_p$ be a solution of the equation $x=g(x)$ and let $|g'(x)|<1$ for all $x\in[x_p-\varepsilon,x_p+\varepsilon]$. If $x_0\in [x_p-\varepsilon,x_p+\varepsilon]$, then the sequence $x_n$ is convergent and its limit equals
$$\lim_{n\to\infty}x_n=x_p.$$
The theorem tells us that the convergence of the recursive sequence depends on the magnitude of the derivative of the iteration function at the solution of the equation. If
$$|g'(x_p)|<1$$
then, for an initial approximation close enough to the solution, the sequence given by the recursive formula $x_{n+1}=g(x_n)$ converges to the solution.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
xp = sym.solve(sym.Eq(x,g(x)),x)[0].evalf() # exact solution
n = 30;
xz = [1] # sequence of approximations
for i in range(n-1):
    xz.append(g(xz[-1]))
napaka = [x - xp for x in xz] # error with respect to the exact solution
plt.semilogy(range(n),napaka,'o')
plt.title("Error when computing the solution with simple iteration")
"""
Explanation: The derivative at the solution is approximately -0.44, which is less than 1 in absolute value. This means that the sequence of approximations converges.
Rate of convergence
When we look for the solution of an equation with a recursive sequence, we are naturally interested in how many steps are needed for a given number of decimal digits. This is most easily shown with a plot of the error on a logarithmic scale.
End of explanation
"""
import disqus
%reload_ext disqus
%disqus matpy
"""
Explanation: The error decreases similarly as with bisection. Roughly 3 steps are needed for each correct decimal digit.
End of explanation
"""
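A small addendum to the notebook above — a minimal sketch, not part of the original: the loop can be wrapped into a reusable fixed-point iteration routine with a tolerance-based stopping rule (the helper name fixed_point and the tolerance value are choices made here, not taken from matpy). It makes the "about 3 iterations per correct decimal digit" observation easy to check for other equations as well.

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    # Iterate x_{n+1} = g(x_n) until two successive approximations differ by less than tol.
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # best approximation after max_iter steps

# Example: the equation x = 2**(-x) solved above.
print(fixed_point(lambda x: 2**(-x), 1.0))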
Ccaccia73/semimonocoque
03a_Multiconnected_section.ipynb
mit
from pint import UnitRegistry
import sympy
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import sys
%matplotlib inline
from IPython.display import display
"""
Explanation: Semi-Monocoque Theory
End of explanation
"""
from Section import Section
"""
Explanation: Import the Section class, which contains all calculations
End of explanation
"""
ureg = UnitRegistry()
sympy.init_printing()
"""
Explanation: Initialization of the sympy symbolic tool and of pint for dimension analysis (not fully implemented yet, as it is not directly compatible with sympy)
End of explanation
"""
A, A0, t, t0, a, b, h, L = sympy.symbols('A A_0 t t_0 a b h L', positive=True)
"""
Explanation: Define sympy parameters used for the geometric description of sections
End of explanation
"""
values = [(A, 400 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 400 * ureg.millimeter),
          (b, 300 * ureg.millimeter),(h, 150 * ureg.millimeter),(L, 650 * ureg.millimeter),(t, 3 * ureg.millimeter)]

datav = [(v[0],v[1].magnitude) for v in values]
"""
Explanation: We also define numerical values for each symbol in order to plot the scaled section and perform calculations
End of explanation
"""
stringers = {1:[(4*a,2*a),A],
             2:[(a,2*a),A],
             3:[(sympy.Integer(0),a),A],
             4:[(a,sympy.Integer(0)),A],
             5:[(2*a,a),A],
             6:[(4*a,sympy.Integer(0)),A]}

panels = {(1,2):t, (2,3):t, (3,4):t, (4,5):t, (5,2):t, (4,6):t, (6,1):t}
"""
Explanation: Multiconnected Section
Define the graph describing the section:
1) stringers are nodes with parameters:
- x coordinate
- y coordinate
- Area
2) panels are oriented edges with parameters:
- thickness
- length, which is automatically calculated
End of explanation
"""
S3 = Section(stringers, panels)
"""
Explanation: Define the section and perform the first calculations
End of explanation
"""
sympy.simplify(S3.A)
"""
Explanation: As we need to compute $x_{sc}$, we have to solve
$$A \cdot q_{ext} = T$$
where:
- A is a matrix with (number of nodes + number of loops) rows and (number of edges + 1) columns (it is square)
- q is a column vector of unknowns: the #edges fluxes and the shear center coordinate
- T is the vector of known terms: $-\frac{T_y}{J_x} \cdot S_{x_i}$ or $-\frac{T_x}{J_y} \cdot S_{y_i}$ for n-1 nodes, and the rest are 0
Expression of A
End of explanation
"""
sympy.simplify(S3.T)
"""
Explanation: Expression of T
End of explanation
"""
sympy.simplify(S3.tempq)
"""
Explanation: Resulting fluxes and coordinate
End of explanation
"""
start_pos={ii: [float(S3.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S3.g.nodes() }

plt.figure(figsize=(12,8),dpi=300)
nx.draw(S3.g,with_labels=True, arrows= True, pos=start_pos)
plt.arrow(0,0,20,0)
plt.arrow(0,0,0,20)
#plt.text(0,0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16);
"""
Explanation: Plot of S3 section in original reference frame
End of explanation
"""
S3.Ixx0, S3.Iyy0, S3.Ixy0, S3.α0
"""
Explanation: Expression of inertial properties wrt the center of gravity with the original rotation
End of explanation
"""
positions={ii: [float(S3.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S3.g.nodes() }

x_ct, y_ct = S3.ct.subs(datav)

plt.figure(figsize=(12,8),dpi=300)
nx.draw(S3.g,with_labels=True, pos=positions)
plt.plot([0],[0],'o',ms=12,label='CG')
plt.plot([x_ct],[y_ct],'^',ms=12, label='SC')
#plt.text(0,0, 'CG', fontsize=24)
#plt.text(x_ct,y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame",fontsize=16);
"""
Explanation: Plot
of S3 section in inertial reference frame
The section is plotted wrt the center of gravity and rotated (if necessary) so that x and y are principal axes. Center of Gravity and Shear Center are drawn.
End of explanation
"""
S3.Ixx, S3.Iyy, S3.Ixy, S3.θ
"""
Explanation: Expression of inertial properties in principal reference frame
End of explanation
"""
S3.ct
"""
Explanation: Shear center expression
End of explanation
"""
S3.cycles
"""
Explanation: Loops detection
End of explanation
"""
Tx, Ty, Nz, Mx, My, Mz, F, ry, rx, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z')

S3.set_loads(_Tx=0, _Ty=1, _Nz=0, _Mx=Mx, _My=0, _Mz=0)
S3.compute_Jt()
S3.Jt
"""
Explanation: Torsional moment of inertia
End of explanation
"""
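As a clarifying sketch (not part of the original notebook): the explanation above states that the panel fluxes and the shear-center coordinate come from solving the square linear system $A \cdot q_{ext} = T$ that the Section class assembles; the toy sympy example below, with made-up 3x3 numbers, shows the kind of solve that S3.tempq presumably relies on internally.

import sympy

# Hypothetical square system A*q = T (e.g. two panel fluxes plus the shear-center coordinate).
A = sympy.Matrix([[1, -1, 0],
                  [0,  1, 1],
                  [2,  1, 0]])
T = sympy.Matrix([0, 1, 3])

q = A.LUsolve(T)  # unknown fluxes and coordinate
print(q)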
radical-experiments/AIMES-Experience
OSG/analysis/osg_analysis.ipynb
mit
%matplotlib inline """ Explanation: Using RADICAL-Analytics with RADICAL-Pilot and OSG Experiments This notebook illustrates the analysis of two experiments performed with RADICAL-Pilot and OSG. The experiments use 4 1-core pilots and between 8 and 64 compute units (CU). RADICAL-Analytics is used to acquire two data sets produced by RADICAL Pilot and then to derive aggregated and per-entity performance measures. The state models of RADICAL-Pilot's CU and Pilot entities is presented and all the state-based durations are defined and described. Among these durations, both aggregated and per-entity measures are computed and plotted. The aggregated measures are: TTC: Total time to completion of the given workload, i.e., between 8 and 64 CU; TTQ: Total time spent by the 4 1-core pilots in the OSG Connect queue; TTR: Total time spent by the four pilots running on their respective work node; TTX: Total time spent by all the CU executing their kernel. Each aggregate measure takes into account the time overlap among entities in the same state. For example, if a pilot start running at time t_0, another at time t_0+5s and they both finish at time t_0+100s, TTR will be 100s, not 195s. The same calculation is done for partial, total, and null overlapping. Single-entity performance measures are derived for each pilot and CU: Tq: Time spent by a pilot in the queue of the local resource management system (LRMS); Tx: Time spent by a CU executing its kernel. The kernel of each CU is a <a href='https://github.com/radical-cybertools/radical.synapse'>Synapse</a> executable emulating a GROMACS execution as specified in the <a href='https://docs.google.com/document/d/1TQrax9iSGovECZZ7wyomVk_LbFjNqeRNCUtEkv7FXfg/edit'>GROMACS/CoCo ENSEMBLES</a> use case. We plot and compare these measurements across the two experiments to understand: How the heterogeneity of OSG resources affects the execution of this type of workload. Whether queuing time is as dominant as in XSEDE. Table of Content Setup We need to setup both this notebook and the experiment environment. We start with the notebook and then we will move on to the experiment data. Notebook Settings Display matplotlib diagrams without having to use plt.show(). End of explanation """ import os import sys import glob import pprint import numpy as np import scipy as sp import pandas as pd import scipy.stats as sps import statsmodels.api as sm import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.mlab as mlab import matplotlib.ticker as ticker import matplotlib.gridspec as gridspec import radical.utils as ru import radical.pilot as rp import radical.analytics as ra from IPython.display import display """ Explanation: Load all the Python modules we will use for the analysis. Note that both RADICAL Utils and RADICAL Pilot need to be loaded alongside RADICAL Analytics. End of explanation """ # Global configurations # --------------------- # Use LaTeX and its body font for the diagrams' text. mpl.rcParams['text.usetex'] = True mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.serif'] = ['Nimbus Roman Becker No9L'] # Use thinner lines for axes to avoid distractions. mpl.rcParams['axes.linewidth'] = 0.75 mpl.rcParams['xtick.major.width'] = 0.75 mpl.rcParams['xtick.minor.width'] = 0.75 mpl.rcParams['ytick.major.width'] = 0.75 mpl.rcParams['ytick.minor.width'] = 0.75 # Do not use a box for the legend to avoid distractions. mpl.rcParams['legend.frameon'] = False # Helpers # ------- # Use coordinated colors. These are the "Tableau 20" colors as # RGB. 
Each pair is strong/light. For a theory of color see: # http://www.tableau.com/about/blog/2016/7/colors-upgrade-tableau-10-56782 # http://tableaufriction.blogspot.com/2012/11/finally-you-can-use-tableau-data-colors.html tableau20 = [(31 , 119, 180), (174, 199, 232), # blue [ 0,1 ] (255, 127, 14 ), (255, 187, 120), # orange [ 2,3 ] (44 , 160, 44 ), (152, 223, 138), # green [ 4,5 ] (214, 39 , 40 ), (255, 152, 150), # red [ 6,7 ] (148, 103, 189), (197, 176, 213), # purple [ 8,9 ] (140, 86 , 75 ), (196, 156, 148), # brown [10,11] (227, 119, 194), (247, 182, 210), # pink [12,13] (127, 127, 127), (199, 199, 199), # gray [14,15] (188, 189, 34 ), (219, 219, 141), # yellow [16,17] (23 , 190, 207), (158, 218, 229)] # cyan [18,19] # Scale the RGB values to the [0, 1] range, which is the format # matplotlib accepts. for i in range(len(tableau20)): r, g, b = tableau20[i] tableau20[i] = (r / 255., g / 255., b / 255.) # Return a single plot without right and top axes def fig_setup(): fig = plt.figure(figsize=(13,7)) ax = fig.add_subplot(111) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() return fig, ax """ Explanation: We configure matplotlib so to produce visually consistent diagrams that look readable and we can directly include in a paper written in LaTeX. End of explanation """ def load_data(rdir): sessions = {} experiments = {} start = rdir.rfind(os.sep)+1 for path, dirs, files in os.walk(rdir): folders = path[start:].split(os.sep) if len(path[start:].split(os.sep)) == 2: sid = os.path.basename(glob.glob('%s/*.json' % path)[0])[:-5] if sid not in sessions.keys(): sessions[sid] = {} sessions[sid] = ra.Session(sid, 'radical.pilot', src=path) experiments[sid] = folders[0] return sessions, experiments # Load experiments' dataset into ra.session objects # stored in a DataFrame. rdir = 'data/' sessions, experiments = load_data(rdir) sessions = pd.DataFrame({'session': sessions, 'experiment': experiments}) # Check the first/last 3 rows display(sessions.head(3)) display(sessions.tail(3)) """ Explanation: Experiment Settings RADICAL-Pilot save runtime data in two types of file: profiles and databases. Profile files are written by the agent module on the work nodes of the remote resources; database files are stored in the MongoDB instance used for the workload execution. Both types of file need to be retrieved from the remote resource and from the MongoDB instance. Currently, ... The data used in this notebook are collected in two compressed bzip2 tar archives, one for each experiment dataset. The two archives need to be decompressed into radical.analytics/use_cases/rp_on_osg/data. Once unzipped, we acquire the datasets by constructing a ra.session object for each experimental run. We use a helper function to construct the session objects in bulk while keeping track of the experiment to which each run belong. We save both to a Pandas DataFrame. This helps to elaborate the datasets furthers offering methods specifically aimed at data analysis and plotting. End of explanation """ os.environ['RADICAL_ANALYTICS_VERBOSE']='ERROR' for s in sessions['session']: s.consistency(['state_model','timestamps']) """ Explanation: Analysis We measure a set of durations. Each duration has two and only two timestamps, the first always preceding in time the second. Each timestamp represents an event, in this case of a state transition. Our choice of the durations depends on the design of the experiment for which we are collecting data. 
In this case, we want to measure the overall time to completion (TTC) of the run and isolate two of its components: TTQ and Tq: The amount of time spent in the queue waiting for the pilots; TTX and Tx: The amount of time spent to execute each unit. We use TTQ and Tq to understand whether queue time has the same dominance on TTC as we measured on XSEDE. This comparison is relevant when evaluating the performance of OSG and the distribution of tasks to OSG, XSEDE, or across both. TTX and Tx contribute to this understanding by showing how the supposed heterogeneity of OSG resources affects compute performance. The experiment is designed to use homogeneous CUs: every CU has the same compute requirements. Data requirements are not emulated so the differences in execution time across CUs can be related mainly to core performance. Consistency The first step in analyzing the experiments' dataset is to verify the consistenty of the data. Without such a verification, we cannot trust any of the results our analysis will produce. Feeling a nice chill down the spine thinking about the results you have already published?! ;) Here we check for the consistency of the state model and of the timestamps. As documented in the <a href='http://radicalanalytics.readthedocs.io/en/latest/apidoc.html#session-consistency'>RADICAL-Analytics API</a>, a third test mode is available for the event models. Currently, event models are not implemented. End of explanation """ expment = None pexpment = None for sid in sessions.index: etypes = sessions.ix[sid, 'session'].list(['etype']) expment = sessions.ix[sid, 'experiment'] if expment != pexpment: print '%s|%s|%s' % (expment, sid, etypes) pexpment = expment """ Explanation: Entities We need to define the start and end event for TTQ, TTX, Tq, Tx durations. As such, we need to choose the RADICAL-Pilot entity or entities that are relevant to our measurements. We look at what entities were recorded by the experimental runs. End of explanation """ for sid in sessions.index: sessions.ix[sid, 'TTC'] = sessions.ix[sid, 'session'].ttc display(sessions[['TTC']].head(3)) display(sessions[['TTC']].tail(3)) """ Explanation: We choose 'session', 'pilot', and 'unit'. At the moment, we do not need 'umgr', 'pmgr', 'update' as we want to measure and compare the overall duration of each session and the lifespan of pilots and units. Depending on the results of our analysis, we may want to extend these measurements and comparisons also to the RP managers. Time To Completion (TTC) The Session constructor initializes four properties for each session that we can directly access: t_start: timestamp of the session start; t_stop: timestamp of the session end; ttc: total duration of the session; t_range: the time range of the session. We add a column in the sessions DataFrame with the TTC of each run. End of explanation """ for sid in sessions.index: sessions.ix[sid, 'nunits'] = len(sessions.ix[sid, 'session'].filter(etype='unit', inplace=False).get()) display(sessions[['nunits']].head(3)) display(sessions[['nunits']].tail(3)) """ Explanation: We also add a column to the session dataframe with the number of units for each session. 
End of explanation """ fig = plt.figure(figsize=(13,14)) fig.suptitle('TTC XSEDE OSG Virtual Cluster', fontsize=14) plt.subplots_adjust(wspace=0.3, top=0.85) ttc_subplots = [] for exp in sessions['experiment'].sort_values().unique(): ttc_subplots.append(sessions[ sessions['experiment'] == exp ].sort_values('TTC')) colors = {'exp1': [tableau20[19]], 'exp2': [tableau20[7] ], 'exp3': [tableau20[13]], 'exp4': [tableau20[17]], 'exp5': [tableau20[15]]} ax = [] for splt in range(4): session = ttc_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = ', '.join([str(int(n)) for n in session['nunits'].unique()]) color = colors[experiment] title = 'Experiment %s\n%s tasks; %s sessions.' % (experiment[3], ntasks, session.shape[0]) if not ax: ax.append(fig.add_subplot(2, 2, splt+1)) else: ax.append(fig.add_subplot(2, 2, splt+1, sharey=ax[0])) session['TTC'].plot(kind='bar', color=color, ax=ax[splt], title=title) ax[splt].spines["top"].set_visible(False) ax[splt].spines["right"].set_visible(False) ax[splt].get_xaxis().tick_bottom() ax[splt].get_yaxis().tick_left() ax[splt].set_xticklabels([]) ax[splt].set_xlabel('Sessions') ax[splt].set_ylabel('Time (s)') ax[splt].legend(bbox_to_anchor=(1.25, 1)) # Add table with statistical description of TTC values. table = pd.tools.plotting.table(ax[splt], np.round(session['TTC'].describe(), 2), loc='upper center', colWidths=[0.2, 0.2, 0.2]) # Eliminate the border of the table. for key, cell in table.get_celld().items(): cell.set_linewidth(0) fig.add_subplot(ax[splt]) plt.savefig('figures/osg_ttc_experiments.pdf', dpi=600, bbox_inches='tight') """ Explanation: We now have all the data to plot the TTC of all the experiments runs. We plot the runs of Experiment 1 on the left in blue and those of Experiment 2 on the right in orange. End of explanation """ fig = plt.figure(figsize=(13,14)) title = 'XSEDE OSG Virtual Cluster' subtitle = 'TTC' defs = {'ttq': 'TTQ = Total Time Queuing pilots', 'ttr': 'TTR = Total Time Running pilots', 'ttc': 'TTC = Total Time Completing experiment'} fig.suptitle('%s:\n%s.\n%s.' 
% (title, subtitle, defs['ttc']), fontsize=14) gs = [] grid = gridspec.GridSpec(2, 2) grid.update(wspace=0.4, top=0.85) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[0])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[1])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[2])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 5, subplot_spec=grid[3])) ttc_subplots = [] for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): if not sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].empty: ttc_subplots.append(sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].sort_values('TTC')) colors = {'exp1': [tableau20[19]], 'exp2': [tableau20[7] ], 'exp3': [tableau20[13]], 'exp4': [tableau20[17]], 'exp5': [tableau20[15]]} nun_exp = [] nun_exp.append(len(sessions[sessions['experiment'] == 'exp1']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp2']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp3']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp4']['nunits'].sort_values().unique())) ax = [] i = 0 while(i < len(ttc_subplots)): for gn in range(4): for gc in range(nun_exp[gn]): session = ttc_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = int(session['nunits'].unique()[0]) repetitions = session.shape[0] color = colors[experiment] title = 'Exp. %s\n%s tasks\n%s reps.' % (experiment[3], ntasks, repetitions) if i == 0: ax.append(plt.Subplot(fig, gs[gn][0, gc])) else: ax.append(plt.Subplot(fig, gs[gn][0, gc], sharey=ax[0])) session['TTC'].plot(kind='bar', ax=ax[i], color=color, title=title) ax[i].spines["top"].set_visible(False) ax[i].spines["right"].set_visible(False) ax[i].get_xaxis().tick_bottom() ax[i].get_yaxis().tick_left() ax[i].set_xticklabels([]) ax[i].set_xlabel('Runs') # Handle a bug that sets yticklabels to visible # for the last subplot. if i == 7 or i == 16: plt.setp(ax[i].get_yticklabels(), visible=False) else: ax[i].set_ylabel('Time (s)') # Handle legens. if i == 7 or i == 3 or i == 11: ax[i].legend(labels=['TTC'], bbox_to_anchor=(2.25, 1)) elif i == 16: ax[i].legend(labels=['TTC'], bbox_to_anchor=(2.70, 1)) # else: # ax[i].get_legend().set_visible(False) fig.add_subplot(ax[i]) i += 1 plt.savefig('figures/osg_ttc_nunits.pdf', dpi=600, bbox_inches='tight') """ Explanation: We see: A large variation among the TTC of the runs of both experiments. Variations are similar between the two experiments. We still do not know: How the observed differences in TTC varies depending on the size of the workload. What and how the entities of RADICAL-Pilot contribute to TTC and its variations. What and how resource properties contribute to TTC and its variations. First, we subdivide each plot into four plots, one for each workload size: 8, 16, 32, 64 CUs. 
End of explanation """ ttc_stats = {} for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): tag = exp+'_'+str(int(nun)) ttc_stats[tag] = sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ]['TTC'].describe() ttc_compare = pd.DataFrame(ttc_stats) sort_cols = ['exp1_8' , 'exp2_8' , 'exp1_16', 'exp2_16', 'exp1_32', 'exp2_32', 'exp1_64', 'exp2_64'] ttc_compare = ttc_compare.reindex_axis(sort_cols, axis=1) ttc_compare """ Explanation: The variations among TTC of workloads with the same number of CUs and between the two experiments are marked. Here we create a table with various measures of this variation for all the experiments and workload sizes. End of explanation """ (ttc_compare.loc['std']/ttc_compare.loc['mean'])*100 """ Explanation: For 8 and 16 CUs, the TTC of the second experiment shows a mean twice as large as those of the first experiment. Less pronounced is the difference for mean of the TTC of 32 CUs. The mean of the TTC of the first experiment is 25% smaller that the one of the second experiment for 64 CUs. The standard deviation among runs of the same experiment and number of CUs, varies between the two experiments. We describe this variation by calculating and comparing STD/mean for each experiment and workload size. End of explanation """ last_sv = None last_id = None for s in sessions['session']: sv = s.describe('state_values', etype=['pilot']).values()[0].values()[0] if last_sv and last_sv != sv: print "Different state models:\n%s = %s\n%s = %s" % (last_id, last_sv, sid, sv) last_sv = sv last_id = s._sid pprint.pprint(last_sv) """ Explanation: We notice that STD/mean among repetitions of the same run in Experiment 1 goes from 6.55% to 19.77%, increasing with the increase of the number of CUs. In Experiment 2, STD/mean goes from 20.63% up to 56.18%, independently from the amount of CUs executed by the repeated run. This shows: More measurements are needed to lower STD; TTC of experiment 2 may be influenced by one or more factors not present or relevant in Experiment 1. In order to clarify this discrepancy between Experiments, we need to measure the components of TTC to understand what determines the differences in TTC within the same experiment and between the two. These measurements requires to: define the state model and transitions of the pilot and CU entities; verify that this model has been consistently implemented by each experimental run; define and measure the duration of each state and compare them across experimental runs. Pilot State Model From the RADICAL-Pilot documentation and state model description, we know that: <img src="images/global_state_model_rp_paper.png" width="600"> The states of the pilots are therefore as follow: * pilot described, state NEW; * pilot being queued in a pilot manager (PMGR), state PMGR_LAUNCHING_PENDING; * pilot being queued in a local resource management system (LRMS), state PMGR_LAUNCHING; * pilot having a bootstrapping agent, state PMGR_ACTIVE_PENDING; * pilot having an active agent, state PMGR_ACTIVE; * pilot marked as done by the PMGR, state DONE. We verify whether our run match this model and we test whether the state model of each pilot of every session of all our experiments is the same. In this way we will know: whether our data are consistent with the authoritative RADICAL-Pilot state model; and what states of the pilots we can compare in our analysis given our dataset. End of explanation """ # Model of pilot durations. 
ttpdm = {'TT_PILOT_PMGR_SCHEDULING': ['NEW' , 'PMGR_LAUNCHING_PENDING'], 'TT_PILOT_PMGR_QUEUING' : ['PMGR_LAUNCHING_PENDING', 'PMGR_LAUNCHING'], 'TT_PILOT_LRMS_SUBMITTING': ['PMGR_LAUNCHING' , 'PMGR_ACTIVE_PENDING'], 'TT_PILOT_LRMS_QUEUING' : ['PMGR_ACTIVE_PENDING' , 'PMGR_ACTIVE'], 'TT_PILOT_LRMS_RUNNING' : ['PMGR_ACTIVE' , ['DONE', 'CANCELED', 'FAILED']]} # Add total pilot durations to sessions' DF. for sid in sessions.index: s = sessions.ix[sid, 'session'].filter(etype='pilot', inplace=False) for d in ttpdm.keys(): sessions.ix[sid, d] = s.duration(ttpdm[d]) # Print the relevant portion of the 'session' DataFrame. display(sessions[['TT_PILOT_PMGR_SCHEDULING', 'TT_PILOT_PMGR_QUEUING', 'TT_PILOT_LRMS_SUBMITTING', 'TT_PILOT_LRMS_QUEUING', 'TT_PILOT_LRMS_RUNNING']].head(3)) display(sessions[['TT_PILOT_PMGR_SCHEDULING', 'TT_PILOT_PMGR_QUEUING', 'TT_PILOT_LRMS_SUBMITTING', 'TT_PILOT_LRMS_QUEUING', 'TT_PILOT_LRMS_RUNNING']].tail(3)) """ Explanation: Pilot State Durations We define four durations to measure the aggreted time spent by all the pilots in each state: | Duration | Start timestamp | End time Stamp | Description | |--------------------|------------------------|------------------------|----------------| | TT_PILOT_PMGR_SCHEDULING | NEW | PMGR_LAUNCHING_PENDING | total time spent by a pilot being scheduled to a PMGR | | TT_PILOT_PMGR_QUEUING | PMGR_LAUNCHING_PENDING | PMGR_LAUNCHING | total time spent by a pilot in a PMGR queue | | TT_PILOT_LRMS_SUBMITTING | PMGR_LAUNCHING | PMGR_ACTIVE_PENDING | total time spent by a pilot being submitted to a LRMS | | TT_PILOT_LRMS_QUEUING | PMGR_ACTIVE_PENDING | PMGR_ACTIVE | total time spent by a pilot being queued in a LRMS queue | | TT_PILOT_LRMS_RUNNING | PMGR_ACTIVE | DONE | total time spent by a pilot being active | We should note that: * Every state transition can end in state CANCELLED or FAILED, depending on the execution conditions. While this has no bearing on the semantics of the state model, when measuring durations we need to keep that in mind. This is why the API of session.duration() allows for passing lists of states as initial and end timestamp. * In presence of multiple pilots, the queue time of one or more pilot can overlap, partially overlap, or not overlap at all. When calculating the total amount of queue time for the whole run, we need to account for overlapping and, therefore, for time subtractions or additions. Luckily, the method session.duration() does all this for us. Time to record some durations! End of explanation """ fig = plt.figure(figsize=(13,14)) title = 'XSEDE OSG Virtual Cluster' subtitle = 'TTQ and TTC.' defs = {'ttq': 'TTQ = Total Time Queuing pilots', 'ttr': 'TTR = Total Time Running pilots', 'ttc': 'TTC = Total Time Completing experiment'} fig.suptitle('%s:\n%s.\n%s;\n%s.' 
% (title, subtitle, defs['ttq'], defs['ttc']), fontsize=14) gs = [] grid = gridspec.GridSpec(2, 2) grid.update(wspace=0.4, top=0.85) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[0])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[1])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[2])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 5, subplot_spec=grid[3])) ttc_subplots = [] for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): if not sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].empty: ttc_subplots.append(sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].sort_values('TTC')) colors = {'exp1': [tableau20[0] ,tableau20[19]], 'exp2': [tableau20[2] ,tableau20[7] ], 'exp3': [tableau20[8] ,tableau20[13]], 'exp4': [tableau20[4] ,tableau20[17]], 'exp5': [tableau20[10],tableau20[15]]} nun_exp = [] nun_exp.append(len(sessions[sessions['experiment'] == 'exp1']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp2']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp3']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp4']['nunits'].sort_values().unique())) ax = [] i = 0 while(i < len(ttc_subplots)): for gn in range(4): for gc in range(nun_exp[gn]): session = ttc_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = int(session['nunits'].unique()[0]) repetitions = session.shape[0] color = colors[experiment] title = 'Exp. %s\n%s tasks\n%s reps.' % (experiment[3], ntasks, repetitions) if i == 0: ax.append(plt.Subplot(fig, gs[gn][0, gc])) else: ax.append(plt.Subplot(fig, gs[gn][0, gc], sharey=ax[0])) session[['TT_PILOT_LRMS_QUEUING', 'TTC']].plot(kind='bar', ax=ax[i], color=color, title=title, stacked=True) ax[i].spines["top"].set_visible(False) ax[i].spines["right"].set_visible(False) ax[i].get_xaxis().tick_bottom() ax[i].get_yaxis().tick_left() ax[i].set_xticklabels([]) ax[i].set_xlabel('Runs') # Handle a bug that sets yticklabels to visible # for the last subplot. if i == 7 or i == 16: plt.setp(ax[i].get_yticklabels(), visible=False) else: ax[i].set_ylabel('Time (s)') # Handle legens. if i == 7 or i == 3 or i == 11: ax[i].legend(labels=['TTQ','TTC'], bbox_to_anchor=(2.25, 1)) elif i == 16: ax[i].legend(labels=['TTQ','TTC'], bbox_to_anchor=(2.70, 1)) else: ax[i].get_legend().set_visible(False) fig.add_subplot(ax[i]) i += 1 plt.savefig('figures/osg_ttq_ttc_nunits.pdf', dpi=600, bbox_inches='tight') """ Explanation: Total Time Queueing (TTQ) We can now measure the first component of TTC: total queuing time (TTQ), i.e., the portion of TTC spent waiting for the pilots of each run to become active while they were queued in the OSG Connect broker. We choose TTQ because of the dominant role it plays on XSEDE and the need to compare that role with the one played within OSG. 
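session.duration() accounts for these overlaps internally; purely as a mental model (a sketch, not RADICAL-Pilot code), the computation it implies is a union of time intervals, illustrated below with made-up numbers that mirror the example given in the introduction (two entities covering 0-100s and 5-100s contribute 100s, not 195s).

def union_duration(intervals):
    # Total time covered by possibly overlapping (start, stop) intervals.
    total = 0.0
    last_stop = None
    for start, stop in sorted(intervals):
        if last_stop is None or start > last_stop:
            total += stop - start
            last_stop = stop
        elif stop > last_stop:
            total += stop - last_stop
            last_stop = stop
    return total

print(union_duration([(0, 100), (5, 100)]))  # -> 100.0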
End of explanation """ ttc_stats = {} for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): tag = exp+'_'+str(int(nun)) ttc_stats[tag] = sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ]['TT_PILOT_LRMS_QUEUING'].describe() ttc_compare = pd.DataFrame(ttc_stats) sort_cols = ['exp1_8' , 'exp2_8' , 'exp1_16', 'exp2_16', 'exp1_32', 'exp2_32', 'exp1_64', 'exp2_64'] ttc_compare = ttc_compare.reindex_axis(sort_cols, axis=1) ttc_compare.round(2) """ Explanation: Across runs and experiments, TTQ is: relatively consistent. a small part of TTC. End of explanation """ std_mean = (ttc_compare.loc['std']/ttc_compare.loc['mean'])*100 std_mean.round(2) """ Explanation: Even with an average of just 4 runs for each workload size, TTQ variance is relatively small with a couple of exceptions, as shown by STD/mean End of explanation """ # Model of pilot durations. pdm = {'PMGR_SCHEDULING': ['NEW' , 'PMGR_LAUNCHING_PENDING'], 'PMGR_QUEUING' : ['PMGR_LAUNCHING_PENDING', 'PMGR_LAUNCHING'], 'LRMS_SUBMITTING': ['PMGR_LAUNCHING' , 'PMGR_ACTIVE_PENDING'], 'LRMS_QUEUING' : ['PMGR_ACTIVE_PENDING' , 'PMGR_ACTIVE'], 'LRMS_RUNNING' : ['PMGR_ACTIVE' , ['DONE', 'CANCELED', 'FAILED']]} # DataFrame structure for pilot durations. pds = { 'pid': [], 'sid': [], 'experiment' : [], 'PMGR_SCHEDULING': [], 'PMGR_QUEUING' : [], 'LRMS_SUBMITTING': [], 'LRMS_QUEUING' : [], 'LRMS_RUNNING' : []} # Calculate the duration for each state of each # pilot of each run and Populate the DataFrame # structure. for sid in sessions.index: s = sessions.ix[sid, 'session'].filter(etype='pilot', inplace=False) for p in s.list('uid'): sf = s.filter(uid=p, inplace=False) pds['pid'].append(p) pds['sid'].append(sid) pds['experiment'].append(sessions.ix[sid, 'experiment']) for d in pdm.keys(): if (not sf.timestamps(state=pdm[d][0]) or not sf.timestamps(state=pdm[d][1])): pds[d].append(None) continue pds[d].append(sf.duration(pdm[d])) # Populate the DataFrame. pilots = pd.DataFrame(pds) display(pilots.head(3)) display(pilots.tail(3)) """ Explanation: As with TTC, more experiments and runs are needed. Importantly, due to the dynamic variables of OSG behavior (e.g., number, type or resources available to the broker at any given point in time), it would be useful to perform the runs sequentially so to collect the data (relatively) independent from these dynamics; or characterize these dynamics with long term measurements taken at discrete intervals. Queue Time (Tq) Every run uses four 1-core pilots, submitted to the OSG broker. We can look at the frequency of each pilot Tq (i.e., not TTQ but the queuing time of each pilot) without distinguishing among workload sizes and between experiments. We create a new DataFrame pilot containing all the durations of each pilot, not sessions. 
| Duration | Start timestamp | End timestamp | Description |
|-----------------|------------------------|------------------------|-------------|
| PMGR_SCHEDULING | NEW | PMGR_LAUNCHING_PENDING | Time spent by a pilot being scheduled to a PMGR |
| PMGR_QUEUING | PMGR_LAUNCHING_PENDING | PMGR_LAUNCHING | Time spent by a pilot in a PMGR queue |
| LRMS_SUBMITTING | PMGR_LAUNCHING | PMGR_ACTIVE_PENDING | Time spent by a pilot being submitted to a LRMS |
| LRMS_QUEUING | PMGR_ACTIVE_PENDING | PMGR_ACTIVE | Time spent by a pilot being queued in a LRMS queue |
| LRMS_RUNNING | PMGR_ACTIVE | DONE | Time spent by a pilot being active |
Note how only the name of the duration changes when comparing this table to the table with the durations for a session. The pilot state model is always the same: here it is used to calculate durations for a specific entity; for a session, it is used to calculate the aggregated duration for all the entities of a type in that session.
End of explanation
"""
def parse_osg_hostid(hostid):
    '''
    Heuristic: eliminate node-specific information from hostID.
    '''
    domain = None

    # Split domain name from IP.
    host = hostid.split(':')

    # Split domain name into words.
    words = host[0].split('.')

    # Get the words in the domain name that do not contain
    # numbers. Most hostnames have no number but there are
    # exceptions.
    literals = [l for l in words if not any((number in set('0123456789')) for number in l)]

    # Check for exceptions:
    # a. every word of the domain name has a number
    if len(literals) == 0:

        # Some hostnames use '-' instead of '.' as word separator.
        # The parser would then have returned a single word, and any
        # part of that word may contain a number.
        if '-' in host[0]:
            words = host[0].split('-')
            literals = [l for l in words if not any((number in set('0123456789')) for number in l)]

            # FIXME: We do not check the size of literals.
            domain = '.'.join(literals)

        # Some hostnames may have only the name of the node. We
        # have to keep the IP to decide later on whether two nodes
        # are likely to belong to the same cluster.
        elif 'n' in host[0] or 'nod' in host[0]:
            domain = '.'.join(host)

        # The hostname is identified by an alphanumeric string
        else:
            domain = '.'.join(host)

    # Some hostnames DO have numbers in their name.
    elif len(literals) == 1:
        domain = '.'.join(words[1:])

    # Some hostnames are just simple to parse.
    else:
        domain = '.'.join(literals)

    return domain
"""
Explanation: We add the name of the resource (hostID) on which the pilot (agent) has become active to the pilots DataFrame. Often, the hostID recorded by RADICAL-Pilot is not the public name of the resource on which the pilot becomes active but, instead, the name of a working/compute node/unit of that resource. We use a heuristic to isolate the portion of the hostID string that is common to all the nodes/units of the same resource. It should be noted, though, that in some cases this is not possible.
End of explanation
"""
for pix in pilots.index:
    sid = pilots.ix[pix,'sid']
    pid = pilots.ix[pix,'pid']

    pls = sessions.ix[sid, 'session'].filter(uid=pid, inplace=False).get(etype=['pilot'])
    if len(pls) != 1:
        print "Error: session filter on uid returned multiple pilots"
        break

    hostid = pls[0].cfg['hostid']

    if hostid:
        domain = parse_osg_hostid(hostid)
    else:
        domain = np.nan

    pilots.ix[pix,'hostID'] = hostid
    pilots.ix[pix,'parsed_hostID'] = domain
"""
Explanation: We use this heuristic on the pilots DataFrame, to which we add two columns: 'hostID' and 'parsed_hostID'.
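As a quick sanity check of the heuristic (assuming parse_osg_hostid as defined above; the host IDs below are hypothetical, not taken from the experiments), node-specific tokens should be stripped while the cluster-level part of the name is preserved:

# Hypothetical host IDs -- not from the experiment dataset.
for hid in ['node123.cluster.example.edu:9618',
            'c4-005.some.campus.org',
            'worker-12-03.osg.site.net']:
    print '%s -> %s' % (hid, parse_osg_hostid(hid))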
End of explanation
"""
fig, ax = fig_setup()
title='XSEDE OSG Virtual Cluster\nDensity of Pilot Tq'

tq_exp1 = pilots[pilots['experiment'].str.contains('exp1')]['LRMS_QUEUING'].dropna().reset_index(drop=True)
tq_exp2 = pilots[pilots['experiment'].str.contains('exp2')]['LRMS_QUEUING'].dropna().reset_index(drop=True)
tq_exp3 = pilots[pilots['experiment'].str.contains('exp3')]['LRMS_QUEUING'].dropna().reset_index(drop=True)
tq_exp4 = pilots[pilots['experiment'].str.contains('exp4')]['LRMS_QUEUING'].dropna().reset_index(drop=True)

plots = pd.DataFrame({'exp1': tq_exp1, 'exp2': tq_exp2, 'exp3': tq_exp3, 'exp4': tq_exp4})

#plots.plot.hist(ax=ax, color=[tableau20[19],tableau20[7],tableau20[13],tableau20[17]], title=title)
plots.plot.density(ax=ax, color=[tableau20[0],tableau20[2],tableau20[8],tableau20[4]], title=title)

ax.set_xlabel('Time (s)')
ax.legend(labels=['Tq Experiment 1','Tq Experiment 2','Tq Experiment 3','Tq Experiment 4'])

plt.savefig('figures/osg_tq_frequency.pdf', dpi=600, bbox_inches='tight')
"""
Explanation: We plot the frequency of Tq for both experiments as a histogram. Ideally, this should be the first step towards the definition of the characteristic distribution of Tq. Part of this characterization will be to study how stable this distribution is across time, i.e., between two experiments executed at different points in time. As we know that the pool of resources of OSG is both heterogeneous and dynamic, we expect this distribution not to be stable across time, due to the potentially different pool of resources available at any point in time.
End of explanation
"""
fig = plt.figure(figsize=(13,14))

title = 'XSEDE OSG Virtual Cluster'
subtitle = 'TTQ, TTR and TTC'
defs = {'ttq': 'TTQ = Total Time Queuing pilots',
        'ttr': 'TTR = Total Time Running pilots',
        'ttc': 'TTC = Total Time Completing experiment'}

fig.suptitle('%s:\n%s.\n%s;\n%s;\n%s.'
% (title, subtitle, defs['ttq'], defs['ttr'], defs['ttc']), fontsize=14) gs = [] grid = gridspec.GridSpec(2, 2) grid.update(wspace=0.4, top=0.85) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[0])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[1])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[2])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 5, subplot_spec=grid[3])) ttq_subplots = [] for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): if not sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].empty: ttq_subplots.append(sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].sort_values('TTC')) colors = {'exp1': [tableau20[0] ,tableau20[18],tableau20[19]], 'exp2': [tableau20[2] ,tableau20[6] ,tableau20[7] ], 'exp3': [tableau20[8] ,tableau20[12],tableau20[13]], 'exp4': [tableau20[4] ,tableau20[16],tableau20[17]], 'exp5': [tableau20[10],tableau20[14],tableau20[15]]} nun_exp = [] nun_exp.append(len(sessions[sessions['experiment'] == 'exp1']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp2']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp3']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp4']['nunits'].sort_values().unique())) ax = [] i = 0 while(i < len(ttq_subplots)): for gn in range(4): for gc in range(nun_exp[gn]): session = ttq_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = int(session['nunits'].unique()[0]) repetitions = session.shape[0] color = colors[experiment] title = 'Exp. %s\n%s tasks\n%s rep.' % (experiment[3], ntasks, repetitions) if i == 0: ax.append(plt.Subplot(fig, gs[gn][0, gc])) else: ax.append(plt.Subplot(fig, gs[gn][0, gc], sharey=ax[0])) session[['TT_PILOT_LRMS_QUEUING', 'TT_PILOT_LRMS_RUNNING', 'TTC']].plot(kind='bar', ax=ax[i], color=color, title=title, stacked=True) ax[i].spines["top"].set_visible(False) ax[i].spines["right"].set_visible(False) ax[i].get_xaxis().tick_bottom() ax[i].get_yaxis().tick_left() ax[i].set_xticklabels([]) ax[i].set_xlabel('Runs') # Handle a bug that sets yticklabels to visible # for the last subplot. if i == 7 or i == 16: plt.setp(ax[i].get_yticklabels(), visible=False) else: ax[i].set_ylabel('Time (s)') # Handle legens. if i == 7 or i == 3 or i == 11: ax[i].legend(labels=['TTQ','TTR','TTC'], bbox_to_anchor=(2.25, 1)) elif i == 16: ax[i].legend(labels=['TTQ','TTR','TTC'], bbox_to_anchor=(2.70, 1)) else: ax[i].get_legend().set_visible(False) fig.add_subplot(ax[i]) i += 1 plt.savefig('figures/osg_ttq_ttr_ttc_nunits.pdf', dpi=600, bbox_inches='tight') """ Explanation: The diagrams hint to bimodal distributions, more measurements are required to study this further. Total Time Running (TTR) We know that TTQ marginally contributes to the TTC of each run. We still do not know whether most of TTC depends on the running time of the pilots or on the overheads of bootstrapping and managing them. We suspect the former but we have to exclude the latter. We define TTR as the aggregated time spent by the pilots in running state. We plot TTR stacked with TTQ and TTC and we verify whether TTR and the remaining part of TTC are analogous. As usual, this could be done just numerically but, hey, we spent a senseless ton of time dooming matplotlib so now we want to use it! End of explanation """ # Temporary: workaround for bug ticket \#15. 
Calculates # the number of active pilots by looking into the # length of the list returned by timestamp on the # PMGR_ACTIVE state. for sid in sessions.index: sessions.ix[sid, 'npilot_active'] = len(sessions.ix[sid, 'session'].filter(etype='pilot', inplace=False).timestamps(state='PMGR_ACTIVE')) display(sessions[['npilot_active']].head(3)) display(sessions[['npilot_active']].tail(3)) """ Explanation: As expected, TTR is largely equivalent to TTC-TTQ. This tells us that we will have to investigate the time spent describing, binding, scheduling, and executing CUs, measuring whether pilots TTR is spent effectively running CUs or managing them. Also, we will have to measure how much time is spent staging data in and out to and from the resources. In these experiments, data staging was not performed so we will limit our analysis to the execution time of CUs. In the diagram above, we confirm that the differences previously observed between the TTC of Experiment 1 and 2 (particularly evident when comparing runs with 8 and 64 units). As they depend on running pilots, we investigate whether every run uses the same amount of (active) pilots to execute CUs. The rationale is that when using fewer pilots, running the same amount of CUs will take more time (sequential execution on a single core). We add a column to the session DataFrame with the number of pilots for each session. End of explanation """ fig = plt.figure(figsize=(13,14)) title = 'XSEDE OSG Virtual Cluster' subtitle = 'TTQ, TTR and TTC with Number of Active Pilots' defs = {'ttq': 'TTQ = Total Time Queuing pilots', 'ttr': 'TTR = Total Time Running pilots', 'ttc': 'TTC = Total Time Completing experiment'} fig.suptitle('%s:\n%s.\n%s;\n%s;\n%s.' % (title, subtitle, defs['ttq'], defs['ttr'], defs['ttc']), fontsize=14) gs = [] grid = gridspec.GridSpec(2, 2) grid.update(wspace=0.4, top=0.85) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[0])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[1])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[2])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 5, subplot_spec=grid[3])) ttq_subplots = [] for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): if not sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].empty: ttq_subplots.append(sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].sort_values('TTC')) colors = {'exp1': [tableau20[0] ,tableau20[18],tableau20[19]], 'exp2': [tableau20[2] ,tableau20[6] ,tableau20[7] ], 'exp3': [tableau20[8] ,tableau20[12],tableau20[13]], 'exp4': [tableau20[4] ,tableau20[16],tableau20[17]], 'exp5': [tableau20[10],tableau20[14],tableau20[15]]} nun_exp = [] nun_exp.append(len(sessions[sessions['experiment'] == 'exp1']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp2']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp3']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp4']['nunits'].sort_values().unique())) ax = [] i = 0 while(i < len(ttq_subplots)): for gn in range(4): for gc in range(nun_exp[gn]): session = ttq_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = int(session['nunits'].unique()[0]) repetitions = session.shape[0] color = colors[experiment] title = 'Exp. %s\n%s tasks\n%s rep.' 
% (experiment[3], ntasks, repetitions) if i == 0: ax.append(plt.Subplot(fig, gs[gn][0, gc])) else: ax.append(plt.Subplot(fig, gs[gn][0, gc], sharey=ax[0])) session[['TT_PILOT_LRMS_QUEUING', 'TT_PILOT_LRMS_RUNNING', 'TTC']].plot(kind='bar', ax=ax[i], color=color, title=title, stacked=True) ax[i].spines["top"].set_visible(False) ax[i].spines["right"].set_visible(False) ax[i].get_xaxis().tick_bottom() ax[i].get_yaxis().tick_left() ax[i].set_xticklabels([]) ax[i].set_xlabel('Runs') # Handle a bug that sets yticklabels to visible # for the last subplot. if i == 7 or i == 16: plt.setp(ax[i].get_yticklabels(), visible=False) else: ax[i].set_ylabel('Time (s)') # Handle legens. if i == 7 or i == 3 or i == 11: ax[i].legend(labels=['TTQ','TTR','TTC'], bbox_to_anchor=(2.25, 1)) elif i == 16: ax[i].legend(labels=['TTQ','TTR','TTC'], bbox_to_anchor=(2.70, 1)) else: ax[i].get_legend().set_visible(False) # Add labels with number of pilots per session. rects = ax[i].patches labels = [int(l) for l in session['npilot_active']] for rect, label in zip(rects[-repetitions:], labels): height = rect.get_height() ax[i].text(rect.get_x() + rect.get_width()/2, (height*2), label, ha='center', va='bottom') fig.add_subplot(ax[i]) i += 1 plt.savefig('figures/osg_ttq_ttr_ttc_npactive_nunits.pdf', dpi=600, bbox_inches='tight') """ Explanation: We add the numbers of pilot that become active in each run to the plot we used above to show TTQ, TTR, and TTC. End of explanation """ last_sv = None last_id = None for s in sessions['session']: sv = s.describe('state_values', etype=['unit']).values()[0].values()[0] if last_sv and last_sv != sv: print "Different state models:\n%s = %s\n%s = %s" % (last_id, last_sv, sid, sv) last_sv = sv last_id = s._sid pprint.pprint(last_sv) """ Explanation: As expected, the largest differences we observed in TTC and TTR among the runs with the same number of CU and among experiments map to the number of pilots used to execute CUs. Our analysis show that the two experiments have a different number of independent variables. Any comparison has to take into account whether the measure observed depends on the number of active pilots used to execute CUs. We have now exhausted the analyses of TTC via the defined pilot durations. We have now to look at the composition of TTR, namely at the durations depending on the CU states. Compute Unit State Model Interpretation From the RADICAL-Pilot documentation and state model description, we know that: <img src="images/global_state_model_rp_paper.png" width="600"> The states of the units are therefore as follow: * unit described, state NEW; * unit queuing in a unit manager (UMGR)'s queue, state UMGR_SCHEDULING_PENDING; * unit being scheduled by a UMGR to an active pilot agent, state UMGR_SCHEDULING; * input file(s) of a scheduling unit queuing in a UMGR's queue, state UMGR_STAGING_INPUT_PENDING; * input file(s) of a scheduling unit being staged to a (pilot) agent's MongoDB queue. The agent is the same on which the input file(s)' unit is being scheduled, state UMGR_STAGING_INPUT; * input file(s) of a scheduling unit queuing in the (pilot) agent's MongoDB queue. The agent is the same on which the input file(s)' unit is being scheduled, state AGENT_STAGING_INPUT_PENDING; * input file(s) of a scheduling unit being staged to an agent's resource. 
The agent is the same on which the input file(s)' unit is being scheduled, state AGENT_STAGING_INPUT; * unit queuing in a agent's queue, state AGENT_SCHEDULING_PENDING; * unit being scheduled by the agent for execution on pilot's resources, state AGENT_SCHEDULING; * unit queuing in a agent's queue, state AGENT_EXECUTING_PENDING; * unit being executed by the agent on pilot's resources, state AGENT_EXECUTING; * output file(s) of an executed unit queuing on an agent's queue, state AGENT_STAGING_OUTPUT_PENDING; * output file(s) of an executed unit being staged on a UMGR's MongoDB queue, state AGENT_STAGING_OUTPUT; * output file(s) of an executed unit queuing on a UMGR's MongoDB queue, state UMGR_STAGING_OUTPUT_PENDING; * output file(s) of an executed unit being staged on a UMGR's resource (e.g., user's workstation), state UMGR_STAGING_OUTPUT; * unit marked as done by a UMGR, state DONE. As done with the pilot state model, we verify whether our run match the compute unit state model, and we test whether the state model of each unit of every session of all experiments is the same. In this way we will know: whether our data are consistent with the authoritative RADICAL-Pilot state model; and what states of the units we can compare in our analysis given our datasets. End of explanation """ # Model of unit durations. udm = {'TT_UNIT_UMGR_SCHEDULING' : ['NEW' , 'UMGR_SCHEDULING_PENDING'], 'TT_UNIT_UMGR_BINDING' : ['UMGR_SCHEDULING_PENDING' , 'UMGR_SCHEDULING'], 'TT_IF_UMGR_SCHEDULING' : ['UMGR_SCHEDULING' , 'UMGR_STAGING_INPUT_PENDING'], 'TT_IF_UMGR_QUEING' : ['UMGR_STAGING_INPUT_PENDING' , 'UMGR_STAGING_INPUT'], 'TT_IF_AGENT_SCHEDULING' : ['UMGR_STAGING_INPUT' , 'AGENT_STAGING_INPUT_PENDING'], 'TT_IF_AGENT_QUEUING' : ['AGENT_STAGING_INPUT_PENDING' , 'AGENT_STAGING_INPUT'], 'TT_IF_AGENT_TRANSFERRING' : ['AGENT_STAGING_INPUT' , 'AGENT_SCHEDULING_PENDING'], 'TT_UNIT_AGENT_QUEUING' : ['AGENT_SCHEDULING_PENDING' , 'AGENT_SCHEDULING'], 'TT_UNIT_AGENT_SCHEDULING' : ['AGENT_SCHEDULING' , 'AGENT_EXECUTING_PENDING'], 'TT_UNIT_AGENT_QUEUING_EXEC': ['AGENT_EXECUTING_PENDING' , 'AGENT_EXECUTING'], 'TT_UNIT_AGENT_EXECUTING' : ['AGENT_EXECUTING' , 'AGENT_STAGING_OUTPUT_PENDING'], 'TT_OF_AGENT_QUEUING' : ['AGENT_STAGING_OUTPUT_PENDING', 'AGENT_STAGING_OUTPUT'], 'TT_OF_UMGR_SCHEDULING' : ['AGENT_STAGING_OUTPUT' , 'UMGR_STAGING_OUTPUT_PENDING'], 'TT_OF_UMGR_QUEUING' : ['UMGR_STAGING_OUTPUT_PENDING' , 'UMGR_STAGING_OUTPUT'], 'TT_OF_UMGR_TRANSFERRING' : ['UMGR_STAGING_OUTPUT' , 'DONE']} # Calculate total unit durations for each session. for sid in sessions.index: s = sessions.ix[sid, 'session'].filter(etype='unit', inplace=False) for d in udm.keys(): sessions.ix[sid, d] = s.duration(udm[d]) # Print the new columns of the session DF with total unit durations. 
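# (Aside, a sketch not in the original analysis: a cheap sanity check on the
#  duration model above is that every aggregated unit duration is non-negative,
#  i.e. `(sessions[list(udm.keys())] >= 0).all().all()` should evaluate to True.)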
display(sessions[['TT_UNIT_UMGR_SCHEDULING' , 'TT_UNIT_UMGR_BINDING' , 'TT_IF_UMGR_SCHEDULING' , 'TT_IF_UMGR_QUEING' , 'TT_IF_AGENT_SCHEDULING' , 'TT_IF_AGENT_QUEUING' , 'TT_IF_AGENT_TRANSFERRING' , 'TT_UNIT_AGENT_QUEUING' , 'TT_UNIT_AGENT_SCHEDULING', 'TT_UNIT_AGENT_QUEUING_EXEC', 'TT_UNIT_AGENT_EXECUTING', 'TT_OF_AGENT_QUEUING' , 'TT_OF_UMGR_SCHEDULING' , 'TT_OF_UMGR_QUEUING' , 'TT_OF_UMGR_TRANSFERRING']].head(3)) display(sessions[['TT_UNIT_UMGR_SCHEDULING' , 'TT_UNIT_UMGR_BINDING' , 'TT_IF_UMGR_SCHEDULING' , 'TT_IF_UMGR_QUEING' , 'TT_IF_AGENT_SCHEDULING' , 'TT_IF_AGENT_QUEUING' , 'TT_IF_AGENT_TRANSFERRING' , 'TT_UNIT_AGENT_QUEUING' , 'TT_UNIT_AGENT_SCHEDULING', 'TT_UNIT_AGENT_QUEUING_EXEC', 'TT_UNIT_AGENT_EXECUTING', 'TT_OF_AGENT_QUEUING' , 'TT_OF_UMGR_SCHEDULING' , 'TT_OF_UMGR_QUEUING' , 'TT_OF_UMGR_TRANSFERRING']].tail(3)) """ Explanation: Unit Durations We define 15 durations to measure the aggreted time spent by all the units of a session in each state: | Duration | Start timestamp | End time Stamp | Description | |------------------------------------|------------------------------|------------------------------|-------------| | TT_UNIT_UMGR_SCHEDULING | NEW | UMGR_SCHEDULING_PENDING | total time spent by a unit being scheduled to a UMGR | | TT_UNIT_UMGR_BINDING | UMGR_SCHEDULING_PENDING | UMGR_SCHEDULING | total time spent by a unit being bound to a pilot by a UMGR | | TT_IF_UMGR_SCHEDULING | UMGR_SCHEDULING | UMGR_STAGING_INPUT_PENDING | total time spent by input file(s) being scheduled to a UMGR | | TT_IF_UMGR_QUEING | UMGR_STAGING_INPUT_PENDING | UMGR_STAGING_INPUT | total time spent by input file(s) queuing in a UMGR | | TT_IF_AGENT_SCHEDULING | UMGR_STAGING_INPUT | AGENT_STAGING_INPUT_PENDING | total time spent by input file(s) being scheduled to an agent's MongoDB queue | | TT_IF_AGENT_QUEUING | AGENT_STAGING_INPUT_PENDING | AGENT_STAGING_INPUT | total time spent by input file(s) queuing in an agent's MongoDB queue | | TT_IF_AGENT_TRANSFERRING | AGENT_STAGING_INPUT | AGENT_SCHEDULING_PENDING | total time spent by input file(s)' payload to be transferred from where the UMGR is being executed (e.g., the user's workstation) to the resource on which the agent is executing | | TT_UNIT_AGENT_QUEUING | AGENT_SCHEDULING_PENDING | AGENT_SCHEDULING | total time spent by a unit in the agent's scheduling queue | | TT_UNIT_AGENT_SCHEDULING | AGENT_SCHEDULING | AGENT_EXECUTING_PENDING | total time spent by a unit to be scheduled to the agent's executing queue | | TT_UNIT_AGENT_QUEUING_EXECUTION | AGENT_EXECUTING_PENDING | AGENT_EXECUTING | total time spent by a unit in the agent's executing queue | | TT_UNIT_AGENT_EXECUTING | AGENT_EXECUTING | AGENT_STAGING_OUTPUT_PENDING | total time spent by a unit executing | | TT_OF_AGENT_QUEUING | AGENT_STAGING_OUTPUT_PENDING | AGENT_STAGING_OUTPUT | total time spent by output file(s) queuing in the agent's stage out queue | | TT_OF_UMGR_SCHEDULING | AGENT_STAGING_OUTPUT | UMGR_STAGING_OUTPUT_PENDING | total time spent by output file(s) being scheduled to a UMGR's MongoDB queue | | TT_OF_UMGR_QUEUING | UMGR_STAGING_OUTPUT_PENDING | UMGR_STAGING_OUTPUT | total time spent by output file(s) queuing in a UMGR's MongoDB queue | | TT_OF_UMGR_TRANSFERRING | UMGR_STAGING_OUTPUT | DONE | total time spent by output file(s)' payload to be transferred from the resource to where the UMGR is being executed (e.g., the user's workstation) | Unit Durations Aggregates Durations can be aggregated so to represent a middle-level semantics: ``` * TT_UNIT_RP_OVERHEAD = 
TT_UMGR_UNIT_SCHEDULING + TT_AGENT_UNIT_QUEUING + TT_AGENT_UNIT_SCHEDULING + TT_AGENT_UNIT_QUEUING_EXECUTION TT_IF_RP_OVERHEAD = TT_UMGR_IF_SCHEDULING + TT_UMGR_IF_QUEING + TT_AGENT_IF_QUEUING TT_OF_RP_OVERHEAD = TT_AGENT_OF_QUEUING + TT_UMGR_OF_QUEING + TT_IF_NETWORK_OVERHEAD = TT_AGENT_IF_SCHEDULING + TT_AGENT_IF_TRANSFERRING TT_OF_NETWORK_OVERHEAD = TT_UMGR_OF_SCHEDULING + TT_UMGR_OF_TRANSFERRING TT_IF_STAGING = TT_IF_RP_OVERHEAD + TT_IF_NETWORK_OVERHEAD TT_OF_STAGING = TT_OF_RP_OVERHEAD + TT_OF_NETWORK_OVERHEAD and higher-level semantics: TT_RP_OVERHEADS = TT_UNIT_RP_OVERHEAD + TT_IF_RP_OVERHEAD + TT_OF_RP_OVERHEAD TT_NETWORK_OVERHEADS = TT_IF_NETWORK_OVERHEAD + TT_OF_NETWORK_OVERHEAD TT_FILE_STAGING = TT_IF_STAGING + TT_OF_STAGING TT_UNIT_EXECUTING = TT_AGENT_UNIT_EXECUTING ``` Consistency Rules Note that we can derive consistency constraints from these models. For every session, the following has always to be true: As done with the pilot, we first calculate the overall time spent during the session to execute CUs. End of explanation """ # Add number of unique resorces per session. for sid in sessions.index: sessions.ix[sid, 'n_unique_host'] = len(pilots[pilots['sid'] == sid]['parsed_hostID'].unique()) fig = plt.figure(figsize=(13,14)) title = 'XSEDE OSG Virtual Cluster' subtitle = 'TTQ, TTR, TTX and TTC with Number of Active Pilots (black) and Number of Unique Resources (red)' fig.suptitle('%s:\n%s.' % (title, subtitle), fontsize=16) defs = {'ttq': 'TTQ = Total Time Queuing pilots', 'ttr': 'TTR = Total Time Running pilots', 'ttx': 'TTR = Total Time Executing compute units', 'ttc': 'TTC = Total Time Completing experiment'} defslist = '%s;\n%s;\n%s;\n%s.' % (defs['ttq'], defs['ttr'], defs['ttx'], defs['ttc']) plt.figtext(.38,.89, defslist, fontsize=14, ha='left') gs = [] grid = gridspec.GridSpec(2, 2) grid.update(wspace=0.4, hspace=0.4, top=0.825) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[0])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[1])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[2])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 5, subplot_spec=grid[3])) ttq_subplots = [] for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): if not sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].empty: ttq_subplots.append(sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].sort_values('TTC')) colors = {'exp1': [tableau20[0] ,tableau20[18],tableau20[1] ,tableau20[19]], 'exp2': [tableau20[2] ,tableau20[6] ,tableau20[3] ,tableau20[7] ], 'exp3': [tableau20[8] ,tableau20[12],tableau20[9] ,tableau20[13]], 'exp4': [tableau20[4] ,tableau20[16],tableau20[5] ,tableau20[17]], 'exp5': [tableau20[10],tableau20[14],tableau20[11],tableau20[15]]} nun_exp = [] nun_exp.append(len(sessions[sessions['experiment'] == 'exp1']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp2']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp3']['nunits'].sort_values().unique())) nun_exp.append(len(sessions[sessions['experiment'] == 'exp4']['nunits'].sort_values().unique())) ax = [] i = 0 while(i < len(ttq_subplots)): for gn in range(4): for gc in range(nun_exp[gn]): session = ttq_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = int(session['nunits'].unique()[0]) npilots = int(session[session['experiment'] == experiment]['npilot_active'][0]) 
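# npilots: number of active pilots (taken from one run of this group);
# it is reported in the subplot title below next to task count and repetitions.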
repetitions = session.shape[0] color = colors[experiment] title = 'Exp. %s\n%s tasks\n%s pilots\n%s rep.' % (experiment[3], ntasks, npilots, repetitions) if i == 0: ax.append(plt.Subplot(fig, gs[gn][0, gc])) else: ax.append(plt.Subplot(fig, gs[gn][0, gc], sharey=ax[0])) session[['TT_PILOT_LRMS_QUEUING', 'TT_PILOT_LRMS_RUNNING', 'TT_UNIT_AGENT_EXECUTING', 'TTC']].plot(kind='bar', ax=ax[i], color=color, title=title, stacked=True) ax[i].spines["top"].set_visible(False) ax[i].spines["right"].set_visible(False) ax[i].get_xaxis().tick_bottom() ax[i].get_yaxis().tick_left() ax[i].set_xticklabels([]) ax[i].set_xlabel('Runs') # Handle a bug that sets yticklabels to visible # for the last subplot. if i == 7 or i == 16: plt.setp(ax[i].get_yticklabels(), visible=False) else: ax[i].set_ylabel('Time (s)') # Handle legens. if i == 7 or i == 3 or i == 11: ax[i].legend(labels=['TTQ','TTR','TTX','TTC'], bbox_to_anchor=(2.25, 1)) elif i == 16: ax[i].legend(labels=['TTQ','TTR','TTX','TTC'], bbox_to_anchor=(2.70, 1)) else: ax[i].get_legend().set_visible(False) # Add labels with number of pilots per session. rects = ax[i].patches labels = [int(l) for l in session['npilot_active']] for rect, label in zip(rects[-repetitions:], labels): height = rect.get_height() ax[i].text(rect.get_x() + rect.get_width()/2, (height*3)+1500, label, ha='center', va='bottom') # Add labels with number of unique resources per session. rects = ax[i].patches labels = [int(l) for l in session['n_unique_host']] for rect, label in zip(rects[-repetitions:], labels): height = rect.get_height() ax[i].text(rect.get_x() + rect.get_width()/2, height*3, label, ha='center', va='bottom', color='red') fig.add_subplot(ax[i]) i += 1 plt.savefig('figures/osg_ttq_ttr_ttx_ttc_npactive_nrunique_nunits.pdf', dpi=600, bbox_inches='tight') """ Explanation: Total Time eXecuting (TTX) We now calculate the total amount of TTC spent executing CUs. This will tell us how much of TTR is indeed spent executing CUs. End of explanation """ sessions[['TT_PILOT_LRMS_RUNNING', 'TT_UNIT_AGENT_EXECUTING']].describe() """ Explanation: The diagram confirms the similarity between the size of TTR and TTX. More analytically: End of explanation """ fig = plt.figure(figsize=(13,7)) fig.suptitle('TTX with Number of Active Pilots - XSEDE OSG Virtual Cluster', fontsize=14) gs = [] grid = gridspec.GridSpec(1, 2) grid.update(wspace=0.4, top=0.85) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[0])) gs.append(gridspec.GridSpecFromSubplotSpec(1, 4, subplot_spec=grid[1])) ttq_subplots = [] for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): ttq_subplots.append(sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ].sort_values('TTC')) colors = {'exp1': [tableau20[18]], 'exp2': [tableau20[10]]} ax = [] i = 0 while(i<8): for gn in range(2): for gc in range(4): session = ttq_subplots.pop(0) experiment = session['experiment'].unique()[0] ntasks = int(session['nunits'].unique()[0]) repetitions = session.shape[0] color = colors[experiment] title = 'Experiment %s\n%s tasks\n%s repetitions.' 
% (experiment[3], ntasks, repetitions) if i == 0: ax.append(plt.Subplot(fig, gs[gn][0, gc])) else: ax.append(plt.Subplot(fig, gs[gn][0, gc], sharey=ax[0])) session[['TT_UNIT_AGENT_EXECUTING']].plot(kind='bar', ax=ax[i], color=color, title=title, stacked=True) ax[i].spines["top"].set_visible(False) ax[i].spines["right"].set_visible(False) ax[i].get_xaxis().tick_bottom() ax[i].get_yaxis().tick_left() ax[i].set_xticklabels([]) ax[i].set_xlabel('Sessions') # Handle a bug that sets yticklabels to visible # for the last subplot. if i == 7: plt.setp(ax[i].get_yticklabels(), visible=False) else: ax[i].set_ylabel('Time (s)') # Handle legens. if i == 7 or i == 3: ax[i].legend(labels=['TTX'], bbox_to_anchor=(2.25, 1)) else: ax[i].get_legend().set_visible(False) # Add labels with number of pilots per session. rects = ax[i].patches labels = [int(l) for l in session['npilot_active']] for rect, label in zip(rects[-repetitions:], labels): height = rect.get_height() ax[i].text(rect.get_x() + rect.get_width()/2, height+0.5, label, ha='center', va='bottom') fig.add_subplot(ax[i]) i += 1 plt.savefig('figures/osg_ttx_npactive_nunits.pdf', dpi=600, bbox_inches='tight') """ Explanation: We now have to characterize the variation among TTX of the runs with the same number of CUs and between the two experiments. We plot just TTX focusing on these variations. End of explanation """ ttx_stats = {} for exp in sessions['experiment'].sort_values().unique(): for nun in sessions['nunits'].sort_values().unique(): tag = exp+'_'+str(int(nun)) ttx_stats[tag] = sessions[ (sessions['experiment'] == exp) & (sessions['nunits'] == nun) ]['TT_UNIT_AGENT_EXECUTING'].describe() ttx_compare = pd.DataFrame(ttx_stats) sort_cols_runs = ['exp1_8' , 'exp2_8' , 'exp1_16', 'exp2_16', 'exp1_32', 'exp2_32', 'exp1_64', 'exp2_64'] sort_cols_exp = ['exp1_8' , 'exp1_16', 'exp1_32', 'exp1_64', 'exp2_8', 'exp2_16', 'exp2_32', 'exp2_64'] ttx_compare_runs = ttx_compare.reindex_axis(sort_cols_runs, axis=1) ttx_compare_exp = ttx_compare.reindex_axis(sort_cols_exp, axis=1) ttx_compare_exp.round(2) std_mean = (ttx_compare_exp.loc['std']/ttx_compare_exp.loc['mean'])*100 std_mean.round(2) """ Explanation: We notice lage variations both within and across experiments. Specifically: End of explanation """ from collections import OrderedDict cTTXs = OrderedDict() ncu = 64 ss = sessions[sessions['nunits'] == ncu].sort_values(['experiment','TTC'])['session'] for s in ss: cTTXs[s._sid] = s.filter(etype='unit').concurrency(state=['AGENT_EXECUTING','AGENT_STAGING_OUTPUT_PENDING'], sampling=1) for sid, cTTX in cTTXs.iteritems(): title = 'Degree of Concurrent Execution of %s CUs - XSEDE OSG Virtual Cluster\nSession %s' % (ncu, sid) x = [x[0] for x in cTTX] y = [y[1] for y in cTTX] color = tableau20[2] if 'ming' in sid: color = tableau20[0] fig, ax = fig_setup() fig.suptitle(title, fontsize=14) ax.set_xlabel('Time (s)') ax.set_ylabel('Number of CU') display(ax.plot(x, y, marker='.', linestyle='', color=color)) """ Explanation: Variation within Experiment 1 is between 7 and 18% of mean, almost proportionally increasing with the increasing of the number of unit executed. Variation in Experiment 2 is more pronounced, ranging from 25 to 78% of the mean. Clearly, these values are not indicative. 
While our analysis showed that this difference is due to the varying number of active pilots in Experiment 2, the relative uniformity of Experiment 1 may be also a byproduct of a reduced functionality of RP that limited the number of resources in which the experiment was able to run successfully. Experiment 1 failed when any of the pilots failed while in Experiment 2, failures where limited to pilots, without affecting the overall execution. Concurrency Finally, another element that should be considered is the impact of RP overheads on the execution of CUs. While the comparison between TTR and TTX excluded major overheads outside TTX, the averarages between the two parameters show that TTR grown bigger than TTX proportionally to the increase of the number of CU executed. This can be measured by looking at the degree of concurrency of CU executions across pilots. Here we plot this degree only for 64 CUs comparing both exp1 and exp2 runs. End of explanation """ # Model of unit durations. udm = {'UNIT_UMGR_SCHEDULING' : ['NEW' , 'UMGR_SCHEDULING_PENDING'], 'UNIT_UMGR_BINDING' : ['UMGR_SCHEDULING_PENDING' , 'UMGR_SCHEDULING'], 'IF_UMGR_SCHEDULING' : ['UMGR_SCHEDULING' , 'UMGR_STAGING_INPUT_PENDING'], 'IF_UMGR_QUEING' : ['UMGR_STAGING_INPUT_PENDING' , 'UMGR_STAGING_INPUT'], #'IF_AGENT_SCHEDULING' : ['UMGR_STAGING_INPUT' , 'AGENT_STAGING_INPUT_PENDING'], 'IF_AGENT_QUEUING' : ['AGENT_STAGING_INPUT_PENDING' , 'AGENT_STAGING_INPUT'], 'IF_AGENT_TRANSFERRING' : ['AGENT_STAGING_INPUT' , 'AGENT_SCHEDULING_PENDING'], 'UNIT_AGENT_QUEUING' : ['AGENT_SCHEDULING_PENDING' , 'AGENT_SCHEDULING'], 'UNIT_AGENT_SCHEDULING' : ['AGENT_SCHEDULING' , 'AGENT_EXECUTING_PENDING'], 'UNIT_AGENT_QUEUING_EXEC': ['AGENT_EXECUTING_PENDING' , 'AGENT_EXECUTING'], 'UNIT_AGENT_EXECUTING' : ['AGENT_EXECUTING' , 'AGENT_STAGING_OUTPUT_PENDING'], #'OF_AGENT_QUEUING' : ['AGENT_STAGING_OUTPUT_PENDING', 'AGENT_STAGING_OUTPUT'], #'OF_UMGR_SCHEDULING' : ['AGENT_STAGING_OUTPUT' , 'UMGR_STAGING_OUTPUT_PENDING'], 'OF_UMGR_QUEUING' : ['UMGR_STAGING_OUTPUT_PENDING' , 'UMGR_STAGING_OUTPUT'], 'OF_UMGR_TRANSFERRING' : ['UMGR_STAGING_OUTPUT' , 'DONE']} # DataFrame structure for pilot durations. uds = { 'pid': [], 'sid': [], 'experiment' : [], 'UNIT_UMGR_SCHEDULING' : [], 'UNIT_UMGR_BINDING' : [], 'IF_UMGR_SCHEDULING' : [], 'IF_UMGR_QUEING' : [], 'IF_AGENT_SCHEDULING' : [], 'IF_AGENT_QUEUING' : [], 'IF_AGENT_TRANSFERRING' : [], 'UNIT_AGENT_QUEUING' : [], 'UNIT_AGENT_SCHEDULING' : [], 'UNIT_AGENT_QUEUING_EXEC': [], 'UNIT_AGENT_EXECUTING' : [], 'OF_AGENT_QUEUING' : [], 'OF_UMGR_SCHEDULING' : [], 'OF_UMGR_QUEUING' : [], 'OF_UMGR_TRANSFERRING' : []} # Calculate the duration for each state of each # pilot of each run and Populate the DataFrame # structure. for sid in sessions[['session', 'experiment']].index: s = sessions.ix[sid, 'session'].filter(etype='unit', inplace=False) for u in s.list('uid'): sf = s.filter(uid=u, inplace=False) uds['pid'].append(u) uds['sid'].append(sid) uds['experiment'].append(sessions.ix[sid, 'experiment']) for d in udm.keys(): if (not sf.timestamps(state=udm[d][0]) or not sf.timestamps(state=udm[d][1])): pds[d].append(None) print udm[d] continue uds[d].append(sf.duration(udm[d])) # Populate the DataFrame. We have empty lists units = pd.DataFrame(dict([(k,pd.Series(v)) for k,v in uds.iteritems()])) display(units.head(3)) display(units.tail(3)) """ Explanation: Experiment 2 shows better concurrency than Experiment 1. 
This might be due to the evolution of the RADICAL Pilot code or more performant resources used by the runs of Experiment 2. These data shows at least three elements that need further development: We lack information on what resource each pilot has been running. Without this information we cannot make statistical inferences about some of the durations we measure. RADICAL Pilot overheads affect experimental measures at both global (see state models) and local level (see concurrency overheads). Their characterization and, in case, normalization is required by every experiment and need to be considered a first citizen of analytics. Experiments are potentially affected by small differences in the RADICAL Pilot code base. This would require the use of a stable RADICAL Pilot code base across all the experiment computational campaign. Time execution (Tx) Overall, our analysis confirms that the heterogeneity of OSG resources affects TTX and that TTQ has a marginal impact. The following step is trying to characterize statistically the effect of resource heterogeneity on TTX. We begin this characterization by looking at the time that each unit takes to execute on a OSG resource (Tx). Each CU has the same computing requirements. Therefore, we can aggregate their analysis across experiments. Note that each unit executed on only and only one pilot so the number of pilot that becomes active have no bearing on the units' Tx. End of explanation """ def measures_of_center(durations): m = {} m['mu'] = np.mean(durations) # Mean value of the data # standard error of the mean. Quantifies how # precisely we know the true mean of the # population. It takes into account both the # value of the SD and the sample size. SEM # gets smaller as your samples get larger: # precision of the mean gets higher with the # sample size. m['sem'] = sps.sem(durations) # Are there extremes in our dataset? Compare # to the mean. m['median'] = np.median(durations) # What value occours most often? m['mode'] = sps.mstats.mode(durations) return m Txs = units['UNIT_AGENT_EXECUTING'] Txs_exp1 = units[ units['experiment'] == 'exp1']['UNIT_AGENT_EXECUTING'] Txs_exp2 = units[ units['experiment'] == 'exp2']['UNIT_AGENT_EXECUTING'] Txs = sorted(Txs) Txs_exp1 = sorted(Txs_exp1) Txs_exp2 = sorted(Txs_exp2) Tx_measures = measures_of_center(Txs) Tx_measures_exp1 = measures_of_center(Txs_exp1) Tx_measures_exp2 = measures_of_center(Txs_exp2) print 'Tx' pprint.pprint(Tx_measures) print '\nTx_exp1' pprint.pprint(Tx_measures_exp1) print '\nTx_exp2' pprint.pprint(Tx_measures_exp2) """ Explanation: Note the NaN value for IF_AGENT_SCHEDULING and OF_UMGR_SCHEDULING. The timestamp of these states is broken in the RADICAL Pilot branch used for this experiments. Without analytics it would be difficult to spot and/or understand the error. Statistics We now have two set of measures that we can use to do some statistics. 
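For reference, the standard error of the mean computed by `measures_of_center` follows the usual relation $SE_\mu = \sigma / \sqrt{n}$, so it shrinks as the number of sampled CU executions grows.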
Descriptive Measures of Center mean ($\mu$) standard error of the mean (SEM) median mode End of explanation """ def measures_of_spread(durations): m = {} m['range'] = max(durations)-min(durations) m['min'], m['q1'], m['q2'], m['q3'], m['max'] = np.percentile(durations, [0,25,50,75,100]) m['irq'] = m['q3'] - m['q1'] m['var'] = np.var(durations) m['std'] = np.std(durations) m['mad'] = sm.robust.scale.mad(durations) return m print "Tx" pprint.pprint(measures_of_spread(Txs)) print "\nTx exp1" pprint.pprint(measures_of_spread(Txs_exp1)) print "\nTx exp2" pprint.pprint(measures_of_spread(Txs_exp2)) plots = [Txs, Txs_exp1, Txs_exp2] fig, ax = fig_setup() fig.suptitle('Distribution of Tx for Experiment 1 and 2', fontsize=14) ax.set_ylabel('Time (s)') bp = ax.boxplot(plots, labels=['Tx', 'Tx exp1', 'Tx exp2'])#, showmeans=True, showcaps=True) bp['boxes'][0].set( color=tableau20[8] ) bp['boxes'][1].set( color=tableau20[0] ) bp['boxes'][2].set( color=tableau20[2] ) plt.savefig('figures/osg_cu_spread_box.pdf', dpi=600, bbox_inches='tight') # - Mann-Whitney-Wilcoxon (MWW) RankSum test: determine # whether two distributions are significantly # different or not. Unlike the t-test, the RankSum # test does not assume that the data are normally # distributed. How do we interpret the difference? x = np.linspace(min(Txs),max(Txs),len(Txs)) Txs_pdf = mlab.normpdf(x, Tx_measures['mu'], Tx_measures['std']) z_stat, p_val = sps.ranksums(Txs, Txs_pdf) """ Explanation: Measures of Spread range percentiles interquartile (IRQ) variance standard deviation ($\sigma$) median absolute deviation (MAD) End of explanation """ Tx_measures['skew'] = sps.skew(Txs, bias=True) Tx_measures['kurt'] = sps.kurtosis(Txs) u_skew_test = sps.skewtest(Txs) u_kurt_test = sps.kurtosistest(Txs) print Tx_measures['skew'] print Tx_measures['kurt'] print u_skew_test print u_kurt_test metric = 'T_x' description = 'Histogram of $%s$' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures['mu'], Tx_measures['std'], Tx_measures['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) ax = fig_setup() n, bins, patches = ax.hist(Txs, bins='fd', normed=1, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[8], color=tableau20[9]) # ax.xaxis.set_major_locator(ticker.MultipleLocator(50)) # ax.set_xlim(150, 700) ax.set_ylim(0.0, 0.005) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title)#, y=1.05) plt.legend(loc='upper right') plt.savefig('figures/osg_cu_spread_hist.pdf', dpi=600, bbox_inches='tight') Tx_measures_exp1['skew'] = sps.skew(Txs, bias=True) Tx_measures_exp1['kurt'] = sps.kurtosis(Txs) u_skew_test = sps.skewtest(Txs) u_kurt_test = sps.kurtosistest(Txs) print Tx_measures_exp1['skew'] print Tx_measures_exp1['kurt'] print u_skew_test print u_kurt_test metric = 'T_x_exp1' description = 'Histogram of $%s$' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs_exp1) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp1['mu'], Tx_measures_exp1['std'], Tx_measures_exp1['sem']) title = '%s.\n%s; %s;\n%s.' 
% (description, repetition, resource, stats) ax = fig_setup() n, bins, patches = ax.hist(Txs_exp1, bins='fd', normed=1, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[0], color=tableau20[1]) # ax.xaxis.set_major_locator(ticker.MultipleLocator(50)) # ax.set_xlim(150, 700) ax.set_ylim(0.0, 0.005) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title)#, y=1.05) plt.legend(loc='upper right') plt.savefig('figures/osg_exp1_cu_spread_hist.pdf', dpi=600, bbox_inches='tight') Tx_measures_exp2['skew'] = sps.skew(Txs, bias=True) Tx_measures_exp2['kurt'] = sps.kurtosis(Txs) u_skew_test = sps.skewtest(Txs) u_kurt_test = sps.kurtosistest(Txs) print Tx_measures_exp2['skew'] print Tx_measures_exp2['kurt'] print u_skew_test print u_kurt_test metric = 'T_x_exp2' description = 'Histogram of $%s$' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs_exp2) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp2['mu'], Tx_measures_exp2['std'], Tx_measures_exp2['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) ax = fig_setup() n, bins, patches = ax.hist(Txs_exp2, bins='fd', normed=1, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[2], color=tableau20[3]) # ax.xaxis.set_major_locator(ticker.MultipleLocator(50)) # ax.set_xlim(150, 700) ax.set_ylim(0.0, 0.005) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title)#, y=1.05) plt.legend(loc='upper right') plt.savefig('figures/osg_exp2_cu_spread_hist.pdf', dpi=600, bbox_inches='tight') # - Fit to the normal distribution: fit the empirical # distribution to the normal for comparison purposes. (f_mu, f_sigma) = sps.norm.fit(Txs) (f_mu_exp1, f_sigma_exp1) = sps.norm.fit(Txs_exp1) (f_mu_exp2, f_sigma_exp2) = sps.norm.fit(Txs_exp2) # sample_pdf = np.linspace(min(Txs),max(Txs), len(Txs)) sample_pdf = np.linspace(0,max(Txs), len(Txs)) sample_pdf_exp1 = np.linspace(0,max(Txs_exp1), len(Txs_exp1)) sample_pdf_exp2 = np.linspace(0,max(Txs_exp2), len(Txs_exp2)) metric = 'T_x' description = 'Histogram of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures['mu'], Tx_measures['std'], Tx_measures['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) ax = fig_setup() n, bins, p = ax.hist(Txs, bins='fd', normed=True, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[8], color=tableau20[9]) pdf = mlab.normpdf(sample_pdf, f_mu, f_sigma) print min(pdf) print max(pdf) ax.plot(sample_pdf, pdf, label="$\phi$", color=tableau20[6]) # ax.fill_between(bins, # sample_pdf, # color=tableau20[1], # alpha=0.25) # ax.xaxis.set_major_locator(ticker.MultipleLocator(50)) ax.set_xlim(min(sample_pdf), max(sample_pdf)) ax.set_ylim(0.0, 0.005) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title)#, y=1.05) plt.legend(loc='upper right') plt.savefig('osg_cu_spread_pdf.pdf', dpi=600, bbox_inches='tight') metric = 'T_x_exp1' description = 'Histogram of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs_exp1) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp1['mu'], Tx_measures_exp1['std'], Tx_measures_exp1['sem']) title = '%s.\n%s; %s;\n%s.' 
% (description, repetition, resource, stats) ax = fig_setup() n, bins, p = ax.hist(Txs_exp1, bins='fd', normed=True, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[0], color=tableau20[1]) pdf_exp1 = mlab.normpdf(sample_pdf_exp1, f_mu_exp1, f_sigma_exp1) print min(pdf_exp1) print max(pdf_exp1) ax.plot(sample_pdf_exp1, pdf_exp1, label="$\phi$", color=tableau20[6]) # ax.fill_between(bins, # sample_pdf, # color=tableau20[1], # alpha=0.25) # ax.xaxis.set_major_locator(ticker.MultipleLocator(50)) ax.set_xlim(min(sample_pdf_exp1), max(sample_pdf_exp1)) ax.set_ylim(0.0, 0.005) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title)#, y=1.05) plt.legend(loc='upper right') plt.savefig('osg_exp1_cu_spread_pdf.pdf', dpi=600, bbox_inches='tight') metric = 'T_x Experiment 2' description = 'Histogram of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs_exp2) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp2['mu'], Tx_measures_exp2['std'], Tx_measures_exp2['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) ax = fig_setup() n, bins, p = ax.hist(Txs_exp2, bins='fd', normed=True, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[2], color=tableau20[3]) pdf_exp2 = mlab.normpdf(sample_pdf_exp2, f_mu_exp2, f_sigma_exp2) print min(pdf_exp2) print max(pdf_exp2) ax.plot(sample_pdf_exp2, pdf_exp2, label="$\phi$", color=tableau20[6]) # ax.fill_between(bins, # sample_pdf, # color=tableau20[1], # alpha=0.25) # ax.xaxis.set_major_locator(ticker.MultipleLocator(50)) ax.set_xlim(min(sample_pdf_exp2), max(sample_pdf_exp2)) ax.set_ylim(0.0, 0.005) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title)#, y=1.05) plt.legend(loc='upper right') plt.savefig('osg_exp2_cu_spread_pdf.pdf', dpi=600, bbox_inches='tight') # Values for analytical pdf sample_pdf = np.random.normal(loc=f_mu, scale=f_sigma, size=len(Txs)) metric = 'T_x' description = 'Cumulative distribution of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures['mu'], Tx_measures['std'], Tx_measures['sem']) title = '%s.\n%s; %s;\n%s.' 
% (description, repetition, resource, stats) # fig = plt.figure() # ax = fig.add_subplot(111) # ax.spines["top"].set_visible(False) # ax.spines["right"].set_visible(False) # ax.get_xaxis().tick_bottom() # ax.get_yaxis().tick_left() ax = fig_setup() n, bins, p = ax.hist(Txs, bins='fd', normed=True, cumulative=True, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[8], color=tableau20[9], alpha=0.75) ax.hist(sample_pdf, bins='fd', normed=True, cumulative=True, histtype='stepfilled', label="$cmd$", edgecolor=tableau20[6], color=tableau20[7], alpha=0.25) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title, y=1.05) plt.legend(loc='upper left') plt.savefig('osg_cu_cumulative_hist.pdf', dpi=600, bbox_inches='tight') # Values for analytical pdf sample_pdf_exp1 = np.random.normal(loc=f_mu_exp1, scale=f_sigma_exp1, size=len(Txs_exp1)) metric = 'T_x Experiment 1' description = 'Cumulative distribution of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp1['mu'], Tx_measures_exp1['std'], Tx_measures_exp1['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) # fig = plt.figure() # ax = fig.add_subplot(111) # ax.spines["top"].set_visible(False) # ax.spines["right"].set_visible(False) # ax.get_xaxis().tick_bottom() # ax.get_yaxis().tick_left() ax = fig_setup() n, bins, p = ax.hist(Txs_exp1, bins='fd', normed=True, cumulative=True, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[0], color=tableau20[1], alpha=0.75) ax.hist(sample_pdf_exp1, bins='fd', normed=True, cumulative=True, histtype='stepfilled', label="$cmd$", edgecolor=tableau20[6], color=tableau20[7], alpha=0.25) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title, y=1.05) plt.legend(loc='upper left') plt.savefig('osg_exp1_cu_cumulative_hist.pdf', dpi=600, bbox_inches='tight') # Values for analytical pdf sample_pdf_exp2 = np.random.normal(loc=f_mu_exp2, scale=f_sigma_exp2, size=len(Txs_exp2)) metric = 'T_x Experiment 1' description = 'Cumulative distribution of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp2['mu'], Tx_measures_exp2['std'], Tx_measures_exp2['sem']) title = '%s.\n%s; %s;\n%s.' 
% (description, repetition, resource, stats) ax = fig_setup() n, bins, p = ax.hist(Txs_exp2, bins='fd', normed=True, cumulative=True, histtype='stepfilled', label="$T_x$", linewidth=0.75, edgecolor=tableau20[2], color=tableau20[3], alpha=0.75) ax.hist(sample_pdf_exp2, bins='fd', normed=True, cumulative=True, histtype='stepfilled', label="$cmd$", edgecolor=tableau20[6], color=tableau20[7], alpha=0.25) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title, y=1.05) plt.legend(loc='upper left') plt.savefig('osg_exp2_cu_cumulative_hist.pdf', dpi=600, bbox_inches='tight') Txs_np = np.array(Txs) # Cumulative samples Txs_sum = np.cumsum(np.ones(Txs_np.shape))/len(Txs) # Values for analytical cdf sample_cdf = np.linspace(0,max(Txs), len(Txs)) metric = 'T_x' description = 'Cumulative distribution of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures['mu'], Tx_measures['std'], Tx_measures['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) # fig = plt.figure() # ax = fig.add_subplot(111) # ax.spines["top"].set_visible(False) # ax.spines["right"].set_visible(False) # ax.get_xaxis().tick_bottom() # ax.get_yaxis().tick_left() ax = fig_setup() ax.plot(sample_cdf, sps.norm.cdf(sample_cdf, f_mu, f_sigma), label="cdf", color=tableau20[6]) ax.step(Txs, Txs_sum, label="$T_x$", where='post', color=tableau20[8]) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title, y=1.05) plt.legend(loc='upper left') plt.savefig('osg_cu_cumulative_plot.pdf', dpi=600, bbox_inches='tight') Txs_np_exp1 = np.array(Txs_exp1) # Cumulative samples Txs_sum_exp1 = np.cumsum(np.ones(Txs_np_exp1.shape))/len(Txs_exp1) # Values for analytical cdf sample_cdf_exp1 = np.linspace(0,max(Txs_exp1), len(Txs_exp1)) metric = 'T_x Experiment 1' description = 'Cumulative distribution of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs_exp1) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp1['mu'], Tx_measures_exp1['std'], Tx_measures_exp1['sem']) title = '%s.\n%s; %s;\n%s.' % (description, repetition, resource, stats) # fig = plt.figure() # ax = fig.add_subplot(111) # ax.spines["top"].set_visible(False) # ax.spines["right"].set_visible(False) # ax.get_xaxis().tick_bottom() # ax.get_yaxis().tick_left() ax = fig_setup() ax.plot(sample_cdf_exp1, sps.norm.cdf(sample_cdf_exp1, f_mu_exp1, f_sigma_exp1), label="cdf", color=tableau20[6]) ax.step(Txs_exp1, Txs_sum_exp1, label="$T_x$", where='post', color=tableau20[0]) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title, y=1.05) plt.legend(loc='upper left') plt.savefig('osg_exp1_cu_cumulative_plot.pdf', dpi=600, bbox_inches='tight') Txs_np_exp2 = np.array(Txs_exp2) # Cumulative samples Txs_sum_exp2 = np.cumsum(np.ones(Txs_np_exp2.shape))/len(Txs_exp2) # Values for analytical cdf sample_cdf_exp2 = np.linspace(0,max(Txs_exp2), len(Txs_exp2)) metric = 'T_x Experiment 2' description = 'Cumulative distribution of $%s$ compared to its fitted normal distribution' % metric task = 'Gromacs emulation' repetition = '$%s$ repetitions' % len(Txs_exp2) resource = 'XSEDE OSG Virtual Cluster' stats = '$\mu$=%.3f,\ \sigma=%.3f,\ SE_\mu=%.3f$' % (Tx_measures_exp2['mu'], Tx_measures_exp2['std'], Tx_measures_exp2['sem']) title = '%s.\n%s; %s;\n%s.' 
% (description, repetition, resource, stats) # fig = plt.figure() # ax = fig.add_subplot(111) # ax.spines["top"].set_visible(False) # ax.spines["right"].set_visible(False) # ax.get_xaxis().tick_bottom() # ax.get_yaxis().tick_left() ax = fig_setup() ax.plot(sample_cdf_exp2, sps.norm.cdf(sample_cdf_exp2, f_mu_exp2, f_sigma_exp2), label="cdf", color=tableau20[6]) ax.step(Txs_exp2, Txs_sum_exp2, label="$T_x$", where='post', color=tableau20[2]) plt.ylabel('$P(Tx)$') plt.xlabel('$T_x$ (s)') plt.title(title, y=1.05) plt.legend(loc='upper left') plt.savefig('osg_exp2_cu_cumulative_plot.pdf', dpi=600, bbox_inches='tight') """ Explanation: Skewness and Kurtosis End of explanation """
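# Complementary normality check (a sketch, not part of the original notebook):
# compare the Tx sample against the normal distribution fitted above with a
# Kolmogorov-Smirnov test. Assumes Txs, f_mu and f_sigma are still in scope.
ks_stat, ks_pval = sps.kstest(Txs, 'norm', args=(f_mu, f_sigma))
print 'KS statistic: %.4f, p-value: %.4f' % (ks_stat, ks_pval)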
tensorflow/docs-l10n
site/ja/tutorials/estimator/boosted_trees.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ import numpy as np import pandas as pd from IPython.display import clear_output from matplotlib import pyplot as plt # Load dataset. dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv') dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv') y_train = dftrain.pop('survived') y_eval = dfeval.pop('survived') import tensorflow as tf tf.random.set_seed(123) """ Explanation: Estimators を使用するブースティング木 <table class="tfo-notebook-buttons" align="left"> <td> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/boosted_trees.ipynb">TensorFlow.org で表示</a> </td> <td> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/boosted_trees.ipynb">Google Colab で実行</a> </td> <td> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/estimator/boosted_trees.ipynb">GitHubでソースを表示</a> </td> <td> <img src="https://www.tensorflow.org/images/download_logo_32px.png"><a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/boosted_trees.ipynb">ノートブックをダウンロード</a> </td> </table> 警告: 新しいコードには Estimators は推奨されません。Estimators は v1.Session スタイルのコードを実行しますが、これは正しく記述するのはより難しく、特に TF 2 コードと組み合わせると予期しない動作をする可能性があります。Estimators は、[互換性保証] (https://tensorflow.org/guide/versions) の対象となりますが、セキュリティの脆弱性以外の修正は行われません。詳細については、移行ガイドを参照してください。 注意: 多くの最先端の決定フォレストアルゴリズムの最新の Keras ベースの実装は、TensorFlow 決定フォレストから利用できます。 このチュートリアルは、tf.estimatorAPI で決定木を使用する勾配ブースティングモデルのエンドツーエンドのウォークスルーです。ブースティング木モデルは、回帰と分類の両方のための最も一般的かつ効果的な機械学習アプローチの 1 つです。これは、複数(10 以上、100 以上、あるいは 1000 以上の場合も考えられます)の木モデルからの予測値を結合するアンサンブル手法です。 最小限のハイパーパラメータ調整で優れたパフォーマンスを実現できるため、ブースティング木モデルは多くの機械学習実践者に人気があります。 Titanic データセットを読み込む Titanic データセットを使用します。ここでの目標は、性別、年齢、クラスなど与えられた特徴から(やや悪趣味ではありますが)乗船者の生存を予測することです。 End of explanation """ dftrain.head() dftrain.describe() """ Explanation: データセットはトレーニングセットと評価セットで構成されています。 dftrainとy_trainは トレーニングセットです — モデルが学習に使用するデータです。 モデルは評価セット、dfeval、y_evalに対してテストされます。 トレーニングには以下の特徴を使用します。 <table> <tr> <th>特徴名</th> <th>説明</th> </tr> <tr> <td>sex</td> <td>乗船者の性別</td> </tr> <tr> <td>age</td> <td>乗船者の年齢</td> </tr> <tr> <td>n_siblings_spouses</td> <td>同乗する兄弟姉妹および配偶者</td> </tr> <tr> <td>parch</td> <td>同乗する両親および子供</td> </tr> <tr> <td>fare</td> <td>運賃</td> </tr> <tr> <td>class</td> <td>船室のクラス</td> </tr> <tr> <td>deck</td> <td>搭乗デッキ</td> </tr> <tr> <td>embark_town</td> <td>乗船者の乗船地</td> </tr> <tr> <td>alone</td> <td>一人旅か否か</td> </tr> </table> データを検証する まず最初に、データの一部をプレビューして、トレーニングセットの要約統計を作成します。 End of explanation """ dftrain.shape[0], dfeval.shape[0] """ Explanation: トレーニングセットと評価セットには、それぞれ 627 個と 264 
個の例があります。 End of explanation """ dftrain.age.hist(bins=20) plt.show() """ Explanation: 乗船者の大半は 20 代から 30 代です。 End of explanation """ dftrain.sex.value_counts().plot(kind='barh') plt.show() """ Explanation: 男性の乗船者数は女性の乗船者数の約 2 倍です。 End of explanation """ dftrain['class'].value_counts().plot(kind='barh') plt.show() """ Explanation: 乗船者の大半は「3 等」の船室クラスを利用していました。 End of explanation """ dftrain['embark_town'].value_counts().plot(kind='barh') plt.show() """ Explanation: 大半の乗船者はサウサンプトンから乗船しています。 End of explanation """ pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive') plt.show() """ Explanation: 女性は男性よりも生存する確率がはるかに高く、これは明らかにモデルの予測特徴です。 End of explanation """ CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck', 'embark_town', 'alone'] NUMERIC_COLUMNS = ['age', 'fare'] def one_hot_cat_column(feature_name, vocab): return tf.feature_column.indicator_column( tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocab)) feature_columns = [] for feature_name in CATEGORICAL_COLUMNS: # Need to one-hot encode categorical features. vocabulary = dftrain[feature_name].unique() feature_columns.append(one_hot_cat_column(feature_name, vocabulary)) for feature_name in NUMERIC_COLUMNS: feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32)) """ Explanation: 特徴量カラムを作成して関数を入力する 勾配ブースティング Estimator は数値特徴とカテゴリ特徴の両方を利用します。特徴量カラムは、全ての TensorFlow Estimator と機能し、その目的はモデリングに使用される特徴を定義することにあります。さらに、One-Hot エンコーディング、正規化、バケット化などいくつかの特徴量エンジニアリング機能を提供します。このチュートリアルでは、CATEGORICAL_COLUMNSのフィールドはカテゴリカラムから One-Hot エンコーディングされたカラム(インジケータカラム)に変換されます。 End of explanation """ example = dict(dftrain.head(1)) class_fc = tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list('class', ('First', 'Second', 'Third'))) print('Feature value: "{}"'.format(example['class'].iloc[0])) print('One-hot encoded: ', tf.keras.layers.DenseFeatures([class_fc])(example).numpy()) """ Explanation: 特徴量カラムが生成する変換は表示することができます。例えば、indicator_columnを単一の例で使用した場合の出力は次のようになります。 End of explanation """ tf.keras.layers.DenseFeatures(feature_columns)(example).numpy() """ Explanation: さらに、特徴量カラムの変換を全てまとめて表示することができます。 End of explanation """ # Use entire batch since this is such a small dataset. NUM_EXAMPLES = len(y_train) def make_input_fn(X, y, n_epochs=None, shuffle=True): def input_fn(): dataset = tf.data.Dataset.from_tensor_slices((dict(X), y)) if shuffle: dataset = dataset.shuffle(NUM_EXAMPLES) # For training, cycle thru dataset as many times as need (n_epochs=None). dataset = dataset.repeat(n_epochs) # In memory training doesn't use batching. dataset = dataset.batch(NUM_EXAMPLES) return dataset return input_fn # Training and evaluation input functions. train_input_fn = make_input_fn(dftrain, y_train) eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1) """ Explanation: 次に、入力関数を作成する必要があります。これらはトレーニングと推論の両方のためにデータをモデルに読み込む方法を指定します。 tf.data API のfrom_tensor_slicesメソッドを使用して Pandas から直接データを読み取ります。これは小規模でインメモリのデータセットに適しています。大規模のデータセットの場合は、多様なファイル形式(csvを含む)をサポートする tf.data API を使用すると、メモリに収まりきれないデータセットも処理することができます。 End of explanation """ linear_est = tf.estimator.LinearClassifier(feature_columns) # Train model. linear_est.train(train_input_fn, max_steps=100) # Evaluation. 
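# (evaluate() returns a dict of metrics such as accuracy, auc and loss;
#  it is printed as a pandas Series below for readability.)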
result = linear_est.evaluate(eval_input_fn) clear_output() print(pd.Series(result)) """ Explanation: モデルをトレーニングして評価する 以下のステップで行います。 特徴とハイパーパラメータを指定してモデルを初期化する。 train_input_fnを使用してモデルにトレーニングデータを与え、train関数を使用してモデルをトレーニングする。 評価セット(この例ではdfeval DataFrame)を使用してモデルのパフォーマンスを評価する。予測値がy_eval配列のラベルと一致することを確認する。 ブースティング木モデルをトレーニングする前に、まず線形分類器(ロジスティック回帰モデル)をトレーニングしてみましょう。ベンチマークを確立するには、より単純なモデルから始めるのがベストプラクティスです。 End of explanation """ # Since data fits into memory, use entire dataset per layer. It will be faster. # Above one batch is defined as the entire dataset. n_batches = 1 est = tf.estimator.BoostedTreesClassifier(feature_columns, n_batches_per_layer=n_batches) # The model will stop training once the specified number of trees is built, not # based on the number of steps. est.train(train_input_fn, max_steps=100) # Eval. result = est.evaluate(eval_input_fn) clear_output() print(pd.Series(result)) """ Explanation: 次に、ブースティング木モデルをトレーニングしてみましょう。ブースティング木では、回帰(BoostedTreesRegressor)と分類(BoostedTreesClassifier)をサポートします。目標は、生存か非生存かのクラスを予測することなので、BoostedTreesClassifierを使用します。 End of explanation """ pred_dicts = list(est.predict(eval_input_fn)) probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts]) probs.plot(kind='hist', bins=20, title='predicted probabilities') plt.show() """ Explanation: このトレーニングモデルを使用して、評価セットからある乗船者に予測を立てることができます。TensorFlow モデルは、バッチ、コレクション、または例に対してまとめて予測を立てられるように最適化されています。以前は、eval_input_fn は評価セット全体を使って定義されていました。 End of explanation """ from sklearn.metrics import roc_curve fpr, tpr, _ = roc_curve(y_eval, probs) plt.plot(fpr, tpr) plt.title('ROC curve') plt.xlabel('false positive rate') plt.ylabel('true positive rate') plt.xlim(0,) plt.ylim(0,) plt.show() """ Explanation: 最後に、結果の受信者操作特性(ROC)を見てみましょう。真陽性率と偽陽性率間のトレードオフに関し、より明確な予想を得ることができます。 End of explanation """
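# Optional follow-up (a sketch, not part of the original tutorial): summarize the
# ROC curve above with a single AUC score, reusing y_eval and probs from earlier.
from sklearn.metrics import roc_auc_score
print('AUC: {:.4f}'.format(roc_auc_score(y_eval, probs)))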
google/applied-machine-learning-intensive
content/00_prerequisites/01_intermediate_python/00-objects.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/00_prerequisites/01_intermediate_python/00-objects.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2019 Google LLC. End of explanation """ for data in ( 1, # integer 3.5, # float "Hello Python", # string (1, "funny", "tuple"), # tuple ["a", "list"], # list {"and": "a", "dict": 2} # dictionary ): print("Is {} an object? {}".format(type(data), isinstance(data, object))) """ Explanation: Intermediate Python - Objects At this point in your Python journey, you should be familiar with the following concepts and when to use them. different data types string number list tuple dictionary printing for and while loops if/else statements functions code commenting In this lab, we will move into the more advanced concept of objects. You may have heard of object-oriented programming, especially in other languages. If not, don't worry. This will be a gentle introduction that will give you the skills you need to know in order to build your own objects in Python. Objects Introduction It is likely that you have seen programs written in a procedural programming style. These programs consist of procedures (also called functions and methods) that operate on data passed to them. Imagine that you had a function compute_paycheck that computed the weekly paycheck for a worker. If you wanted to compute the paycheck of a given employee in a procedural style, you would pass the necessary data to compute the pay to the compute_paycheck function. employee_data = get_employee_data() pay = compute_paycheck(employee_data) Though you could write something like this in Python, it isn't necessarily idiomatic to the language. What this means is that Python tends to work better and look better when you use object-oriented programming. Python is an object-oriented language. This means that your program can be modeled as logical objects with methods built in to the object to operate on data. In an object-oriented programming style, you could encode each employee as its own object, and write a method called compute_paycheck which returns the weekly paycheck for a given employee. In that case, computing an employee's paycheck would look more like the following: employee_data = get_employee_data() pay = employee_data.compute_paycheck() In this case, compute_paycheck is a method that is bound to the returned argument employee_data, and can be called directly on this type. A method is just a function that is tied to an object. However, the terms "function" and "method" are often used interchangeably. See here for a more in-depth discussion. Using object-oriented programming does not mean that you can't pass data to functions/methods. Imagine that the employee data only contained information like hourly wage and tax holdouts. 
In this case, compute_paycheck would need to know the number of hours worked in order to calculate the employee's pay. employee_data = get_employee_data() hours_worked = get_hours_worked() pay = employee_data.compute_paycheck(hours_worked) In the example above, you can see the procedural and object-oriented styles mixed together in the same block. (The hours_worked variable is computed using the get_hours_worked function, and the employee_data variable is computed using the get_employee_data function.) However, even these variables could be computed in an object-oriented style. For example, hours_worked could come from an object representing the time clock, and employee_data could come from an object representing the HR system. employee_data = hr.get_employee_data() hours_worked = timeclock.get_hours_worked() employee_data.compute_pay(hours_worked) In Python, everything is an object. The code below uses the inbuilt isinstance function to check if each item is an instance of an object. End of explanation """ class Cow: pass """ Explanation: You can create your own object using the class keyword. End of explanation """ # Create an instance of Cow called elsie elsie = Cow() # Create an instance of Cow called annabelle annabelle = Cow() print(Cow) print(elsie) print(annabelle) """ Explanation: Why did we use the keyword class and not object? You can think of the class as a template for the object, and the object itself as an instance of the class. To create an object from a class, you use parentheses to instantiate the class. End of explanation """ class Cow: def talk(): print("Moo") """ Explanation: Notice that Cow is a class and that elsie and annabelle are Cow objects. The text following at indicates where in memory these objects are stored. You might have to look closely, but elsie and annabelle are located at different locations in memory. Adding methods to a class is easy. You simply create a function, but have it indented so that it is inside the class. End of explanation """ Cow.talk() """ Explanation: You can then call the method directly on the class. End of explanation """ class Cow: def talk(self): print("Moo") elsie = Cow() elsie.talk() """ Explanation: While you can call talk() on the Cow class, you can't actually call talk() on any instances of Cow, such as elsie and annabelle. In order to make Elsie and Annabelle talk, we need to pass the self keyword to the talk method. In general, all object functions should pass self as the first parameter. Let's modify the Cow class to make talk an object (also known as instance) function instead of a class function. End of explanation """ class Cow: def talk(self): print("Moo") def eat(self): print("Crunch") elsie = Cow() elsie.eat() elsie.talk() """ Explanation: Now talk can be called on objects of type Cow, but not on the Cow class itself. You can add as many methods as you want to a class. End of explanation """ class Cow: def __init__(self, name): self.__name = name def talk(self): print("{} says Moo".format(self.__name)) annie = Cow("Annabelle") annie.talk() elly = Cow("Elsie") elly.talk() """ Explanation: Initialization There are special functions that you can define in a class. These functions do things like initialize an object, convert an object to a string, determine the length of an object, and more. These special functions all start and end with double-underscores. The most common of these is __init__. __init__ initializes the class. Let's add an initializer to our Cow class. 
End of explanation """ class Animal: def talk(self): print("...") # The sound of silence def eat(self): print("crunch") class Cow(Animal): def talk(self): print("Moo") class Worm(Animal): pass cow = Cow() worm = Worm() cow.talk() cow.eat() worm.talk() worm.eat() """ Explanation: There are a few new concepts in the code above. __init__ You can see that __init__ is passed the object itself, commonly referred to as self. __init__ can also accept any number of other arguments. In this case, we want the name of the cow. We save that name in the object (represented by self), and also use it in the talk method. __name Notice that the instance variable __name has two underscores before it. This naming is a way to tell Python to hide the variable from the rest of the program, so that it is only accessible to other methods within the object. This data hiding provides encapsulation which is an important concept in object-oriented programming. Had __name been called name or _name (single-underscore), it would not be hidden, and could then be accessed on the object (eg. annie.name). There are many different double-underscore (dunder) methods. They are all documented in the official Python documentation. Inheritance Python objects are able to inherit functionality from other Python objects. Let's look at an example. End of explanation """ class Animal: def move(self): pass def eat(self): pass class Legless(Animal): def move(self): print("Wriggle wriggle") class Legged(Animal): def move(self): print("Trot trot trot") class Toothless(Animal): def eat(self): print("Slurp") class Toothed(Animal): def eat(self): print("Chomp") class Worm(Legless, Toothless): pass class Cow(Legged, Toothed): pass class Rock: pass def live(animal): if isinstance(animal, Animal): animal.move() animal.eat() w = Worm() c = Cow() r = Rock() print("The worm goes...") live(w) print("The cow goes...") live(c) print("The rock goes...") live(r) """ Explanation: In the code above, we create an Animal class that has a generic implementation of the talk and eat functions that we created earlier. We then create a Cow object that implements its own talk function but relies on the Animal's eat function. We also create a Worm class that fully relies on Animal to provide talk and eat functions. The reason this is so useful is that we can scaffold classes to inherit base features. For example, we might want different base classes Plant and Animal that represent generic plants and animals respectively. Then, we could create different plants such as Cactus and Sunflower inheriting from the Plant class, and different animals such as Cow and Worm. Python also supports multiple inheritance and many layers of inheritance. In the code below, move and eat are methods of the base class Animal, which are then inherited by different types of animals. End of explanation """ # Your code goes here """ Explanation: Exercises Exercise 1 In the code block below, create a Cow class that has an __init__ method that accepts a name and breed so that a cow can be created like: elsie = Cow("Elsie", "Jersey") Name the class variables name and breed. Make sure that if the name and breed of cow passed to the constructor are changed, the values stored in the instance variables reflect the different names. In other words, don't hard-code "Elsie" and "Jersey". 
Student Solution End of explanation """ # Your code goes here """ Explanation: Exercise 2 Take the Cow class that you implemented in exercise one, and add a double-underscore method so that if you create a cow using: cow = Cow("Elsie", "Shorthorn") Calling print(cow) prints: Elsie is a Shorthorn cow. Hint: you may want to look through the Python documentation on special method names to find the dunder method that dictates a string representation of the object. Student Solution End of explanation """ # Your code goes here """ Explanation: Exercise 3 Take the Cow class that you implemented in exercise two (or one), and add a double-underscore method so that print(repr(elsie)) prints: Cow("Elsie", "Jersey") Student Solution End of explanation """ # Your code goes here class Vehicle: def go(): pass class Car: def go(): print("Vroom!") # No changes below here! car = Car() if isinstance(car, Vehicle): car.go() """ Explanation: Exercise 4 Fix the Car class in the code inheritance below so that "Vroom!" is printed. Student Solution End of explanation """
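Exercises 2 and 3 above lean on the special method names __str__ and __repr__ without showing them in action. As a hedged illustration (deliberately not the Cow solutions, and using a made-up Dog class), this is roughly how Python picks those methods up, and how the double-underscore name hiding described earlier behaves:

class Dog:
    def __init__(self, name, breed):
        self.name = name
        self.breed = breed
        self.__secret = "buried bone"   # double underscore: name-mangled by Python

    def __str__(self):
        # Used by print() and str()
        return "{} is a {} dog.".format(self.name, self.breed)

    def __repr__(self):
        # Used by repr() and the interactive interpreter
        return 'Dog("{}", "{}")'.format(self.name, self.breed)

rex = Dog("Rex", "Beagle")
print(rex)           # Rex is a Beagle dog.
print(repr(rex))     # Dog("Rex", "Beagle")

# The double-underscore attribute is not reachable under its plain name from
# outside the class; Python stores it as _Dog__secret, which is why it acts "hidden".
try:
    rex.__secret
except AttributeError as e:
    print("AttributeError:", e)
print(rex._Dog__secret)  # still reachable if you really want it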
bioinformatica-corso/lezioni
laboratorio/lezione4-08ott21/esercizio2-soluzione.ipynb
cc0-1.0
input_file_name = './movies.csv' n_most_popular = 15 # Parametro N """ Explanation: Esercizio 2 Considerare il file movies.csv ottenuto estraendo i primi 1000 record del dataset scaricabile all'indirizzo https://www.kaggle.com/rounakbanik/the-movies-dataset#movies_metadata.csv. Tale dataset è in formato csv e contiene, in record di 24 campi separati da virgola, le informazioni su film. Il file movies.csv contiene solo un subset dei campi del dataset originale. I campi del file csv che occorrono per risolvere l'esercizio sono: id: indice progressivo genres: stringa che rappresenta il letterale di una lista di dizionari con chiavi id e name che forniscono ciascuno un genere [{'id': 16, 'name': 'Animation'}, {'id': 35, 'name': 'Comedy'}] original_title: titolo originale popularity: valore di popolarità tagline: tagline del film original_language: lingua originale production_countries: stringa che rappresenta il letterale di una lista di dizionari con chiavi iso_3166_1 e name che forniscono ciascuno un paese di origine [{'iso_3166_1': 'DE', 'name': 'Germany'}, {'iso_3166_1': 'US', 'name': 'United States of America'}] Si richiede di: - elencare i 10 paesi che hanno prodotto più film, ordinandoli per numero decrescente di film prodotti, specificando per ognuno il numero di film prodotti - fornire per ognuno dei generi cinematografici presenti nel dataset la classifica degli N (parametro in input) film più popolari (per quel genere) ordinandoli per popolarità decrescente e specificando per ognuno di essi il titolo originale e la tagline - l'insieme delle lingue originali che sono coinvolte nella classifica precedente Parametri di input: - dataset dei film - parametro N Requisiti generali: definire una funzione get_items() che prenda in input uno qualsiasi tra i due campi genrese production_countries (indifferentemente) ed estragga la lista dei generi nel caso si passi come argomento il valore di un campo genres la lista dei paesi di produzione nel caso si passi come argomento il valore di un campo production_countries Produrre l'output nelle seguenti variabili: lista di 10 tuple di due elementi (nome di paese, numero di film prodotti) contenenti i primi 10 paesi che hanno prodotto più film, ordinate per numero decrescente di film prodotti dizionario delle classifiche per genere dei primi N film ordinati per popolarità decrescente: chiave: genere di un film valore: lista di N liste di due elementi [titolo originale, tagline] con i primi N film ordinati per popolarità decrescente dizionario degli insiemi delle lingue coinvolte in ciascuna delle classifiche precedenti per genere: chiave: genere di un film valore: insieme delle lingue originali coinvolte Soluzione Parametri di input End of explanation """ import pandas as pd import ast import numpy as np """ Explanation: Importazione dei moduli pandas e ast e numpy. End of explanation """ def get_items(arg_string): return [d['name'] for d in ast.literal_eval(arg_string)] #get_items("[{'iso_3166_1': 'DE', 'name': 'Germany'}, {'iso_3166_1': 'US', 'name': 'United States of America'}]") """ Explanation: 1) Definizione della funzione get_items() La funzione get_items() prende in input un valore di campo genres|production_countries e restituisce la lista dei generi|paesi di produzione. 
End of explanation """ df = pd.read_csv('movies.csv') #df """ Explanation: 2) Lettura del file csv con Pandas End of explanation """ info_dict = {} for (index, record) in df.iterrows(): info_dict[index] = (record['original_title'], record['tagline'], record['original_language']) info_dict """ Explanation: 3) Costruzione delle tre strutture dati di base a) Dizionario delle informazioni sui film: - chiave: id dei film - valore: tupla (titolo originale, tagline, lingua originale) End of explanation """ country_list = [] for (index, record) in df.iterrows(): country_list.extend(get_items(record['production_countries'])) country_list """ Explanation: b) Lista dei paesi che hanno prodotto almeno un film (ciascun paese deve essere presente nella lista esattamente il numero di volte in cui ha prodotto un film). End of explanation """ pop_dict = {} for (index, record) in df.iterrows(): if np.isnan(record['popularity']) == False: for gen in get_items(record['genres']): value = pop_dict.get(gen, []) value.append([record['popularity'], index]) pop_dict[gen] = value pop_dict """ Explanation: c) Dizionario delle popolarità: - chiave: genere cinematografico - valore: lista dei film associati al genere (ognuno dei film deve essere rappresentato come lista annidata [popolarità, id]) NB: controllare che il campo popularity sia diverso da NaN (Not a Number). End of explanation """ from collections import Counter country_rank_list = Counter(country_list).most_common()[:10] country_rank_list """ Explanation: 4) Estrazione dei 10 paesi che hanno prodotto più film Costruire la lista (primo output) dei primi 10 paesi che hanno prodotto più film, ordinandoli per numero decrescente di film. Ogni paese deve essere rappresentato come tupla (nome del paese, numero di film prodotti). End of explanation """ tuple_list = [(genere, sorted(pop_dict[genere])[::-1][:n_most_popular]) for genere in pop_dict] pop_rank_dict = dict(tuple_list) pop_rank_dict """ Explanation: 5) Estrazione, per ogni genere, degli n_most_popular film più popolari ordinati per popolarità descrescente, ed estrazione delle lingue coinvolte per ciascuno dei generi a) Derivare dal dizionario delle popolarità il dizionario che ha la stessa struttura di chiavi e valori, con la differenza che il valore relativo a una chiave (genere) è la lista degli n_most_popular film più popolari ordinati per popolarità decrescente. NOTA BENE: i valori di questo dizionario sono le liste del dizionario delle popolarità ordinate per popolarità decrescente e troncate ai primi n_most_popular elementi. Costruire la lista delle tuple chiave-valore e in seguito costruire il dizionario passando alla funzione dict() tale lista. End of explanation """ tuple_list = [] for genere in pop_rank_dict: new_list = [] for film in pop_rank_dict[genere]: film_id = film[1] original_title = info_dict[film_id][0] tagline = info_dict[film_id][1] new_film = [original_title, tagline] new_list.append(new_film) tuple_list.append((genere, new_list)) pop_rank_dict_out = dict(tuple_list) pop_rank_dict_out """ Explanation: b) Derivare dal dizionario precedente il dizionario con la stessa struttura in cui le liste [popolarità, id] sono sostituite dalle liste [titolo originale, tagline] (secondo output). Costruire la lista delle tuple chiave-valore e in seguito costruire il dizionario passando alla funzione dict() tale lista. 
End of explanation """ tuple_list = [] for genere in pop_rank_dict: language_set = set() for film in pop_rank_dict[genere]: language_set.add(info_dict[film[1]][2]) tuple_list.append((genere, language_set)) language_set_dict = dict(tuple_list) language_set_dict """ Explanation: c) Estrarre dal dizionario del punto 5a il dizionario degli insiemi delle lingue originali coinvolte: - chiave: genere cinematografico - valore: insieme delle lingue originali coinvolte (oggetto di tipo set) Costruire la lista delle tuple chiave-valore e in seguito costruire il dizionario passando alla funzione dict() tale lista. End of explanation """
VictorQuintana91/Thesis
notebooks/test/002_pos_tagging-Copy1.ipynb
mit
import pandas as pd df0 = pd.read_csv("../../data/interim/001_normalised_keyed_reviews.csv", sep="\t", low_memory=False) df0.head() # For monitoring duration of pandas processes from tqdm import tqdm, tqdm_pandas # To avoid RuntimeError: Set changed size during iteration tqdm.monitor_interval = 0 # Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm` # (can use `tqdm_gui`, `tqdm_notebook`, optional kwargs, etc.) tqdm.pandas(desc="Progress:") # Now you can use `progress_apply` instead of `apply` # and `progress_map` instead of `map` # can also groupby: # df.groupby(0).progress_apply(lambda x: x**2) def convert_text_to_list(review): return review.replace("[","").replace("]","").replace("'","").split(",") # Convert "reviewText" field to back to list df0['reviewText'] = df0['reviewText'].astype(str) df0['reviewText'] = df0['reviewText'].progress_apply(lambda text: convert_text_to_list(text)); df0['reviewText'].head() df0['reviewText'][12] import nltk nltk.__version__ # Split negs def split_neg(review): new_review = [] for token in review: if '_' in token: split_words = token.split("_") new_review.append(split_words[0]) new_review.append(split_words[1]) else: new_review.append(token) return new_review df0["reviewText"] = df0["reviewText"].progress_apply(lambda review: split_neg(review)) df0["reviewText"].head() ### Remove Stop Words from nltk.corpus import stopwords stop_words = set(stopwords.words('english')) def remove_stopwords(review): return [token for token in review if not token in stop_words] df0["reviewText"] = df0["reviewText"].progress_apply(lambda review: remove_stopwords(review)) df0["reviewText"].head() """ Explanation: Pos-Tagging & Feature Extraction Following normalisation, we can now proceed to the process of pos-tagging and feature extraction. Let's start with pos-tagging. POS-tagging Part-of-speech tagging is one of the most important text analysis tasks used to classify words into their part-of-speech and label them according the tagset which is a collection of tags used for the pos tagging. Part-of-speech tagging also known as word classes or lexical categories. The nltk library provides its own pre-trained POS-tagger. Let's see how it is used. End of explanation """ from nltk.tag import StanfordPOSTagger from nltk import word_tokenize # import os # os.getcwd() # Add the jar and model via their path (instead of setting environment variables): jar = '../../models/stanford-postagger-full-2017-06-09/stanford-postagger.jar' model = '../../models/stanford-postagger-full-2017-06-09/models/english-left3words-distsim.tagger' pos_tagger = StanfordPOSTagger(model, jar, encoding='utf8') def pos_tag(review): if(len(review)>0): return pos_tagger.tag(review) # Example text = pos_tagger.tag(word_tokenize("What's the airspeed of an unladen swallow ?")) print(text) tagged_df = pd.DataFrame(df0['reviewText'].progress_apply(lambda review: pos_tag(review))) tagged_df.head() # tagged_df = pd.DataFrame(df0['reviewText'].progress_apply(lambda review: nltk.pos_tag(review))) # tagged_df.head() """ Explanation: <span style="color:red">Unfortunatelly, this tagger, though much better and accurate, takes a lot of time. 
In order to process the above data set it would need close to 3 days of running.</span> Follow this link for more info on the tagger: https://nlp.stanford.edu/software/tagger.shtml#History End of explanation """ tagged_df['reviewText'][8] """ Explanation: Thankfully, nltk provides documentation for each tag, which can be queried using the tag, e.g., nltk.help.upenn_tagset(‘RB’), or a regular expression. nltk also provides batch pos-tagging method for document pos-tagging: End of explanation """ ## Join with Original Key and Persist Locally to avoid RE-processing uniqueKey_series_df = df0[['uniqueKey']] uniqueKey_series_df.head() pos_tagged_keyed_reviews = pd.concat([uniqueKey_series_df, tagged_df], axis=1); pos_tagged_keyed_reviews.head() pos_tagged_keyed_reviews.to_csv("../data/interim/002_pos_tagged_keyed_reviews.csv", sep='\t', header=True, index=False); """ Explanation: The list of all possible tags appears below: | Tag | Description | |------|------------------------------------------| | CC | Coordinating conjunction | | CD | Cardinal number | | DT | Determiner | | EX | ExistentialĘthere | | FW | Foreign word | | IN | Preposition or subordinating conjunction | | JJ | Adjective | | JJR | Adjective, comparative | | JJS | Adjective, superlative | | LS | List item marker | | MD | Modal | | NN | Noun, singular or mass | | NNS | Noun, plural | | NNP | Proper noun, singular | | NNPS | Proper noun, plural | | PDT | Predeterminer | | POS | Possessive ending | | PRP | Personal pronoun | | PRP | Possessive pronoun | | RB | Adverb | | RBR | Adverb, comparative | | RBS | Adverb, superlative | | RP | Particle | | SYM | Symbol | | TO | to | | UH | Interjection | | VB | Verb, base form | | VBD | Verb, past tense | | VBG | Verb, gerund or present participle | | VBN | Verb, past participle | | VBP | Verb, non-3rd person singular present | | VBZ | Verb, 3rd person singular present | | WDT | Wh-determiner | | WP | Wh-pronoun | | WP | Possessive wh-pronoun | | WRB | Wh-adverb | Notice: where you see * replace with $. End of explanation """ def noun_collector(word_tag_list): if(len(word_tag_list)>0): return [word for (word, tag) in word_tag_list if tag in {'NN', 'NNS', 'NNP', 'NNPS'}] nouns_df = pd.DataFrame(tagged_df['reviewText'].progress_apply(lambda review: noun_collector(review))) nouns_df.head() keyed_nouns_df = pd.concat([uniqueKey_series_df, nouns_df], axis=1); keyed_nouns_df.head() keyed_nouns_df.to_csv("../../data/interim/002_keyed_nouns_stanford.csv", sep='\t', header=True, index=False); ## END_OF_FILE """ Explanation: Nouns Nouns generally refer to people, places, things, or concepts, e.g.: woman, Scotland, book, intelligence. Nouns can appear after determiners and adjectives, and can be the subject or object of the verb. The simplified noun tags are N for common nouns like book, and NP for proper nouns like Scotland. End of explanation """
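Since the Stanford tagger is described above as far too slow for the full dataset, here is a hedged sketch of the pure-nltk route that the commented-out nltk.pos_tag line hints at; the nltk resource names are the standard ones and may need a one-time download:

import nltk

# One-time downloads (uncomment if the resources are missing):
# nltk.download('punkt')
# nltk.download('averaged_perceptron_tagger')
# nltk.download('tagsets')

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog")
tagged = nltk.pos_tag(tokens)
print(tagged)

# Query the documentation of a tag, as mentioned above:
nltk.help.upenn_tagset('RB')

# The same noun filter works on this output, since the tag set is shared:
nouns = [word for word, tag in tagged if tag in {'NN', 'NNS', 'NNP', 'NNPS'}]
print(nouns)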
stevetjoa/stanford-mir
sheet_music_representations.ipynb
mit
ipd.SVG("https://upload.wikimedia.org/wikipedia/commons/2/27/MozartExcerptK331.svg") ipd.YouTubeVideo('dP9KWQ8hAYk') """ Explanation: &larr; Back to Index Sheet Music Representations Music can be represented in many different ways. The printed, visual form of a musical work is called a score or sheet music. For example, here is a sheet music excerpt from Mozart Piano Sonata No. 11 K. 331: End of explanation """ ipd.Image("https://upload.wikimedia.org/wikipedia/commons/a/a5/Perfect_octave_on_C.png") """ Explanation: Sheet music consists of notes. A note has several properties including pitch, timbre, loudness, and duration. Pitch (Wikipedia is a perceptual property that indicates how "high" or "low" a note sounds. Pitch is closely related to the fundamental frequency sounded by the note, although fundamental frequency is a physical property of the sound wave. An octave (Wikipedia) is an interval between two notes where the higher note is twice the fundamental frequency of the lower note. For example, an A at 440 Hz and an A at 880 Hz are separated by one octave. Here are two Cs separated by one octave: End of explanation """ ipd.Image("https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Pitch_class_on_C.png/187px-Pitch_class_on_C.png") """ Explanation: A pitch class (Wikipedia) is the set of all notes that are an integer number of octaves apart. For example, the set of all Cs, {..., C1, C2, ...} is one pitch class, and the set of all Ds, {..., D1, D2, ...} is another pitch class. Here is the pitch class for C: End of explanation """
sbu-python-summer/python-tutorial
day-2/python-day2-exercises1.ipynb
bsd-3-clause
def four_letter_words(message): words = message.split() four_letters = [w for w in words if len(w) == 4] return four_letters message = "The quick brown fox jumps over the lazy dog" print(four_letter_words(message)) """ Explanation: Q 1 (function practice) Let's practice functions. Here's a simple function that takes a string and returns a list of all the 4 letter words: End of explanation """ a = "2.0" b = float(a) print(b, type(b)) """ Explanation: Write a version of this function that takes a second argument, n, that is the word length we want to search for Q 2 (primes) A prime number is divisible only by 1 and itself. We want to write a function that takes a positive integer, n, and finds all of the primes up to that number. A simple (although not very fast) way to find the primes is to start at 1, and build a list of primes by checking if the current number is divisible by any of the previously found primes. If it is not divisible by any earlier primes, then it is a prime. The modulus operator, % could be helpful here. Q 3 (exceptions for error handling) We want to safely convert a string into a float, int, or leave it as a string, depending on its contents. As we've already seen, python provides float() and int() functions for this: End of explanation """ a = "this is a string" b = float(a) a = "1.2345" b = int(a) print(b, type(b)) b = float(a) print(b, type(b)) """ Explanation: But these throw exceptions if the conversion is not possible End of explanation """ board = """ {s1:^3} | {s2:^3} | {s3:^3} -----+-----+----- {s4:^3} | {s5:^3} | {s6:^3} -----+-----+----- 123 {s7:^3} | {s8:^3} | {s9:^3} 456 789 """ """ Explanation: Notice that an int can be converted to a float, but if you convert a float to an int, you rise losing significant digits. A string cannot be converted to either. your task Write a function, convert_type(a) that takes a string a, and converts it to a float if it is a number with a decimal point, an int if it is an integer, or leaves it as a string otherwise, and returns the result. You'll want to use exceptions to prevent the code from aborting. Q 4 (tic-tac-toe) Here we'll write a simple tic-tac-toe game that 2 players can play. First we'll create a string that represents our game board: End of explanation """ print(board) """ Explanation: This board will look a little funny if we just print it&mdash;the spacing is set to look right when we replace the {} with x or o End of explanation """ play = {} def initialize_board(play): for n in range(9): play["s{}".format(n+1)] = "" initialize_board(play) play """ Explanation: and well use a dictionary to denote the status of each square, "x", "o", or empty, "" End of explanation """ a = "{s1:} {s2:}".format(s2=1, s1=2) a """ Explanation: Note that our {} placeholders in the board string have identifiers (the numbers in the {}). We can use these to match the variables we want to print to the placeholder in the string, regardless of the order in the format() End of explanation """ def show_board(play): """ display the playing board. We take a dictionary with the current state of the board We rely on the board string to be a global variable""" print(board.format(**play)) show_board(play) """ Explanation: Here's an easy way to add the values of our dictionary to the appropriate squares in our game board. First note that each of the {} is labeled with a number that matches the keys in our dictionary. 
Python provides a way to unpack a dictionary into labeled arguments, using ** This lets us to write a function to show the tic-tac-toe board. End of explanation """ def get_move(n, xo, play): """ ask the current player, n, to make a move -- make sure the square was not already played. xo is a string of the character (x or o) we will place in the desired square """ valid_move = False while not valid_move: idx = input("player {}, enter your move (1-9)".format(n)) if play["s{}".format(idx)] == "": valid_move = True else: print("invalid: {}".format(play["s{}".format(idx)])) play["s{}".format(idx)] = xo help(get_move) """ Explanation: Now we need a function that asks a player for a move: End of explanation """ def play_game(): """ play a game of tic-tac-toe """ """ Explanation: your task Using the functions defined above, * initialize_board() * show_board() * get_move() fill in the function play_game() below to complete the game, asking for the moves one at a time, alternating between player 1 and 2 End of explanation """
TomAugspurger/PracticalPandas
Practical Pandas 02 - More Cleaning, More Data, and Merging.ipynb
mit
%matplotlib inline import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df = pd.read_hdf('data/cycle_store.h5', key='merged') df.head() """ Explanation: This is Part 2 in the Practical Pandas Series, where I work through a data analysis problem from start to finish. It's a misconception that we can cleanly separate the data analysis pipeline into a linear sequence of steps: data acquisition, data tidying, exploratory analysis, model building, production. As you work through a problem you'll realize, "I need this other bit of data", or "this would be easier if I stored the data this way", or more commonly "strange, that's not supposed to happen". We'll follow up our last post by circling back to cleaning up our data set, and fetching some more data. Here's a reminder of where we were. End of explanation """ df = df.drop(['Ride Time', 'Stopped Time', 'Pace', 'Average Pace'], axis=1) def renamer(name): for char in ['(', ')']: name = name.replace(char, '') name = name.replace(' ', '_') name = name.lower() return name df = df.rename(columns=renamer) list(df.columns) """ Explanation: Because of a bug in pandas, we lost our timezone information when we filled in our missing values. Until that's fixed we'll have to manually add back the timezone info and convert. I like to keep my DataFrame columns as valid python identifiers. Let's define a helper function to rename the columns. We also have a few redundant columns that we can drop. End of explanation """ time_pal = sns.color_palette(n_colors=2) # Plot it in minutes fig, axes = plt.subplots(ncols=2, figsize=(13, 5)) # max to get the last observation per ride since we know these are increasing times = df.groupby('ride_id')[['stopped_time_secs', 'ride_time_secs']].max() times['ride_time_secs'].plot(kind='bar', ax=axes[0], color=time_pal[0]) axes[0].set_title("Ride Time") times['stopped_time_secs'].plot(kind='bar', ax=axes[1], color=time_pal[1]) axes[1].set_title("Stopped Time") """ Explanation: Do you trust the data? Remember that I needed to manually start and stop the timer each ride, which naturally means that I messed this up at least once. Let's see if we can figure out the rides where I messed things up. The first heuristic we'll use is checking to see if I moved at all. All of my rides should have taken roughly the same amount of time. Let's get an idea of how the distribution of ride times looks. We'll look at both the ride time and the time I spent stopped. End of explanation """
Let's make an adjusted time column time_adj that accounts for the stopped time. End of explanation """ # should be 0 if there are no repeats. len(df.time) - len(df.time.unique()) """ Explanation: When we start using the actual GPS data, we may need to do some smoothing. These are just readings from my iPhone, which probably aren't that accurate. Kalman filters, which I learned about in my econometrics class, are commonly used for this purpose. But I think that's good enough for now. Getting More Data I'm interested in explaining the variation in how long it took me to make the ride. I hypothesize that the weather may have had something to do with it. We'll fetch data from forecas.io using their API to get the weather conditions at the time of each ride. I looked at the forecast.io documentation, and noticed that the API will require a timezone. We could proceed in two ways Set df.time to be the index (a DatetimeIndex). Then localize with df.tz_localize Pass df.time through the DatetimeIndex constructor to set the timezone, and set that to be a column in df. Ideally we'd go with 1. Pandas has a lot of great additoinal functionality to offer when you have a DatetimeIndex (such as resample). However, this conflicts with the desire to have a unique index with this specific dataset. The times recorded are at the second frequency, but there are occasionally multiple readings in a second. End of explanation """ df['time'] = pd.DatetimeIndex(df.time, tz='US/Central') df.head() """ Explanation: So we'll go with #2, running the time column through the DatetimeIndex constructor, which has a tz (timezone) parameter, and placing that in a 'time' column. I'm in the US/Central timezone. End of explanation """ import json import requests with open('/Users/tom/Dropbox/bin/api-keys.txt') as f: key = json.load(f)['forecast.io'] url = "https://api.forecast.io/forecast/{key}/{Latitude},{Longitude},{Time}" vals = df.loc[0, ['latitude', 'longitude', 'time']].rename(lambda x: x.title()).to_dict() vals['Time'] = str(vals['Time']).replace(' ', 'T') vals['key'] = key r = requests.get(url.format(**vals)) resp = r.json() resp.keys() """ Explanation: There's nothing specific to pandas here, but knowing the basics of calling an API and parsing the response is still useful. We'll use requests to make the API call. You'll need to register for you own API key. I keep mine in a JSON file in my Dropbox bin folder. For this specific call we need to give the Latitude, Longitude, and Time that we want the weather for. We fill in those to a url with the format https://api.forecast.io/forecast/{key}/{Latitude},{Longitude},{Time}. End of explanation """ def get_weather(df, ride_id, key): """ Get the current weather conditions for for a ride at the time of departure. """ url = "https://api.forecast.io/forecast/{key}/{Latitude},{Longitude},{Time}" vals = df.query("ride_id == @ride_id").iloc[0][['latitude', 'longitude', 'time']].rename(lambda x: x.title()).to_dict() vals['key'] = key vals['Time'] = str(vals['Time']).replace(' ', 'T') r = requests.get(url.format(**vals)) resp = r.json()['currently'] return resp """ Explanation: Here's the plan. For each ride, we'll get the current conditions at the time, latitude, and longitude of departure. We'll use those values for the entirety of that ride. I'm a bit concerned about the variance of some quantities from the weather data (like the windspeed and bearing). This would be something to look into for a serious analysis. 
If the quantities are highly variable you would want to take a rolling average over more datapoints. forecast.io limits you to 1,000 API calls per day though (at the free tier), so we'll just stick with one request per ride. End of explanation """ get_weather(df, df.ride_id.unique()[0], key) """ Explanation: Let's test it out: End of explanation """ conditions = [get_weather(df, ride_id, key) for ride_id in df.ride_id.unique()] weather = pd.DataFrame(conditions) weather.head() """ Explanation: Now do that for each ride_id, and store the result in a DataFrame End of explanation """ weather['time'] = pd.DatetimeIndex(pd.to_datetime(weather.time, unit='s'), tz='UTC').\ tz_convert('US/Central') """ Explanation: Let's fixup the dtype on the time column. We need to convert from the seconds to a datetime. Then handle the timezone like before. This is returned in 'UTC', so we'll bring it back to my local time with .tz_convert. End of explanation """ with_weather = pd.merge(df, weather, on='time', how='outer') print(with_weather.time.dtype) with_weather[weather.columns] = with_weather[weather.columns].fillna(method='ffill') print(with_weather.time.dtype) with_weather.time.head() with_weather.time.head() """ Explanation: Now we can merge the two DataFrames weather and df. In this case it's quite simple since the share a single column, time. Pandas behaves exactly as you'd expect, merging on the provided column. We take the outer join since we only have weather information for the first obervation of each ride. We'll fill those values forward for the entirety of the ride. I don't just call with_weather.fillna() since the non-weather columns have NaNs that we may want to treat separately. End of explanation """ with_weather.to_hdf('data/cycle_store.h5', key='with_weather', append=False, method='table') weather.to_hdf('data/cycle_store.h5', key='weather', append=False, method='table') """ Explanation: With that done, let's write with_weather out to disk. We'll get a Performance Warning since some of the columns are text, which are relatively slow for HDF5, but it's not a problem worht worrying about for a dataset this small. If you needed you could encode the text ones as integers with pd.factorize, write the integers out the the HDF5 store, and store the mapping from integer to text description elsewhere. End of explanation """ sns.puppyplot() """ Explanation: A bit of Exploring We've done a lot of data wrangling with a notable lack of pretty pictures to look at. Let's fix that. End of explanation """ sns.set(style="white") cols = ['temperature', 'apparentTemperature', 'humidity', 'dewPoint', 'pressure'] # 'pressure', 'windBearing', 'windSpeed']].reset_index(drop=True)) g = sns.PairGrid(weather.reset_index()[cols]) g.map_diag(plt.hist) g.map_lower(sns.kdeplot, cmap="Blues_d") g.map_upper(plt.scatter) """ Explanation: For some other (less) pretty pictures, let's visualize some of the weather data we collected. End of explanation """ ax = plt.subplot(polar=True) ax.set_theta_zero_location('N') ax.set_theta_direction('clockwise') bins = np.arange(0, 361, 30) ax.hist(np.radians(weather.windBearing.dropna()), bins=np.radians(bins)) ax.set_title("Direction of Wind Origin") """ Explanation: Not bad! Seaborn makes exploring these relationships very easy. Let's also take a look at the wind data. I'm not a metorologist, but I saw a plot one time that's like a histogram for wind directions, but plotted on a polar axis (brings back memories of Calc II). 
Fortunately for us, matplotlib handles polar plots pretty easily, we just have to setup the axes and hand it the values as radians. End of explanation """ wind = weather[['windSpeed', 'windBearing']].dropna() ct = pd.cut(wind.windBearing, bins) speeds = wind.groupby(ct)['windSpeed'].mean() colors = plt.cm.BuGn(speeds.div(speeds.max())) """ Explanation: windBearing represent the direction the wind is coming from so the most common direction is from the S/SW. It may be clearer to flip that around to represent the wind direction; I'm not sure what's standard. If we were feeling ambitious, we could try to color the wedges by the windspeed. Let's give it a shot! We'll need to get the average wind speed in each of our bins from above. This is clearly a groupby, but what excatly is the grouper? This is where pandas Catagorical comes in handy. We'll pd.cut the wind direction, and group the wind data by that. End of explanation """ fig = plt.figure() ax = plt.subplot(polar=True) ax.set_theta_zero_location('N') ax.set_theta_direction('clockwise') bins = np.arange(0, 360, 30) ax.hist(np.radians(weather.windBearing.dropna()), bins=np.radians(bins)) for p, color in zip(ax.patches, colors): p.set_facecolor(color) ax.set_title("Direction of Wind Origin") """ Explanation: I map the speeds to colors with one of matplotlib's colormaps. It expects values in [0, 1], so we normalize the speeds by dividing by the maximum. hist doesn't take a cmap argument, and I couldn't get color to work, so we'll just plot it like before, and then modify the color of the patches after the fact. End of explanation """
tsivula/becs-114.1311
demos_ch2/demo2_4.ipynb
gpl-3.0
# Import necessary packages import numpy as np from scipy.stats import beta %matplotlib inline import matplotlib.pyplot as plt # add utilities directory to path import os, sys util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data')) if util_path not in sys.path and os.path.exists(util_path): sys.path.insert(0, util_path) # import from utilities import plot_tools # edit default plot settings plt.rc('font', size=12) """ Explanation: Bayesian Data Analysis, 3rd ed Chapter 2, demo 4 Authors: - Aki Vehtari &#97;&#107;&#105;&#46;&#118;&#101;&#104;&#116;&#97;&#114;&#105;&#64;&#97;&#97;&#108;&#116;&#111;&#46;&#102;&#105; - Tuomas Sivula &#116;&#117;&#111;&#109;&#97;&#115;&#46;&#115;&#105;&#118;&#117;&#108;&#97;&#64;&#97;&#97;&#108;&#116;&#111;&#46;&#102;&#105; Probability of a girl birth given placenta previa (BDA3 p. 37). Calculate the posterior distribution on a discrete grid of points by multiplying the likelihood and a non-conjugate prior at each point, and normalizing over the points. Simulate samples from the resulting non-standard posterior distribution using inverse cdf using the discrete grid. End of explanation """ # data (437,543) a = 437 b = 543 # grid of nx points nx = 1000 x = np.linspace(0, 1, nx) # compute density of non-conjugate prior in grid # this non-conjugate prior is same as in Figure 2.4 in the book pp = np.ones(nx) ascent = (0.385 <= x) & (x <= 0.485) descent = (0.485 <= x) & (x <= 0.585) pm = 11 pp[ascent] = np.linspace(1, pm, np.count_nonzero(ascent)) pp[descent] = np.linspace(pm, 1, np.count_nonzero(descent)) # normalize the prior pp /= np.sum(pp) # unnormalised non-conjugate posterior in grid po = beta.pdf(x, a, b)*pp po /= np.sum(po) # cumulative pc = np.cumsum(po) # inverse-cdf sampling # get n uniform random numbers from [0,1] n = 10000 r = np.random.rand(n) # map each r into corresponding grid point x: # [0, pc[0]) map into x[0] and [pc[i-1], pc[i]), i>0, map into x[i] rr = x[np.sum(pc[:,np.newaxis] < r, axis=0)] """ Explanation: Calculate results End of explanation """ # plot 3 subplots fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(6, 8)) # show only x-axis plot_tools.modify_axes.only_x(axes) # manually adjust spacing fig.subplots_adjust(hspace=0.5) # posterior with uniform prior Beta(1,1) axes[0].plot(x, beta.pdf(x, a+1, b+1)) axes[0].set_title('Posterior with uniform prior') # non-conjugate prior axes[1].plot(x, pp) axes[1].set_title('Non-conjugate prior') # posterior with non-conjugate prior axes[2].plot(x, po) axes[2].set_title('Posterior with non-conjugate prior') # cosmetics #for ax in axes: # ax.set_ylim((0, ax.get_ylim()[1])) # set custom x-limits axes[0].set_xlim((0.35, 0.65)); # plot samples # apply custom background plotting style plt.style.use(plot_tools.custom_styles['gray_background']) plt.figure(figsize=(8, 6)) # calculate histograms and scale them into the same figure hist_r = np.histogram(r, bins=30) hist_rr = np.histogram(rr, bins=30) plt.barh( hist_r[1][:-1], hist_r[0]*0.025/hist_r[0].max(), height=hist_r[1][1]-hist_r[1][0], left=0.35, align='edge', color=plot_tools.lighten('C1', 0.6), label='random uniform numbers' ) plt.bar( hist_rr[1][:-1], hist_rr[0]*0.5/hist_rr[0].max(), width=hist_rr[1][1]-hist_rr[1][0], bottom=-0.04, align='edge', color=plot_tools.lighten('C0'), label='posterior samples' ) # plot cumulative posterior plt.plot( x, pc, color='C0', label='cumulative posterior' ) # turn spines off # legend plt.legend( loc='center left', bbox_to_anchor=(1.0, 0.5), fontsize=12 ) # set limits 
plt.xlim((0.35, 0.55)) plt.ylim((-0.04, 1.04)); """ Explanation: Plot results End of explanation """
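As a supplementary note, not part of the original demo: the grid-based inverse-cdf step above can also be written with np.searchsorted, which makes the mapping of each uniform draw to a grid point a little more explicit; it assumes the x, pc, and r arrays computed above:

# Equivalent inverse-cdf sampling step using searchsorted (assumes x, pc, r
# from the cells above).  For r in [pc[i-1], pc[i]) this returns index i,
# matching the comparison-and-sum version used above.
idx = np.searchsorted(pc, r)
rr_alt = x[idx]

# Sanity check against the original mapping:
print(np.allclose(rr_alt, x[np.sum(pc[:, np.newaxis] < r, axis=0)]))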
tensorflow/docs-l10n
site/ja/probability/examples/Probabilistic_PCA.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ import functools import warnings import matplotlib.pyplot as plt import numpy as np import seaborn as sns import tensorflow.compat.v2 as tf import tensorflow_probability as tfp from tensorflow_probability import bijectors as tfb from tensorflow_probability import distributions as tfd tf.enable_v2_behavior() plt.style.use("ggplot") warnings.filterwarnings('ignore') """ Explanation: 確率的主成分分析(PCA) <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Probabilistic_PCA"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Probabilistic_PCA.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Probabilistic_PCA.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Probabilistic_PCA.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td> </table> 確率的主成分分析(PCA)は、低次元の潜在空間を介してデータを分析する次元削減手法です(Tipping and Bishop 1999)。データの値が欠落している場合や多次元スケーリングに多く使用されます。 インポート End of explanation """ def probabilistic_pca(data_dim, latent_dim, num_datapoints, stddv_datapoints): w = yield tfd.Normal(loc=tf.zeros([data_dim, latent_dim]), scale=2.0 * tf.ones([data_dim, latent_dim]), name="w") z = yield tfd.Normal(loc=tf.zeros([latent_dim, num_datapoints]), scale=tf.ones([latent_dim, num_datapoints]), name="z") x = yield tfd.Normal(loc=tf.matmul(w, z), scale=stddv_datapoints, name="x") num_datapoints = 5000 data_dim = 2 latent_dim = 1 stddv_datapoints = 0.5 concrete_ppca_model = functools.partial(probabilistic_pca, data_dim=data_dim, latent_dim=latent_dim, num_datapoints=num_datapoints, stddv_datapoints=stddv_datapoints) model = tfd.JointDistributionCoroutineAutoBatched(concrete_ppca_model) """ Explanation: モデル $N$ データポイントのデータセット $\mathbf{X} = {\mathbf{x}_n}$ を検討します。各データポイントは $D$-dimensional, $\mathbf{x}_n \in \mathbb{R}^D$ です。低次元 $K &lt; D$ で、潜在変数 $\mathbf{z}_n \in \mathbb{R}^K$ で各 $\mathbf{x}_n$ を表現したいと思います。主軸 $\mathbf{W}$ のセットは、潜在変数をデータに関連付けます。 具体的には、各潜在変数は正規に分布されていると仮定します。 $$ \begin{equation} \mathbf{z}_n \sim N(\mathbf{0}, \mathbf{I}). 
\end{equation} $$ 対応するデータポイントは、プロジェクションを介して生成されます。 $$ \begin{equation} \mathbf{x}_n \mid \mathbf{z}_n \sim N(\mathbf{W}\mathbf{z}_n, \sigma^2\mathbf{I}), \end{equation} $$ 上記の行列 $\mathbf{W}\in\mathbb{R}^{D\times K}$ は主軸として知られています。確率的 PCA では通常、主軸 $\mathbf{W}$ とノイズ項 $\sigma^2$ の推定に関心があります。 確率的 PCA は、古典的な PCA を一般化したものです。潜在変数を除外した場合、各データポイントの分布は、次のようになります。 $$ \begin{equation} \mathbf{x}_n \sim N(\mathbf{0}, \mathbf{W}\mathbf{W}^\top + \sigma^2\mathbf{I}). \end{equation} $$ 古典的な PCA は、ノイズの共分散が $\sigma^2 \to 0$ のように非常に小さくなる確率的 PCA 特有のケースです。 モデルを以下のようにセットアップしました。この分析では、$\sigma$ が既知であると想定しており、$\mathbf{W}$ をモデルパラメータとして想定しているポイントの代わりに、主軸に対する分布を推論するために事前分布をかぶせています。このモデルを TFP JointDistribution として表現し、具体的に、JointDistributionCoroutineAutoBatched を使用します。 End of explanation """ actual_w, actual_z, x_train = model.sample() print("Principal axes:") print(actual_w) """ Explanation: データ このモデルを使用し、同時事前分布からサンプリングしてデータを生成することができます。 End of explanation """ plt.scatter(x_train[0, :], x_train[1, :], color='blue', alpha=0.1) plt.axis([-20, 20, -20, 20]) plt.title("Data set") plt.show() """ Explanation: データセットを視覚化します。 End of explanation """ w = tf.Variable(tf.random.normal([data_dim, latent_dim])) z = tf.Variable(tf.random.normal([latent_dim, num_datapoints])) target_log_prob_fn = lambda w, z: model.log_prob((w, z, x_train)) losses = tfp.math.minimize( lambda: -target_log_prob_fn(w, z), optimizer=tf.optimizers.Adam(learning_rate=0.05), num_steps=200) plt.plot(losses) """ Explanation: 最大事後確率推定 まず、事後確率密度を最大化する潜在変数の点推定を探します。これは、最大事後確率(MAP)推定法として知られており、事後確率密度 $p(\mathbf{W}, \mathbf{Z} \mid \mathbf{X}) \propto p(\mathbf{W}, \mathbf{Z}, \mathbf{X})$ を最大化する $\mathbf{W}$ と $\mathbf{Z}$ の値を計算することで行われます。 End of explanation """ print("MAP-estimated axes:") print(w) _, _, x_generated = model.sample(value=(w, z, None)) plt.scatter(x_train[0, :], x_train[1, :], color='blue', alpha=0.1, label='Actual data') plt.scatter(x_generated[0, :], x_generated[1, :], color='red', alpha=0.1, label='Simulated data (MAP)') plt.legend() plt.axis([-20, 20, -20, 20]) plt.show() """ Explanation: モデルを使用して、$\mathbf{W}$ と $\mathbf{Z}$ の推定値を得るデータをサンプリングし、条件を設定した実際のデータセットと比較します。 End of explanation """ qw_mean = tf.Variable(tf.random.normal([data_dim, latent_dim])) qz_mean = tf.Variable(tf.random.normal([latent_dim, num_datapoints])) qw_stddv = tfp.util.TransformedVariable(1e-4 * tf.ones([data_dim, latent_dim]), bijector=tfb.Softplus()) qz_stddv = tfp.util.TransformedVariable( 1e-4 * tf.ones([latent_dim, num_datapoints]), bijector=tfb.Softplus()) def factored_normal_variational_model(): qw = yield tfd.Normal(loc=qw_mean, scale=qw_stddv, name="qw") qz = yield tfd.Normal(loc=qz_mean, scale=qz_stddv, name="qz") surrogate_posterior = tfd.JointDistributionCoroutineAutoBatched( factored_normal_variational_model) losses = tfp.vi.fit_surrogate_posterior( target_log_prob_fn, surrogate_posterior=surrogate_posterior, optimizer=tf.optimizers.Adam(learning_rate=0.05), num_steps=200) print("Inferred axes:") print(qw_mean) print("Standard Deviation:") print(qw_stddv) plt.plot(losses) plt.show() posterior_samples = surrogate_posterior.sample(50) _, _, x_generated = model.sample(value=(posterior_samples)) # It's a pain to plot all 5000 points for each of our 50 posterior samples, so # let's subsample to get the gist of the distribution. 
x_generated = tf.reshape(tf.transpose(x_generated, [1, 0, 2]), (2, -1))[:, ::47] plt.scatter(x_train[0, :], x_train[1, :], color='blue', alpha=0.1, label='Actual data') plt.scatter(x_generated[0, :], x_generated[1, :], color='red', alpha=0.1, label='Simulated data (VI)') plt.legend() plt.axis([-20, 20, -20, 20]) plt.show() """ Explanation: 変分推論 MAP は、事後分布のモード(またはモードの 1 つ)を見つけるために使用できますが、それに関するインサイトは何も提供しません。次に、変分推論を使用してみましょう。事後分布 $p(\mathbf{W}, \mathbf{Z} \mid \mathbf{X})$ は $\boldsymbol{\lambda}$ でパラメーター化された変分分布 $q(\mathbf{W}, \mathbf{Z})$ を使用して概算されます。q と事後分布の KL 発散を最小化する変分パラメーター $\boldsymbol{\lambda}$ を見つけること($\mathrm{KL}(q(\mathbf{W}, \mathbf{Z}) \mid\mid p(\mathbf{W}, \mathbf{Z} \mid \mathbf{X}))$)または同様に、根拠の下限を最大化する変分パラメーター $\boldsymbol{\lambda}$ を見つけること($\mathbb{E}_{q(\mathbf{W},\mathbf{Z};\boldsymbol{\lambda})}\left[ \log p(\mathbf{W},\mathbf{Z},\mathbf{X}) - \log q(\mathbf{W},\mathbf{Z}; \boldsymbol{\lambda}) \right]$)を目標とします。 End of explanation """
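As a supplementary numerical check, not in the original notebook: the marginal covariance WW^T + sigma^2 I quoted above can be verified with plain NumPy, independently of the TFP objects; the example W and sigma below are arbitrary:

# Numerical check of the marginal covariance W W^T + sigma^2 I (independent of
# the TFP model above; W here is just an arbitrary example matrix).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 1))          # data_dim = 2, latent_dim = 1
sigma = 0.5
n = 200000

z = rng.normal(size=(1, n))
x = W @ z + sigma * rng.normal(size=(2, n))

print("empirical covariance:\n", np.cov(x))
print("W W^T + sigma^2 I:\n", W @ W.T + sigma**2 * np.eye(2))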
AllenDowney/ThinkStats2
homeworks/homework04.ipynb
gpl-3.0
%matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set(style='white') from utils import decorate from thinkstats2 import Pmf, Cdf import thinkstats2 import thinkplot """ Explanation: Homework 4 Regression Allen Downey MIT License End of explanation """ %time brfss = pd.read_hdf('brfss.hdf5', 'brfss') brfss.head() """ Explanation: Simple regression An important thing to remember about regression is that it is not symmetric; that is, the regression of A onto B is not the same as the regression of B onto A. To demonstrate, I'll load data from the BRFSS. End of explanation """ rows = brfss['_VEGESU1'] > 8 brfss.loc[rows, '_VEGESU1'] = 8 """ Explanation: A few people report many vegetable servings per day. To simplify the visualization, I'm going to replace values greater than 8 with 8. End of explanation """ from scipy.stats import linregress subset = brfss.dropna(subset=['INCOME2', '_VEGESU1']) xs = subset['INCOME2'] ys = subset['_VEGESU1'] res = linregress(xs, ys) res """ Explanation: We can use SciPy to compute servings of vegetables as a function of income class. End of explanation """ x_jitter = xs + np.random.normal(0, 0.15, len(xs)) plt.plot(x_jitter, ys, 'o', markersize=1, alpha=0.02) plt.xlabel('Income code') plt.ylabel('Vegetable servings per day') fx1 = np.array([xs.min(), xs.max()]) fy1 = res.intercept + res.slope * fx1 plt.plot(fx1, fy1, '-', color='C1'); """ Explanation: Increasing income class by 1 is associated with an increase of 0.07 vegetables per day. So if we hypothesize that people with higher incomes eat more vegetables, this result would not get us too excited. We can see what the regression looks like by plotting the line of best fit on top of the scatter plot. End of explanation """ xs = subset['_VEGESU1'] ys = subset['INCOME2'] res = linregress(xs, ys) res """ Explanation: Now let's do it the other way around, regressing income as a function of vegetable servings. End of explanation """ y_jitter = ys + np.random.normal(0, 0.3, len(xs)) plt.plot(xs, y_jitter, 'o', markersize=1, alpha=0.02) plt.ylabel('Income code') plt.xlabel('Vegetable servings per day') fx2 = np.array([xs.min(), xs.max()]) fy2 = res.intercept + res.slope * fx2 plt.plot(fx2, fy2, '-', color='C2'); """ Explanation: Again, we can plot the line of best fit on top of the scatter plot. End of explanation """ y_jitter = ys + np.random.normal(0, 0.3, len(xs)) plt.plot(xs, y_jitter, 'o', markersize=1, alpha=0.02) plt.ylabel('Income code') plt.xlabel('Vegetable servings per day') fx2 = np.array([xs.min(), xs.max()]) fy2 = res.intercept + res.slope * fx2 plt.plot(fx2, fy2, '-', color='C2') plt.plot(fy1, fx1, '-', color='C1'); """ Explanation: The slope looks more impressive now. Each additional serving corresponds to 0.24 income codes, and each income code is several thousand dollars. So a result that seemed unimpressive in one direction seems more intruiging in the other direction. But the primary point here is that regression is not symmetric. To see it more clearly, I'll plot both regression lines on top of the scatter plot. The green line is income as a function of vegetables; the orange line is vegetables as a function of income. 
End of explanation """ xs = subset['INCOME2'] ys = subset['_VEGESU1'] res = linregress(xs, ys) res x_jitter = xs + np.random.normal(0, 0.15, len(xs)) plt.plot(x_jitter, ys, 'o', markersize=1, alpha=0.02) plt.xlabel('Income code') plt.ylabel('Vegetable servings per day') fx1 = np.array([xs.min(), xs.max()]) fy1 = res.intercept + res.slope * fx1 plt.plot(fx1, fy1, '-', color='C1') plt.plot(fy2, fx1, '-', color='C2'); """ Explanation: And here's the same thing the other way around. End of explanation """ import statsmodels.formula.api as smf model = smf.ols('INCOME2 ~ _VEGESU1', data=brfss) model """ Explanation: StatsModels So far we have used scipy.linregress to run simple regression. Sadly, that function doesn't do multiple regression, so we have to switch to a new library, StatsModels. Here's the same example from the previous section, using StatsModels. End of explanation """ results = model.fit() results """ Explanation: The result is an OLS object, which we have to fit: End of explanation """ results.summary() """ Explanation: results contains a lot of information about the regression, which we can view using summary. End of explanation """ results.params """ Explanation: One of the parts we're interested in is params, which is a Pandas Series containing the estimated parameters. End of explanation """ results.rsquared """ Explanation: And rsquared contains the coefficient of determination, $R^2$, which is pretty small in this case. End of explanation """ np.sqrt(results.rsquared) columns = ['INCOME2', '_VEGESU1'] brfss[columns].corr() """ Explanation: We can confirm that $R^2 = \rho^2$: End of explanation """ # Solution goes here # Solution goes here """ Explanation: Exercise: Run this regression in the other direction and confirm that you get the same estimated slope we got from linregress. Also confirm that $R^2$ is the same in either direction (which we know because correlation is the same in either direction). End of explanation """ %time gss = pd.read_hdf('gss.hdf5', 'gss') gss.shape gss.head() gss.describe() """ Explanation: Multiple regression For experiments with multiple regression, let's load the GSS data again. End of explanation """ model = smf.ols('realinc ~ educ', data=gss) model results = model.fit() results.params """ Explanation: Let's explore the relationship between income and education, starting with simple regression: End of explanation """ model = smf.ols('realinc ~ educ + age', data=gss) results = model.fit() results.params """ Explanation: It looks like people with more education have higher incomes, about $3586 per additional year of education. Now that we are using StatsModels, it is easy to add explanatory variables. For example, we can add age to the model like this. End of explanation """ grouped = gss.groupby('age') grouped mean_income_by_age = grouped['realinc'].mean() plt.plot(mean_income_by_age, 'o', alpha=0.5) plt.xlabel('Age (years)') plt.ylabel('Income (1986 $)'); """ Explanation: It looks like the effect of age is small, and adding it to the model has only a small effect on the estimated parameter for education. But it's possible we are getting fooled by a nonlinear relationship. To see what the age effect looks like, I'll group by age and plot the mean income in each age group. End of explanation """ gss['age2'] = gss['age']**2 model = smf.ols('realinc ~ educ + age + age2', data=gss) results = model.fit() results.summary() """ Explanation: Yeah, that looks like a nonlinear effect. We can model it by adding a quadratic term to the model. 
End of explanation """ # Solution goes here """ Explanation: Now the coefficient associated with age is substantially larger. And the coefficient of the quadratic term is negative, which is consistent with the observation that the relationship has downward curvature. Exercise: To see what the relationship between income and education looks like, group the dataset by educ and plot mean income at each education level. End of explanation """ gss['educ2'] = gss['educ']**2 model = smf.ols('realinc ~ educ + educ2 + age + age2', data=gss) results = model.fit() results.summary() """ Explanation: Exercise: Maybe the relationship with education is nonlinear, too. Add a quadratic term for educ to the model and summarize the results. End of explanation """ df = pd.DataFrame() df['age'] = np.linspace(18, 85) df['age2'] = df['age']**2 """ Explanation: Making predictions The parameters of a non-linear model can be hard to interpret, but maybe we don't have to. Sometimes it is easier to judge a model by its predictions rather than its parameters. The results object provides a predict method that takes a DataFrame and uses the model to generate a prediction for each row. Here's how we can create the DataFrame: End of explanation """ plt.plot(mean_income_by_age, 'o', alpha=0.5) df['educ'] = 12 df['educ2'] = df['educ']**2 pred12 = results.predict(df) plt.plot(df['age'], pred12, label='High school') plt.xlabel('Age (years)') plt.ylabel('Income (1986 $)') plt.legend(); """ Explanation: age contains equally-spaced points from 18 to 85, and age2 contains those values squared. Now we can set educ to 12 years of education and generate predictions: End of explanation """ # Solution goes here """ Explanation: This plot shows the structure of the model, which is a parabola. We also plot the data as an average in each age group. Exercise: Generate the same plot, but show predictions for three levels of education: 12, 14, and 16 years. End of explanation """ formula = 'realinc ~ educ + educ2 + age + age2 + C(sex)' results = smf.ols(formula, data=gss).fit() results.params """ Explanation: Adding categorical variables In a formula string, we can use C() to indicate that a variable should be treated as categorical. For example, the following model contains sex as a categorical variable. End of explanation """ # Solution goes here # Solution goes here """ Explanation: The estimated parameter indicates that sex=2, which indicates women, is associated with about \$4150 lower income, after controlling for age and education. Exercise: Use groupby to group respondents by educ, then plot mean realinc for each education level. End of explanation """ # Solution goes here # Solution goes here """ Explanation: Exercise: Make a DataFrame with a range of values for educ and constant age=30. Compute age2 and educ2 accordingly. Use this DataFrame to generate predictions for each level of education, holding age constant. Generate and plot separate predictions for men and women. Also plot the data for comparison. End of explanation """ gss['gunlaw'].value_counts() """ Explanation: Logistic regression Let's use logistic regression to see what factors are associated with support for gun control. The variable we'll use is gunlaw, which represents the response to this question: "Would you favor or oppose a law which would require a person to obtain a police permit before he or she could buy a gun?" Here are the values. 
End of explanation """ gss['gunlaw'].replace([0, 8, 9], np.nan, inplace=True) """ Explanation: 1 means yes, 2 means no, 0 means the question wasn't asked; 8 and 9 mean the respondent doesn't know or refused to answer. First I'll replace 0, 8, and 9 with NaN End of explanation """ gss['gunlaw'].replace(2, 0, inplace=True) """ Explanation: In order to put gunlaw on the left side of a regression, we have to recode it so 0 means no and 1 means yes. End of explanation """ gss['gunlaw'].value_counts() """ Explanation: Here's what it looks like after recoding. End of explanation """ results = smf.logit('gunlaw ~ age + age2 + educ + educ2 + C(sex)', data=gss).fit() """ Explanation: Now we can run a logistic regression model End of explanation """ results.summary() """ Explanation: Here are the results. End of explanation """ results.params """ Explanation: Here are the parameters. The coefficient of sex=2 is positive, which indicates that women are more likely to support gun control, at least for this question. End of explanation """ grouped = gss.groupby('age') favor_by_age = grouped['gunlaw'].mean() plt.plot(favor_by_age, 'o', alpha=0.5) df = pd.DataFrame() df['age'] = np.linspace(18, 89) df['educ'] = 12 df['age2'] = df['age']**2 df['educ2'] = df['educ']**2 df['sex'] = 1 pred = results.predict(df) plt.plot(df['age'], pred, label='Male') df['sex'] = 2 pred = results.predict(df) plt.plot(df['age'], pred, label='Female') plt.xlabel('Age') plt.ylabel('Probability of favoring gun law') plt.legend(); """ Explanation: The other parameters are not easy to interpret, but again we can use the regression results to generate predictions, which makes it possible to visualize the model. I'll make a DataFrame with a range of ages and a fixed level of education, and generate predictions for men and women. End of explanation """ # Solution goes here """ Explanation: Over the range of ages, women are more likely to support gun control than men, by about 15 percentage points. Exercise: Generate a similar plot as a function of education, with constant age=40. End of explanation """ # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here """ Explanation: Exercise: Use the variable grass to explore support for legalizing marijuana. This variable record the response to this question: "Do you think the use of marijuana should be made legal or not?" Recode grass for use with logistic regression. Run a regression model with age, education, and sex as explanatory variables. Use the model to generate predictions for a range of ages, with education held constant, and plot the predictions for men and women. Also plot the mean level of support in each age group. Use the model to generate predictions for a range of education levels, with age held constant, and plot the predictions for men and women. Also plot the mean level of support at each education level. Note: This last graph might not look like a parabola. Why not? End of explanation """
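One way the grass exercise above could be approached is sketched below. This is not the author's solution: it simply mirrors the gunlaw treatment from earlier in the notebook, and it assumes the usual GSS coding for grass (1 = should be legal, 2 = should not be legal, with 0, 8 and 9 marking missing answers).

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

gss = pd.read_hdf('gss.hdf5', 'gss')
gss['age2'] = gss['age']**2
gss['educ2'] = gss['educ']**2

# recode grass: NaN for the missing codes, then 1 = favor legalization, 0 = oppose
gss['grass'].replace([0, 8, 9], np.nan, inplace=True)
gss['grass'].replace(2, 0, inplace=True)

results = smf.logit('grass ~ age + age2 + educ + educ2 + C(sex)', data=gss).fit()

# predictions over age for men and women, education held at 12 years,
# plotted on top of the mean level of support in each age group
favor_by_age = gss.groupby('age')['grass'].mean()
plt.plot(favor_by_age, 'o', alpha=0.5)

df = pd.DataFrame()
df['age'] = np.linspace(18, 89)
df['age2'] = df['age']**2
df['educ'] = 12
df['educ2'] = df['educ']**2

df['sex'] = 1
plt.plot(df['age'], results.predict(df), label='Male')
df['sex'] = 2
plt.plot(df['age'], results.predict(df), label='Female')

plt.xlabel('Age')
plt.ylabel('Probability of favoring legalization')
plt.legend();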
karlstroetmann/Artificial-Intelligence
Python/6 Classification/Gradient-Ascent.ipynb
gpl-2.0
def findMaximum(f, gradF, start, eps):
    x     = start
    fx    = f(x)
    alpha = 0.1  # learning rate
    cnt   = 0    # number of iterations
    while True:
        cnt += 1
        xOld, fOld = x, fx
        x += alpha * gradF(x)
        fx = f(x)
        print(f'cnt = {cnt}, f({x}) = {fx}')
        print(f'gradient = {gradF(x)}')
        if abs(x - xOld) <= abs(x) * eps:
            return x, fx, cnt
        if fx <= fOld:        # f did not increase, the learning rate is too high
            alpha *= 0.5      # decrease the learning rate
            print(f'decrementing: alpha = {alpha}')
            x, fx = xOld, fOld  # reset x and retry the step
            continue
        else:                 # f has increased
            alpha *= 1.2      # increase the learning rate
            print(f'incrementing: alpha = {alpha}')

import numpy as np
"""
Explanation: Gradient Ascent
The function findMaximum that is defined below takes four arguments:
- f is a function of the form $\texttt{f}: \mathbb{R}^n \rightarrow \mathbb{R}$. It is assumed that the function f is <font color="blue">concave</font> and therefore there is only one global maximum.
- gradF is the gradient of the function f.
- start is a number (or a NumPy array of numbers) that is used to start the search for the maximum.
- eps is a small floating point number. This number controls the precision: once the relative change of x drops below eps, the algorithm stops.
The function findMaximum returns a triple of values of the form
$$ (x_{max}, \texttt{fx}, \texttt{cnt}) $$
- $x_{max}$ is an approximation of the position of the maximum,
- $\texttt{fx}$ is equal to $\texttt{f}(x_{max})$,
- $\texttt{cnt}$ is the number of iterations that have been performed.
The algorithm computes a sequence $(x_n)_n$ that is defined inductively:
- $x_0 := \texttt{start}$,
- $x_{n+1} := x_n + \alpha_n \cdot \nabla f(x_n)$.
The algorithm given below adjusts the <font color="blue">learning rate</font> $\alpha$ dynamically:
If $f(x_{n+1}) > f(x_n)$, then the learning rate alpha is increased by a factor of $1.2$.
Otherwise, the learning rate is decreased by a factor of $\frac{1}{2}$ and the step is repeated from the previous point.
This way, the algorithm determines a suitable learning rate by itself.
End of explanation
"""

def f(x):
    return np.sin(x) - x**2 / 2
"""
Explanation: We will try to find the maximum of the function
$$ f(x) := \sin(x) - \frac{x^2}{2} $$
End of explanation
"""

import matplotlib.pyplot as plt
import seaborn as sns
X = np.arange(-0.5, 1.8, 0.01)
Y = f(X)
plt.figure(figsize=(15, 10))
sns.set(style='whitegrid')
plt.title('lambda x: sin(x) - x**2/2')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-0.5, 1.81, step=0.1))
plt.plot(X, Y, color='b')
"""
Explanation: Let us plot this function.
End of explanation
"""

def fs(x):
    return np.cos(x) - x
"""
Explanation: Clearly, this function has a maximum somewhere between 0.7 and 0.8. Let us use gradient ascent to find it. In order to do so, we have to provide the derivative of this function. We have
$$ \frac{\mathrm{d}f}{\mathrm{d}x} = \cos(x) - x. $$
End of explanation
"""

X2 = np.arange(0.4, 1.1, 0.01)
Ys = fs(X2)
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('lambda x: sin(x) - x**2/2 and its derivative')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x')
plt.ylabel('y')
plt.xticks(np.arange(-0.5, 1.81, step=0.1))
plt.yticks(np.arange(-0.6, 0.61, step=0.1))
plt.plot(X, Y, color='b')
plt.plot(X2, Ys, color='r')

x_max, _, cnt = findMaximum(f, fs, 0.0, 1e-15)
x_max, cnt
"""
Explanation: Let us plot the derivative together with the function.
End of explanation
"""

fs(x_max)
"""
Explanation: The maximum seems to be at $x \approx 0.739085$. Let's check the derivative at this position.
End of explanation
"""
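To make the benefit of the adaptive step-size control visible, the sketch below (my addition, not part of the original notebook; the helper name fixed_rate_ascent is made up) runs plain gradient ascent with a constant learning rate on the same objective. A rate that is too small needs far more iterations to reach the same precision, which is exactly what the adaptive scheme avoids by tuning alpha on the fly.

import numpy as np

def fixed_rate_ascent(f, gradF, start, alpha, eps=1e-15, max_iter=100000):
    # plain gradient ascent with a constant learning rate, for comparison
    x = start
    for cnt in range(1, max_iter + 1):
        xOld = x
        x = x + alpha * gradF(x)
        if abs(x - xOld) <= abs(x) * eps:
            return x, f(x), cnt
    return x, f(x), max_iter

for alpha in [0.01, 0.1, 1.0]:
    x_max, fx, cnt = fixed_rate_ascent(lambda x: np.sin(x) - x**2 / 2,
                                       lambda x: np.cos(x) - x,
                                       0.0, alpha)
    print(f'alpha = {alpha:5.2f}: x_max = {x_max:.6f}, iterations = {cnt}')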
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160524화_7일차_기초 확률론 3 - 확률 모형 Probability Models(단변수 분포)/1.베르누이 확률 분포.ipynb
mit
theta = 0.6
rv = sp.stats.bernoulli(theta)
rv

xx = [0, 1]
plt.bar(xx, rv.pmf(xx), align="center")
plt.xlim(-1, 2)
plt.ylim(0, 1)
plt.xticks([0, 1], ["X=0", "X=1"])
plt.ylabel("P(x)")
plt.title("pmf of Bernoulli distribution")
plt.show()
"""
Explanation: Bernoulli Probability Distribution
Bernoulli trials
An experiment whose outcome is one of only two values, success or failure, is called a Bernoulli trial. For example, tossing a coin once so that it shows either heads (H) or tails (T) is a Bernoulli trial. When the outcome of a Bernoulli trial is represented by a random variable $X$, success is usually encoded as the integer 1 ($X=1$) and failure as the integer 0 ($X=0$). In some cases success is encoded as 1 ($X=1$) and failure as -1 ($X=-1$).
The Bernoulli distribution
A Bernoulli random variable can take only the two values 0 and 1, so it is a discrete random variable. It can therefore be described by a probability mass function (pmf) and a cumulative distribution function (cdf). A Bernoulli random variable has a single parameter $\theta$, the probability that 1 occurs; the probability of 0 is $1 - \theta$. The Bernoulli probability mass function is
$$ \text{Bern}(x;\theta) = \begin{cases} \theta & \text{if }x=1, \\ 1-\theta & \text{if }x=0 \end{cases} $$
which can be written as a single expression:
$$ \text{Bern}(x;\theta) = \theta^x(1-\theta)^{(1-x)} $$
If the Bernoulli random variable takes the values 1 and -1 instead, this becomes
$$ \text{Bern}(x; \theta) = \theta^{(1+x)/2} (1-\theta)^{(1-x)/2} $$
Simulating the Bernoulli distribution with SciPy
The bernoulli class in SciPy's stats subpackage implements the Bernoulli distribution. The parameter $\theta$ of the distribution is set via the p argument.
End of explanation
"""

x = rv.rvs(100)
x

sns.countplot(x)
plt.show()
"""
Explanation: The pmf method computes the probability mass function (pmf). To run a simulation, use the rvs method.
End of explanation
"""

y = np.bincount(x, minlength=2)/len(x)
df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["value", "type", "ratio"]
df

sns.barplot(x="value", y="ratio", hue="type", data=df)
plt.show()
"""
Explanation: To show the theoretical distribution and the distribution of the sample side by side, code like the following can be used.
End of explanation
"""
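As a small extension of the comparison above (my addition, not part of the original notebook), the following check shows that the sample mean and variance of simulated draws approach the theoretical values $\theta$ and $\theta(1-\theta)$ as the sample size grows. The fixed NumPy seed is only there to make the output reproducible.

np.random.seed(0)  # reproducibility only

print("theoretical mean = {}, theoretical variance = {}".format(rv.mean(), rv.var()))
for n in [10, 100, 1000, 10000]:
    sample = rv.rvs(n)
    print("n = {:6d}: sample mean = {:.4f}, sample variance = {:.4f}".format(
        n, sample.mean(), sample.var()))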
emalgorithm/Algorithm_Notebooks
Sorting/Sorting.ipynb
gpl-3.0
# so our plots get drawn in the notebook %matplotlib inline from matplotlib import pyplot as plt from random import randint from time import clock # a timer - runs the provided function and reports the # run time in ms def time_f(f): before = clock() f() after = clock() return after - before # remember - lambdas are just one line functions # make us a random list length (between 1 - 2000) rand_len = lambda min_len=1, max_len=2e3: randint(min_len, max_len) # choose a random value for a list element (between 0 1e6) rand_int = lambda: randint(0, 1e6) # generate a random list of random length - # here we use a list comprehension, a very tidy # way of transforming lists of data rand_list = lambda min_len=1, max_len=2e3: [rand_int() for i in range(rand_len(min_len=min_len, max_len=max_len))] """ Explanation: Algorithms 202: Coursework 1 Task 1: Sorting Group-ID: 15 Group members: Tencho Tenev, Emanuele Rossi, Nikolay Yotov Objectives The aim of this coursework is to enhance your algorithmic skills by mastering the divide and conquer and dynamic programming strategies. You are asked to show that you can: implement divide and conquer solutions for given problems compare naive and advanced implementations of algorithms solving the same problem This notebook is the coursework. It contains cells with function definitions that you will need to complete. You will submit this notebook as your coursework. The comparisons of different algorithms involve textual descriptions and graphical plots. For graphing you will be using matplotlib to generate plots. This tutorial will be useful to go through to get you up to speed. For the textual descriptions you may wish to use LaTeX inline like $\mathcal{O}(n\log{}n)$. Double click this cell to reveal the required markup - and see here for useful guidance on producing common symbols used in asymptotic run time analysis. Preliminaries: helper functions Here we define a collection of functions that will be useful for the rest of the coursework. You'll need to run this cell to get started. End of explanation """ def insertion_sort(a): length = len(a) # Assume [0;i) is the sorted part of a and insert a[i] for i in range(1, len(a)): # Insert a[i] in the right spot going backwards from a[i-1] for j in range(i, 0, -1): if a[j] < a[j-1]: a[j], a[j-1] = a[j-1], a[j] return a """ Explanation: Task 1: Sorting In this task you are asked to implement insertion_sort and merge_sort. You need to perform an experimental analysis of their running time. Based on your analysis, you should implement a third sorting algorithm, hybrid_sort, which is similar to merge_sort but uses insertion_sort for the base case. The problem size for which the base case is invoked has to be inferred from the running time analysis. 1a. Implement insertion_sort Complete the below definition for insertion_sort. Do not change the name of the function or it's arguments. Hints: Your sort should be in-place (i.e. it changes the input list for the caller) but you should also return the list so the function can be called as indicated below. End of explanation """ x = [2, 4, 1, 3] print(insertion_sort(x) == [1, 2, 3, 4]) """ Explanation: Use this test to confirm your implementation is correct. 
End of explanation """ import math def merge_sort(a): length = len(a) if length == 1: return a mid = math.floor(length / 2) l = a[:mid] r = a[mid:] l_sorted = merge_sort(l) r_sorted = merge_sort(r) return merge(l_sorted, r_sorted) def merge(l, r): merged = [] while l and r: if l[0] <= r[0]: merged.append(l.pop(0)) else: merged.append(r.pop(0)) # Append all of l (maybe empty) merged.extend(l) # Append all of r (maybe empty) merged.extend(r) return merged """ Explanation: 1b. Implement merge_sort Complete the below definition for merge_sort. Do not change the name of the function or it's arguments. Hints: Your implementation should leave the input list unmodified for the caller You are free to define other functions in this cell End of explanation """ x = [2, 4, 1, 3] print(merge_sort(x) == [1, 2, 3, 4]) """ Explanation: Use this test to confirm your implementation is correct. End of explanation """ short_lists = [rand_list(min_len=5, max_len=30) for _ in range(10000)] test_lists = short_lists[:] insertion_sort_times = [time_f(lambda: insertion_sort(x)) for x in test_lists] test_lists = short_lists[:] merge_sort_times = [time_f(lambda: merge_sort(x)) for x in test_lists] list_sizes = [len(x) for x in short_lists] %matplotlib inline from matplotlib import pyplot as plt insertion_result = plt.scatter(list_sizes, insertion_sort_times, c='red') merge_result = plt.scatter(list_sizes, merge_sort_times, c='blue') plt.xlabel('n') plt.ylabel('time (/s)') plt.xlim(5,30) plt.ylim(0, 0.0001) plt.legend((insertion_result, merge_result), ('Insertion Sort', 'Merge Sort')) """ Explanation: 1c. Analyse the running time performance of insertion_sort and merge_sort Draw graphs showing the running time performance of your insertion_sort and merge_sort for different lengths of random integers. Analyse the performance at the large scale ($n \approx 10^3$) and small scale ($n \approx 10$). End of explanation """ long_lists = [rand_list(max_len=1000) for _ in range(1000)] test_lists = long_lists[:] insertion_sort_times = [time_f(lambda: insertion_sort(x)) for x in test_lists] test_lists = long_lists[:] merge_sort_times = [time_f(lambda: merge_sort(x)) for x in test_lists] list_sizes = [len(x) for x in long_lists] %matplotlib inline from matplotlib import pyplot as plt isort = plt.scatter(list_sizes, insertion_sort_times, c='red') msort = plt.scatter(list_sizes, merge_sort_times, c='blue') plt.xlabel('n') plt.ylabel('time (/s)') plt.xlim(0) plt.ylim(0) plt.legend((isort, msort), ('Insertion sort', 'Merge sort')) pass very_long_lists = [rand_list(max_len=100000) for _ in range(100)] merge_sort_times = [time_f(lambda: merge_sort(x)) for x in very_long_lists] list_sizes = [len(x) for x in very_long_lists] %matplotlib inline from matplotlib import pyplot as plt merge_sort_data = plt.scatter(list_sizes, merge_sort_times, c='blue') plt.xlabel('n') plt.ylabel('time (/s)') plt.xlim(0) plt.ylim(0) plt.legend((merge_sort_data,), ('merge_sort',)) """ Explanation: From the above chart we can clearly see how Insertion Sort is faster for input size up to approximately 20. End of explanation """ def hybrid_sort(a, base_size=20): length = len(a) if length <= base_size: return insertion_sort(a) mid = math.floor(length / 2) l = a[:mid] r = a[mid:] l_sorted = hybrid_sort(l) r_sorted = hybrid_sort(r) return merge(l_sorted, r_sorted) """ Explanation: Now discuss your findings in a few lines in the below cell: Our implementation of insertion sort performs slightly better than merge sort for input size up to 20. 
Data from sorting longer lists clearly shows that insertion sort has complexity $\mathcal{O}(n^2)$ while merge sort complexity is $\mathcal{O}(n\log{}n)$. 1d. Implement hybrid_sort() Implement hybrid_sort(), a merge_sort() variant which uses insertion_sort() for the base case. The problem size for which the base case is invoked has to be inferred from your above running time analysis. End of explanation """ x = [2, 4, 1, 3] print(hybrid_sort(x) == [1, 2, 3, 4]) """ Explanation: Use this test to confirm your implementation is correct. End of explanation """ lists = [rand_list(max_len=1000) for _ in range(1000)] test_lists = lists[:] merge_sort_times = [time_f(lambda: merge_sort(x)) for x in test_lists] test_lists = lists[:] insertion_sort_times = [time_f(lambda: insertion_sort(x)) for x in test_lists] test_lists = lists[:] hybrid_sort_times = [time_f(lambda: hybrid_sort(x)) for x in test_lists] list_sizes = [len(x) for x in lists] %matplotlib inline from matplotlib import pyplot as plt merge_data = plt.scatter(list_sizes, merge_sort_times, c='blue') insertion_data = plt.scatter(list_sizes, insertion_sort_times, c='red') hybrid_data = plt.scatter(list_sizes, hybrid_sort_times, c='green') plt.xlabel('n') plt.ylabel('time (/s)') plt.xlim(0) plt.ylim(0) plt.legend((merge_data, insertion_data, hybrid_data), ('Merge Sort', 'Insertion Sort', 'Hybrid Sort')) """ Explanation: 1e. Analyse all three sorting implementations together Draw graphs showing the running time performance of your insertion_sort(), merge_sort() and hybrid_sort() for different lengths of random integers. End of explanation """ lists = [rand_list(max_len=10000) for _ in range(1000)] test_lists = lists[:] merge_sort_times = [time_f(lambda: merge_sort(x)) for x in lists] test_lists = lists[:] hybrid_sort_times = [time_f(lambda: hybrid_sort(x)) for x in lists] list_sizes = [len(x) for x in lists] %matplotlib inline from matplotlib import pyplot as plt merge_data = plt.scatter(list_sizes, merge_sort_times, c='blue') hybrid_data = plt.scatter(list_sizes, hybrid_sort_times, c='green') plt.xlabel('n') plt.ylabel('time (/s)') plt.xlim(0) plt.ylim(0, 0.08) plt.legend((merge_data, hybrid_data), ('Merge Sort', 'Hybrid Sort')) """ Explanation: Analyse Merge Sort vs Hybrid Sort End of explanation """
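As a final sanity check (my addition, beyond the coursework questions), the snippet below cross-checks all three implementations against Python's built-in sorted on a batch of random lists, reusing the rand_list helper defined at the top of the notebook. Copies are passed in because insertion_sort, and therefore hybrid_sort on small inputs, sorts in place.

for _ in range(100):
    data = rand_list(max_len=200)
    expected = sorted(data)
    assert insertion_sort(list(data)) == expected
    assert merge_sort(list(data)) == expected
    assert hybrid_sort(list(data)) == expected
print('insertion_sort, merge_sort and hybrid_sort all agree with sorted().')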
stellaxux/machine-learning-in-python
ch4/handling_categorical_data.ipynb
mit
# create a pandas dataframe with categorical variables to work with import pandas as pd df = pd.DataFrame([['green', 'M', 10.1, 'class1'], ['red', 'L', 13.5, 'class2'], ['blue', 'XL', 15.3, 'class1']]) df.columns = ['color', 'size', 'price', 'classlabel'] df """ Explanation: Handling categorical data In this notebook, I'll demonstrate different ways of mapping or encoding categorical data. End of explanation """ size_mapping = {'XL': 3, 'L': 2, 'M': 1} df['size'] = df['size'].map(size_mapping) df # transform integers back to string values using a reverse-mapping dictionary inv_size_mapping = {v: k for k, v in size_mapping.items()} df['size'].map(inv_size_mapping) """ Explanation: 1. Mapping ordinal features Create a mapping dictionary first and then map the categorical string values into integers. End of explanation """ def size_to_numeric(x): if x=='XL': return 3 if x=='L': return 2 if x=='M': return 1 df['size_num'] = df['size'].apply(size_to_numeric) df """ Explanation: Create a function that converts strings into numbers End of explanation """ # using pandas 'get_dummies' pd.get_dummies(df[['price','color', 'size']]) # using pandas 'get_dummies' pd.get_dummies(df['color']) pd.get_dummies(df['color']).join(df[['size', 'price']]) # using scikit-learn LabelEncoder and OneHotEncoder from sklearn.preprocessing import LabelEncoder color_le = LabelEncoder() df['color'] = color_le.fit_transform(df['color']) df from sklearn.preprocessing import OneHotEncoder ohe = OneHotEncoder() color = ohe.fit_transform(df['color'].reshape(-1,1)).toarray() df_color = pd.DataFrame(color, columns = ['blue', 'green', 'red']) df_color df[['size', 'price']].join(df_color) """ Explanation: 2. Convert nominal categorical feature into dummy variables Often, machine learning algorithms require that categorical variables be converted into dummy variables (also called OneHot encoding). For example, a single feature Fruit would be converted into three features, Apples, Oranges, and Bananas, one for each category in the categorical feature. There are common ways to preprocess categorical features: using pandas or scikit-learn. End of explanation """ import numpy as np class_mapping = {label: idx for idx, label in enumerate(np.unique(df['classlabel']))} df['classlabel'] = df['classlabel'].map(class_mapping) df """ Explanation: 3. Encoding class labels Create a mapping dictionary by enumerating unique categories. Note that class labels are not ordinal; they are nominal. End of explanation """ from sklearn.preprocessing import LabelEncoder class_le = LabelEncoder() df['classlabel'] = class_le.fit_transform(df['classlabel'].values) df class_le.inverse_transform(df.classlabel) """ Explanation: Use LabelEncoder in scikit-learn to convert class labels into integers End of explanation """ import patsy df = pd.DataFrame([['green', 'M', 10.1, 'class1'], ['red', 'L', 13.5, 'class2'], ['blue', 'XL', 15.3, 'class1']]) df.columns = ['color', 'size', 'price', 'classlabel'] # Convert df['color'] into a categorical variable, setting one category as the baseline patsy.dmatrix('color', df, return_type='dataframe') # Convert df['color'] into a categorical variable without setting one category as baseline patsy.dmatrix('color-1', df, return_type='dataframe') """ Explanation: 4. Convert categorical variable with Patsy End of explanation """
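One practical note to add here (my comment, not from the original text): when the dummy variables feed a linear model, one category per feature is usually dropped to avoid perfect collinearity, which is what patsy's treatment coding above does. Recent pandas versions offer the same behaviour through the drop_first argument of get_dummies; the sketch below rebuilds the small example frame so it is self-contained.

import pandas as pd

df = pd.DataFrame([['green', 'M', 10.1, 'class1'],
                   ['red', 'L', 13.5, 'class2'],
                   ['blue', 'XL', 15.3, 'class1']],
                  columns=['color', 'size', 'price', 'classlabel'])

# baseline (treatment) coding: the first category of 'color' is dropped
dummies = pd.get_dummies(df['color'], prefix='color', drop_first=True)
df[['size', 'price']].join(dummies)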
nberliner/ChordDiagram
Chord Diagrams for Bokeh.ipynb
mit
# Each row defines how many items were "sent" to the group specified by the column
# for the "golden image" use case, the matrix should be symmetric
matrix = np.array([[16, 3, 28, 0, 18],
                   [18, 0, 12, 5, 29],
                   [ 9, 11, 17, 27, 0],
                   [19, 0, 31, 11, 12],
                   [23, 17, 10, 0, 34]], dtype=int)

labels = ['One', 'Two', 'Three', 'Four', 'Five']
pd.DataFrame(matrix, columns=labels, index=labels)
"""
Explanation: Chord Diagrams for Bokeh
Chord diagrams are a wonderful way to visualise interactions between groups, along the genome, and much more (check the circos page for some advanced examples). I recently needed to implement a basic chord diagram in a Bokeh app. Based on Plotly's post about Chord Diagrams, I implemented a basic class which can be used with Bokeh. I hope that it can serve as a starting point and can be used in other implementations as well.
The existing chord diagram implementation (as of Bokeh version 0.12.4) has a rather odd look (which is especially apparent if you zoom in a bit).
Input data
As outlined in Plotly's post, the input data must be a square matrix, where each row denotes one group member (or entity, or item, ...). Each column holds the interaction between two group members. It is assumed that the row group member is "sending" interaction to the column group member. The total interaction is thus given by the sum of the $(i,j)$ and $(j,i)$ values. Taking the example used in the post, we have
End of explanation
"""

cd = ChordDiagram(matrix)
"""
Explanation: which indicates that group Three was sending 11 items to group Two while receiving 12 items in return. Group Three furthermore has 17 interactions with itself, whereas group Two has none.
Building the Chord Diagram
The chord diagram can be built by handing the input data to the ChordDiagram class.
End of explanation
"""

fig = cd.plot(group=0)
t = show(row(fig, ), notebook_handle=True)
"""
Explanation: The interactions between the groups can now be visualised by calling the plot method of the ChordDiagram class. The index of the group corresponds to the row of the input data.
End of explanation
"""
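Since the text above defines the total interaction between two groups as the sum of the $(i,j)$ and $(j,i)$ entries, it can be tabulated directly from the input matrix, as sketched below. The second part renders one highlighted figure per group; it assumes that cd.plot accepts any row index of the input matrix, which is not spelled out in the original post.

# total interaction between each pair of groups, i.e. matrix[i, j] + matrix[j, i]
total = matrix + matrix.T
pd.DataFrame(total, columns=labels, index=labels)

# one highlighted figure per group, laid out in a single Bokeh row
figs = [cd.plot(group=i) for i in range(len(labels))]
show(row(*figs))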
ES-DOC/esdoc-jupyterhub
notebooks/noaa-gfdl/cmip6/models/sandbox-1/atmoschem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-1', 'atmoschem') """ Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: NOAA-GFDL Source ID: SANDBOX-1 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:35 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4. 
Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.7. 
Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. 
Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. 
Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) """ Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) """ Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.5. 
Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) """ Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) """ Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation """
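For anyone filling in this template, a purely illustrative sketch of how the two photolysis cells above might look once completed is given below. The property values chosen here are invented for demonstration only and do not describe any actual model; the only calls used are the DOC.set_id / DOC.set_value helpers that every cell in this notebook already relies on.

# Illustrative example only - invented values, not a real model description
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value("Offline (with clouds)")  # must be one of the valid choices listed above

DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
DOC.set_value("Cross-sections and quantum yields adjusted for modelled temperature and pressure")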
PyladiesMx/Pyladies_ifc
1. PrimitiveTypes_and_operators/.ipynb_checkpoints/objetos simples y operaciones básicas-checkpoint.ipynb
mit
import turtle ventana = turtle.Screen() ventana.bgcolor('lightblue') ventana.title('Hello Erika!') erika = turtle.Turtle() erika.color('blue') erika.pensize(5) erika.forward(100) erika.left(90) erika.forward(100) """ Explanation: Bienvenid@s!! En la reunión de hoy aprenderemos acerca de python y sus cimientos. Veremos qué es Python, Ipython y Jupyter; comenzaremos con conceptos básicos de objetos, variables, operaciones. También jugaremos con un módulo de Python llamado Turtle el cuál nos ayudará an entender (espero!) un poco mejor esto de programación orientada a objetos. En Python existen programas ya escritos que nos permiten agregar a nuestro código un sin fin de atributos. Esto generalmente hace la tarea de programar aún más sencilla, ya que no tenemos que implementar desde cero ciertas funciones de uso cotidiano. A esto se le llama módulo en Python. Hoy vamos a usar un módulo que aparte de ser divertido nos ayuda a desarrollar nuestro pensamiento computacional, este módulo se llama Turtle y nos permite dibujar formas o patrones. Vamos a empezar con Turtle. Para esto tenemos que importar un módulo de la siguiente manera: Abre una consola de Ipython En la consola, después de los >>> escribe import turtle y da enter Lo que acabamos de hacer es decirle a python: ...'oye quiero usar las funciones y objetos que tiene un programa llamado Turtle'... Python lo que hace (si es que todo sale bien) es responder con: ...'ok'... que se puede ver en la forma de >>> o In[n]: si es que usamos ipython Ahora ya podemos empezar a crear nuestro programa para dibujar. Ejercicio Escribe los siguientes comandos en tu terminal y pon atención a lo que va pasando en la ventana End of explanation """ import turtle erika = turtle """ Explanation: Ahora cambia el color de tu ventana, del tamaño y color del trazo, y si quieres también crea otra tortuga con tu nombre. Luego intenta dibujar un cuadrado, un triángulo y un hexágono Ahora vamos a ver qué es lo que estuvimos usando. End of explanation """ turtle.Turtle? %quickref """ Explanation: Ipython: la hermana amable de Python Veamos que tiene que decir Wikipedia acerca de Ipython: ..."IPython es un shell interactivo que añade funcionalidades extra al modo interactivo incluido con Python, como resaltado de líneas y errores mediante colores, una sintaxis adicional para el shell, autocompletado mediante tabulador de variables, módulos y atributos; entre otras funcionalidades"... Como bien dice la wikipedia, ipython es una variante de python que es mucho más amable con el usuario. Te dice más acerca de los errores que puedas cometer y te ayuda a completar nombres cuando se te olvidan. El código que se corre en esta libreta de Jupyter es con Ipython End of explanation """ #Veamos las funciones mágicas de la libreta %magic """ Explanation: El planeta Jupyter de Python Esto que ves aquí es una libreta de Jupyter ejecutando Ipython. Aquí puedes escribir texto (al estilo Markdown) y correr líneas de código que escribas, también hay funciones mágicas que te ayudan a hacer cosas como mostrar las gráficas que creas directamente en la libreta (generalmente Ipython genera una ventana nueva con la imagen lo cual es bastante molesto cuando estás con la libreta...) Si te gusta leer en inglés y quieres más información puedes consultar el siguiente enlace End of explanation """ type(5) type(5.0) 5+5 5-5.0 #¿Notas algo raro en este resultado? 5.0 * 5 #¿Qué crees que va a dar de resultado esta operación? 5/5 #¿Cómo se puede hacer para que la división nos dé integer en lugar de float? 
#Prueba esto 5//5 """ Explanation: Finalmente a Trabajar Números en Python Hay dos tipos de número en Python los cuales se llaman "integers" y "floats". Estos números pueden ser utilizados para realizar operaciones matemáticas como lo harías con cualquier calculadora. End of explanation """ 5.0//5.0 """ Explanation: Pregunta 1 ¿Qué crees que pasaría si le pedimos a python lo siguiente? 5.0//5.0 End of explanation """ 5.0//5 """ Explanation: Pregunta 2 ¿Crees que obtendrías el mismo resultado si sólo uno de los dividendos fuera un float? End of explanation """ #¿Cuánto nos queda de residuo si dividimos 25%5? 25%5 #¿Cuánto nos queda de residuo si dividimos 25%7? 25%7 """ Explanation: Ahora imagina que queremos saber cuál es el residuo de una división. Python tiene un operador que se ve así "%" Veamos cómo funciona... End of explanation """ 25%7.0 """ Explanation: Pregunta 3 ¿Qué pasaría si alguno de los dividendos fuera float? End of explanation """ #Pista: doble estrella 5**2 #Pista: módulo math """ Explanation: Pregunta 4 ¿Cómo podríamos obtener la potencia de un número y su raíz cuadrada? End of explanation """ True False type(True) True and False True or True True > False #Un momento, si True es mayor que False ¿cómo es que python representa internamente a estos valores booleanos? True == 1 False == 0 #¿Entonces se pueden sumar los valores booleanos? ¡Probemos! True + True + False False - True True + True != 1 """ Explanation: Al igual que en matemáticas, si necesitamos que las operaciones se hagan en un orden definido debemos poner la expresión que queremos que se realice primero entre paréntesis. Por ejemplo si queremos que una suma se realice antes que una multiplicación y el resultado de esta se reste a otro número usaríamos una expresión parecida a esta: ((a + b) * c) - d Valores Booleanos y comparaciones En python no sólo podemos manejar números, también podemos manejar valores booleanos de tipo Falso-Verdadero y hacer operaciones con ellos End of explanation """ #Ya vimos que El valor de True "equivale" a 1 True == 1 #Ahora nos preguntamos si True "es" 1 True is 1 #y que tal si ahora preguntamos si True "es" True True is True """ Explanation: Como podrás darte cuenta, las operaciones y comparaciones que puedes hacer en estos valores son bastantes y puedes ir aumentando la complejidad usando paréntesis y combinando con lo que aprendiste a hacer de operaciones con python Nota "==" vs "is" Cuando quieres saber si un objeto es exactamente ese objeto hay un operador que se llama "is". Nada más que es importante no confundirlo con el operador "==" ya que este a diferencia del "is" te dice si el valor de un objeto equivale a otro. Suena confuso pero tal vez este ejemplo lo clarifique End of explanation """ "Esto es un string" 'Esto también' #Esto también es un string '2' #y en python 3 se pueden concatenar estas series de caracteres sólo poniéndolos continuamente. Hay que notar que un #espacio también es un caracter... 
"Los números"' '"y"' ' 'símbolos'" "'también '"se usan"' ''3456%$#%' """ Explanation: Strings y todo lo que puedes hacer con palabras Los strings son cadenas de caracteres, palabras o frases que en python se declaran al ponerlos dentro de comillas, ya sea simples o dobles End of explanation """ '{} of the {}'.format('75%', 'World') # También puedes usar palabras clave si necesitas poner una misma palabra muchas veces '{sujeto} camina, {sujeto} juega, {sujeto} duerme con todos {predicado}'.format(sujeto = 'El perro', predicado = 'nosotros') print('Esta función imprime') """ Explanation: Puedes crear espacios que luego vas a llenar en estos strings con un método que se llama format End of explanation """ variable_1 = 5 + 5 variable_2 = True variable_1 + variable_2 cajón = 'Hola a todos' cajón print(cajón) cajón == variable_1 """ Explanation: Variables: los cajones de python Hasta ahora hemos visto que podemos hacer operaciones sobre diferentes objetos de python, pero que pasa cuando queremos que una operación o un valor se quede disponible para que los usemos durante todo programa. En python hay algo que se llama variable y está formada por un espacio en el sistema de almacenaje que está asociado a un nombre. Ese espacio contiene una cantidad o información conocida o desconocida, es decir un valor. Veamos algunos ejemplos. End of explanation """
terrydolan/lfctrio
lfctrio.ipynb
mit
%%html <! left align the change log table in next cell > <style> table {float:left} </style> """ Explanation: LFC Data Analysis: A Striking Trio See Terry's blog LFC: A Striking Trio for a discussion of of the data generated by this analysis. This notebook analyses Liverpool FC's goalscoring data from 1892-1893 to 2014-2015. In particular Liverpool's top scoring trio is identified and compared to Barcelona's best from 2014-2015. The analysis uses IPython Notebook, python, pandas and matplotlib to explore the data. Notebook Change Log End of explanation """ import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import sys from datetime import datetime from __future__ import division # enable inline plotting %matplotlib inline """ Explanation: | Date | Change Description | | :------------ | :----------------- | | 1st July 2015 | Initial baseline | Set-up Import the modules needed for the analysis. End of explanation """ print 'python version: {}'.format(sys.version) print 'pandas version: {}'.format(pd.__version__) print 'matplotlib version: {}'.format(mpl.__version__) print 'numpy version: {}'.format(np.__version__) """ Explanation: Print version numbers. End of explanation """ dflfc_scorers = pd.read_csv('data\lfchistory_goalscorers.csv', sep=';') # sort by season, total goals, then league goals, etc # same as on lfchistory.net season archive / goalscorers dflfc_scorers = dflfc_scorers.sort(['season', 'total', 'league', 'facup', 'lccup', 'europe', 'other', 'player'], ascending=False) # check sort order dflfc_scorers[dflfc_scorers.season == '1983-1984'].head() """ Explanation: Load the LFC scorers data into a dataframe and munge End of explanation """ # for example, check the mapping for Jan Molby dflfc_scorers[dflfc_scorers.player.str.startswith('Jan')].head(1) # replace known non-ascii names using a mapping dictionary name_mapper = {'Jan M\xf8lby': 'Jan Molby', 'Emiliano Ins\xfaa': 'Emiliano Insua', 'F\xe1bio Aur\xe9lio': 'Fabio Aurelio', '\xc1lvaro Arbeloa': 'Alvaro Arbeloa', 'Djibril Ciss\xe9': 'Djibril Cisse', 'Djimi Traor\xe9': 'Djimi Traore', '\xd8yvind Leonhardsen': 'Oyvind Leonhardsen', 'Stig Inge Bj\xf8rnebye': 'Stig Inge Bjornebye', 'Glenn Hys\xe9n': 'Glenn Hysen' } dflfc_scorers['player'] = dflfc_scorers['player'].apply(lambda x: name_mapper[x] if x in name_mapper else x) # for example, check the mapping for Jan Molby dflfc_scorers[dflfc_scorers.player.str.startswith('Jan')].head() dflfc_scorers.head() dflfc_scorers.tail() """ Explanation: Replace unwanted 'special' non-ascii characters End of explanation """ dflfc_scorers[['player', 'total']].groupby('player').sum().sort('total', ascending=False).head(10) """ Explanation: Analyse the data Ask a question and find the answer. Who are all time top goal scorers? cross-check the answer with http://www.lfchistory.net/Stats/PlayerGoalscorers End of explanation """ dflfc_scorers[['player', 'season', 'total']].groupby(['player', 'season']).sum().sort('total', ascending=False).head(10) """ Explanation: Who scored the all time most goals scored in a season? End of explanation """ dflfc_scorers[['player', 'season', 'league']].groupby(['player', 'season']).sum().sort('league', ascending=False).head(10) """ Explanation: Who are the top 10 all time most league goals scored in a season? End of explanation """ dflfc_scorers[['season', 'league']].groupby(['season']).sum().sort('league', ascending=False).head(1) """ Explanation: What was most league goals in a season? 
End of explanation """ LANCS_YRS = ['1892-1893'] SECOND_DIV_YRS = ['1893-1894', '1895-1896', '1904-1905', '1961-1962', '1954-1955', '1955-1956', '1956-1957', '1957-1958', '1958-1959', '1959-1960', '1960-1961'] WAR_YRS = ['1945-1946'] # note that the other war years already excluded NOT_TOP_LEVEL_YRS = LANCS_YRS + SECOND_DIV_YRS + WAR_YRS dflfc_scorers_tl = dflfc_scorers[~dflfc_scorers.season.isin(NOT_TOP_LEVEL_YRS)].copy() # show most league goals in a season in top level # cross-check with http://en.wikipedia.org/wiki/List_of_Liverpool_F.C._records_and_statistics#Goalscorers # expect 101 in 2013-14 dflfc_scorers_tl[['season', 'league']].groupby(['season']).sum().sort('league', ascending=False).head(1) """ Explanation: Create new dataframe of top level seasons End of explanation """ # show highest goals at top level dflfc_scorers_tl_sum = dflfc_scorers_tl.groupby('season').sum().sort('total', ascending=False) dflfc_scorers_tl_sum.reset_index(inplace=True) dflfc_scorers_tl_sum.head() # show top individual scorer in a top level season dflfc_scorers_tl.sort('total', ascending=False).head() """ Explanation: 96 is correct as the dataframe does not include own goals - OG was 5 in 2013-14 End of explanation """ # show best total for a striking partnerships in the league dflfc_scorers_tl_top2_lg = dflfc_scorers_tl[['season', 'league']].groupby('season').head(2).groupby('season').sum() # reset index and move season to column in dataframe dflfc_scorers_tl_top2_lg.reset_index(inplace=True) # show top dflfc_scorers_tl_top2_lg.sort('league', ascending=False).head(10) """ Explanation: Take a quick look at the top scoring partnership in the league End of explanation """ TOP_PARTNERSHIPS = ['1963-1964', '2013-2014'] dflfc_scorers_tl[['season', 'player', 'league']][dflfc_scorers_tl.season.isin(TOP_PARTNERSHIPS)].groupby('season').head(2) """ Explanation: Note that 1963-64 and have 2013-14 have top scoring partnership. End of explanation """ # create dataframe filtered for the league goals dflfc_scorers_tl_lg = dflfc_scorers_tl[['season', 'player', 'league']] dflfc_scorers_tl_lg.head() # show best total for 3 strikers working together dflfc_scorers_tl_top3_lg = dflfc_scorers_tl_lg[['season', 'league']].groupby('season').head(3).groupby('season').sum() # reset index and move season to column in dataframe dflfc_scorers_tl_top3_lg.reset_index(inplace=True) # show top dflfc_scorers_tl_top3_lg.sort('league', ascending=False).head(10) """ Explanation: Remarkably Hunt and Suarez scored 31 and St John and Sturridge scored 21. Let's now focus on the top scoring trio in the league End of explanation """ # capture top league seasons for top 3, in order NUMBER_SEASONS = 10 top_seasons_lg = dflfc_scorers_tl_top3_lg.sort('league', ascending=False).head(NUMBER_SEASONS).season.values top_seasons_lg # show top 3 scorers for top seasons dflfc_scorers_tl_lg[dflfc_scorers_tl_lg.season.isin(top_seasons_lg)].groupby('season').head(3) # check if any of 4ths are same as 3rds import itertools f = dflfc_scorers_tl_lg[dflfc_scorers_tl_lg.season.isin(top_seasons_lg)].groupby('season').head(4) f = f.reset_index(drop=True) # print 3rd and 4th and inspect visually f.irow(list(itertools.chain.from_iterable((i-1, i) for i in range(3, len(f), 4)))) """ Explanation: Now find the top3 scorers for these seasons, in order. 
End of explanation """ # create dataframe of top 3 league scorers dflfc_trio = dflfc_scorers_tl_lg[dflfc_scorers_tl_lg.season.isin(top_seasons_lg)].groupby('season').head(3) dflfc_trio.head(6) # create custom dict with key of seasons and value of order (0 is first) custom_dict = {s:idx for idx, s in enumerate(top_seasons_lg)} custom_dict # now add a column with the rank for each season using the custom dict dflfc_trio['top_rank'] = dflfc_trio['season'].map(custom_dict) dflfc_trio.head() # now show the striking trios in order, highest first dflfc_trio.sort(['top_rank', 'league'], ascending=[True, False], inplace=True) dflfc_trio.drop('top_rank', axis=1, inplace=True) dflfc_trio.head(6) # print the list, in order this_season = None for season, player, league in dflfc_trio.values: if this_season != season: print '\n' this_season = season print season, player, league # pretty print with single row per season # and create a new dataframe to hold this for good measure df_top3_sum = pd.DataFrame(columns=['Season', 'Goals', 'Goalscorers']) for idx, season in enumerate(dflfc_trio.season.unique(), 1): #print season scorers = [] league_tot = 0 for player, league in dflfc_trio[dflfc_trio.season == season][['player', 'league']].values: league_tot += int(league) scorer = '{} ({})'.format(player, league) #print scorer scorers.append(scorer) print season, league_tot, ', '.join(scorers) df_top3_sum.loc[idx] = (season, league_tot, ', '.join(scorers)) # set pandas option that allows all Goalscorers to be displayed # this avoids the default curtailing of long rows with ... pd.set_option('display.max_colwidth', -1) df_top3_sum # show top 3 trios df_top3_sum.head(3) """ Explanation: Note that in 1928-1929 both Harry Race and Bob Clark scored 9. ok, back to the strking trio - need to get these in order End of explanation """ dflfc_league = pd.read_csv('data\lfchistory_league.csv') dflfc_league.tail() """ Explanation: Check to see if there is a correlation between top trios and league position Load the league data End of explanation """ dflfc_league_pos = dflfc_league[['Season', 'Pos', 'GF', 'GA', 'GD']].copy() dflfc_league_pos.rename(columns={'Season': 'season', 'Pos': 'pos'}, inplace=True) dflfc_league_pos.tail() """ Explanation: Create new dataframe with league position and key goal data End of explanation """ dflfc_scorers_tl_top3_lg.head() dflfc_scorers_tl_top3_lg_pos = dflfc_scorers_tl_top3_lg.merge(dflfc_league_pos) dflfc_scorers_tl_top3_lg_pos.sort('league', ascending=False).head(10) dfp = dflfc_scorers_tl_top3_lg_pos.sort('league', ascending=False).head(10) t = dfp.pos[dfp.pos == 1].count() print 'total league wins in top 10 of top3s is: {}'.format(t) """ Explanation: Now check league position of the top3s End of explanation """ print len(dflfc_scorers_tl_top3_lg) dflfc_scorers_tl_top3_lg.head() # create a list of missing years START_YR = 1890 END_YR = 2015 all_years = ['{}-{}'.format(i, i+1) for i in range(START_YR, END_YR)] years_in_df = dflfc_scorers_tl_top3_lg.season.unique() missing_years = [s for s in all_years if s not in years_in_df] print 'there are {} missing years, here are first 5: {}'.format(len(missing_years), missing_years[0:5]) # add missing years to dataframe, sort and reset the index dflfc_scorers_tl_top3_lg_full = dflfc_scorers_tl_top3_lg.copy() for s in missing_years: dflfc_scorers_tl_top3_lg_full.loc[len(dflfc_scorers_tl_top3_lg_full)]=(s, np.NaN) dflfc_scorers_tl_top3_lg_full = dflfc_scorers_tl_top3_lg_full.sort('season') dflfc_scorers_tl_top3_lg_full.reset_index(drop=True, 
inplace=True) print len(dflfc_scorers_tl_top3_lg_full) dflfc_scorers_tl_top3_lg_full.head() top_seasons_lg # The aim is to highlight the top 10 trios on the plot, so these need their own column. # create series for top 10 seasons containing the top3 scorers top3_top10 = dflfc_scorers_tl_top3_lg_full.apply(lambda row: row.league if row.season in top_seasons_lg else np.NaN, axis=1) # create series for the other seasons, the ones that don't containing the top 10 top3 scorers top3_other = dflfc_scorers_tl_top3_lg_full.apply(lambda row: np.NaN if row.season in top_seasons_lg else row.league, axis=1) # add these series as columns to the dataframe dflfc_scorers_tl_top3_lg_full['top3_top10'] = top3_top10 dflfc_scorers_tl_top3_lg_full['top3_other'] = top3_other dflfc_scorers_tl_top3_lg_full.tail() # And now plot using different shapes for the top3_top10 and top3_other columns DF = dflfc_scorers_tl_top3_lg_full FIG_SIZE = (9, 6) fig = plt.figure() tot_yrs = len(DF) tot_goals = int(DF.top3_top10.max()) XTICKS = range(0, tot_yrs+10, 10) YTICKS = range(0, tot_goals+30, 10) ax = DF.plot(style='r.', figsize=FIG_SIZE, x='season', y='top3_other', legend=False, rot='vertical', xticks=XTICKS, yticks=YTICKS) DF.plot(ax=ax, style='ro', figsize=FIG_SIZE, x='season', y='top3_top10', legend=False, rot='vertical', xticks=XTICKS, yticks=YTICKS) ax.set_ylabel('total league goals by striking trio') ax.set_xlabel('top level season') ax.set_title('total league goals by LFC striking trio in top level season') ax.text(1, 1, 'prepared by: @terry8dolan') fig = plt.gcf() # save current figure plt.show() fig.savefig('SeasonvsTrioGoals.png', bbox_inches='tight') """ Explanation: 6 out of 10 of seasons with top3 scores did not result in a title. Back to the top trios, let's now plot the data. 
Plot The Top Trios End of explanation """ # create dictionary with Barca stats for 2014-15 # ref: https://en.wikipedia.org/wiki/2014%E2%80%9315_FC_Barcelona_season barca_201415 = {'season': ['2014-2015', '2014-2015', '2014-2015'], 'team': ['FCB', 'FCB', 'FCB'], 'player': ['Messi', 'Neymar', 'Suarez'], 'appearance': [38, 33, 26], 'league': [43, 22, 16]} # create a dataframe from the dict dfb_trio = pd.DataFrame(data=barca_201415, columns=['season', 'team', 'player', 'appearance', 'league']) dfb_trio = dfb_trio.set_index('season') dfb_trio['GPA'] = (dfb_trio.league/dfb_trio.appearance).round(2) dfb_trio['APG'] = (dfb_trio.appearance/dfb_trio.league).round(2) dfb_trio.head() dfb_trio.league.sum() dfb_trio.plot(kind='bar', x='player', y=['appearance', 'league']) """ Explanation: Compare The Striking Trios: LFC (1963-64) with Barcelona (2014-15) Create a new dataframe for the Barca trio End of explanation """ # create dictionary with LFC stats for 1963-64 lfc_196364 = {'season': ['1963-1964', '1963-1964', '1963-1964'], 'team': ['LFC', 'LFC', 'LFC'], 'player': ['Hunt', 'St John', 'Arrowsmith'], 'appearance': [41, 40, 20], 'league': [31, 21, 15]} # create a dataframe from the dict dfl_trio = pd.DataFrame(data=lfc_196364, columns=['season', 'team', 'player', 'appearance', 'league']) dfl_trio = dfl_trio.set_index('season') dfl_trio['GPA'] = (dfl_trio.league/dfl_trio.appearance).round(2) dfl_trio['APG'] = (dfl_trio.appearance/dfl_trio.league).round(2) dfl_trio.head() dfl_trio.league.sum() dfl_trio.plot(kind='bar', x='player', y=['appearance', 'league'], ) """ Explanation: Create a new dataframe for the LFC trio End of explanation """ df_trio = pd.DataFrame() df_trio = pd.concat([dfl_trio, dfb_trio]) df_trio df_trio.sort('APG') df_trio.plot(kind='bar', x='player', y=['appearance', 'league']) """ Explanation: Create a new combined dataframe with LFC and Barca data End of explanation """ FIG_SIZE = (9, 6) fig = plt.figure() # sort the dataframe by league goals df_trio_lg_sorted = df_trio.sort('league', ascending=False) # produce list of colour based on team team_colours = ['r' if team is 'LFC' else 'b' for team in df_trio_lg_sorted.team.values] # plot dataframe ax = df_trio_lg_sorted.plot(kind='bar', x='player', y='league', legend=False, color=['b', 'r', 'b', 'r', 'b', 'r'], title='Total League Goals for Top 3 Strikers: Barca 2014-15 and LFC 1963-64', figsize=FIG_SIZE, ylim=(0, 50)) # set the axis labels ax.set_xlabel('Player') ax.set_ylabel('Total League Goals') # create fake legend l1 = plt.Line2D([], [], linewidth=10, color='b') l2 = plt.Line2D([], [], linewidth=10, color='r') labels = ['FCB 2014-2015', 'LFC 1963-1964'] ax.legend([l1, l2], labels) ax.text(-.4, 48, 'prepared by: @terry8dolan') fig = plt.gcf() # save current figure plt.show() fig.savefig('PlayervsGoals.png', bbox_inches='tight') """ Explanation: Plot goals End of explanation """ FIG_SIZE = (9, 6) fig = plt.figure() # sort the dataframe by GPA df_trio_GPA_sorted = df_trio.sort('GPA', ascending=False) # produce list of colour based on team team_colours = ['r' if team is 'LFC' else 'b' for team in df_trio_GPA_sorted.team.values] # plot the dataframe ax = df_trio_GPA_sorted.plot(kind='bar', x='player', y='GPA', legend=False, color=team_colours, title='League Goals per Game for Top 3 strikers: Barca 2014-15 and LFC 1963-64', figsize=FIG_SIZE, ylim=(0, 1.4)) # set the axis labels ax.set_xlabel('Player') ax.set_ylabel('League Goals Per Game') # create fake legend l1 = plt.Line2D([], [], linewidth=10, color='b') l2 = plt.Line2D([], [], 
linewidth=10, color='r') labels = ['FCB 2014-2015', 'LFC 1963-1964'] ax.legend([l1, l2], labels) ax.text(-.4, 1.35, 'prepared by: @terry8dolan') # save current figure and plot fig = plt.gcf() plt.show() fig.savefig('PlayervsGPG.png', bbox_inches='tight') """ Explanation: Plot Goals per Game End of explanation """ WINNERS = ['1900-1901', '1905-1906', '1921-1922', '1922-1923', '1946-1947', '1963-1964', '1965-1966', '1972-1973', '1975-1976', '1976-1977', '1978-1979', '1979-1980', '1981-1982', '1982-1983', '1983-1984', '1985-1986', '1987-1988', '1989-1990'] dfw = dflfc_scorers_tl_lg[dflfc_scorers_tl_lg.season.isin(WINNERS)].sort(['season', 'league'], ascending=False) # check all 18 title winning seasons have matched len(dfw.season.unique()) # print average number of goals by striker in title winning season dfw_1 = dfw[['season', 'league']].groupby('season').head(1).groupby('season').sum() round(dfw_1.sort('league', ascending=False)['league'].mean()) # print average number of goals by partner in title winning season dfw_2 = dfw[['season', 'league']].groupby('season').head(2).groupby('season').nth(1) round(dfw_2.sort('league', ascending=False)['league'].mean()) # print average number of goals by partnership in title winning season dfw_p = dfw[['season', 'league']].groupby('season').head(2).groupby('season').sum() rp = round(dfw_p.sort('league', ascending=False)['league'].mean()) print "Liverpool's history says that to win the league we need a striker partnership that will score {} goals on average.".format(rp) """ Explanation: Find key data for title winning years End of explanation """
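The title-winning averages above all rely on the same compact pandas idiom: sort the rows, take the top N rows of each group with groupby(...).head(N), then aggregate. The toy dataframe below, with invented seasons and scorers, walks through that pattern step by step; it is written with sort_values, the modern spelling of the older .sort() method used throughout this notebook.

# minimal sketch of the "top-N per group, then aggregate" pattern (invented toy data)
import pandas as pd

toy = pd.DataFrame({'season': ['A', 'A', 'A', 'B', 'B', 'B'],
                    'player': ['p1', 'p2', 'p3', 'q1', 'q2', 'q3'],
                    'league': [20, 15, 5, 18, 12, 9]})

# sort so that head(2) picks each season's two highest scorers
toy = toy.sort_values(['season', 'league'], ascending=[True, False])

top2_per_season = toy.groupby('season').head(2)                 # best two rows per season
partnership_totals = top2_per_season.groupby('season')['league'].sum()
print(partnership_totals)                                       # A: 35, B: 30
print(partnership_totals.mean())                                # (35 + 30) / 2 = 32.5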
chunweixu/Deep-Learning
Time-series/demo_full_notes.ipynb
mit
from IPython.display import Image from IPython.core.display import HTML from __future__ import print_function, division import numpy as np import tensorflow as tf import matplotlib.pyplot as plt Image(url= "https://cdn-images-1.medium.com/max/1600/1*UkI9za9zTR-HL8uM15Wmzw.png") #hyperparams num_epochs = 100 total_series_length = 50000 truncated_backprop_length = 15 state_size = 4 num_classes = 2 echo_step = 3 batch_size = 5 num_batches = total_series_length//batch_size//truncated_backprop_length #Step 1 - Collect data #Now generate the training data, #the input is basically a random binary vector. The output will be the #“echo” of the input, shifted echo_step steps to the right. #Notice the reshaping of the data into a matrix with batch_size rows. #Neural networks are trained by approximating the gradient of loss function #with respect to the neuron-weights, by looking at only a small subset of the data, #also known as a mini-batch.The reshaping takes the whole dataset and puts it into #a matrix, that later will be sliced up into these mini-batches. def generateData(): #0,1, 50K samples, 50% chance each chosen x = np.array(np.random.choice(2, total_series_length, p=[0.5, 0.5])) #shift 3 steps to the left y = np.roll(x, echo_step) #padd beginning 3 values with 0 y[0:echo_step] = 0 #Gives a new shape to an array without changing its data. #The reshaping takes the whole dataset and puts it into a matrix, #that later will be sliced up into these mini-batches. x = x.reshape((batch_size, -1)) # The first index changing slowest, subseries as rows y = y.reshape((batch_size, -1)) return (x, y) data = generateData() print(data) #Schematic of the reshaped data-matrix, arrow curves shows adjacent time-steps that ended up on different rows. #Light-gray rectangle represent a “zero” and dark-gray a “one”. Image(url= "https://cdn-images-1.medium.com/max/1600/1*aFtwuFsboLV8z5PkEzNLXA.png") #TensorFlow works by first building up a computational graph, that #specifies what operations will be done. The input and output of this graph #is typically multidimensional arrays, also known as tensors. #The graph, or parts of it can then be executed iteratively in a #session, this can either be done on the CPU, GPU or even a resource #on a remote server. #operations and tensors #The two basic TensorFlow data-structures that will be used in this #example are placeholders and variables. On each run the batch data #is fed to the placeholders, which are “starting nodes” of the #computational graph. Also the RNN-state is supplied in a placeholder, #which is saved from the output of the previous run. #Step 2 - Build the Model #datatype, shape (5, 15) 2D array or matrix, batch size shape for later batchX_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length]) batchY_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length]) #and one for the RNN state, 5,4 init_state = tf.placeholder(tf.float32, [batch_size, state_size]) #The weights and biases of the network are declared as TensorFlow variables, #which makes them persistent across runs and enables them to be updated #incrementally for each batch. 
#3 layer recurrent net, one hidden state #randomly initialize weights W = tf.Variable(np.random.rand(state_size+1, state_size), dtype=tf.float32) #anchor, improves convergance, matrix of 0s b = tf.Variable(np.zeros((1,state_size)), dtype=tf.float32) W2 = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32) b2 = tf.Variable(np.zeros((1,num_classes)), dtype=tf.float32) """ Explanation: In this tutorial I’ll explain how to build a simple working Recurrent Neural Network in TensorFlow! We will build a simple Echo-RNN that remembers the input sequence and then echoes it after a few time-steps. This will help us understand how memory works We are mapping two sequences! What is an RNN? It is short for “Recurrent Neural Network”, and is basically a neural network that can be used when your data is treated as a sequence, where the particular order of the data-points matter. More importantly, this sequence can be of arbitrary length. The most straight-forward example is perhaps a time-seriedems of numbers, where the task is to predict the next value given previous values. The input to the RNN at every time-step is the current value as well as a state vector which represent what the network has “seen” at time-steps before. This state-vector is the encoded memory of the RNN, initially set to zero. Great paper on this https://arxiv.org/pdf/1506.00019.pdf End of explanation """ Image(url= "https://cdn-images-1.medium.com/max/1600/1*n45uYnAfTDrBvG87J-poCA.jpeg") #Now it’s time to build the part of the graph that resembles the actual RNN computation, #first we want to split the batch data into adjacent time-steps. # Unpack columns #Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors. #so a bunch of arrays, 1 batch per time step inputs_series = tf.unpack(batchX_placeholder, axis=1) labels_series = tf.unpack(batchY_placeholder, axis=1) """ Explanation: The figure below shows the input data-matrix, and the current batch batchX_placeholder is in the dashed rectangle. As we will see later, this “batch window” is slided truncated_backprop_length steps to the right at each run, hence the arrow. In our example below batch_size = 3, truncated_backprop_length = 3, and total_series_length = 36. Note that these numbers are just for visualization purposes, the values are different in the code. The series order index is shown as numbers in a few of the data-points. End of explanation """ Image(url= "https://cdn-images-1.medium.com/max/1600/1*f2iL4zOkBUBGOpVE7kyajg.png") #Schematic of the current batch split into columns, the order index is shown on each data-point #and arrows show adjacent time-steps. """ Explanation: As you can see in the picture below that is done by unpacking the columns (axis = 1) of the batch into a Python list. The RNN will simultaneously be training on different parts in the time-series; steps 4 to 6, 16 to 18 and 28 to 30 in the current batch-example. The reason for using the variable names “plural”_”series” is to emphasize that the variable is a list that represent a time-series with multiple entries at each step. 
End of explanation """ #Forward pass #state placeholder current_state = init_state #series of states through time states_series = [] #for each set of inputs #forward pass through the network to get new state value #store all states in memory for current_input in inputs_series: #format input current_input = tf.reshape(current_input, [batch_size, 1]) #mix both state and input data input_and_state_concatenated = tf.concat(1, [current_input, current_state]) # Increasing number of columns #perform matrix multiplication between weights and input, add bias #squash with a nonlinearity, for probabiolity value next_state = tf.tanh(tf.matmul(input_and_state_concatenated, W) + b) # Broadcasted addition #store the state in memory states_series.append(next_state) #set current state to next one current_state = next_state """ Explanation: The fact that the training is done on three places simultaneously in our time-series, requires us to save three instances of states when propagating forward. That has already been accounted for, as you see that the init_state placeholder has batch_size rows. End of explanation """ Image(url= "https://cdn-images-1.medium.com/max/1600/1*fdwNNJ5UOE3Sx0R_Cyfmyg.png") """ Explanation: Notice the concatenation on line 6, what we actually want to do is calculate the sum of two affine transforms current_input * Wa + current_state * Wb in the figure below. By concatenating those two tensors you will only use one matrix multiplication. The addition of the bias b is broadcasted on all samples in the batch. End of explanation """ #calculate loss #second part of forward pass #logits short for logistic transform logits_series = [tf.matmul(state, W2) + b2 for state in states_series] #Broadcasted addition #apply softmax nonlinearity for output probability predictions_series = [tf.nn.softmax(logits) for logits in logits_series] #measure loss, calculate softmax again on logits, then compute cross entropy #measures the difference between two probability distributions #this will return A Tensor of the same shape as labels and of the same type as logits #with the softmax cross entropy loss. losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels) for logits, labels in zip(logits_series,labels_series)] #computes average, one value total_loss = tf.reduce_mean(losses) #use adagrad to minimize with .3 learning rate #minimize it with adagrad, not SGD #One downside of SGD is that it is sensitive to #the learning rate hyper-parameter. When the data are sparse and features have #different frequencies, a single learning rate for every weight update can have #exponential regret. #Some features can be extremely useful and informative to an optimization problem but #they may not show up in most of the training instances or data. If, when they do show up, #they are weighted equally in terms of learning rate as a feature that has shown up hundreds #of times we are practically saying that the influence of such features means nothing in the #overall optimization. it's impact per step in the stochastic gradient descent will be so small #that it can practically be discounted). To counter this, AdaGrad makes it such that features #that are more sparse in the data have a higher learning rate which translates into a larger #update for that feature #sparse features can be very useful. #Each feature has a different learning rate which is adaptable. 
#gives voice to the little guy who matters a lot #weights that receive high gradients will have their effective learning rate reduced, #while weights that receive small or infrequent updates will have their effective learning rate increased. #great paper http://seed.ucsd.edu/mediawiki/images/6/6a/Adagrad.pdf train_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss) """ Explanation: You may wonder the variable name truncated_backprop_length is supposed to mean. When a RNN is trained, it is actually treated as a deep neural network with reoccurring weights in every layer. These layers will not be unrolled to the beginning of time, that would be too computationally expensive, and are therefore truncated at a limited number of time-steps. In our sample schematics above, the error is backpropagated three steps in our batch End of explanation """ #visualizer def plot(loss_list, predictions_series, batchX, batchY): plt.subplot(2, 3, 1) plt.cla() plt.plot(loss_list) for batch_series_idx in range(5): one_hot_output_series = np.array(predictions_series)[:, batch_series_idx, :] single_output_series = np.array([(1 if out[0] < 0.5 else 0) for out in one_hot_output_series]) plt.subplot(2, 3, batch_series_idx + 2) plt.cla() plt.axis([0, truncated_backprop_length, 0, 2]) left_offset = range(truncated_backprop_length) plt.bar(left_offset, batchX[batch_series_idx, :], width=1, color="blue") plt.bar(left_offset, batchY[batch_series_idx, :] * 0.5, width=1, color="red") plt.bar(left_offset, single_output_series * 0.3, width=1, color="green") plt.draw() plt.pause(0.0001) """ Explanation: The last line is adding the training functionality, TensorFlow will perform back-propagation for us automatically — the computation graph is executed once for each mini-batch and the network-weights are updated incrementally. Notice the API call to sparse_softmax_cross_entropy_with_logits, it automatically calculates the softmax internally and then computes the cross-entropy. In our example the classes are mutually exclusive (they are either zero or one), which is the reason for using the “Sparse-softmax”, you can read more about it in the API. The usage is to havelogits is of shape [batch_size, num_classes] and labels of shape [batch_size]. End of explanation """ #Step 3 Training the network with tf.Session() as sess: #we stupidly have to do this everytime, it should just know #that we initialized these vars. v2 guys, v2.. 
sess.run(tf.initialize_all_variables()) #interactive mode plt.ion() #initialize the figure plt.figure() #show the graph plt.show() #to show the loss decrease loss_list = [] for epoch_idx in range(num_epochs): #generate data at eveery epoch, batches run in epochs x,y = generateData() #initialize an empty hidden state _current_state = np.zeros((batch_size, state_size)) print("New data, epoch", epoch_idx) #each batch for batch_idx in range(num_batches): #starting and ending point per batch #since weights reoccuer at every layer through time #These layers will not be unrolled to the beginning of time, #that would be too computationally expensive, and are therefore truncated #at a limited number of time-steps start_idx = batch_idx * truncated_backprop_length end_idx = start_idx + truncated_backprop_length batchX = x[:,start_idx:end_idx] batchY = y[:,start_idx:end_idx] #run the computation graph, give it the values #we calculated earlier _total_loss, _train_step, _current_state, _predictions_series = sess.run( [total_loss, train_step, current_state, predictions_series], feed_dict={ batchX_placeholder:batchX, batchY_placeholder:batchY, init_state:_current_state }) loss_list.append(_total_loss) if batch_idx%100 == 0: print("Step",batch_idx, "Loss", _total_loss) plot(loss_list, _predictions_series, batchX, batchY) plt.ioff() plt.show() """ Explanation: There is a visualization function so we can se what’s going on in the network as we train. It will plot the loss over the time, show training input, training output and the current predictions by the network on different sample series in a training batch. End of explanation """ Image(url= "https://cdn-images-1.medium.com/max/1600/1*uKuUKp_m55zAPCzaIemucA.png") """ Explanation: You can see that we are moving truncated_backprop_length steps forward on each iteration (line 15–19), but it is possible have different strides. This subject is further elaborated in this article. The downside with doing this is that truncated_backprop_length need to be significantly larger than the time dependencies (three steps in our case) in order to encapsulate the relevant training data. Otherwise there might a lot of “misses”, as you can see on the figure below. End of explanation """ Image(url= "https://cdn-images-1.medium.com/max/1600/1*ytquMdmGMJo0-3kxMCi1Gg.png") """ Explanation: Time series of squares, the elevated black square symbolizes an echo-output, which is activated three steps from the echo input (black square). The sliding batch window is also striding three steps at each run, which in our sample case means that no batch will encapsulate the dependency, so it can not train. The network will be able to exactly learn the echo behavior so there is no need for testing data. The program will update the plot as training progresses, Blue bars denote a training input signal (binary one), red bars show echos in the training output and green bars are the echos the net is generating. The different bar plots show different sample series in the current batch. Fully trained at 100 epochs look like this End of explanation """
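One step in the forward pass is worth isolating: the claim that concatenating the input column with the previous state and multiplying by a single weight matrix W is exactly the sum of two affine transforms, x_t * Wa + s_prev * Wb. The NumPy-only sketch below checks that equivalence on random toy numbers; it is independent of the graph built above, which itself uses pre-1.0 TensorFlow API names such as tf.unpack, tf.concat(1, ...) and tf.initialize_all_variables (renamed tf.unstack, tf.concat(..., axis=...) and tf.global_variables_initializer in later releases).

# NumPy-only check of the concatenation trick used in the forward pass (toy numbers)
import numpy as np

batch_size, state_size = 5, 4
rng = np.random.RandomState(0)

W = rng.rand(state_size + 1, state_size)       # same shape as the tf.Variable W above
b = np.zeros((1, state_size))
Wa, Wb = W[:1, :], W[1:, :]                    # first row acts on the input, the rest on the state

x_t = rng.rand(batch_size, 1)                  # one input value per sample in the batch
s_prev = np.zeros((batch_size, state_size))    # previous state, e.g. the initial zero state

concat_version = np.tanh(np.hstack([x_t, s_prev]).dot(W) + b)
two_matmul_version = np.tanh(x_t.dot(Wa) + s_prev.dot(Wb) + b)
print(np.allclose(concat_version, two_matmul_version))          # True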
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/hadgem3-gc31-ll/aerosol.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'aerosol') """ Explanation: ES-DOC CMIP6 Model Properties - Aerosol MIP Era: CMIP6 Institute: MOHC Source ID: HADGEM3-GC31-LL Topic: Aerosol Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. Properties: 69 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:14 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of aerosol model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Prognostic variables in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of tracers in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are aerosol calculations generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the time evolution of the prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. 
Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the aerosol model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.5. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Variables 2D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Two dimensionsal forcing variables, e.g. land-sea mask definition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Frequency Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Frequency with which meteological forcings are applied (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Transport Aerosol transport 7.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of transport in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) """ Explanation: 7.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for aerosol transport modeling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.3. Mass Conservation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to ensure mass conservation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.4. Convention Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Transport by convention End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of emissions in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the aerosol species are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) """ Explanation: 8.4. Prescribed Climatology Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify the climatology type for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Other Method Characteristics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Characteristics of the &quot;other method&quot; used for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Concentrations Atmospheric aerosol concentrations 9.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of concentrations in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as mass mixing ratios. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of optical and radiative properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Dust Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.3. 
Organics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there external mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12.2. Internal Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there internal mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Mixing Rule Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixinrg rule End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact size? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact internal mixture? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Shortwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of shortwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. 
Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol-cloud interactions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.2. Twomey Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the Twomey effect included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Twomey Minimum Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.4. Drizzle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect drizzle? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Cloud Lifetime Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect cloud lifetime? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.6. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Model Aerosol model 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) """ Explanation: 16.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the Aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other model components coupled to the Aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.4. Gas Phase Precursors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of gas phase aerosol precursors. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.5. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.6. Bulk Scheme Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of species covered by the bulk scheme. End of explanation """
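The property cells above all follow the same two-step pattern: point DOC at a property with set_id, then record a value with set_value (quoted for STRING and ENUM properties, unquoted for INTEGER, FLOAT and BOOLEAN ones), with ENUM values copied verbatim from the valid-choice lists shown in the comments. The short sketch below illustrates how a few of those cells might look once completed; every value in it is a made-up placeholder for illustration only, not the actual HADGEM3-GC31-LL aerosol configuration.

# Illustrative placeholders only -- NOT the real HADGEM3-GC31-LL settings.
DOC.set_author("Jane Modeller", "jane.modeller@example.org")   # hypothetical author

DOC.set_id('cmip6.aerosol.key_properties.model_name')
DOC.set_value("ExampleAerosolScheme v1.0")                     # STRING, placeholder

DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
DOC.set_value("troposhere")     # ENUM: spelling copied verbatim from the valid-choice list

DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
DOC.set_value(7)                                               # INTEGER, placeholder

DOC.set_id('cmip6.aerosol.key_properties.family_approach')
DOC.set_value(False)                                           # BOOLEAN, placeholder

Once the required properties are filled in, the publication cell near the top of the notebook (DOC.set_publication_status, 0 = do not publish, 1 = publish) controls whether the completed document is published.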
geoneill12/phys202-2015-work
assignments/assignment03/NumpyEx03.ipynb
mit
import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import antipackage import github.ellisonbg.misc.vizarray as va """ Explanation: Numpy Exercise 3 Imports End of explanation """ def brownian(maxt, n): """Return one realization of a Brownian (Wiener) process with n steps and a max time of t.""" t = np.linspace(0.0,maxt,n) h = t[1]-t[0] Z = np.random.normal(0.0,1.0,n-1) dW = np.sqrt(h)*Z W = np.zeros(n) W[1:] = dW.cumsum() return t, W """ Explanation: Geometric Brownian motion Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process. End of explanation """ c = np.array(brownian(1.0, 1000)) c t = c[0,:] W = c[1,:] print t print W assert isinstance(t, np.ndarray) assert isinstance(W, np.ndarray) assert t.dtype==np.dtype(float) assert W.dtype==np.dtype(float) assert len(t)==len(W)==1000 """ Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W. End of explanation """ plt.plot(t, W) plt.xlabel('t') plt.ylabel('W(t)') assert True # this is for grading """ Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes. End of explanation """ dW = np.diff(W) y = dW.mean() z = dW.std() assert len(dW)==len(W)-1 assert dW.dtype==np.dtype(float) """ Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences. End of explanation """ def geo_brownian(t, W, X0, mu, sigma): """Return X(t) for geometric brownian motion with drift mu, volatility sigma.""" X_t = X0 * np.exp((mu - 0.5*sigma**2) * t + sigma * W) return X_t assert True # leave this for grading """ Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation: $$ X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))} $$ Use Numpy ufuncs and no loops in your function. End of explanation """ plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3)) plt.xlabel('t') plt.ylabel('X(t)') assert True # leave this for grading """ Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above. Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. End of explanation """
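Two quick consequences of the definitions above can be checked numerically: the increments dW returned by np.diff should behave like independent normal draws with mean 0 and standard deviation sqrt(h), where h is the step size used inside brownian, and since W(0) = 0 the geometric Brownian motion should start exactly at X0 and remain positive. The cell below is an optional sanity-check sketch, not part of the original assignment; it reuses t, W, dW and geo_brownian from the cells above, and the tolerances are loose, arbitrary choices.

# Optional sanity checks -- not part of the graded assignment.
h = t[1] - t[0]                                      # step size used inside brownian()
assert abs(dW.mean()) < 5.0*np.sqrt(h/len(dW))       # sample mean of increments is ~ 0
assert abs(dW.std() - np.sqrt(h)) < 0.1*np.sqrt(h)   # sample std of increments is ~ sqrt(h)
X = geo_brownian(t, W, 1.0, 0.5, 0.3)
assert np.isclose(X[0], 1.0)                         # W(0) = 0, so X(0) = X0
assert np.all(X > 0.0)                               # the exponential keeps X(t) positive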
jhprinz/openpathsampling
examples/alanine_dipeptide_tps/AD_tps_1b_trajectory.ipynb
lgpl-2.1
import openpathsampling as paths """ Explanation: This notebook is part of the fixed length TPS example. It requires the file alanine_dipeptide_tps_equil.nc, which is written in the notebook alanine_dipeptide_tps_first_traj.ipynb. In this notebook, you will learn: * how to set up a FixedLengthTPSNetwork * how to extend a transition path to satisfy the fixed length TPS ensemble * how to save specific objects to a file End of explanation """ old_storage = paths.Storage("tps_nc_files/alanine_dipeptide_tps_equil.nc", "r") engine = old_storage.engines[0] C_7eq = old_storage.volumes.find('C_7eq') alpha_R = old_storage.volumes.find('alpha_R') traj = old_storage.samplesets[len(old_storage.samplesets)-1][0].trajectory phi = old_storage.cvs.find('phi') psi = old_storage.cvs.find('psi') template = old_storage.snapshots[0] """ Explanation: Loading from storage First, we open the file we made in alanine_dipeptide_tps_first_traj.ipynb and load various things we need from that. End of explanation """ network = paths.FixedLengthTPSNetwork(C_7eq, alpha_R, length=400) trajectories = [] i=0 while len(trajectories) == 0 and i < 5: max_len = 200 + i*50 fwd_traj = engine.generate(traj[-1], [lambda traj, foo: len(traj) < max_len]) bkwd_traj = engine.generate(traj[0], [lambda traj, foo: len(traj) < max_len], direction=-1) new_traj = bkwd_traj[:-1] + traj + fwd_traj[1:] trajectories = network.sampling_ensembles[0].split(new_traj) print trajectories i += 1 # raises an error if we still haven't found a suitable trajectory trajectory = trajectories[0] """ Explanation: Building a trajectory to suit the ensemble We're starting from a trajectory that makes the transition. However, we need that trajectory to be longer than it is. There's an important subtlety here: we can't just extend the trajectory in one direction until it satisfies our length requirement, because it is very possible that the final frame would be in the no-man's-land that isn't in either state, and then it wouldn't satisfy the ensemble. (Additionally, without a shifting move, having the transition at the far edge of the trajectory time could be problematic.) So our approach here is to extend the trajectory in either direction by half the fixed length. That gives us a total trajectory length of the fixed length plus the length of the original trajectory. Within this trajectory, we try to find a subtrajectory that satisfies our ensemble. If we don't, then we add more frames to each side and try again. End of explanation """ # Imports for plotting %matplotlib inline import matplotlib.pyplot as plt plt.plot(phi(trajectory), psi(trajectory)) plt.plot(phi(traj), psi(traj)) """ Explanation: Plot the trajectory This is exactly as done in alanine_dipeptide_tps_first_traj.ipynb. End of explanation """ # save stuff storage = paths.Storage("tps_nc_files/alanine_dipeptide_fixed_tps_traj.nc", "w", old_storage.snapshots[0]) storage.save(engine) storage.save(C_7eq) storage.save(alpha_R) storage.save(phi) storage.save(psi) storage.save(trajectory) storage.sync() """ Explanation: Save stuff When we do path sampling, the PathSampling object automatically handles saving for us. However, we can also save things explicitly. Saving works in two steps: first you mark an object as being something to save with storage.save(object). But at this point, the object is not actually stored to disk. That only happens after storage.sync(). End of explanation """
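Since saving is a two-step mark-then-sync process, a natural follow-up is to reopen the new file read-only, just as alanine_dipeptide_tps_equil.nc was opened at the top of this notebook, and confirm the objects come back. The cell below is a sketch, not part of the original example: it assumes the write handle is closed first, and it assumes the new storage exposes a trajectories store alongside the volumes, engines and cvs stores already used above.

# Optional check -- a sketch, with the assumptions noted above.
storage.close()                                   # assumption: release the write handle first
check = paths.Storage("tps_nc_files/alanine_dipeptide_fixed_tps_traj.nc", "r")
reloaded = check.trajectories[0]                  # assumed `trajectories` store
assert len(reloaded) == len(trajectory)           # same fixed-length transition path
print check.volumes.find('C_7eq'), check.volumes.find('alpha_R')
check.close()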
ES-DOC/esdoc-jupyterhub
notebooks/cccr-iitm/cmip6/models/sandbox-1/aerosol.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'aerosol') """ Explanation: ES-DOC CMIP6 Model Properties - Aerosol MIP Era: CMIP6 Institute: CCCR-IITM Source ID: SANDBOX-1 Topic: Aerosol Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. Properties: 69 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:48 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of aerosol model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Prognostic variables in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of tracers in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are aerosol calculations generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the time evolution of the prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. 
Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the aerosol model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.5. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Variables 2D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Two dimensionsal forcing variables, e.g. land-sea mask definition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Frequency Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Frequency with which meteological forcings are applied (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Transport Aerosol transport 7.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of transport in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) """ Explanation: 7.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for aerosol transport modeling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.3. Mass Conservation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to ensure mass conservation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.4. Convention Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Transport by convention End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of emissions in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the aerosol species are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) """ Explanation: 8.4. Prescribed Climatology Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify the climatology type for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Other Method Characteristics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Characteristics of the &quot;other method&quot; used for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Concentrations Atmospheric aerosol concentrations 9.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of concentrations in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as mass mixing ratios. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of optical and radiative properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Dust Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.3. 
Organics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there external mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12.2. Internal Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there internal mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Mixing Rule Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixinrg rule End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact size? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact internal mixture? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Shortwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of shortwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. 
Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol-cloud interactions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.2. Twomey Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the Twomey effect included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Twomey Minimum Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.4. Drizzle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect drizzle? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Cloud Lifetime Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect cloud lifetime? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.6. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Model Aerosol model 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) """ Explanation: 16.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the Aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other model components coupled to the Aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.4. Gas Phase Precursors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of gas phase aerosol precursors. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.5. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.6. Bulk Scheme Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of species covered by the bulk scheme. End of explanation """
planetlabs/notebooks
jupyter-notebooks/label-data/label_maker_pl_mosaic.ipynb
apache-2.0
import json import os import ipyleaflet as ipyl import ipywidgets as ipyw from IPython.display import Image import numpy as np """ Explanation: Creating Labeled Data from a Planet Mosaic with Label Maker In this notebook, we create labeled data for training a machine learning algorithm. As inputs, we use OpenStreetMap as the ground truth source and a Planet mosaic as the image source. Development Seed's Label Maker tool is used to download and prepare the ground truth data, chip the Planet imagery, and package the two to feed into the training process. The primary interface for Label Maker is through the command-line interface (cli). It is configured through the creation of a configuration file. More information about that configuration file and command line usage can be found in the Label Maker repo README. RUNNING NOTE This notebook is meant to be run in a docker image specific to this folder. The docker image must be built from the custom Dockerfile according to the directions below. In label-data directory: docker build -t planet-notebooks:label . Then start up the docker container as you usually would, specifying planet-notebooks:label as the image. Install Dependencies In addition to the python packages imported below, the label-maker python package is also a dependency. However, it's primary usage is through the command-line interface (cli), so we use juypter notebook bash magic to run label-maker via the cli instead of importing the python package. End of explanation """ # Planet tile server base URL (Planet Explorer Mosaics Tiles) mosaic = 'global_monthly_2018_02_mosaic' mosaicsTilesURL_base = 'https://tiles.planet.com/basemaps/v1/planet-tiles/{}/gmap/{{z}}/{{x}}/{{y}}.png'.format(mosaic) mosaicsTilesURL_base # Planet tile server url with auth planet_api_key = os.environ['PL_API_KEY'] planet_mosaic = mosaicsTilesURL_base + '?api_key=' + planet_api_key # url is not printed because it will show private api key """ Explanation: Define Mosaic Parameters In this tutorial, we use the Planet mosaic tile service. There are many mosaics to choose from. For a list of mosaics available, visit https://api.planet.com/basemaps/v1/mosaics. We first build the url for the xyz basemap tile service, then we add authorization in the form of the Planet API key. End of explanation """ # create data directory data_dir = os.path.join('data', 'label-maker-mosaic') if not os.path.isdir(data_dir): os.makedirs(data_dir) # label-maker doesn't clean up, so start with a clean slate !cd $data_dir && rm -R * # create config file bounding_box = [1.09725, 6.05520, 1.34582, 6.30915] config = { "country": "togo", "bounding_box": bounding_box, "zoom": 15, "classes": [ { "name": "Roads", "filter": ["has", "highway"] }, { "name": "Buildings", "filter": ["has", "building"] } ], "imagery": planet_mosaic, "background_ratio": 1, "ml_type": "classification" } # define project files and folders config_filename = os.path.join(data_dir, 'config.json') # write config file with open(config_filename, 'w') as cfile: cfile.write(json.dumps(config)) print('wrote config to {}'.format(config_filename)) """ Explanation: Prepare label maker config file This config file is pulled from the label-maker repo README.md example and then customized to utilize the Planet mosaic. The imagery url is set to the Planet mosaic url and the zoom is changed to 15, the maximum zoom supported by the Planet tile services. See the label-maker README.md file for a description of the config entries. 
End of explanation """ # calculate center of map bounds_lat = [bounding_box[1], bounding_box[3]] bounds_lon = [bounding_box[0], bounding_box[2]] def calc_center(bounds): return bounds[0] + (bounds[1] - bounds[0])/2 map_center = [calc_center(bounds_lat), calc_center(bounds_lon)] # lat/lon print(bounding_box) print(map_center) # create and visualize mosaic at approximately the same bounds as defined in the config file map_zoom = 12 layout=ipyw.Layout(width='800px', height='800px') # set map layout mosaic_map = ipyl.Map(center=map_center, zoom=map_zoom, layout=layout) mosaic_map.add_layer(ipyl.TileLayer(url=planet_mosaic)) mosaic_map mosaic_map.bounds """ Explanation: Visualize Mosaic at config area of interest End of explanation """ !cd $data_dir && label-maker download """ Explanation: Download OSM tiles In this step, label-maker downloads the OSM vector tiles for the country specified in the config file. According to Label Maker documentation, these can be visualized with mbview. So far I have not been successful getting mbview to work. I will keep on trying and would love to hear how you got this to work! End of explanation """ !cd $data_dir && label-maker labels """ Explanation: Create ground-truth labels from OSM tiles In this step, the OSM tiles are chipped into label tiles at the zoom level specified in the config file. Also, a geojson file is created for visual inspection. End of explanation """ # !cd $data_dir && label-maker preview -n 3 # !ls $data_dir/data/examples # for fclass in ('Roads', 'Buildings'): # example_dir = os.path.join(data_dir, 'data', 'examples', fclass) # print(example_dir) # for img in os.listdir(example_dir): # print(img) # display(Image(os.path.join(example_dir, img))) """ Explanation: Visualizing classification.geojson in QGIS gives: Although Label Maker doesn't tell us which classes line up with the labels (see the legend in the visualization for labels), it looks like the following relationships hold: - (1,0,0) - no roads or buildings - (0,1,1) - both roads and buildings - (0,0,1) - only buildings - (0,1,0) - only roads Most of the large region with no roads or buildings at the bottom portion of the image is the water off the coast. Preview image chips Create a subset of the image chips for preview before creating them all. Preview chips are placed in subdirectories named after each class specified in the config file. NOTE This section is commented out because preview fails due to imagery-offset arg. See more: https://github.com/developmentseed/label-maker/issues/79 End of explanation """ !cd $data_dir && label-maker images # look at three tiles that were generated tiles_dir = os.path.join(data_dir, 'data', 'tiles') print(tiles_dir) for img in os.listdir(tiles_dir)[:3]: print(img) display(Image(os.path.join(tiles_dir, img))) """ Explanation: Other than the fact that 4 tiles were created instead of the specified 3, the results look pretty good! All Road examples have roads, and all Building examples have buildings. Create image tiles In this step, we invoke label-maker images, which downloads and chips the mosaic into tiles that match the label tiles. Interestingly, only 372 image tiles are downloaded, while 576 label tiles were generated. Looking at the label tile generation output (370 Road tiles, 270 Building tiles) along with the classification.geojson visualization (only two tiles that are Building and not Road), we find that there are only 372 label tiles that represent at least one of the Road/Building classes. 
This is why only 372 image tiles were generated. End of explanation """ # will not be able to open image tiles that weren't generated because the label tiles contained no classes !cd $data_dir && label-maker package """ Explanation: Package tiles and labels Convert the image and label tiles into train and test datasets. End of explanation """ data_file = os.path.join(data_dir, 'data', 'data.npz') data = np.load(data_file) for k in data.keys(): print('data[\'{}\'] shape: {}'.format(k, data[k].shape)) """ Explanation: Check Package Let's load the packaged data and look at the train and test datasets. End of explanation """
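With the data packaged, a natural next step is to train a small classifier on it. The sketch below is not part of the Label Maker workflow above: it assumes TensorFlow/Keras is available in the environment and that data.npz uses the usual label-maker key names (x_train, y_train, x_test, y_test); confirm them against the data.keys() printout before relying on this. Because a single tile can contain both roads and buildings, the output layer uses independent sigmoids (multi-label classification).

import tensorflow as tf  # assumption: TensorFlow is installed in this environment

# Key names below are assumptions; verify them with the data.keys() output above.
x_train, y_train = data['x_train'] / 255.0, data['y_train']
x_test, y_test = data['x_test'] / 255.0, data['y_test']

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=x_train.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(y_train.shape[1], activation='sigmoid'),  # one sigmoid per class
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))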
mne-tools/mne-tools.github.io
0.19/_downloads/ad79868fcd6af353ce922b8a3a2fc362/plot_30_info.ipynb
bsd-3-clause
import os import mne sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_filt-0-40_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file) """ Explanation: The Info data structure This tutorial describes the :class:mne.Info data structure, which keeps track of various recording details, and is attached to :class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked objects. :depth: 2 We'll begin by loading the Python modules we need, and loading the same example data &lt;sample-dataset&gt; we used in the introductory tutorial &lt;tut-overview&gt;: End of explanation """ print(raw.info) """ Explanation: As seen in the introductory tutorial &lt;tut-overview&gt;, when a :class:~mne.io.Raw object is loaded, an :class:~mne.Info object is created automatically, and stored in the raw.info attribute: End of explanation """ info = mne.io.read_info(sample_data_raw_file) print(info) """ Explanation: However, it is not strictly necessary to load the :class:~mne.io.Raw object in order to view or edit the :class:~mne.Info object; you can extract all the relevant information into a stand-alone :class:~mne.Info object using :func:mne.io.read_info: End of explanation """ print(info.keys()) print() # insert a blank line print(info['ch_names']) """ Explanation: As you can see, the :class:~mne.Info object keeps track of a lot of information about: the recording system (gantry angle, HPI details, sensor digitizations, channel names, ...) the experiment (project name and ID, subject information, recording date, experimenter name or ID, ...) the data (sampling frequency, applied filter frequencies, bad channels, projectors, ...) The complete list of fields is given in :class:the API documentation &lt;mne.Info&gt;. Querying the Info object ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The fields in a :class:~mne.Info object act like Python :class:dictionary &lt;dict&gt; keys, using square brackets and strings to access the contents of a field: End of explanation """ print(info['chs'][0].keys()) """ Explanation: Most of the fields contain :class:int, :class:float, or :class:list data, but the chs field bears special mention: it contains a list of dictionaries (one :class:dict per channel) containing everything there is to know about a channel other than the data it recorded. Normally it is not necessary to dig into the details of the chs field — various MNE-Python functions can extract the information more cleanly than iterating over the list of dicts yourself — but it can be helpful to know what is in there. Here we show the keys for the first channel's :class:dict: End of explanation """ print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005'])) print(mne.pick_channels(info['ch_names'], include=[], exclude=['MEG 0312', 'EEG 005'])) """ Explanation: Obtaining subsets of channels ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ It is often useful to convert between channel names and the integer indices identifying rows of the data array where those channels' measurements are stored. The :class:~mne.Info object is useful for this task; two convenience functions that rely on the :class:mne.Info object for picking channels are :func:mne.pick_channels and :func:mne.pick_types. 
:func:~mne.pick_channels minimally takes a list of all channel names and a list of channel names to include; it is also possible to provide an empty list to include and specify which channels to exclude instead: End of explanation """ print(mne.pick_types(info, meg=False, eeg=True, exclude=[])) """ Explanation: :func:~mne.pick_types works differently, since channel type cannot always be reliably determined from channel name alone. Consequently, :func:~mne.pick_types needs an :class:~mne.Info object instead of just a list of channel names, and has boolean keyword arguments for each channel type. Default behavior is to pick only MEG channels (and MEG reference channels if present) and exclude any channels already marked as "bad" in the bads field of the :class:~mne.Info object. Therefore, to get all and only the EEG channel indices (including the "bad" EEG channels) we must pass meg=False and exclude=[]: End of explanation """ print(mne.pick_channels_regexp(info['ch_names'], '^E.G')) """ Explanation: Note that the meg and fnirs parameters of :func:~mne.pick_types accept strings as well as boolean values, to allow selecting only magnetometer or gradiometer channels (via meg='mag' or meg='grad') or to pick only oxyhemoglobin or deoxyhemoglobin channels (via fnirs='hbo' or fnirs='hbr', respectively). A third way to pick channels from an :class:~mne.Info object is to apply regular expression_ matching to the channel names using :func:mne.pick_channels_regexp. Here the ^ represents the beginning of the string and . character matches any single character, so both EEG and EOG channels will be selected: End of explanation """ print(mne.channel_type(info, 25)) """ Explanation: :func:~mne.pick_channels_regexp can be especially useful for channels named according to the 10-20 &lt;ten-twenty_&gt;_ system (e.g., to select all channels ending in "z" to get the midline, or all channels beginning with "O" to get the occipital channels). Note that :func:~mne.pick_channels_regexp uses the Python standard module :mod:re to perform regular expression matching; see the documentation of the :mod:re module for implementation details. <div class="alert alert-danger"><h4>Warning</h4><p>Both :func:`~mne.pick_channels` and :func:`~mne.pick_channels_regexp` operate on lists of channel names, so they are unaware of which channels (if any) have been marked as "bad" in ``info['bads']``. Use caution to avoid accidentally selecting bad channels.</p></div> Obtaining channel type information ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Sometimes it can be useful to know channel type based on its index in the data array. 
For this case, use :func:mne.channel_type, which takes an :class:~mne.Info object and a single integer channel index: End of explanation """ print([mne.channel_type(info, x) for x in (25, 76, 77, 319)]) """ Explanation: To obtain several channel types at once, you could embed :func:~mne.channel_type in a :term:list comprehension: End of explanation """ ch_idx_by_type = mne.channel_indices_by_type(info) print(ch_idx_by_type.keys()) print(ch_idx_by_type['eog']) """ Explanation: Alternatively, you can get the indices of all channels of all channel types present in the data, using :func:~mne.channel_indices_by_type, which returns a :class:dict with channel types as keys, and lists of channel indices as values: End of explanation """ print(info['nchan']) eeg_indices = mne.pick_types(info, meg=False, eeg=True) print(mne.pick_info(info, eeg_indices)['nchan']) """ Explanation: Dropping channels from an Info object ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you want to modify an :class:~mne.Info object by eliminating some of the channels in it, you can use the :func:mne.pick_info function to pick the channels you want to keep and omit the rest: End of explanation """
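As a small follow-up sketch (not part of the original tutorial): the bads field interacts with the picking functions described above, because channels listed in info['bads'] are skipped by :func:~mne.pick_types unless you pass exclude=[]. Using the EEG 005 channel mentioned earlier:

# Mark one channel as bad (illustrative only), then count EEG picks with and without it
info['bads'] = ['EEG 005']
all_eeg = mne.pick_types(info, meg=False, eeg=True, exclude=[])
good_eeg = mne.pick_types(info, meg=False, eeg=True)  # exclude='bads' is the default
print(len(all_eeg), len(good_eeg))  # the second count should be one lower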
mana99/machine-playground
kmeans-image_compression.ipynb
mit
from scipy import misc pic = misc.imread('media/irobot.png') """ Explanation: Image compression with K-means K-means is a clustering algorithm which defines K cluster centroids in the feature space and, by making use of an appropriate distance function, iteratively assigns each example to the closest cluster centroid and each cluster centroid to the mean of points previously assigned to it. In the following example we will make use of K-means clustering to reduce the number of colors contained in an image stored using 24-bit RGB encoding. Overview The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. In a 24-bit encoding, each pixel is represented as three 8-bit unsigned integers (ranging from 0 to 255) that specify the red, green and blue intensity values, resulting in a total of 256*256*256=16,777,216 possible colors. To compress the image, we will reduce this number to 16, assign each color to an index and then each pixel to an index. This process will significantly decrease the amount of space occupied by the image, at the cost of introducing some computational effort. For a 128x128 image: * Uncompressed format: 16,384 px * 24 bits/px = 393,216 bits * Compressed format: 16,384 px * 4 bits/px + 16 clusters * 24 bits/cluster = 65,536 + 385 bits = 65,920 bits (17%) Note that we won't implement directly the K-means algorithm, as we are primarily interested in showing its application in a common scenario, but we'll delegate it to the scikit-learn library. Implementation Import First, let's import the image with scipy: End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.imshow(pic) """ Explanation: Note that PIL (or Pillow) library is also needed to successfully import the image, so pip install it if you have not it installed. Now let's take a look at the image by plotting it with matplotlib: End of explanation """ pic.shape """ Explanation: The image is stored in a 3-dimensional matrix, where the first and second dimension represent the pixel location on the 2-dimensional plan and the third dimension the RGB intensities: End of explanation """ w = pic.shape[0] h = pic.shape[1] X = pic.reshape((w*h,3)) X.shape """ Explanation: Preprocessing We want to flatten the matrix, in order to give it to the clustering algorithm: End of explanation """ from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=16) kmeans.fit(X) """ Explanation: Fitting Now by fitting the KMeans estimator in scikit-learn we can identify the best clusters for the flattened matrix: End of explanation """ kmeans.labels_ np.unique(kmeans.labels_) """ Explanation: We can verify that each pixel has been assigned to a cluster: End of explanation """ kmeans.cluster_centers_ """ Explanation: And we can visualize each cluster centroid: End of explanation """ import numpy as np plt.imshow(np.floor(kmeans.cluster_centers_.reshape((1,16,3))) * (-1)) """ Explanation: Note that cluster centroids are computed as the mean of the features, so we easily end up on decimal values, which are not admitted in a 24 bit representation (three 8-bit unsigned integers ranging from 0 to 255) of the colors. We decide to round them with a floor operation. 
Furthermore, we have to invert the sign of the clusters to visualize them: End of explanation """ labels = kmeans.labels_ clusters = np.floor(kmeans.cluster_centers_) * (-1) """ Explanation: Reconstructing End of explanation """ # Assigning RGB to clusters and reshaping pic_recovered = clusters[labels,:].reshape((w,h,3)) plt.imshow(pic_recovered) """ Explanation: The data contained in clusters and labels define the compressed image and should be stored in a proper format, in order to effectively realize the data compression: * clusters: 16 clusters * 24 bits/cluster * labels: (width x height) px * 4 bits/px To reconstruct the image we assign the RGB values of the cluster centroids to the pixels and reshape the matrix into its original form: End of explanation """ fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5)) axes[0].imshow(pic) axes[1].imshow(pic_recovered) """ Explanation: At the cost of a deterioration in color quality, the space occupied by the image will be significantly smaller. We can compare the original and the compressed image in the following figure: End of explanation """
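To make the storage argument from the overview concrete for this particular image, we can redo the back-of-the-envelope calculation with the actual image dimensions (this is only an estimate of raw bit counts under the 4-bit-index-plus-palette scheme, not the size of any real on-disk format):

n_pixels = w * h
n_clusters = 16
bits_uncompressed = n_pixels * 24                    # 24 bits per pixel
bits_compressed = n_pixels * 4 + n_clusters * 24     # 4-bit indices + 16-colour palette
print("Uncompressed: {} bits".format(bits_uncompressed))
print("Compressed:   {} bits ({:.0%} of the original)".format(
    bits_compressed, bits_compressed / bits_uncompressed))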
google/automl
efficientnetv2/tfhub.ipynb
apache-2.0
import itertools import os import matplotlib.pylab as plt import numpy as np import tensorflow as tf import tensorflow_hub as hub print('TF version:', tf.__version__) print('Hub version:', hub.__version__) print('Phsical devices:', tf.config.list_physical_devices()) def get_hub_url_and_isize(model_name, ckpt_type, hub_type): if ckpt_type == '1k': ckpt_type = '' # json doesn't support empty string else: ckpt_type = '-' + ckpt_type # add '-' as prefix hub_url_map = { 'efficientnetv2-b0': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b0/{hub_type}', 'efficientnetv2-b1': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b1/{hub_type}', 'efficientnetv2-b2': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b2/{hub_type}', 'efficientnetv2-b3': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b3/{hub_type}', 'efficientnetv2-s': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-s/{hub_type}', 'efficientnetv2-m': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-m/{hub_type}', 'efficientnetv2-l': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-l/{hub_type}', 'efficientnetv2-b0-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b0-21k/{hub_type}', 'efficientnetv2-b1-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b1-21k/{hub_type}', 'efficientnetv2-b2-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b2-21k/{hub_type}', 'efficientnetv2-b3-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b3-21k/{hub_type}', 'efficientnetv2-s-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-s-21k/{hub_type}', 'efficientnetv2-m-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-m-21k/{hub_type}', 'efficientnetv2-l-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-l-21k/{hub_type}', 'efficientnetv2-xl-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-xl-21k/{hub_type}', 'efficientnetv2-b0-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b0-21k-ft1k/{hub_type}', 'efficientnetv2-b1-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b1-21k-ft1k/{hub_type}', 'efficientnetv2-b2-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b2-21k-ft1k/{hub_type}', 'efficientnetv2-b3-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b3-21k-ft1k/{hub_type}', 'efficientnetv2-s-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-s-21k-ft1k/{hub_type}', 'efficientnetv2-m-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-m-21k-ft1k/{hub_type}', 'efficientnetv2-l-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-l-21k-ft1k/{hub_type}', 'efficientnetv2-xl-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-xl-21k-ft1k/{hub_type}', # efficientnetv1 'efficientnet_b0': f'https://tfhub.dev/tensorflow/efficientnet/b0/{hub_type}/1', 'efficientnet_b1': f'https://tfhub.dev/tensorflow/efficientnet/b1/{hub_type}/1', 'efficientnet_b2': f'https://tfhub.dev/tensorflow/efficientnet/b2/{hub_type}/1', 'efficientnet_b3': f'https://tfhub.dev/tensorflow/efficientnet/b3/{hub_type}/1', 'efficientnet_b4': f'https://tfhub.dev/tensorflow/efficientnet/b4/{hub_type}/1', 'efficientnet_b5': f'https://tfhub.dev/tensorflow/efficientnet/b5/{hub_type}/1', 'efficientnet_b6': 
f'https://tfhub.dev/tensorflow/efficientnet/b6/{hub_type}/1', 'efficientnet_b7': f'https://tfhub.dev/tensorflow/efficientnet/b7/{hub_type}/1', } image_size_map = { 'efficientnetv2-b0': 224, 'efficientnetv2-b1': 240, 'efficientnetv2-b2': 260, 'efficientnetv2-b3': 300, 'efficientnetv2-s': 384, 'efficientnetv2-m': 480, 'efficientnetv2-l': 480, 'efficientnetv2-xl': 512, 'efficientnet_b0': 224, 'efficientnet_b1': 240, 'efficientnet_b2': 260, 'efficientnet_b3': 300, 'efficientnet_b4': 380, 'efficientnet_b5': 456, 'efficientnet_b6': 528, 'efficientnet_b7': 600, } hub_url = hub_url_map.get(model_name + ckpt_type) image_size = image_size_map.get(model_name, 224) return hub_url, image_size def get_imagenet_labels(filename): labels = [] with open(filename, 'r') as f: for line in f: labels.append(line.split('\t')[1][:-1]) # split and remove line break. return labels """ Explanation: EfficientNetV2 with tf-hub <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://github.com/google/automl/blob/master/efficientnetv2/tfhub.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on github </a> </td><td> <a target="_blank" href="https://colab.sandbox.google.com/github/google/automl/blob/master/efficientnetv2/tfhub.ipynb"> <img width=32px src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <!----<a href="https://tfhub.dev/google/collections/image/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />TF Hub models</a>---> </td> </table> 1.Introduction EfficientNetV2 is a family of classification models, with better accuracy, smaller size, and faster speed than previous models. This doc describes some examples with EfficientNetV2 tfhub. For more details, please visit the official code: https://github.com/google/automl/tree/master/efficientnetv2 2.Select the TF2 SavedModel module to use End of explanation """ # Build model import tensorflow_hub as hub model_name = 'efficientnetv2-s' #@param {type:'string'} ckpt_type = '1k' # @param ['21k-ft1k', '1k'] hub_type = 'classification' # @param ['classification', 'feature-vector'] hub_url, image_size = get_hub_url_and_isize(model_name, ckpt_type, hub_type) tf.keras.backend.clear_session() m = hub.KerasLayer(hub_url, trainable=False) m.build([None, 224, 224, 3]) # Batch input shape. # Download label map file and image labels_map = '/tmp/imagenet1k_labels.txt' image_file = '/tmp/panda.jpg' tf.keras.utils.get_file(image_file, 'https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG') tf.keras.utils.get_file(labels_map, 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/v2/imagenet1k_labels.txt') # preprocess image. image = tf.keras.preprocessing.image.load_img(image_file, target_size=(224, 224)) image = tf.keras.preprocessing.image.img_to_array(image) image = (image - 128.) / 128. 
logits = m(tf.expand_dims(image, 0), False) # Output classes and probability pred = tf.keras.layers.Softmax()(logits) idx = tf.argsort(logits[0])[::-1][:5].numpy() classes = get_imagenet_labels(labels_map) for i, id in enumerate(idx): print(f'top {i+1} ({pred[0][id]*100:.1f}%): {classes[id]} ') from IPython import display display.display(display.Image(image_file)) """ Explanation: 3.Inference with ImageNet 1k/2k checkpoints 3.1 ImageNet1k checkpoint End of explanation """ # Build model import tensorflow_hub as hub model_name = 'efficientnetv2-s' #@param {type:'string'} ckpt_type = '21k' # @param ['21k'] hub_type = 'classification' # @param ['classification', 'feature-vector'] hub_url, image_size = get_hub_url_and_isize(model_name, ckpt_type, hub_type) tf.keras.backend.clear_session() m = hub.KerasLayer(hub_url, trainable=False) m.build([None, 224, 224, 3]) # Batch input shape. # Download label map file and image labels_map = '/tmp/imagenet21k_labels.txt' image_file = '/tmp/panda2.jpeg' tf.keras.utils.get_file(image_file, 'https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG') tf.keras.utils.get_file(labels_map, 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/v2/imagenet21k_labels.txt') # preprocess image. image = tf.keras.preprocessing.image.load_img(image_file, target_size=(224, 224)) image = tf.keras.preprocessing.image.img_to_array(image) image = (image - 128.) / 128. logits = m(tf.expand_dims(image, 0), False) # Output classes and probability pred = tf.keras.activations.sigmoid(logits) # 21k uses sigmoid for multi-label idx = tf.argsort(logits[0])[::-1][:20].numpy() classes = get_imagenet_labels(labels_map) for i, id in enumerate(idx): print(f'top {i+1} ({pred[0][id]*100:.1f}%): {classes[id]} ') if pred[0][id] < 0.5: break from IPython import display display.display(display.Image(image_file)) """ Explanation: 3.2 ImageNet21k checkpoint End of explanation """ # Build model import tensorflow_hub as hub model_name = 'efficientnetv2-b0' #@param {type:'string'} ckpt_type = '1k' # @param ['21k', '21k-ft1k', '1k'] hub_type = 'feature-vector' # @param ['feature-vector'] batch_size = 32#@param {type:"integer"} hub_url, image_size = get_hub_url_and_isize(model_name, ckpt_type, hub_type) """ Explanation: 4.Finetune with Flowers dataset. Get hub_url and image_size End of explanation """ data_dir = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) datagen_kwargs = dict(rescale=1./255, validation_split=.20) dataflow_kwargs = dict(target_size=(image_size, image_size), batch_size=batch_size, interpolation="bilinear") valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator( **datagen_kwargs) valid_generator = valid_datagen.flow_from_directory( data_dir, subset="validation", shuffle=False, **dataflow_kwargs) do_data_augmentation = False #@param {type:"boolean"} if do_data_augmentation: train_datagen = tf.keras.preprocessing.image.ImageDataGenerator( rotation_range=40, horizontal_flip=True, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, **datagen_kwargs) else: train_datagen = valid_datagen train_generator = train_datagen.flow_from_directory( data_dir, subset="training", shuffle=True, **dataflow_kwargs) """ Explanation: Get dataset End of explanation """ # whether to finetune the whole model or just the top layer. 
do_fine_tuning = True #@param {type:"boolean"} num_epochs = 2 #@param {type:"integer"} tf.keras.backend.clear_session() model = tf.keras.Sequential([ # Explicitly define the input shape so the model can be properly # loaded by the TFLiteConverter tf.keras.layers.InputLayer(input_shape=[image_size, image_size, 3]), hub.KerasLayer(hub_url, trainable=do_fine_tuning), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Dense(train_generator.num_classes, kernel_regularizer=tf.keras.regularizers.l2(0.0001)) ]) model.build((None, image_size, image_size, 3)) model.summary() model.compile( optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9), loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1), metrics=['accuracy']) steps_per_epoch = train_generator.samples // train_generator.batch_size validation_steps = valid_generator.samples // valid_generator.batch_size hist = model.fit( train_generator, epochs=num_epochs, steps_per_epoch=steps_per_epoch, validation_data=valid_generator, validation_steps=validation_steps).history def get_class_string_from_index(index): for class_string, class_index in valid_generator.class_indices.items(): if class_index == index: return class_string x, y = next(valid_generator) image = x[0, :, :, :] true_index = np.argmax(y[0]) plt.imshow(image) plt.axis('off') plt.show() # Expand the validation image to (1, 224, 224, 3) before predicting the label prediction_scores = model.predict(np.expand_dims(image, axis=0)) predicted_index = np.argmax(prediction_scores) print("True label: " + get_class_string_from_index(true_index)) print("Predicted label: " + get_class_string_from_index(predicted_index)) """ Explanation: Training the model End of explanation """ saved_model_path = f"/tmp/saved_flowers_model_{model_name}" tf.saved_model.save(model, saved_model_path) """ Explanation: Finally, the trained model can be saved for deployment to TF Serving or TF Lite (on mobile) as follows. End of explanation """ optimize_lite_model = True #@param {type:"boolean"} #@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount. num_calibration_examples = 81 #@param {type:"slider", min:0, max:1000, step:1} representative_dataset = None if optimize_lite_model and num_calibration_examples: # Use a bounded number of training examples without labels for calibration. # TFLiteConverter expects a list of input tensors, each with batch size 1. representative_dataset = lambda: itertools.islice( ([image[None, ...]] for batch, _ in train_generator for image in batch), num_calibration_examples) converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path) if optimize_lite_model: converter.optimizations = [tf.lite.Optimize.DEFAULT] if representative_dataset: # This is optional, see above. converter.representative_dataset = representative_dataset lite_model_content = converter.convert() with open(f"/tmp/lite_flowers_model_{model_name}.tflite", "wb") as f: f.write(lite_model_content) print("Wrote %sTFLite model of %d bytes." % ("optimized " if optimize_lite_model else "", len(lite_model_content))) interpreter = tf.lite.Interpreter(model_content=lite_model_content) # This little helper wraps the TF Lite interpreter as a numpy-to-numpy function. 
def lite_model(images): interpreter.allocate_tensors() interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images) interpreter.invoke() return interpreter.get_tensor(interpreter.get_output_details()[0]['index']) #@markdown For rapid experimentation, start with a moderate number of examples. num_eval_examples = 50 #@param {type:"slider", min:0, max:700} eval_dataset = ((image, label) # TFLite expects batch size 1. for batch in train_generator for (image, label) in zip(*batch)) count = 0 count_lite_tf_agree = 0 count_lite_correct = 0 for image, label in eval_dataset: probs_lite = lite_model(image[None, ...])[0] probs_tf = model(image[None, ...]).numpy()[0] y_lite = np.argmax(probs_lite) y_tf = np.argmax(probs_tf) y_true = np.argmax(label) count +=1 if y_lite == y_tf: count_lite_tf_agree += 1 if y_lite == y_true: count_lite_correct += 1 if count >= num_eval_examples: break print("TF Lite model agrees with original model on %d of %d examples (%g%%)." % (count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count)) print("TF Lite model is accurate on %d of %d examples (%g%%)." % (count_lite_correct, count, 100.0 * count_lite_correct / count)) """ Explanation: Optional: Deployment to TensorFlow Lite TensorFlow Lite is TensorFlow's solution for running models on mobile and embedded devices. Here we also run the converted TFLite model in the TF Lite Interpreter to examine the resulting quality. End of explanation """
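As an optional extra check (a sketch, not part of the original notebook), the .tflite file written above can be reloaded from disk, closer to how a mobile application would consume it, and used to classify a single validation image:

# Reload the flatbuffer from disk and run one validation image through it.
interpreter_from_disk = tf.lite.Interpreter(
    model_path=f"/tmp/lite_flowers_model_{model_name}.tflite")
interpreter_from_disk.allocate_tensors()
input_index = interpreter_from_disk.get_input_details()[0]['index']
output_index = interpreter_from_disk.get_output_details()[0]['index']

image_batch, label_batch = next(valid_generator)
# The converted model expects a single float32 image batch of shape (1, H, W, 3).
interpreter_from_disk.set_tensor(input_index, image_batch[:1].astype(np.float32))
interpreter_from_disk.invoke()
probs = interpreter_from_disk.get_tensor(output_index)[0]
print("Predicted:", get_class_string_from_index(np.argmax(probs)))
print("True label:", get_class_string_from_index(np.argmax(label_batch[0])))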
leezu/mxnet
example/bi-lstm-sort/bi-lstm-sort.ipynb
apache-2.0
import random import string import mxnet as mx from mxnet import gluon, nd import numpy as np """ Explanation: Using a bi-lstm to sort a sequence of integers End of explanation """ max_num = 999 dataset_size = 60000 seq_len = 5 split = 0.8 batch_size = 512 ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu() """ Explanation: Data Preparation End of explanation """ X = mx.random.uniform(low=0, high=max_num, shape=(dataset_size, seq_len)).astype('int32').asnumpy() Y = X.copy() Y.sort() #Let's sort X to get the target print("Input {}\nTarget {}".format(X[0].tolist(), Y[0].tolist())) """ Explanation: We are getting a dataset of dataset_size sequences of integers of length seq_len between 0 and max_num. We use split*100% of them for training and the rest for testing. For example: 50 10 200 999 30 Should return 10 30 50 200 999 End of explanation """ vocab = string.digits + " " print(vocab) vocab_idx = { c:i for i,c in enumerate(vocab)} print(vocab_idx) """ Explanation: For the purpose of training, we encode the input as characters rather than numbers End of explanation """ max_len = len(str(max_num))*seq_len+(seq_len-1) print("Maximum length of the string: %s" % max_len) def transform(x, y): x_string = ' '.join(map(str, x.tolist())) x_string_padded = x_string + ' '*(max_len-len(x_string)) x = [vocab_idx[c] for c in x_string_padded] y_string = ' '.join(map(str, y.tolist())) y_string_padded = y_string + ' '*(max_len-len(y_string)) y = [vocab_idx[c] for c in y_string_padded] return mx.nd.one_hot(mx.nd.array(x), len(vocab)), mx.nd.array(y) split_idx = int(split*len(X)) train_dataset = gluon.data.ArrayDataset(X[:split_idx], Y[:split_idx]).transform(transform) test_dataset = gluon.data.ArrayDataset(X[split_idx:], Y[split_idx:]).transform(transform) print("Input {}".format(X[0])) print("Transformed data Input {}".format(train_dataset[0][0])) print("Target {}".format(Y[0])) print("Transformed data Target {}".format(train_dataset[0][1])) train_data = gluon.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=20, last_batch='rollover') test_data = gluon.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, num_workers=5, last_batch='rollover') """ Explanation: We write a transform that will convert our numbers into text of maximum length max_len, and one-hot encode the characters. For example: "30 10" corresponding indices are [3, 0, 10, 1, 0] We then one hot encode that and get a matrix representation of our input. We don't need to encode our target as the loss we are going to use support sparse labels End of explanation """ net = gluon.nn.HybridSequential() with net.name_scope(): net.add( gluon.rnn.LSTM(hidden_size=128, num_layers=2, layout='NTC', bidirectional=True), gluon.nn.Dense(len(vocab), flatten=False) ) net.initialize(mx.init.Xavier(), ctx=ctx) loss = gluon.loss.SoftmaxCELoss() """ Explanation: Creating the network End of explanation """ schedule = mx.lr_scheduler.FactorScheduler(step=len(train_data)*10, factor=0.75) schedule.base_lr = 0.01 trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':0.01, 'lr_scheduler':schedule}) """ Explanation: We use a learning rate schedule to improve the convergence of the model End of explanation """ epochs = 100 for e in range(epochs): epoch_loss = 0. 
for i, (data, label) in enumerate(train_data): data = data.as_in_context(ctx) label = label.as_in_context(ctx) with mx.autograd.record(): output = net(data) l = loss(output, label) l.backward() trainer.step(data.shape[0]) epoch_loss += l.mean() print("Epoch [{}] Loss: {}, LR {}".format(e, epoch_loss.asscalar()/(i+1), trainer.learning_rate)) """ Explanation: Training loop End of explanation """ n = random.randint(0, len(test_data)-1) x_orig = X[split_idx+n] y_orig = Y[split_idx+n] def get_pred(x): x, _ = transform(x, x) output = net(x.as_in_context(ctx).expand_dims(axis=0)) # Convert output back to string pred = ''.join([vocab[int(o)] for o in output[0].argmax(axis=1).asnumpy().tolist()]) return pred """ Explanation: Testing We get a random element from the testing set End of explanation """ x_ = ' '.join(map(str,x_orig)) label = ' '.join(map(str,y_orig)) print("X {}\nPredicted {}\nLabel {}".format(x_, get_pred(x_orig), label)) """ Explanation: Printing the result End of explanation """ print(get_pred(np.array([500, 30, 999, 10, 130]))) """ Explanation: We can also pick our own example, and the network manages to sort it without problem: End of explanation """ print("Only four numbers:", get_pred(np.array([105, 302, 501, 202]))) """ Explanation: The model has even learned to generalize to examples not on the training set End of explanation """ print("Small digits:", get_pred(np.array([10, 3, 5, 2, 8]))) print("Small digits, 6 numbers:", get_pred(np.array([10, 33, 52, 21, 82, 10]))) """ Explanation: However we can see it has trouble with other edge cases: End of explanation """
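To quantify how often these failures happen (a quick sanity check, not part of the original notebook), we can measure exact-match accuracy over a slice of the held-out sequences using the same get_pred helper:

correct = 0
n_eval = 200
for i in range(n_eval):
    expected = ' '.join(map(str, Y[split_idx + i].tolist()))
    # get_pred pads its output with spaces up to max_len, so strip before comparing
    if get_pred(X[split_idx + i]).strip() == expected:
        correct += 1
print("Exact-match accuracy on {} test sequences: {:.1%}".format(n_eval, correct / n_eval))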
tiagoantao/biopython-notebook
notebooks/11 - Going 3D - The PDB module.ipynb
mit
from Bio.PDB.PDBParser import PDBParser p = PDBParser(PERMISSIVE=1) """ Explanation: Source of the materials: Biopython cookbook (adapted) <font color='red'>Status: Draft</font> Going 3D: The PDB module Bio.PDB is a Biopython module that focuses on working with crystal structures of biological macromolecules. Among other things, Bio.PDB includes a PDBParser class that produces a Structure object, which can be used to access the atomic data in the file in a convenient manner. There is limited support for parsing the information contained in the PDB header. Reading and writing crystal structure files Reading a PDB file First we create a PDBParser object: End of explanation """ structure_id = "1fat" filename = "data/pdb1fat.ent" structure = p.get_structure(structure_id, filename) """ Explanation: The <span>PERMISSIVE</span> flag indicates that a number of common problems (see [problem structures]) associated with PDB files will be ignored (but note that some atoms and/or residues will be missing). If the flag is not present a <span>PDBConstructionException</span> will be generated if any problems are detected during the parse operation. The Structure object is then produced by letting the PDBParser object parse a PDB file (the PDB file in this case is called ’pdb1fat.ent’, ’1fat’ is a user defined name for the structure): End of explanation """ resolution = structure.header['resolution'] keywords = structure.header['keywords'] """ Explanation: You can extract the header and trailer (simple lists of strings) of the PDB file from the PDBParser object with the <span>get_header</span> and <span>get_trailer</span> methods. Note however that many PDB files contain headers with incomplete or erroneous information. Many of the errors have been fixed in the equivalent mmCIF files. Hence, if you are interested in the header information, it is a good idea to extract information from mmCIF files using the MMCIF2Dict tool described below, instead of parsing the PDB header. Now that is clarified, let’s return to parsing the PDB header. The structure object has an attribute called header which is a Python dictionary that maps header records to their values. Example: End of explanation """ file = open(filename, 'r') header_dict = parse_pdb_header(file) file.close() """ Explanation: The available keys are name, head, deposition_date, release_date, structure_method, resolution, structure_reference (which maps to a list of references), journal_reference, author, and compound (which maps to a dictionary with various information about the crystallized compound). The dictionary can also be created without creating a Structure object, ie. directly from the PDB file: End of explanation """ from Bio.PDB.MMCIFParser import MMCIFParser parser = MMCIFParser() """ Explanation: Reading an mmCIF file Similarly to the case the case of PDB files, first create an MMCIFParser object: End of explanation """ structure = parser.get_structure('1fat', 'data/1fat.cif') """ Explanation: Then use this parser to create a structure object from the mmCIF file: End of explanation """ from Bio.PDB.MMCIF2Dict import MMCIF2Dict mmcif_dict = MMCIF2Dict('data/1fat.cif') """ Explanation: To have some more low level access to an mmCIF file, you can use the MMCIF2Dict class to create a Python dictionary that maps all mmCIF tags in an mmCIF file to their values. If there are multiple values (like in the case of tag _atom_site.Cartn_y, which holds the $y$ coordinates of all atoms), the tag is mapped to a list of values. 
The dictionary is created from the mmCIF file as follows: End of explanation """ sc = mmcif_dict['_exptl_crystal.density_percent_sol'] """ Explanation: Example: get the solvent content from an mmCIF file: End of explanation """ y_list = mmcif_dict['_atom_site.Cartn_y'] """ Explanation: Example: get the list of the $y$ coordinates of all atoms End of explanation """ from Bio.PDB import PDBIO io = PDBIO() io.set_structure(s) io.save('out.pdb') """ Explanation: Reading files in the PDB XML format That’s not yet supported, but we are definitely planning to support that in the future (it’s not a lot of work). Contact the Biopython developers () if you need this). Writing PDB files Use the PDBIO class for this. It’s easy to write out specific parts of a structure too, of course. Example: saving a structure End of explanation """ from Bio.PDB.PDBIO import Select class GlySelect(Select): def accept_residue(self, residue): if residue.get_name() == 'GLY': return True else: return False io = PDBIO() io.set_structure(s) io.save('gly_only.pdb', GlySelect()) """ Explanation: If you want to write out a part of the structure, make use of the Select class (also in PDBIO). Select has four methods: accept_model(model) accept_chain(chain) accept_residue(residue) accept_atom(atom) By default, every method returns 1 (which means the model/chain/residue/atom is included in the output). By subclassing Select and returning 0 when appropriate you can exclude models, chains, etc. from the output. Cumbersome maybe, but very powerful. The following code only writes out glycine residues: End of explanation """ child_entity = parent_entity[child_id] """ Explanation: If this is all too complicated for you, the Dice module contains a handy extract function that writes out all residues in a chain between a start and end residue. Structure representation The overall layout of a Structure object follows the so-called SMCRA (Structure/Model/Chain/Residue/Atom) architecture: A structure consists of models A model consists of chains A chain consists of residues A residue consists of atoms This is the way many structural biologists/bioinformaticians think about structure, and provides a simple but efficient way to deal with structure. Additional stuff is essentially added when needed. A UML diagram of the Structure object (forget about the Disordered classes for now) is shown in Fig. [fig:smcra]. Such a data structure is not necessarily best suited for the representation of the macromolecular content of a structure, but it is absolutely necessary for a good interpretation of the data present in a file that describes the structure (typically a PDB or MMCIF file). If this hierarchy cannot represent the contents of a structure file, it is fairly certain that the file contains an error or at least does not describe the structure unambiguously. If a SMCRA data structure cannot be generated, there is reason to suspect a problem. Parsing a PDB file can thus be used to detect likely problems. We will give several examples of this in section [problem structures]. Structure, Model, Chain and Residue are all subclasses of the Entity base class. The Atom class only (partly) implements the Entity interface (because an Atom does not have children). For each Entity subclass, you can extract a child by using a unique id for that child as a key (e.g. you can extract an Atom object from a Residue object by using an atom name string as a key, you can extract a Chain object from a Model object by using its chain identifier as a key). 
Disordered atoms and residues are represented by DisorderedAtom and DisorderedResidue classes, which are both subclasses of the DisorderedEntityWrapper base class. They hide the complexity associated with disorder and behave exactly as Atom and Residue objects. In general, a child Entity object (i.e. Atom, Residue, Chain, Model) can be extracted from its parent (i.e. Residue, Chain, Model, Structure, respectively) by using an id as a key. End of explanation """ child_list = parent_entity.get_list() """ Explanation: You can also get a list of all child Entities of a parent Entity object. Note that this list is sorted in a specific way (e.g. according to chain identifier for Chain objects in a Model object). End of explanation """ parent_entity = child_entity.get_parent() """ Explanation: You can also get the parent from a child: End of explanation """ full_id = residue.get_full_id() print(full_id) """ Explanation: At all levels of the SMCRA hierarchy, you can also extract a full id. The full id is a tuple containing all id’s starting from the top object (Structure) down to the current object. A full id for a Residue object e.g. is something like: End of explanation """ entity.get_id() """ Explanation: This corresponds to: The Structure with id `"1abc`" The Model with id 0 The Chain with id `"A`" The Residue with id (`" `", 10, `"A`"). The Residue id indicates that the residue is not a hetero-residue (nor a water) because it has a blank hetero field, that its sequence identifier is 10 and that its insertion code is `"A`". To get the entity’s id, use the get_id method: End of explanation """ entity.has_id(entity_id) """ Explanation: You can check if the entity has a child with a given id by using the has_id method: End of explanation """ nr_children = len(entity) """ Explanation: The length of an entity is equal to its number of children: End of explanation """ first_model = structure[0] """ Explanation: It is possible to delete, rename, add, etc. child entities from a parent entity, but this does not include any sanity checks (e.g. it is possible to add two residues with the same id to one chain). This really should be done via a nice Decorator class that includes integrity checking, but you can take a look at the code (Entity.py) if you want to use the raw interface. Structure The Structure object is at the top of the hierarchy. Its id is a user given string. The Structure contains a number of Model children. Most crystal structures (but not all) contain a single model, while NMR structures typically consist of several models. Disorder in crystal structures of large parts of molecules can also result in several models. Model The id of the Model object is an integer, which is derived from the position of the model in the parsed file (they are automatically numbered starting from 0). Crystal structures generally have only one model (with id 0), while NMR files usually have several models. Whereas many PDB parsers assume that there is only one model, the Structure class in Bio.PDB is designed such that it can easily handle PDB files with more than one model. As an example, to get the first model from a Structure object, use End of explanation """ chain_A = model["A"] """ Explanation: The Model object stores a list of Chain children. Chain The id of a Chain object is derived from the chain identifier in the PDB/mmCIF file, and is a single character (typically a letter). Each Chain in a Model object has a unique id. 
As an example, to get the Chain object with identifier “A” from a Model object, use End of explanation """ # Full id residue = chain[(' ', 100, ' ')] residue = chain[100] """ Explanation: The Chain object stores a list of Residue children. Residue A residue id is a tuple with three elements: The hetero-field (hetfield): this is 'W' in the case of a water molecule; 'H_' followed by the residue name for other hetero residues (e.g. 'H_GLC' in the case of a glucose molecule); blank for standard amino and nucleic acids. This scheme is adopted for reasons described in section [hetero problems]. The sequence identifier (resseq), an integer describing the position of the residue in the chain (e.g., 100); The insertion code (icode); a string, e.g. ’A’. The insertion code is sometimes used to preserve a certain desirable residue numbering scheme. A Ser 80 insertion mutant (inserted e.g. between a Thr 80 and an Asn 81 residue) could e.g. have sequence identifiers and insertion codes as follows: Thr 80 A, Ser 80 B, Asn 81. In this way the residue numbering scheme stays in tune with that of the wild type structure. The id of the above glucose residue would thus be (’H_GLC’, 100, ’A’). If the hetero-flag and insertion code are blank, the sequence identifier alone can be used: End of explanation """ # use full id res10 = chain[(' ', 10, ' ')] res10 = chain[10] """ Explanation: The reason for the hetero-flag is that many, many PDB files use the same sequence identifier for an amino acid and a hetero-residue or a water, which would create obvious problems if the hetero-flag was not used. Unsurprisingly, a Residue object stores a set of Atom children. It also contains a string that specifies the residue name (e.g. “ASN”) and the segment identifier of the residue (well known to X-PLOR users, but not used in the construction of the SMCRA data structure). Let’s look at some examples. Asn 10 with a blank insertion code would have residue id <span>(’ ’, 10, ’ ’)</span>. Water 10 would have residue id <span>(’W’, 10, ’ ’)</span>. A glucose molecule (a hetero residue with residue name GLC) with sequence identifier 10 would have residue id <span>(’H_GLC’, 10, ’ ’)</span>. In this way, the three residues (with the same insertion code and sequence identifier) can be part of the same chain because their residue id’s are distinct. In most cases, the hetflag and insertion code fields will be blank, e.g. <span>(’ ’, 10, ’ ’)</span>. In these cases, the sequence identifier can be used as a shortcut for the full id: End of explanation """ residue.get_resname() # returns the residue name, e.g. "ASN" residue.is_disordered() # returns 1 if the residue has disordered atoms residue.get_segid() # returns the SEGID, e.g. "CHN1" residue.has_id(name) # test if a residue has a certain atom """ Explanation: Each Residue object in a Chain object should have a unique id. However, disordered residues are dealt with in a special way, as described in section [point mutations]. A Residue object has a number of additional methods: End of explanation """ a.get_name() # atom name (spaces stripped, e.g. "CA") a.get_id() # id (equals atom name) a.get_coord() # atomic coordinates a.get_vector() # atomic coordinates as Vector object a.get_bfactor() # isotropic B factor a.get_occupancy() # occupancy a.get_altloc() # alternative location specifier a.get_sigatm() # standard deviation of atomic parameters a.get_siguij() # standard deviation of anisotropic B factor a.get_anisou() # anisotropic B factor a.get_fullname() # atom name (with spaces, e.g. 
".CA.") """ Explanation: You can use is_aa(residue) to test if a Residue object is an amino acid. Atom The Atom object stores the data associated with an atom, and has no children. The id of an atom is its atom name (e.g. “OG” for the side chain oxygen of a Ser residue). An Atom id needs to be unique in a Residue. Again, an exception is made for disordered atoms, as described in section [disordered atoms]. The atom id is simply the atom name (eg. ’CA’). In practice, the atom name is created by stripping all spaces from the atom name in the PDB file. However, in PDB files, a space can be part of an atom name. Often, calcium atoms are called ’CA..’ in order to distinguish them from C$\alpha$ atoms (which are called ’.CA.’). In cases were stripping the spaces would create problems (ie. two atoms called ’CA’ in the same residue) the spaces are kept. In a PDB file, an atom name consists of 4 chars, typically with leading and trailing spaces. Often these spaces can be removed for ease of use (e.g. an amino acid C$ \alpha $ atom is labeled “.CA.” in a PDB file, where the dots represent spaces). To generate an atom name (and thus an atom id) the spaces are removed, unless this would result in a name collision in a Residue (i.e. two Atom objects with the same atom name and id). In the latter case, the atom name including spaces is tried. This situation can e.g. happen when one residue contains atoms with names “.CA.” and “CA..”, although this is not very likely. The atomic data stored includes the atom name, the atomic coordinates (including standard deviation if present), the B factor (including anisotropic B factors and standard deviation if present), the altloc specifier and the full atom name including spaces. Less used items like the atom element number or the atomic charge sometimes specified in a PDB file are not stored. To manipulate the atomic coordinates, use the transform method of the Atom object. Use the set_coord method to specify the atomic coordinates directly. An Atom object has the following additional methods: End of explanation """ # get atom coordinates as vectors n = residue['N'].get_vector() c = residue['C'].get_vector() ca = residue['CA'].get_vector() n = n - ca c = c - ca rot = rotaxis(-pi * 120.0/180.0, c) cb_at_origin = n.left_multiply(rot) cb = cb_at_origin + ca """ Explanation: To represent the atom coordinates, siguij, anisotropic B factor and sigatm Numpy arrays are used. The get_vector method returns a Vector object representation of the coordinates of the Atom object, allowing you to do vector operations on atomic coordinates. Vector implements the full set of 3D vector operations, matrix multiplication (left and right) and some advanced rotation-related operations as well. As an example of the capabilities of Bio.PDB’s Vector module, suppose that you would like to find the position of a Gly residue’s C$\beta$ atom, if it had one. Rotating the N atom of the Gly residue along the C$\alpha$-C bond over -120 degrees roughly puts it in the position of a virtual C$\beta$ atom. Here’s how to do it, making use of the rotaxis method (which can be used to construct a rotation around a certain axis) of the Vector module: End of explanation """ model = structure[0] chain = model['A'] residue = chain[100] atom = residue['CA'] """ Explanation: This example shows that it’s possible to do some quite nontrivial vector operations on atomic data, which can be quite useful. In addition to all the usual vector operations (cross (use **), and dot (use *) product, angle, norm, etc.) 
and the above mentioned rotaxis function, the Vector module also has methods to rotate (rotmat) or reflect (refmat) one vector on top of another. Extracting a specific Atom/Residue/Chain/Model from a Structure These are some examples: End of explanation """ atom = structure[0]['A'][100]['CA'] """ Explanation: Note that you can use a shortcut: End of explanation """ atom.disordered_select('A') # select altloc A atom print(atom.get_altloc()) atom.disordered_select('B') # select altloc B atom print(atom.get_altloc()) """ Explanation: Disorder Bio.PDB can handle both disordered atoms and point mutations (i.e. a Gly and an Ala residue in the same position). General approach[disorder problems] Disorder should be dealt with from two points of view: the atom and the residue points of view. In general, we have tried to encapsulate all the complexity that arises from disorder. If you just want to loop over all C$\alpha$ atoms, you do not care that some residues have a disordered side chain. On the other hand it should also be possible to represent disorder completely in the data structure. Therefore, disordered atoms or residues are stored in special objects that behave as if there is no disorder. This is done by only representing a subset of the disordered atoms or residues. Which subset is picked (e.g. which of the two disordered OG side chain atom positions of a Ser residue is used) can be specified by the user. Disordered atoms[disordered atoms] Disordered atoms are represented by ordinary Atom objects, but all Atom objects that represent the same physical atom are stored in a DisorderedAtom object (see Fig. [fig:smcra]). Each Atom object in a DisorderedAtom object can be uniquely indexed using its altloc specifier. The DisorderedAtom object forwards all uncaught method calls to the selected Atom object, by default the one that represents the atom with the highest occupancy. The user can of course change the selected Atom object, making use of its altloc specifier. In this way atom disorder is represented correctly without much additional complexity. In other words, if you are not interested in atom disorder, you will not be bothered by it. Each disordered atom has a characteristic altloc identifier. You can specify that a DisorderedAtom object should behave like the Atom object associated with a specific altloc identifier: End of explanation """ residue = chain[10] residue.disordered_select('CYS') """ Explanation: Disordered residues Common case {#common-case .unnumbered} The most common case is a residue that contains one or more disordered atoms. This is evidently solved by using DisorderedAtom objects to represent the disordered atoms, and storing the DisorderedAtom object in a Residue object just like ordinary Atom objects. The DisorderedAtom will behave exactly like an ordinary atom (in fact the atom with the highest occupancy) by forwarding all uncaught method calls to one of the Atom objects (the selected Atom object) it contains. Point mutations[point mutations] {#point-mutationspoint-mutations .unnumbered} A special case arises when disorder is due to a point mutation, i.e. when two or more point mutants of a polypeptide are present in the crystal. An example of this can be found in PDB structure 1EN2. Since these residues belong to a different residue type (e.g. let’s say Ser 60 and Cys 60) they should not be stored in a single Residue object as in the common case. 
In this case, each residue is represented by one Residue object, and both Residue objects are stored in a single DisorderedResidue object (see Fig. [fig:smcra]). The DisorderedResidue object forwards all uncaught methods to the selected Residue object (by default the last Residue object added), and thus behaves like an ordinary residue. Each Residue object in a DisorderedResidue object can be uniquely identified by its residue name. In the above example, residue Ser 60 would have id “SER” in the DisorderedResidue object, while residue Cys 60 would have id “CYS”. The user can select the active Residue object in a DisorderedResidue object via this id. Example: suppose that a chain has a point mutation at position 10, consisting of a Ser and a Cys residue. Make sure that residue 10 of this chain behaves as the Cys residue. End of explanation """ from Bio.PDB.PDBParser import PDBParser parser = PDBParser() structure = parser.get_structure("test", "data/pdb1fat.ent") model = structure[0] chain = model["A"] residue = chain[1] atom = residue["CA"] """ Explanation: In addition, you can get a list of all Atom objects (ie. all DisorderedAtom objects are ’unpacked’ to their individual Atom objects) using the get_unpacked_list method of a (Disordered)Residue object. Hetero residues Associated problems[hetero problems] A common problem with hetero residues is that several hetero and non-hetero residues present in the same chain share the same sequence identifier (and insertion code). Therefore, to generate a unique id for each hetero residue, waters and other hetero residues are treated in a different way. Remember that Residue object have the tuple (hetfield, resseq, icode) as id. The hetfield is blank (“ ”) for amino and nucleic acids, and a string for waters and other hetero residues. The content of the hetfield is explained below. Water residues The hetfield string of a water residue consists of the letter “W”. So a typical residue id for a water is (“W”, 1, “ ”). Other hetero residues The hetfield string for other hetero residues starts with “H_” followed by the residue name. A glucose molecule e.g. with residue name “GLC” would have hetfield “H_GLC”. Its residue id could e.g. be (“H_GLC”, 1, “ ”). 
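As a short, hedged sketch of this id scheme (assuming a chain obtained from a parsed structure, as in the examples above), the hetfield alone is enough to separate waters from other hetero residues:
```
# Illustrative only: classify the residues of a chain by their hetfield.
waters, hetero = [], []
for residue in chain:
    hetfield = residue.get_id()[0]
    if hetfield == "W":
        waters.append(residue)
    elif hetfield.startswith("H_"):
        hetero.append(residue)
print(len(waters), "water(s) and", len(hetero), "other hetero residue(s)")
```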
Navigating through a Structure object
Parse a PDB file, and extract some Model, Chain, Residue and Atom objects
End of explanation
"""
from Bio.PDB.PDBParser import PDBParser
parser = PDBParser()
structure = parser.get_structure("test", "data/pdb1fat.ent")
model = structure[0]
chain = model["A"]
residue = chain[1]
atom = residue["CA"]
"""
Explanation: Iterating through all atoms of a structure
End of explanation
"""
p = PDBParser()
structure = p.get_structure('X', 'data/pdb1fat.ent')
for model in structure:
    for chain in model:
        for residue in chain:
            for atom in residue:
                print(atom)
"""
Explanation: There is a shortcut if you want to iterate over all atoms in a structure:
End of explanation
"""
atoms = structure.get_atoms()
for atom in atoms:
    print(atom)
"""
Explanation: Similarly, to iterate over all atoms in a chain, use
End of explanation
"""
atoms = chain.get_atoms()
for atom in atoms:
    print(atom)
"""
Explanation: Iterating over all residues of a model
or if you want to iterate over all residues in a model:
End of explanation
"""
residues = model.get_residues()
for residue in residues:
    print(residue)
"""
Explanation: You can also use the Selection.unfold_entities function to get all residues from a structure:
End of explanation
"""
from Bio.PDB import Selection
res_list = Selection.unfold_entities(structure, 'R')
"""
Explanation: or to get all atoms from a chain:
End of explanation
"""
atom_list = Selection.unfold_entities(chain, 'A')
"""
Explanation: Obviously, A=atom, R=residue, C=chain, M=model, S=structure. You can use this to go up in the hierarchy, e.g. to get a list of (unique) Residue or Chain parents from a list of Atoms:
End of explanation
"""
residue_list = Selection.unfold_entities(atom_list, 'R')
chain_list = Selection.unfold_entities(atom_list, 'C')
"""
Explanation: For more info, see the API documentation. Extract a hetero residue from a chain (e.g. 
a glucose (GLC) moiety with resseq 10) {#extract-a-hetero-residue-from-a-chain-e.g.-a-glucose-glc-moiety-with-resseq-10 .unnumbered} End of explanation """ for residue in chain.get_list(): residue_id = residue.get_id() hetfield = residue_id[0] if hetfield[0]=="H": print(residue_id) """ Explanation: Print all hetero residues in chain {#print-all-hetero-residues-in-chain .unnumbered} End of explanation """ for model in structure.get_list(): for chain in model.get_list(): for residue in chain.get_list(): if residue.has_id("CA"): ca = residue["CA"] if ca.get_bfactor() > 50.0: print(ca.get_coord()) """ Explanation: Print out the coordinates of all CA atoms in a structure with B factor greater than 50 {#print-out-the-coordinates-of-all-ca-atoms-in-a-structure-with-b-factor-greater-than-50 .unnumbered} End of explanation """ for model in structure.get_list(): for chain in model.get_list(): for residue in chain.get_list(): if residue.is_disordered(): resseq = residue.get_id()[1] resname = residue.get_resname() model_id = model.get_id() chain_id = chain.get_id() print(model_id, chain_id, resname, resseq) """ Explanation: Print out all the residues that contain disordered atoms {#print-out-all-the-residues-that-contain-disordered-atoms .unnumbered} End of explanation """ for model in structure.get_list(): for chain in model.get_list(): for residue in chain.get_list(): if residue.is_disordered(): for atom in residue.get_list(): if atom.is_disordered() and atom.disordered_has_id("A"): atom.disordered_select("A") """ Explanation: Loop over all disordered atoms, and select all atoms with altloc A (if present) {#loop-over-all-disordered-atoms-and-select-all-atoms-with-altloc-a-if-present .unnumbered} This will make sure that the SMCRA data structure will behave as if only the atoms with altloc A are present. End of explanation """ model_nr = 1 polypeptide_list = build_peptides(structure, model_nr) for polypeptide in polypeptide_list: print(polypeptide) """ Explanation: Extracting polypeptides from a Structure object[subsubsec:extracting_polypeptides] {#extracting-polypeptides-from-a-structure-objectsubsubsecextracting_polypeptides .unnumbered} To extract polypeptides from a structure, construct a list of Polypeptide objects from a Structure object using PolypeptideBuilder as follows: End of explanation """ # Using C-N ppb = PPBuilder() for pp in ppb.build_peptides(structure): print(pp.get_sequence()) ppb = CaPPBuilder() for pp in ppb.build_peptides(structure): print(pp.get_sequence()) """ Explanation: A Polypeptide object is simply a UserList of Residue objects, and is always created from a single Model (in this case model 1). You can use the resulting Polypeptide object to get the sequence as a Seq object or to get a list of C$\alpha$ atoms as well. Polypeptides can be built using a C-N or a C$\alpha$-C$\alpha$ distance criterion. Example: End of explanation """ seq = polypeptide.get_sequence() print(seq) """ Explanation: Note that in the above case only model 0 of the structure is considered by PolypeptideBuilder. However, it is possible to use PolypeptideBuilder to build Polypeptide objects from Model and Chain objects as well. Obtaining the sequence of a structure {#obtaining-the-sequence-of-a-structure .unnumbered} The first thing to do is to extract all polypeptides from the structure (as above). The sequence of each polypeptide can then easily be obtained from the Polypeptide objects. The sequence is represented as a Biopython Seq object, and its alphabet is defined by a ProteinAlphabet object. 
Example: End of explanation """ # Get some atoms ca1 = residue1['CA'] ca2 = residue2['CA'] distance = ca1-ca2 """ Explanation: Analyzing structures Measuring distances The minus operator for atoms has been overloaded to return the distance between two atoms. End of explanation """ vector1 = atom1.get_vector() vector2 = atom2.get_vector() vector3 = atom3.get_vector() angle = calc_angle(vector1, vector2, vector3) """ Explanation: Measuring angles Use the vector representation of the atomic coordinates, and the calc_angle function from the Vector module: End of explanation """ vector1 = atom1.get_vector() vector2 = atom2.get_vector() vector3 = atom3.get_vector() vector4 = atom4.get_vector() angle = calc_dihedral(vector1, vector2, vector3, vector4) """ Explanation: Measuring torsion angles Use the vector representation of the atomic coordinates, and the calc_dihedral function from the Vector module: End of explanation """ from Bio.PDB import Superimposer sup = Superimposer() sup.set_atoms(fixed, moving) print(sup.rotran) print(sup.rms) sup.apply(moving) """ Explanation: Determining atom-atom contacts Use NeighborSearch to perform neighbor lookup. The neighbor lookup is done using a KD tree module written in C (see Bio.KDTree), making it very fast. It also includes a fast method to find all point pairs within a certain distance of each other. Superimposing two structures Use a Superimposer object to superimpose two coordinate sets. This object calculates the rotation and translation matrix that rotates two lists of atoms on top of each other in such a way that their RMSD is minimized. Of course, the two lists need to contain the same number of atoms. The Superimposer object can also apply the rotation/translation to a list of atoms. The rotation and translation are stored as a tuple in the rotran attribute of the Superimposer object (note that the rotation is right multiplying!). The RMSD is stored in the rmsd attribute. The algorithm used by Superimposer comes from @golub1989 [Golub & Van Loan] and makes use of singular value decomposition (this is implemented in the general Bio.SVDSuperimposer module). Example: End of explanation """ from Bio.PDB import HSExposure model = structure[0] hse = HSExposure() exp_ca = hse.calc_hs_exposure(model, option='CA3') exp_cb=hse.calc_hs_exposure(model, option='CB') exp_fs = hse.calc_fs_exposure(model) print(exp_ca[some_residue]) """ Explanation: To superimpose two structures based on their active sites, use the active site atoms to calculate the rotation/translation matrices (as above), and apply these to the whole molecule. Mapping the residues of two related structures onto each other First, create an alignment file in FASTA format, then use the StructureAlignment class. This class can also be used for alignments with more than two structures. Calculating the Half Sphere Exposure Half Sphere Exposure (HSE) is a new, 2D measure of solvent exposure @hamelryck2005. Basically, it counts the number of C$\alpha$ atoms around a residue in the direction of its side chain, and in the opposite direction (within a radius of $13 \AA$). Despite its simplicity, it outperforms many other measures of solvent exposure. HSE comes in two flavors: HSE$\alpha$ and HSE$\beta$. The former only uses the C$\alpha$ atom positions, while the latter uses the C$\alpha$ and C$\beta$ atom positions. The HSE measure is calculated by the HSExposure class, which can also calculate the contact number. 
The latter class has methods which return dictionaries that map a Residue object to its corresponding HSE$\alpha$, HSE$\beta$ and contact number values. Example: End of explanation """ from Bio.PDB import ResidueDepth model = structure[0] rd = ResidueDepth(model, pdb_file) residue_depth, ca_depth=rd[some_residue] """ Explanation: Determining the secondary structure For this functionality, you need to install DSSP (and obtain a license for it — free for academic use, see http://www.cmbi.kun.nl/gv/dssp/). Then use the DSSP class, which maps Residue objects to their secondary structure (and accessible surface area). The DSSP codes are listed in Table [cap:DSSP-codes]. Note that DSSP (the program, and thus by consequence the class) cannot handle multiple models! Code Secondary structure H $\alpha$-helix B Isolated $\beta$-bridge residue E Strand G 3-10 helix I $\Pi$-helix T Turn S Bend - Other : [cap:DSSP-codes]DSSP codes in Bio.PDB. The DSSP class can also be used to calculate the accessible surface area of a residue. But see also section [subsec:residue_depth]. Calculating the residue depth[subsec:residue_depth] Residue depth is the average distance of a residue’s atoms from the solvent accessible surface. It’s a fairly new and very powerful parameterization of solvent accessibility. For this functionality, you need to install Michel Sanner’s MSMS program (http://www.scripps.edu/pub/olson-web/people/sanner/html/msms_home.html). Then use the ResidueDepth class. This class behaves as a dictionary which maps Residue objects to corresponding (residue depth, C$\alpha$ depth) tuples. The C$\alpha$ depth is the distance of a residue’s C$\alpha$ atom to the solvent accessible surface. Example: End of explanation """ # Permissive parser parser = PDBParser(PERMISSIVE=1) parser = PDBParser() # The same (default) strict_parser = PDBParser(PERMISSIVE=0) """ Explanation: You can also get access to the molecular surface itself (via the get_surface function), in the form of a Numeric Python array with the surface points. Common problems in PDB files It is well known that many PDB files contain semantic errors (not the structures themselves, but their representation in PDB files). Bio.PDB tries to handle this in two ways. The PDBParser object can behave in two ways: a restrictive way and a permissive way, which is the default. Example: End of explanation """ from Bio.PDB import PDBList pdbl = PDBList() pdbl.retrieve_pdb_file('1FAT') """ Explanation: In the permissive state (DEFAULT), PDB files that obviously contain errors are “corrected” (i.e. some residues or atoms are left out). These errors include: Multiple residues with the same identifier Multiple atoms with the same identifier (taking into account the altloc identifier) These errors indicate real problems in the PDB file (for details see @hamelryck2003a [Hamelryck and Manderick, 2003]). In the restrictive state, PDB files with errors cause an exception to occur. This is useful to find errors in PDB files. Some errors however are automatically corrected. Normally each disordered atom should have a non-blank altloc identifier. However, there are many structures that do not follow this convention, and have a blank and a non-blank identifier for two disordered positions of the same atom. This is automatically interpreted in the right way. Sometimes a structure contains a list of residues belonging to chain A, followed by residues belonging to chain B, and again followed by residues belonging to chain A, i.e. the chains are ’broken’. 
This is also correctly interpreted. Examples[problem structures] The PDBParser/Structure class was tested on about 800 structures (each belonging to a unique SCOP superfamily). This takes about 20 minutes, or on average 1.5 seconds per structure. Parsing the structure of the large ribosomal subunit (1FKK), which contains about 64000 atoms, takes 10 seconds on a 1000 MHz PC. Three exceptions were generated in cases where an unambiguous data structure could not be built. In all three cases, the likely cause is an error in the PDB file that should be corrected. Generating an exception in these cases is much better than running the chance of incorrectly describing the structure in a data structure. Duplicate residues One structure contains two amino acid residues in one chain with the same sequence identifier (resseq 3) and icode. Upon inspection it was found that this chain contains the residues Thr A3, …, Gly A202, Leu A3, Glu A204. Clearly, Leu A3 should be Leu A203. A couple of similar situations exist for structure 1FFK (which e.g. contains Gly B64, Met B65, Glu B65, Thr B67, i.e. residue Glu B65 should be Glu B66). Duplicate atoms Structure 1EJG contains a Ser/Pro point mutation in chain A at position 22. In turn, Ser 22 contains some disordered atoms. As expected, all atoms belonging to Ser 22 have a non-blank altloc specifier (B or C). All atoms of Pro 22 have altloc A, except the N atom which has a blank altloc. This generates an exception, because all atoms belonging to two residues at a point mutation should have non-blank altloc. It turns out that this atom is probably shared by Ser and Pro 22, as Ser 22 misses the N atom. Again, this points to a problem in the file: the N atom should be present in both the Ser and the Pro residue, in both cases associated with a suitable altloc identifier. Automatic correction Some errors are quite common and can be easily corrected without much risk of making a wrong interpretation. These cases are listed below. A blank altloc for a disordered atom Normally each disordered atom should have a non-blank altloc identifier. However, there are many structures that do not follow this convention, and have a blank and a non-blank identifier for two disordered positions of the same atom. This is automatically interpreted in the right way. Broken chains Sometimes a structure contains a list of residues belonging to chain A, followed by residues belonging to chain B, and again followed by residues belonging to chain A, i.e. the chains are “broken”. This is correctly interpreted. Fatal errors Sometimes a PDB file cannot be unambiguously interpreted. Rather than guessing and risking a mistake, an exception is generated, and the user is expected to correct the PDB file. These cases are listed below. Duplicate residues All residues in a chain should have a unique id. This id is generated based on: The sequence identifier (resseq). The insertion code (icode). The hetfield string (“W” for waters and “H_” followed by the residue name for other hetero residues) The residue names of the residues in the case of point mutations (to store the Residue objects in a DisorderedResidue object). If this does not lead to a unique id something is quite likely wrong, and an exception is generated. Duplicate atoms All atoms in a residue should have a unique id. This id is generated based on: The atom name (without spaces, or with spaces if a problem arises). The altloc specifier. If this does not lead to a unique id something is quite likely wrong, and an exception is generated. 
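Tying this together, a minimal sketch (assuming the exception class lives in Bio.PDB.PDBExceptions, as in current Biopython releases) shows how the restrictive parser can be used to flag such problem files instead of silently correcting them:
```
from Bio.PDB.PDBParser import PDBParser
from Bio.PDB.PDBExceptions import PDBConstructionException  # assumed location of the exception class

strict_parser = PDBParser(PERMISSIVE=0)
try:
    structure = strict_parser.get_structure("1fat", "data/pdb1fat.ent")
except PDBConstructionException as err:
    print("Problem detected while building the SMCRA structure:", err)
```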
Accessing the Protein Data Bank Downloading structures from the Protein Data Bank Structures can be downloaded from the PDB (Protein Data Bank) by using the retrieve_pdb_file method on a PDBList object. The argument for this method is the PDB identifier of the structure. End of explanation """ pl = PDBList(pdb='/tmp/data/pdb') pl.update_pdb() """ Explanation: The PDBList class can also be used as a command-line tool: ``` python PDBList.py 1fat ``` The downloaded file will be called pdb1fat.ent and stored in the current working directory. Note that the retrieve_pdb_file method also has an optional argument pdir that specifies a specific directory in which to store the downloaded PDB files. The retrieve_pdb_file method also has some options to specify the compression format used for the download, and the program used for local decompression (default .Z format and gunzip). In addition, the PDB ftp site can be specified upon creation of the PDBList object. By default, the server of the Worldwide Protein Data Bank (ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/pdb/) is used. See the API documentation for more details. Thanks again to Kristian Rother for donating this module. Downloading the entire PDB The following commands will store all PDB files in the /data/pdb directory: ``` python PDBList.py all /data/pdb python PDBList.py all /data/pdb -d ``` The API method for this is called download_entire_pdb. Adding the -d option will store all files in the same directory. Otherwise, they are sorted into PDB-style subdirectories according to their PDB ID’s. Depending on the traffic, a complete download will take 2-4 days. Keeping a local copy of the PDB up to date This can also be done using the PDBList object. One simply creates a PDBList object (specifying the directory where the local copy of the PDB is present) and calls the update_pdb method: End of explanation """
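# Illustrative wrap-up (a hedged sketch, not part of the original cookbook):
# download a structure with PDBList and parse it right away. The file name
# 'pdb1fat.ent' follows the convention described above; depending on the
# Biopython version and the chosen file format, the name and location of the
# downloaded file may differ.
from Bio.PDB import PDBList
from Bio.PDB.PDBParser import PDBParser

pdbl = PDBList()
pdbl.retrieve_pdb_file('1FAT')
parser = PDBParser(PERMISSIVE=1)
structure = parser.get_structure('1fat', 'pdb1fat.ent')
print(structure.header.get('name'))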
jdhp-docs/python-notebooks
notebook_snippets_en.ipynb
mit
%matplotlib notebook # As an alternative, one may use: %pylab notebook # For old Matplotlib and Ipython versions, use the non-interactive version: # %matplotlib inline or %pylab inline # To ignore warnings (http://stackoverflow.com/questions/9031783/hide-all-warnings-in-ipython) import warnings warnings.filterwarnings('ignore') import math import numpy as np import matplotlib.pyplot as plt import ipywidgets from ipywidgets import interact """ Explanation: Notebook snippets, tips and tricks TODO: * Read https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/ * Read http://blog.juliusschulz.de/blog/ultimate-ipython-notebook * howto avoid loosing matplotlib interactive rendering when a document is converted to HTML ? * https://www.reddit.com/r/IPython/comments/36p360/try_matplotlib_notebook_for_interactive_plots/ * http://stackoverflow.com/questions/36151181/exporting-interactive-jupyter-notebook-to-html * https://jakevdp.github.io/blog/2013/12/05/static-interactive-widgets/ * table of contents (JS) * matplotlib / D3.js interaction * matplotlib animations: how to make it faster * inspiration * http://louistiao.me/posts/notebooks/embedding-matplotlib-animations-in-jupyter-notebooks/ * https://github.com/ltiao/notebooks * https://blog.dominodatalab.com/lesser-known-ways-of-using-notebooks/ * Howto make (personalized) Reveal.js slides from this notebook: https://forum.poppy-project.org/t/utiliser-jupyter-pour-des-presentations-etape-par-etape-use-jupyter-to-present-step-by-step/2271/2 * See https://blog.dominodatalab.com/lesser-known-ways-of-using-notebooks/ Extension wishlist and todo: - Table of content - Hide some blocks in the HTML export - See https://github.com/jupyter/notebook/issues/534 - Customize CSS in HTML export - Add disqus in HTML export - See: https://github.com/jupyter/nbviewer/issues/80 - Example: http://nbviewer.jupyter.org/gist/tonyfast/977184c1243287e7e55e - Add metadata header/footer (initial publication date, last revision date, author, email, website, license, ...) - Vim like editor/navigation shortcut keys (search, search+edit, ...) 
- Spell checking - See https://github.com/ipython/ipython/issues/3216#issuecomment-59507673 and http://www.simulkade.com/posts/2015-04-07-spell-checking-in-jupyter-notebooks.html Inspiration: - https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks Import directives End of explanation """ x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) y = np.sin(x) plt.plot(x, y) """ Explanation: Useful keyboard shortcuts Enter edit mode: Enter Enter command mode: Escape In command mode: Show keyboard shortcuts: h Find and replace: f Insert a cell above the selection: a Insert a cell below the selection: b Switch to Markdown: m Delete the selected cells: dd (type twice 'd' quickly) Undo cell deletion: z Execute the selected cell: Ctrl + Enter Execute the selected cell and select the next cell: Shift + Enter Execute the selected cell and insert below: Alt + Enter Toggle output: o Toggle line number: l Copy selected cells: c Paste copied cells below: v Select the previous cell: k Select the next cell: j Merge selected cells, or current cell with cell below if only one cell selected: Shift + m In edit mode: Code completion or indent: Tab Tooltip: Shift + Tab Type "Shift + Tab" twice to see the online documentation of the selected element Type "Shift + Tab" 4 times to the online documentation in a dedicated frame Indent: ⌘] (on MacOS) Dedent: ⌘[ (on MacOS) Execute the selected cell: Ctrl + Enter Execute the selected cell and select the next cell: Shift + Enter Execute the selected cell and insert below: Alt + Enter Cut a cell at the current cursor position: Ctrl + Shift + - Matplotlib To plot a figure within a notebook, insert the %matplotlib notebook (or %pylab notebook) directive at the begining of the document. As an alternative, one may use %matplotlib inline (or %pylab inline) for non-interactive plots on old Matplotlib/Ipython versions. 2D plots End of explanation """ from mpl_toolkits.mplot3d import axes3d # Build datas ############### x = np.arange(-5, 5, 0.25) y = np.arange(-5, 5, 0.25) xx,yy = np.meshgrid(x, y) z = np.sin(np.sqrt(xx**2 + yy**2)) # Plot data ################# fig = plt.figure() ax = axes3d.Axes3D(fig) ax.plot_wireframe(xx, yy, z) plt.show() """ Explanation: 3D plots End of explanation """ from matplotlib.animation import FuncAnimation # Plots fig, ax = plt.subplots() def update(frame): x = np.arange(frame/10., frame/10. + 2. 
* math.pi, 0.1) ax.clear() ax.plot(x, np.cos(x)) # Optional: save plots filename = "img_{:03}.png".format(frame) plt.savefig(filename) # Note: "interval" is in ms anim = FuncAnimation(fig, update, interval=100) plt.show() """ Explanation: Animations End of explanation """ %%html <div id="toc"></div> %%javascript var toc = document.getElementById("toc"); toc.innerHTML = "<b>Table of contents:</b>"; toc.innerHTML += "<ol>" var h_list = $("h2, h3"); //$("h2"); // document.getElementsByTagName("h2"); for(var i = 0 ; i < h_list.length ; i++) { var h = h_list[i]; var h_str = h.textContent.slice(0, -1); // "slice(0, -1)" remove the last character if(h_str.length > 0) { if(h.tagName == "H2") { // https://stackoverflow.com/questions/10539419/javascript-get-elements-tag toc.innerHTML += "<li><a href=\"#" + h_str.replace(/\s+/g, '-') + "\">" + h_str + "</a></li>"; } else if(h.tagName == "H3") { // https://stackoverflow.com/questions/10539419/javascript-get-elements-tag toc.innerHTML += "<li> &nbsp;&nbsp;&nbsp; <a href=\"#" + h_str.replace(/\s+/g, '-') + "\">" + h_str + "</a></li>"; } } } toc.innerHTML += "</ol>" """ Explanation: Interactive plots with Plotly TODO: https://plot.ly/ipython-notebooks/ Interactive plots with Bokeh TODO: http://bokeh.pydata.org/en/latest/docs/user_guide/notebook.html Embedded HTML and Javascript End of explanation """ %run ./notebook_snippets_run_test.py %run ./notebook_snippets_run_mpl_test.py """ Explanation: IPython built-in magic commands See http://ipython.readthedocs.io/en/stable/interactive/magics.html Execute an external python script End of explanation """ # %load ./notebook_snippets_run_mpl_test.py #!/usr/bin/env python3 # Copyright (c) 2012 Jérémie DECOCK (http://www.jdhp.org) # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """ This module has been written to illustrate the ``%run`` magic command in ``notebook_snippets.ipynb``. """ import numpy as np import matplotlib.pyplot as plt def main(): x = np.arange(-10, 10, 0.1) y = np.cos(x) plt.plot(x, y) plt.grid(True) plt.show() if __name__ == '__main__': main() """ Explanation: Load an external python script Load the full script End of explanation """ # %load -s main ./notebook_snippets_run_mpl_test.py def main(): x = np.arange(-10, 10, 0.1) y = np.cos(x) plt.plot(x, y) plt.grid(True) plt.show() """ Explanation: Load a specific symbol (funtion, class, ...) 
End of explanation """ # %load -r 22-41 ./notebook_snippets_run_mpl_test.py """ This module has been written to illustrate the ``%run`` magic command in ``notebook_snippets.ipynb``. """ import numpy as np import matplotlib.pyplot as plt def main(): x = np.arange(-10, 10, 0.1) y = np.cos(x) plt.plot(x, y) plt.grid(True) plt.show() if __name__ == '__main__': main() """ Explanation: Load specific lines End of explanation """ %%time plt.hist(np.random.normal(loc=0.0, scale=1.0, size=100000), bins=50) """ Explanation: Time measurement %time End of explanation """ %%timeit plt.hist(np.random.normal(loc=0.0, scale=1.0, size=100000), bins=50) """ Explanation: %timeit End of explanation """ #help(ipywidgets) #dir(ipywidgets) from ipywidgets import IntSlider from IPython.display import display slider = IntSlider(min=1, max=10) display(slider) """ Explanation: ipywidget On jupyter lab, you should install widgets extension first (see https://ipywidgets.readthedocs.io/en/latest/user_install.html#installing-the-jupyterlab-extension): jupyter labextension install @jupyter-widgets/jupyterlab-manager End of explanation """ #help(ipywidgets.interact) """ Explanation: ipywidgets.interact Documentation See http://ipywidgets.readthedocs.io/en/latest/examples/Using%20Interact.html End of explanation """ @interact(text="IPython Widgets") def greeting(text): print("Hello {}".format(text)) """ Explanation: Using interact as a decorator with named parameters To me, this is the best option for single usage functions... Text End of explanation """ @interact(num=5) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0, 100)) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0, 100, 10)) def square(num): print("{} squared is {}".format(num, num*num)) """ Explanation: Integer (IntSlider) End of explanation """ @interact(num=5.) 
def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0., 10.)) def square(num): print("{} squared is {}".format(num, num*num)) @interact(num=(0., 10., 0.5)) def square(num): print("{} squared is {}".format(num, num*num)) """ Explanation: Float (FloatSlider) End of explanation """ @interact(upper=False) def greeting(upper): text = "hello" if upper: print(text.upper()) else: print(text.lower()) """ Explanation: Boolean (Checkbox) End of explanation """ @interact(name=["John", "Bob", "Alice"]) def greeting(name): print("Hello {}".format(name)) """ Explanation: List (Dropdown) End of explanation """ @interact(word={"One": "Un", "Two": "Deux", "Three": "Trois"}) def translate(word): print(word) x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) @interact(function={"Sin": np.sin, "Cos": np.cos}) def plot(function): y = function(x) plt.plot(x, y) """ Explanation: Dictionnary (Dropdown) End of explanation """ @interact def greeting(text="World"): print("Hello {}".format(text)) """ Explanation: Using interact as a decorator Text End of explanation """ @interact def square(num=2): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0, 100)): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0, 100, 10)): print("{} squared is {}".format(num, num*num)) """ Explanation: Integer (IntSlider) End of explanation """ @interact def square(num=5.): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0., 10.)): print("{} squared is {}".format(num, num*num)) @interact def square(num=(0., 10., 0.5)): print("{} squared is {}".format(num, num*num)) """ Explanation: Float (FloatSlider) End of explanation """ @interact def greeting(upper=False): text = "hello" if upper: print(text.upper()) else: print(text.lower()) """ Explanation: Boolean (Checkbox) End of explanation """ @interact def greeting(name=["John", "Bob", "Alice"]): print("Hello {}".format(name)) """ Explanation: List (Dropdown) End of explanation """ @interact def translate(word={"One": "Un", "Two": "Deux", "Three": "Trois"}): print(word) x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) @interact def plot(function={"Sin": np.sin, "Cos": np.cos}): y = function(x) plt.plot(x, y) """ Explanation: Dictionnary (Dropdown) End of explanation """ def greeting(text): print("Hello {}".format(text)) interact(greeting, text="IPython Widgets") """ Explanation: Using interact as a function To me, this is the best option for multiple usage functions... Text End of explanation """ def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=5) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0, 100)) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0, 100, 10)) """ Explanation: Integer (IntSlider) End of explanation """ def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=5.) 
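# Hedged aside (not in the original notebook): instead of the (min, max, step)
# abbreviations, a fully configured widget instance can be passed to interact().
def square(num):
    print("{} squared is {}".format(num, num*num))
interact(square, num=ipywidgets.FloatSlider(min=0., max=10., step=0.5, value=5.))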
def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0., 10.)) def square(num): print("{} squared is {}".format(num, num*num)) interact(square, num=(0., 10., 0.5)) """ Explanation: Float (FloatSlider) End of explanation """ def greeting(upper): text = "hello" if upper: print(text.upper()) else: print(text.lower()) interact(greeting, upper=False) """ Explanation: Boolean (Checkbox) End of explanation """ def greeting(name): print("Hello {}".format(name)) interact(greeting, name=["John", "Bob", "Alice"]) """ Explanation: List (Dropdown) End of explanation """ def translate(word): print(word) interact(translate, word={"One": "Un", "Two": "Deux", "Three": "Trois"}) x = np.arange(-2 * np.pi, 2 * np.pi, 0.1) def plot(function): y = function(x) plt.plot(x, y) interact(plot, function={"Sin": np.sin, "Cos": np.cos}) """ Explanation: Dictionnary (Dropdown) End of explanation """ @interact(upper=False, name=["john", "bob", "alice"]) def greeting(upper, name): text = "hello {}".format(name) if upper: print(text.upper()) else: print(text.lower()) """ Explanation: Example of using multiple widgets on one function End of explanation """ from IPython.display import Image Image("fourier.gif") """ Explanation: Display images (PNG, JPEG, GIF, ...) Within a code cell (using IPython.display) End of explanation """ from IPython.display import Audio """ Explanation: Within a Markdown cell Sound player widget See: https://ipython.org/ipython-doc/dev/api/generated/IPython.display.html#IPython.display.Audio End of explanation """ framerate = 44100 t = np.linspace(0, 5, framerate*5) data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) Audio(data, rate=framerate) """ Explanation: Generate a sound End of explanation """ data_left = np.sin(2 * np.pi * 220 * t) data_right = np.sin(2 * np.pi * 224 * t) Audio([data_left, data_right], rate=framerate) """ Explanation: Generate a multi-channel (stereo or more) sound End of explanation """ Audio("http://www.nch.com.au/acm/8k16bitpcm.wav") Audio(url="http://www.w3schools.com/html/horse.ogg") """ Explanation: From URL End of explanation """ #Audio('/path/to/sound.wav') #Audio(filename='/path/to/sound.ogg') """ Explanation: From file End of explanation """ #Audio(b'RAW_WAV_DATA..) #Audio(data=b'RAW_WAV_DATA..) """ Explanation: From bytes End of explanation """ from IPython.display import YouTubeVideo vid = YouTubeVideo("0HlRtU8clt4") display(vid) """ Explanation: Youtube widget Class for embedding a YouTube Video in an IPython session, based on its video id. e.g. to embed the video from https://www.youtube.com/watch?v=0HlRtU8clt4 , you would do: See https://ipython.org/ipython-doc/dev/api/generated/IPython.display.html#IPython.display.YouTubeVideo End of explanation """
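# Illustrative extra (a hedged sketch, not part of the original notebook): one way
# to embed a matplotlib animation as an HTML5 video, as mentioned in the TODO list
# at the top. Animation.to_html5_video() requires ffmpeg to be installed.
from matplotlib.animation import FuncAnimation
from IPython.display import HTML

fig2, ax2 = plt.subplots()
x2 = np.linspace(0., 2. * np.pi, 100)
line, = ax2.plot(x2, np.sin(x2))

def update2(frame):
    line.set_ydata(np.sin(x2 + frame / 10.))
    return line,

anim2 = FuncAnimation(fig2, update2, frames=50, interval=50)
HTML(anim2.to_html5_video())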
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE
notebooks/07_02_scipy_stats.ipynb
mit
%matplotlib inline import numpy as np from scipy import stats import matplotlib.pyplot as plt import pandas as pd """ Explanation: scipy stats This notebook focuses on the use of the scipy.stats module It is built based on a learn-by-example approach So it only covers a little part of the module's functionalities but provides a practical application. Some knowledge of numpy and matplotlib is needed to fully understand the content. Introduction The scipy.stats module provides mainly: * probability distributions: continuous, discrete and multivariate * statistical functions such as statistics and tests For further details you can check the official documentation Imports End of explanation """ N_SAMPLES = 1000 pds = [('Normal', stats.norm(), (-4., 4.)), ('LogNormal', stats.lognorm(1.), (0., 4.)), ('Students T', stats.t(3.), (-10., 10.)), ('Chi Squared', stats.chi2(1.), (0., 10.))] n_pds = len(pds) fig, ax_list = plt.subplots(n_pds, 3) fig.set_size_inches((5.*n_pds, 10.)) for ind, elem in enumerate(pds): pd_name, pd_func, pd_range = elem x_range = np.linspace(*pd_range, 101) # Probability Density Function ax_list[ind, 0].plot(x_range, pd_func.pdf(x_range)) ax_list[ind, 0].set_ylabel(pd_name) # Cumulative Distribution Function ax_list[ind, 1].plot(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].fill_between(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].set_ylim([0., 1.]) # Random Variable Sample ax_list[ind, 2].hist(pd_func.rvs(size=N_SAMPLES), bins=50) if ind == 0: _ = ax_list[ind, 0].set_title('Probability Density Function') _ = ax_list[ind, 1].set_title('Cumulative Distribution Function') _ = ax_list[ind, 2].set_title('Random Sample') """ Explanation: Probability distributions The scipy.stats module provides a very complete set of probability distributions. There are three types of distributions: * Continuous * Discrete * Multivariate Each of the univariate types is inherited from the same class, so they all have a common API. Continuos distributions There are ~100 different continuous distributions. Some of the methods in the API: * cdf: Cumulative Distribution Function * pdf: Probability Density Function * rvs: Random Variable Sample * ppf: Percent Point Function (inverse of the CDF) * fit: return MLE estimations of location, scale and shape, given a set of data End of explanation """ N_SAMPLES = 1000 pds = [('Binomial', stats.binom(20, 0.7), (0., 21.)), ('Poisson', stats.poisson(10.), (0., 21.))] n_pds = len(pds) fig, ax_list = plt.subplots(n_pds, 3) fig.set_size_inches((8.*n_pds, 8.)) for ind, elem in enumerate(pds): pd_name, pd_func, pd_range = elem x_range = np.arange(*pd_range) # Probability Mass Function ax_list[ind, 0].bar(x_range, pd_func.pmf(x_range)) ax_list[ind, 0].set_ylabel(pd_name) # Cumulative Distribution Function ax_list[ind, 1].plot(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].fill_between(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].set_ylim([0., 1.]) # Random Variable Sample ax_list[ind, 2].hist(pd_func.rvs(size=N_SAMPLES), bins=x_range - 0.5) if ind == 0: _ = ax_list[ind, 0].set_title('Probability Mass Function') _ = ax_list[ind, 1].set_title('Cumulative Distribution Function') _ = ax_list[ind, 2].set_title('Random Sample') """ Explanation: Discrete Distributions Discrete distributions have quite the same API. 
Having pmf= Probability Mass Function (instead of pdf) End of explanation """ df_prices = pd.read_csv('../resources/stock.csv') df_prices.head(10) df_prices.plot(no) _ = df_prices[['Apple', 'Microsoft']].plot(title='2016 stock prices') # Compute the daily relative increments df_incs = df_prices.drop('Date', axis=1) df_incs = ((df_incs - df_incs.shift(1))/df_incs.shift(1)).loc[1:, :] df_incs['Date'] = df_prices.Date df_incs.head(10) _ = df_incs[['Apple', 'Microsoft']].plot(title='2016 stock prices variations') m = np.mean(df_incs) print(m) s = np.std(df_incs, ddof=1) print(s) c = df_incs.cov() c """ Explanation: Example: creating a financial product Load and manipulate the data End of explanation """ # we can use the fit method to get the MLE of the mean and the std stats.norm.fit(df_incs.Apple) # Create estimated distributions based on the sample app_dist = stats.norm(m['Apple'], s['Apple']) win_dist = stats.norm(m['Microsoft'], s['Microsoft']) intl_dist = stats.norm(m['Intel'], s['Intel']) # We can test if this data fits a normal distribution (Kolmogorov-Smirnov test) app_KS = stats.kstest(df_incs['Apple'], 'norm', [m['Apple'], s['Apple']]) win_KS = stats.kstest(df_incs['Microsoft'], 'norm', [m['Microsoft'], s['Microsoft']]) intl_KS = stats.kstest(df_incs['Intel'], 'norm', [m['Intel'], s['Intel']]) print('''Apple: {} Microsoft: {} Intel: {}'''.format(app_KS, win_KS, intl_KS)) """ Explanation: Create a Normal distribution Let's assume that the stock prices follow a Normal distribution End of explanation """ # Compare histogram with estimated distribution x_range = np.arange(-0.05, +0.0501, 0.001) x_axis = (x_range[1:] + x_range[:-1])/2. n_incs = df_incs.shape[0] y_app = (app_dist.cdf(x_range[1:]) - app_dist.cdf(x_range[:-1]))*n_incs y_win = (win_dist.cdf(x_range[1:]) - win_dist.cdf(x_range[:-1]))*n_incs y_intl = (intl_dist.cdf(x_range[1:]) - intl_dist.cdf(x_range[:-1]))*n_incs fig = plt.figure(figsize=(16., 6.)) ax_app = fig.add_subplot(131) _ = ax_app.hist(df_incs['Apple'], bins=x_range, color='powderblue') _ = ax_app.set_xlabel('Apple') _ = ax_app.plot(x_axis, y_app, color='blue', linewidth=3) ax_win = fig.add_subplot(132) _ = ax_win.hist(df_incs['Microsoft'], bins=x_range, color='navajowhite') _ = ax_win.set_xlabel('Microsoft') _ = ax_win.plot(x_axis, y_win, color='orange', linewidth=3) ax_intl = fig.add_subplot(133) _ = ax_intl.hist(df_incs['Intel'], bins=x_range, color='lightgreen') _ = ax_intl.set_xlabel('Intel') _ = ax_intl.plot(x_axis, y_win, color='green', linewidth=3) """ Explanation: End of explanation """ # Create a multivariate normal distribution object m_norm = stats.multivariate_normal(m[['Apple', 'Microsoft']], df_incs[['Apple', 'Microsoft']].cov()) # Show the contour plot of the pdf x_range = np.arange(-0.05, +0.0501, 0.001) x, y = np.meshgrid(x_range, x_range) pos = np.dstack((x, y)) fig_m_norm = plt.figure(figsize=(6., 6.)) ax_m_norm = fig_m_norm.add_subplot(111) ax_m_norm.contourf(x, y, m_norm.pdf(pos), 50) _ = ax_m_norm.set_xlabel('Apple') _ = ax_m_norm.set_ylabel('Microsoft') """ Explanation: Exercise: Imagine you are a product designer in a finantial company. You want to create a new investment product to be "sold" to your clients based on the future stock prices of some IT companies. The profit the client gets from his investement is calculated like this: * At the time of the investment we check the initial stock prices * 12 months later (let's say 240 work days), the client gets 100% of the investement back. 
Additionally if all stock prices are higher than the initial ones, the client earns half the lowest increment (in %). What is the expected profit of this investment? What is the 5% highest risk that the finantial company is assuming? First we will try to create a finantial product based on the stock prices of Apple and Microsoft Create a multinormal distribution End of explanation """ # Create N (e.g 1000) random simulations of the daily relative increments with 240 samples N_SIMS = 1000 daily_incs = m_norm.rvs(size=[240, N_SIMS]) # Calculate yearly increments (from the composition of the daily increments) year_incs = (daily_incs + 1.).prod(axis=0) # calculate the amount payed for each simulation def amount_to_pay(a): if np.all( a >= 1.): return (a.min() - 1)/2 else: return 0. earnings = np.apply_along_axis(amount_to_pay, 1, year_incs) _ = plt.hist(earnings, bins=50) print('Expected profit of the investment: {:.2%}'.format(earnings.mean())) # To compute the 5% higher profit use the stats.scoreatpercentile function print('%5 higher profit of the investment: {:.2%}'.format(stats.scoreatpercentile(earnings, 95))) print('%1 higher profit of the investment: {:.2%}'.format(stats.scoreatpercentile(earnings, 99))) """ Explanation: Compute the expected profit and top 5% risk End of explanation """ # %load -r 2:10 solutions/07_02_scipy_stats.py """ Explanation: Both the expected profit and the risk assessed are too high!! Try adding Intel to the product in order to lower them down End of explanation """
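# Illustrative sketch only (this is NOT the content of the elided
# solutions/07_02_scipy_stats.py): the same simulation with all three assets,
# fitting a trivariate normal to the daily increments computed above.
assets = ['Apple', 'Microsoft', 'Intel']
m_norm3 = stats.multivariate_normal(m[assets], df_incs[assets].cov())

daily_incs3 = m_norm3.rvs(size=[240, N_SIMS])
year_incs3 = (daily_incs3 + 1.).prod(axis=0)
earnings3 = np.apply_along_axis(amount_to_pay, 1, year_incs3)

print('Expected profit with 3 assets: {:.2%}'.format(earnings3.mean()))
print('5% higher profit with 3 assets: {:.2%}'.format(stats.scoreatpercentile(earnings3, 95)))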
ernestyalumni/MLgrabbag
kaggle/HOG_SVM32.ipynb
mit
def load_feat_vec(patientid,sub_name="stage1_feat"): f=file("./2017datascibowl/"+sub_name+"/"+patientid+"feat_vec","rb") arr = np.load(f) f.close() return arr def prepare_inputX(sub_name="stage1_feat_lowres64", ratio_of_train_to_total = 0.4, ratio_valid_to_rest = 0.2): patients_stage1_feat = os.listdir('./2017datascibowl/'+sub_name) patients_stage1_feat = [id.replace("feat_vec","") for id in patients_stage1_feat] # remove the suffix "feat_vec" # get y labels y_ids = pd.read_csv('./2017datascibowl/stage1_labels.csv') y_ids_found=y_ids.loc[y_ids['id'].isin(patients_stage1_feat)] m = len(patients_stage1_feat) found_indices =[] for i in range(m): if patients_stage1_feat[i] in y_ids_found['id'].as_matrix(): found_indices.append(i) patients_stage1_feat_found = [patients_stage1_feat[i] for i in found_indices] y_found=[] for i in range(len(patients_stage1_feat_found)): if (patients_stage1_feat_found[i] in y_ids_found['id'].as_matrix()): cancer_val = y_ids_found.loc[y_ids_found['id']==patients_stage1_feat_found[i]]['cancer'].as_matrix() y_found.append( cancer_val ) y_found=np.array(y_found).flatten() assert (len(y_found)==len(patients_stage1_feat_found)) numberofexamples = len(patients_stage1_feat_found) numberoftrainingexamples = int(numberofexamples*ratio_of_train_to_total) numbertovalidate = int((numberofexamples - numberoftrainingexamples)*ratio_valid_to_rest) numbertotest= numberofexamples - numberoftrainingexamples - numbertovalidate shuffledindices = np.random.permutation( numberofexamples) patients_train = [patients_stage1_feat_found[id] for id in shuffledindices[:numberoftrainingexamples]] patients_valid = [patients_stage1_feat_found[id] for id in shuffledindices[numberoftrainingexamples:numberoftrainingexamples+numbertovalidate]] patients_test = [patients_stage1_feat_found[id] for id in shuffledindices[numberoftrainingexamples+numbertovalidate:]] y_train = y_found[shuffledindices[:numberoftrainingexamples]] y_valid = y_found[shuffledindices[numberoftrainingexamples:numberoftrainingexamples+numbertovalidate]] y_test = y_found[shuffledindices[numberoftrainingexamples+numbertovalidate:]] patients_train_vecs = [load_feat_vec(id,sub_name) for id in patients_train] patients_train_vecs = np.array(patients_train_vecs) patients_valid_vecs = [load_feat_vec(id,sub_name) for id in patients_valid] patients_valid_vecs = np.array(patients_valid_vecs) patients_test_vecs = [load_feat_vec(id,sub_name) for id in patients_test] patients_test_vecs = np.array(patients_test_vecs) patient_ids = {"train":patients_train,"valid":patients_valid,"test":patients_test} ys = {"train":y_train,"valid":y_valid,"test":y_test} Xs = {"train":patients_train_vecs,"valid":patients_valid_vecs,"test":patients_test_vecs} return patient_ids, ys, Xs patient_ids32, ys32,Xs32=prepare_inputX("stage1_HOG32",0.275,0.2) y_train_rep2 = np.copy(ys32["train"]) # 2nd representation y_train_rep2[y_train_rep2<=0]=-1 y_valid_rep2 = np.copy(ys32["valid"]) # 2nd representation y_valid_rep2[y_valid_rep2<=0]=-1 y_test_rep2 = np.copy(ys32["test"]) # 2nd representation y_test_rep2[y_test_rep2<=0]=-1 C_trial=[0.1,1.0,10.,100.] sigma_trial=[0.1,1.0,10.] 
C_trial[3] SVM_stage1 = SVM_parallel(Xs32["train"],y_train_rep2,len(y_train_rep2), C_trial[3],sigma_trial[1],0.005 ) # C=100.,sigma=1.0, alpha=0.001 SVM_stage1.build_W(); SVM_stage1.build_update(); %time SVM_stage1.train_model_full(3) # iterations=3,CPU times: user 3min 50s, sys: 7min 19s, total: 11min 9s %time SVM_stage1.train_model_full(100) SVM_stage1.build_b() yhat32_valid = SVM_stage1.make_predictions_parallel( Xs32["valid"] ) accuracy_score_temp=(np.sign(yhat32_valid[0]) == y_valid_rep2).sum()/float(len(y_valid_rep2)) print(accuracy_score_temp) y_valid_rep2 """ Explanation: Training, (Cross-)Validation, Test Set randomization and processing End of explanation """ stage1_sample_submission_csv = pd.read_csv("./2017datascibowl/stage1_sample_submission.csv") sub_name="stage1_HOG32" patients_sample_vecs = np.array( [load_feat_vec(id,sub_name) for id in stage1_sample_submission_csv['id'].as_matrix()] ) print(len(patients_sample_vecs)) %time yhat_sample = SVM_stage1.make_predictions_parallel( patients_sample_vecs[:2] ) """ Explanation: Predictions Predictions on valid set To go out to competition, over sample only End of explanation """ f32=open("./2017datascibowl/lambda_multHOG32_C100sigma1","wb") np.save(f32,SVM_stage1.lambda_mult.get_value()) f32.close() yhat_sample_rep2 = np.copy(yhat_sample[0]) # representation 2, {-1,1}, not representation of binary classes as {0,1} yhat_sample_rep2 = np.sign( yhat_sample_rep2); # representation 1, {0,1}, not representation of binary classes as {-1,1} yhat_sample_rep1 = np.copy(yhat_sample_rep2) np.place(yhat_sample_rep1,yhat_sample_rep1<0.,0.) f32load=open("./2017datascibowl/lambda_multHOG32_C100sigma1","rb") testload32=np.load(f32load) f32load.close() SVM_stage1_reloaded = SVM_parallel(Xs32["train"],y_train_rep2,len(y_train_rep2), C_trial[3],sigma_trial[1],0.005 ) # C=100.,sigma=1.0, alpha=0.001 SVM_stage1_reloaded.lambda_mult.get_value()[:20] testload32[:20] SVM_stage1_reloaded.lambda_mult.set_value( testload32 ) SVM_stage1_reloaded.lambda_mult.get_value()[:20] SVM_stage1_reloaded.build_b() %time yhat_sample = SVM_stage1_reloaded.make_predictions_parallel( patients_sample_vecs ) np.sign(yhat_sample[0]) yhat_sample_rep2 = np.copy(yhat_sample[0]) # representation 2, {-1,1}, not representation of binary classes as {0,1} yhat_sample_rep2 = np.sign( yhat_sample_rep2); # representation 1, {0,1}, not representation of binary classes as {-1,1} yhat_sample_rep1 = np.copy(yhat_sample_rep2) np.place(yhat_sample_rep1,yhat_sample_rep1<0.,0.) Prattscaling_results = SVM_stage1_reloaded.make_prob_Pratt(yhat_sample_rep1) Prattscaling_results """ Explanation: steps towards persisting (saving) SVM models End of explanation """ stage2_sample_submission_csv = pd.read_csv("./2017datascibowl/stage2_sample_submission.csv") sub_name="stage2_HOG32" patients_sample2_vecs = np.array( [load_feat_vec(id,sub_name) for id in stage2_sample_submission_csv['id'].as_matrix()] ) print(len(patients_sample2_vecs)) %time yhat_sample2 = SVM_stage1_reloaded.make_predictions_parallel( patients_sample2_vecs ) patients_sample2_vecs.shape Xs32["train"].shape np.sign(yhat_sample2[0]) yhat_sample2_rep2 = np.copy(yhat_sample2[0]) # representation 2, {-1,1}, not representation of binary classes as {0,1} yhat_sample2_rep2 = np.sign( yhat_sample2_rep2); # representation 1, {0,1}, not representation of binary classes as {-1,1} yhat_sample2_rep1 = np.copy(yhat_sample2_rep2) np.place(yhat_sample2_rep1,yhat_sample2_rep1<0.,0.) 
Prattscaling_results2 = SVM_stage1_reloaded.make_prob_Pratt(yhat_sample2_rep1) Prattscaling_results2 sample2_out = pd.DataFrame(zip(stage2_sample_submission_csv['id'].as_matrix(),Prattscaling_results2[0])) sample2_out.columns=["id","cancer"] sample2_out.to_csv("./2017datascibowl/sample2submit00.csv",index=False) """ Explanation: Submissions 2 End of explanation """
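"""
Explanation: The lists C_trial and sigma_trial defined earlier are only sampled at a single point (C=100, sigma=1.0). Below is a hedged sketch of a full grid search over those lists; it assumes SVM_parallel behaves exactly as in the calls above, and each fit can take several minutes.
End of explanation
"""
# Sketch: pick (C, sigma) by validation accuracy, reusing only methods already used above.
best = (None, None, -1.)   # (C, sigma, validation accuracy)
for C in C_trial:
    for sigma in sigma_trial:
        model = SVM_parallel(Xs32["train"], y_train_rep2, len(y_train_rep2), C, sigma, 0.005)
        model.build_W()
        model.build_update()
        model.train_model_full(100)
        model.build_b()
        yhat_val = model.make_predictions_parallel(Xs32["valid"])
        acc = (np.sign(yhat_val[0]) == y_valid_rep2).sum() / float(len(y_valid_rep2))
        print("C=%s sigma=%s validation accuracy=%.4f" % (C, sigma, acc))
        if acc > best[2]:
            best = (C, sigma, acc)
print("Best (C, sigma, accuracy):", best)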
mne-tools/mne-tools.github.io
0.20/_downloads/82590448493c884f52ea0c7ddc5b446b/plot_publication_figure.ipynb
bsd-3-clause
# Authors: Eric Larson <larson.eric.d@gmail.com> # Daniel McCloy <dan.mccloy@gmail.com> # # License: BSD (3-clause) import os.path as op import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable, ImageGrid import mne """ Explanation: Make figures more publication ready In this example, we take some MNE plots and make some changes to make a figure closer to publication-ready. End of explanation """ data_path = mne.datasets.sample.data_path() subjects_dir = op.join(data_path, 'subjects') fname_stc = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-eeg-lh.stc') fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif') evoked = mne.read_evokeds(fname_evoked, 'Left Auditory') evoked.pick_types(meg='grad').apply_baseline((None, 0.)) max_t = evoked.get_peak()[1] stc = mne.read_source_estimate(fname_stc) """ Explanation: Suppose we want a figure with an evoked plot on top, and the brain activation below, with the brain subplot slightly bigger than the evoked plot. Let's start by loading some example data &lt;sample-dataset&gt;. End of explanation """ evoked.plot() stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample', subjects_dir=subjects_dir, initial_time=max_t, time_viewer=False, show_traces=False) """ Explanation: During interactive plotting, we might see figures like this: End of explanation """ colormap = 'viridis' clim = dict(kind='value', lims=[4, 8, 12]) # Plot the STC, get the brain image, crop it: brain = stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample', subjects_dir=subjects_dir, initial_time=max_t, background='w', colorbar=False, clim=clim, colormap=colormap, time_viewer=False, show_traces=False) screenshot = brain.screenshot() brain.close() """ Explanation: To make a publication-ready figure, first we'll re-plot the brain on a white background, take a screenshot of it, and then crop out the white margins. While we're at it, let's change the colormap, set custom colormap limits and remove the default colorbar (so we can add a smaller, vertical one later): End of explanation """ nonwhite_pix = (screenshot != 255).any(-1) nonwhite_row = nonwhite_pix.any(1) nonwhite_col = nonwhite_pix.any(0) cropped_screenshot = screenshot[nonwhite_row][:, nonwhite_col] # before/after results fig = plt.figure(figsize=(4, 4)) axes = ImageGrid(fig, 111, nrows_ncols=(2, 1), axes_pad=0.5) for ax, image, title in zip(axes, [screenshot, cropped_screenshot], ['Before', 'After']): ax.imshow(image) ax.set_title('{} cropping'.format(title)) """ Explanation: Now let's crop out the white margins and the white gap between hemispheres. The screenshot has dimensions (h, w, 3), with the last axis being R, G, B values for each pixel, encoded as integers between 0 and 255. (255, 255, 255) encodes a white pixel, so we'll detect any pixels that differ from that: End of explanation """ # Tweak the figure style plt.rcParams.update({ 'ytick.labelsize': 'small', 'xtick.labelsize': 'small', 'axes.labelsize': 'small', 'axes.titlesize': 'medium', 'grid.color': '0.75', 'grid.linestyle': ':', }) """ Explanation: A lot of figure settings can be adjusted after the figure is created, but many can also be adjusted in advance by updating the :data:~matplotlib.rcParams dictionary. 
This is especially useful when your script generates several figures that you want to all have the same style: End of explanation """ # figsize unit is inches fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4.5, 3.), gridspec_kw=dict(height_ratios=[3, 4])) # alternate way #1: using subplot2grid # fig = plt.figure(figsize=(4.5, 3.)) # axes = [plt.subplot2grid((7, 1), (0, 0), rowspan=3), # plt.subplot2grid((7, 1), (3, 0), rowspan=4)] # alternate way #2: using figure-relative coordinates # fig = plt.figure(figsize=(4.5, 3.)) # axes = [fig.add_axes([0.125, 0.58, 0.775, 0.3]), # left, bot., width, height # fig.add_axes([0.125, 0.11, 0.775, 0.4])] # we'll put the evoked plot in the upper axes, and the brain below evoked_idx = 0 brain_idx = 1 # plot the evoked in the desired subplot, and add a line at peak activation evoked.plot(axes=axes[evoked_idx]) peak_line = axes[evoked_idx].axvline(max_t, color='#66CCEE', ls='--') # custom legend axes[evoked_idx].legend( [axes[evoked_idx].lines[0], peak_line], ['MEG data', 'Peak time'], frameon=True, columnspacing=0.1, labelspacing=0.1, fontsize=8, fancybox=True, handlelength=1.8) # remove the "N_ave" annotation axes[evoked_idx].texts = [] # Remove spines and add grid axes[evoked_idx].grid(True) axes[evoked_idx].set_axisbelow(True) for key in ('top', 'right'): axes[evoked_idx].spines[key].set(visible=False) # Tweak the ticks and limits axes[evoked_idx].set( yticks=np.arange(-200, 201, 100), xticks=np.arange(-0.2, 0.51, 0.1)) axes[evoked_idx].set( ylim=[-225, 225], xlim=[-0.2, 0.5]) # now add the brain to the lower axes axes[brain_idx].imshow(cropped_screenshot) axes[brain_idx].axis('off') # add a vertical colorbar with the same properties as the 3D one divider = make_axes_locatable(axes[brain_idx]) cax = divider.append_axes('right', size='5%', pad=0.2) cbar = mne.viz.plot_brain_colorbar(cax, clim, colormap, label='Activation (F)') # tweak margins and spacing fig.subplots_adjust( left=0.15, right=0.9, bottom=0.01, top=0.9, wspace=0.1, hspace=0.5) # add subplot labels for ax, label in zip(axes, 'AB'): ax.text(0.03, ax.get_position().ymax, label, transform=fig.transFigure, fontsize=12, fontweight='bold', va='top', ha='left') """ Explanation: Now let's create our custom figure. There are lots of ways to do this step. Here we'll create the figure and the subplot axes in one step, specifying overall figure size, number and arrangement of subplots, and the ratio of subplot heights for each row using :mod:GridSpec keywords &lt;matplotlib.gridspec&gt;. Other approaches (using :func:~matplotlib.pyplot.subplot2grid, or adding each axes manually) are shown commented out, for reference. End of explanation """
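"""
Explanation: A small, optional last step (not in the original example): save the composed figure to disk at print resolution. The file names and DPI below are arbitrary choices.
End of explanation
"""
# Save the figure for submission; 300 dpi is a common journal requirement.
fig.savefig('publication_figure.png', dpi=300, bbox_inches='tight')
# A vector format such as PDF usually scales better in print.
fig.savefig('publication_figure.pdf', bbox_inches='tight')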
eriksalt/jupyter
Python Quick Reference/Data Algorithms.ipynb
mit
items = [1, 2, 3]
# Get the iterator
it = iter(items) # Invokes items.__iter__()
# Run the iterator
next(it) # Invokes it.__next__()
next(it)
next(it)
# if you uncomment this line it would throw a StopIteration exception
# next(it)
"""
Explanation: Python Data Algorithms Quick Reference
Table Of Contents

<a href="#1.-Manually-Consuming-an-Iterator">Manually Consuming an Iterator</a>
<a href="#2.-Delegating-Iterator">Delegating Iterator</a>
<a href="#3.-Map">Map</a>
<a href="#4.-Filter">Filter</a>
<a href="#5.-Named-Slices">Named Slices</a>
<a href="#6.-zip">zip</a>
<a href="#7.-itemgetter">itemgetter</a>
<a href="#8.-attrgetter">attrgetter</a>
<a href="#9.-groupby">groupby</a>
<a href="#10.-Generator-Expressions">Generator Expressions</a>
<a href="#11.-compress">compress</a>
<a href="#12.-reversed">reversed</a>
<a href="#13.-Generators-with-State">Generators with State</a>
<a href="#14.-slice-and-dropwhile">slice and dropwhile</a>
<a href="#15.-Permutations-and-Combinations-of-Elements">Permutations and Combinations of Elements</a>
<a href="#16.-Iterating-with-Indexes">Iterating with Indexes</a>
<a href="#17.-chain">chain</a>
<a href="#18.-Flatten-a-Nested-Sequence">Flatten a Nested Sequence</a>
<a href="#19.-Merging-Presorted-Iterables">Merging Presorted Iterables</a>

1. Manually Consuming an Iterator
End of explanation
"""
# if you write a container class, and want to expose an iterator over an internal collection use the __iter__() method
class Node:
    def __init__(self):
        self._children = [1,2,3]

    def __iter__(self):
        return iter(self._children)

root = Node()
for x in root:
    print(x)
"""
Explanation: 2. Delegating Iterator
End of explanation
"""
simpsons = ['homer', 'marge', 'bart']

map(len, simpsons) # returns [5, 5, 4]

#equivalent list comprehension
[len(word) for word in simpsons]

map(lambda word: word[-1], simpsons) # returns ['r','e', 't']

#equivalent list comprehension
[word[-1] for word in simpsons]
"""
Explanation: 3. Map
map applies a function to every element of a sequence and returns an iterator of elements
End of explanation
"""
nums = range(5)

filter(lambda x: x % 2 == 0, nums) # returns [0, 2, 4]

# equivalent list comprehension
[num for num in nums if num % 2 == 0]
"""
Explanation: 4. Filter
filter returns an iterator containing the elements from a sequence for which a condition is True:
End of explanation
"""
###### 0123456789012345678901234567890123456789012345678901234567890'
record = '....................100 .......513.25 ..........'

SHARES = slice(20,32)
PRICE = slice(40,48)

cost = int(record[SHARES]) * float(record[PRICE])
cost
"""
Explanation: 5. Named Slices
End of explanation
"""
# zip() allows you to create an iterable view over a tuple created out of two separate iterable views
prices = { 'ACME' : 45.23, 'AAPL': 612.78, 'IBM': 205.55, 'HPQ' : 37.20, 'FB' : 10.75 }

min_price = min(zip(prices.values(), prices.keys()))
#(10.75, 'FB')

max((zip(prices.values(), prices.keys())))
"""
Explanation: 6.
zip End of explanation """ prices_and_names = zip(prices.values(), prices.keys()) print(min(prices_and_names)) # running the following code would fail #print(min(prices_and_names)) # zip usually stops when any individual iterator ends (it iterates only until the end of the shortest sequence) a = [1, 2, 3] b = ['w', 'x', 'y', 'z'] for i in zip(a,b): print(i) # use zip_longest to keep iterating through longer sequences from itertools import zip_longest for i in zip_longest(a,b): print(i) # zip can run over more then 2 sequences c = ['aaa', 'bbb', 'ccc'] for i in zip(a,b,c): print(i) """ Explanation: zip can only be iterated over once! End of explanation """ from operator import itemgetter rows = [ {'fname': 'Brian', 'lname': 'Jones', 'uid': 1003}, {'fname': 'David', 'lname': 'Beazley', 'uid': 1002}, {'fname': 'John', 'lname': 'Cleese', 'uid': 1001}, {'fname': 'Big', 'lname': 'Jones', 'uid': 1004} ] rows_by_fname = sorted(rows, key=itemgetter('fname')) rows_by_fname rows_by_uid = sorted(rows, key=itemgetter('uid')) rows_by_uid # itemgetter() function can also accept multiple keys rows_by_lfname = sorted(rows, key=itemgetter('lname','fname')) rows_by_lfname """ Explanation: 7. itemgetter End of explanation """ from operator import attrgetter #used to sort objects that dont natively support comparison class User: def __init__(self, user_id): self.user_id = user_id def __repr__(self): return 'User({})'.format(self.user_id) users = [User(23), User(3), User(99)] users sorted(users, key=attrgetter('user_id')) min(users, key=attrgetter('user_id')) """ Explanation: 8. attrgetter End of explanation """ from operator import itemgetter from itertools import groupby rows = [ {'address': '5412 N CLARK', 'date': '07/01/2012'}, {'address': '5148 N CLARK', 'date': '07/04/2012'}, {'address': '5800 E 58TH', 'date': '07/02/2012'}, {'address': '2122 N CLARK', 'date': '07/03/2012'}, {'address': '5645 N RAVENSWOOD', 'date': '07/02/2012'}, {'address': '1060 W ADDISON', 'date': '07/02/2012'}, {'address': '4801 N BROADWAY', 'date': '07/01/2012'}, {'address': '1039 W GRANVILLE', 'date': '07/04/2012'}, ] # important! must sort data on key field first! rows.sort(key=itemgetter('date')) #iterate in groups for date, items in groupby(rows, key=itemgetter('date')): print(date) for i in items: print(' %s' % i) """ Explanation: 9. groupby The groupby() function works by scanning a sequence and finding sequential “runs” of identical values (or values returned by the given key function). On each iteration, it returns the value along with an iterator that produces all of the items in a group with the same value. End of explanation """ mylist = [1, 4, -5, 10, -7, 2, 3, -1] positives = (n for n in mylist if n > 0) positives for x in positives: print(x) nums = [1, 2, 3, 4, 5] sum(x * x for x in nums) # Output a tuple as CSV s = ('ACME', 50, 123.45) ','.join(str(x) for x in s) # Determine if any .py files exist in a directory import os files = os.listdir('.') if any(name.endswith('.py') for name in files): print('There be python!') else: print('Sorry, no python.') # Data reduction across fields of a data structure portfolio = [ {'name':'GOOG', 'shares': 50}, {'name':'YHOO', 'shares': 75}, {'name':'AOL', 'shares': 20}, {'name':'SCOX', 'shares': 65} ] min(s['shares'] for s in portfolio) s = sum((x * x for x in nums)) # Pass generator-expr as argument s = sum(x * x for x in nums) # More elegant syntax s """ Explanation: 10. 
Generator Expressions End of explanation """ from itertools import compress addresses = [ '5412 N CLARK', '5148 N CLARK', '5800 E 58TH', '2122 N CLARK' '5645 N RAVENSWOOD', '1060 W ADDISON', '4801 N BROADWAY', '1039 W GRANVILLE', ] counts = [ 0, 3, 10, 4, 1, 7, 6, 1] more5 = [n > 5 for n in counts] more5 list(compress(addresses, more5)) """ Explanation: 11. compress itertools.compress() takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True. End of explanation """ #iterates in reverse a = [1, 2, 3, 4] for x in reversed(a): print(x) #you can customize the behavior of reversed for your class by implementing __reversed()__ method class Counter: def __init__(self, start): self.start = start # Forward iterator def __iter__(self): n = 1 while n <= self.start: yield n n += 1 # Reverse iterator def __reversed__(self): n = self.start while n > 0: yield n n -= 1 foo = Counter(5) for x in reversed(foo): print(x) """ Explanation: 12. reversed End of explanation """ # To expose state available at each step of iteration, use a classs that implements __iter__() class countingiterator: def __init__(self, items): self.items=items def __iter__(self): self.clear_count() for item in self.items: self.count+=1 yield item def clear_count(self): self.count=0 foo = countingiterator(["aaa","bbb","ccc"]) for i in foo: print("{}:{}".format(foo.count, i)) """ Explanation: 13. Generators with State End of explanation """ # itertools.islice allows slicing of iterators def count(n): while True: yield n n += 1 c=count(0) #the next line would fail # c[10:20] import itertools for x in itertools.islice(c,10,15): print(x) c=count(0) for x in itertools.islice(c, 10, 15, 2): print(x) # if you don't know how many to skip, but can define a skip condition, use dropwhile() from itertools import dropwhile foo = ['#','#','#','#','aaa','bbb','#','ccc'] def getstrings(f): for i in f: yield i for ch in dropwhile(lambda ch: ch.startswith('#'), getstrings(foo)): print(ch) """ Explanation: 14. islice and dropwhile End of explanation """ from itertools import permutations items = ['a', 'b', 'c'] for p in permutations(items): print(p) # for smaller subset permutations for p in permutations(items,2): print(p) # itertools.combinations ignores element order in creating unique sets from itertools import combinations for c in combinations(items, 3): print(c) for c in combinations(items, 2): print(c) # itertools.combinations_with_replacement() will not remove an item from the list of possible candidates after it is chosen # in other words, the same value can occur more then once from itertools import combinations_with_replacement for c in combinations_with_replacement(items, 3): print(c) """ Explanation: 15. Permutations and Combinations of Elements End of explanation """ # enumerate returns the iterated item and an index my_list = ['a', 'b', 'c'] for idx, val in enumerate(my_list): print(idx, val) # pass a starting index to enumerate for idx, val in enumerate(my_list, 7): print(idx, val) """ Explanation: 16. Iterating with Indexes End of explanation """ # chain iterates over several sequences, one after the other # making them look like one long sequence from itertools import chain a = [1, 2] b = ['x', 'y', 'z'] for x in chain(a, b): print(x) """ Explanation: 17. 
chain End of explanation """ # you want to traverse a sequence with nested sub sequences as one big sequence from collections import Iterable def flatten(items, ignore_types=(str, bytes)): for x in items: if isinstance(x, Iterable) and not isinstance(x, ignore_types): # ignore types treats iterable string/bytes as simple values yield from flatten(x) else: yield x items = [1, 2, [3, 4, [5, 6], 7], 8] for x in flatten(items): print(x) """ Explanation: 18. Flatten a Nested Sequence End of explanation """ import heapq a = [1, 4, 7] b = [2, 5, 6] for c in heapq.merge(a, b): print(c) """ Explanation: 19. Merging Presorted Iterables End of explanation """
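"""
Explanation: Two small extra notes on heapq.merge (a sketch, not part of the original reference): it is lazy, so it works on arbitrary presorted iterators, and since Python 3.5 it also accepts a key function.
End of explanation
"""
import heapq

# heapq.merge consumes its inputs lazily, so plain generators work fine
evens = (n for n in range(0, 10, 2))
odds = (n for n in range(1, 10, 2))
print(list(heapq.merge(evens, odds)))   # 0..9 in order

# Python 3.5+: merge presorted records by a key, e.g. a timestamp field
a = [(1, 'a'), (4, 'd'), (7, 'g')]
b = [(2, 'b'), (5, 'e'), (6, 'f')]
for item in heapq.merge(a, b, key=lambda rec: rec[0]):
    print(item)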
rishuatgithub/MLPy
tf/Text Classification.ipynb
apache-2.0
imdb = keras.datasets.imdb (train_data, train_label),(test_data,test_label) = imdb.load_data(num_words=10000) """ Explanation: Import wiki dataset End of explanation """ print("Train data shape:",train_data.shape) print("Test data shape:",test_data.shape) print("Train label :",len(train_label)) print("First Imdb review: ",train_data[0]) ## review data for the first review ## notice the difference in length of 2 reviews print("length of first and second review:",len(train_data[0])," ",len(test_data[1])) """ Explanation: The argument num_words=10000 keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable. End of explanation """ ## A dictionary mapping of a word to a integer index word_index = imdb.get_word_index() ## The first indices are reserved word_index["<PAD>"] = 0 word_index["<START>"] = 1 word_index["<UNK>"] = 2 ## unknown word_index["<UNUSED>"] = 3 word_index = {k:(v+3) for k,v in word_index.items()} reverse_word_index = dict([(value, key) for (key,value) in word_index.items()]) def decode_review(text): return ' '.join([reverse_word_index.get(i,'?') for i in text]) decode_review(train_data[0]) """ Explanation: Convert integers to String from the dictonary of words End of explanation """ train_data = keras.preprocessing.sequence.pad_sequences(train_data, value = word_index["<PAD>"], padding='post', maxlen = 256) test_data = keras.preprocessing.sequence.pad_sequences(test_data, value = word_index["<PAD>"], padding = 'post', maxlen = 256) print(len(train_data[0])," ",len(test_data[1])) print(train_data[0]) """ Explanation: Preparing the data we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. We can use an embedding layer capable of handling this shape as the first layer in our network. Since the movie reviews must be the same length, we will use the pad_sequences function to standardize the lengths End of explanation """ # input shape is the vocabulary count used in the reviews i.e. 
word count = 10,000 vocab_size = 10000 model = keras.Sequential() model.add(keras.layers.Embedding(vocab_size, 16)) model.add(keras.layers.GlobalAveragePooling1D()) model.add(keras.layers.Dense(16, activation = tf.nn.relu)) model.add(keras.layers.Dense(1, activation = tf.nn.sigmoid)) model.summary() ### adding the loss function and optimizer model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['acc']) ### creating a validation data set to test the training accuracy x_val = train_data[:10000] partial_x_train = train_data[10000:] y_val = train_label[:10000] partial_y_train = train_label[10000:] """ Explanation: Building the model End of explanation """ history = model.fit(partial_x_train, partial_y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1) """ Explanation: Training the model End of explanation """ results = model.evaluate(test_data, test_label) print(results) """ Explanation: Evaluate the model End of explanation """ history_dict = history.history history_dict.keys() import matplotlib.pyplot as plt acc = history_dict['acc'] val_acc = history_dict['val_acc'] loss = history_dict['loss'] val_loss = history_dict['val_loss'] epochs = range(1, len(acc) + 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'bo', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() # clear figure plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() """ Explanation: Create a graph of accuracy over time End of explanation """
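"""
Explanation: A short follow-up sketch (not in the original notebook): use the trained model to score a few individual reviews from the already-padded test set and compare against the true labels.
End of explanation
"""
# Predict sentiment for the first five test reviews; outputs are sigmoid probabilities.
sample_reviews = test_data[:5]
predictions = model.predict(sample_reviews)
for prob, label in zip(predictions, test_label[:5]):
    sentiment = 'positive' if prob[0] > 0.5 else 'negative'
    print('predicted: {} ({:.3f}), actual: {}'.format(sentiment, float(prob[0]), label))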
dsacademybr/PythonFundamentos
Cap08/DesafioDSA_Solucao/Missao2/missao2_solucao.ipynb
gpl-3.0
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 7</font>
Download: http://github.com/dsacademybr
End of explanation
"""
import math

class PrimeGenerator(object):

    def generate_primes(self, max_num):
        if max_num is None:
            raise TypeError('max_num não pode ser None')
        array = [True] * max_num
        array[0] = False
        array[1] = False
        prime = 2
        while prime <= math.sqrt(max_num):
            self._cross_off(array, prime)
            prime = self._next_prime(array, prime)
        return array

    def _cross_off(self, array, prime):
        for index in range(prime*prime, len(array), prime):
            array[index] = False

    def _next_prime(self, array, prime):
        next = prime + 1
        while next < len(array) and not array[next]:
            next += 1
        return next
"""
Explanation: Mission: Generate a list of prime numbers.
Difficulty Level: Medium
Assumptions

Is it correct that 1 is not considered a prime number?
     * Yes

Can we assume that the inputs are valid?
     * No

Can we assume that this fits in memory?
     * Yes

Test Cases

None -> Exception
Not an int -> Exception
20 -> [False, False, True, True, False, True, False, True, False, False, False, True, False, True, False, False, False, True, False, True]

Algorithm
For a number to be prime, it must be 2 or greater and cannot be divisible by any number other than itself (and 1).
Every non-prime number is divisible by some prime number.

Use an array to keep track of each integer up to the maximum
Start at 2, end at sqrt(max)
     * We can use sqrt(max) instead of max because:
         * For each value a that divides the input number evenly, there is a complement b where a * b = n
         * If a > sqrt(n) then b < sqrt(n), because sqrt(n^2) = n
     * "Cross off" all numbers divisible by 2, 3, 5, 7, ... by setting array[index] to False

Animation from Wikipedia:

Solution
End of explanation
"""
%%writefile missao2.py
from nose.tools import assert_equal, assert_raises


class TestMath(object):

    def test_generate_primes(self):
        prime_generator = PrimeGenerator()
        assert_raises(TypeError, prime_generator.generate_primes, None)
        assert_raises(TypeError, prime_generator.generate_primes, 98.6)
        assert_equal(prime_generator.generate_primes(20), [False, False, True, True, False, True,
                                                           False, True, False, False, False, True,
                                                           False, True, False, False, False, True,
                                                           False, True])
        print('Sua solução foi executada com sucesso! Parabéns!')


def main():
    test = TestMath()
    test.test_generate_primes()


if __name__ == '__main__':
    main()

%run -i missao2.py
"""
Explanation: Testing the Solution
End of explanation
"""
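"""
Explanation: A small usage sketch (not part of the original mission): turn the boolean sieve into an explicit list of primes.
End of explanation
"""
# List the primes below 50 using the sieve returned by generate_primes.
pg = PrimeGenerator()
sieve = pg.generate_primes(50)
primes = [n for n, is_prime in enumerate(sieve) if is_prime]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]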
facaiy/book_notes
Mining_of_Massive_Datasets/Large-Scale_Machine_Learning/note.ipynb
cc0-1.0
#exercise """ Explanation: 12 Large-Scale Machine Learning All algorithms for analysis of data are designed to produce a useful summary of the data, from which decisions are made. "machine learning" not only summarize our data; they are perceived as learning a model or classifier from the data, and thus discover something about data that will be seen in the future. 12.1 The Machine-Learning Model Training sets: + feature vector: $x$ + label: $y$ - real number, regression - boolean value, binary classification - finite set, multiclass classification - infinite set, eg: parse tree 12.1.3 Approaches to Machine Learning Decision trees suitable for binary and multiclass classification small features Perceptrons $\sum w x \geq \theta$ binary classification, very large features Neural nets binary or multiclass classification Instance-based learning use the entire training set to represent the function $f$. eg: k-nearest-neighbor Support-vector machines accureate on unseen data 12.1.4 Machine-Learning Architecture Training and Testing Batch Versus On-Line Learning on-line: very large training sets adapt to changes as times goes active learning Feature Selection Creating a Training Set End of explanation """ show_image('fig12_5.png') """ Explanation: 12.2 Perceptrons The output of the perceptron is: + $+1$, if $w x > 0$ + $-1$, if $w x < 0$ + wrong, if $w x = 0$ The weight vector $w$ defines a hyperplane of dimension $d-1$ where $w x = 0$. A perceptron classifier works only for data that is linearly separable. 12.2.1 Training a Perceptron with Zero Threshold Initialize $w = 0$. Pick a learning-rate parameter $\eta > 0$. Consider each training example $t = (x, y)$ in turn. $y' = w x$. if $y'$ and $y$ have the same sign, then do nothing. otherwise, replace $w$ by $w + \eta y x$. That is, adjust $w$ slightly in the direction of $x$. End of explanation """ show_image('fig12_10.png') """ Explanation: 12.2.2 Convergence of Perceptrons data point are not linearly separable $\to$ loop infinitely. some common tests for termination: 1. Terminate after a fixed number of rounds. Terminate when the number of misclassified training points stops changing. Terminate when the number of errors on the test set stops changing. Another technique that will aid convergence is to lower the traing rate as the number of rounds increases. eg: $\eta = \eta / (1 + ct)$. 12.2.3 The Winnow Algorithm Winnow assumes that the feature vectors consist of 0's and 1's, and the labels are $+1$ or $-1$. And Winnow produce only positive weithts $w$. idea: there is a positive threshold $\theta$. if $w x > \theta$ and $y = +1$, or $w x \theta$ and $y = -1$, then do nothing. if $w x \leq \theta$, but $y = +1$, then the weights for the components where $x$ has 1 are too low as a group, increase them by a factor, say 2. if $w x \geq \theta$, but $y = -1$, then the weights for the components where $x$ has 1 are too high as a group, decreae them by a factor, say 2. 12.2.4 Allowing the Threshold to Vary At the cost of adding another dimension to the feature vectors, we can treate $\theta$ as one of the components of $w$. $w' = [w \theta]$. $x' = [x -1]$. We allow a $-1$ in $x'$ for $\theta$ if we treat it in the manner opposite to the way we treat components that are 1. 12.2.5 Multiclass Perceptrons one VS all 12.2.6 Transforming the Training Set turn to linear separable set. 
End of explanation """ show_image('fig12_11.png') show_image('fig12_12.png') """ Explanation: 12.2.7 Problems With Perceptrons The biggest problem is that sometimes the data is inherently not separable by a hyperplane. transforming $\to$ overfitting. many different hyperplanes that will separate the points. End of explanation """ #Exercise """ Explanation: 12.2.8 Parallel Implementation of Perceptrons training examples are used with the same $w$. map: calculate $w$ independently. reduce: sum all $w$. End of explanation """
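"""
Explanation: A minimal NumPy sketch (not from the book) of the zero-threshold training rule from 12.2.1, followed by the "train chunks independently, then combine" idea from 12.2.8. The toy data, learning rate and round count are arbitrary choices for illustration.
End of explanation
"""
import numpy as np

def train_perceptron(X, y, eta=0.1, rounds=50):
    # w <- w + eta * y * x whenever the example is misclassified (or lies on the boundary)
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        for x, label in zip(X, y):
            if label * np.dot(w, x) <= 0:
                w = w + eta * label * x
    return w

# Toy linearly separable data: label = +1 when x0 > x1, else -1
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.where(X[:, 0] > X[:, 1], 1, -1)

w = train_perceptron(X, y)
print(w, np.mean(np.sign(X.dot(w)) == y))

# Parallel flavour (12.2.8): "map" trains on each chunk independently, "reduce" sums the w's.
chunks = np.array_split(np.arange(len(X)), 4)
w_parallel = sum(train_perceptron(X[idx], y[idx]) for idx in chunks)
print(np.mean(np.sign(X.dot(w_parallel)) == y))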
thewtex/TubeTK
examples/Demo-ConvertTubesToPolyData.ipynb
apache-2.0
import os import sys import numpy import itk from itk import TubeTK as ttk """ Explanation: Convert Tubes To PolyData This notebook contains a few examples of how to call wrapped methods in itk and ITKTubeTK. ITK and TubeTK must be installed on your system for this notebook to work. Typically, this is accomplished by python -m pip install itk-tubetk End of explanation """ PixelType = itk.F Dimension = 3 ImageType = itk.Image[PixelType, Dimension] # Read tre file TubeFileReaderType = itk.SpatialObjectReader[Dimension] tubeFileReader = TubeFileReaderType.New() tubeFileReader.SetFileName("Data/MRI-Normals/Normal071-VascularNetwork.tre") tubeFileReader.Update() tubes = tubeFileReader.GetGroup() """ Explanation: Load the tubes End of explanation """ ttk.WriteTubesAsPolyData.New(Input=tubes, FileName="Tube.vtp").Update() """ Explanation: Generate the polydata representation of the tubes and save it to the file "Tube.vtp". The Tube.vtp file can be displayed by dragging-and-dropping it onto ParaView Glance: https://kitware.github.io/paraview-glance/app/ End of explanation """
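"""
Explanation: A hedged batch-conversion sketch that reuses only the calls shown above; any extra entries you add to tre_files are placeholders for your own data.
End of explanation
"""
# Convert one or more .tre files to .vtp, one output per input.
tre_files = ["Data/MRI-Normals/Normal071-VascularNetwork.tre"]  # append more paths as needed
for tre_path in tre_files:
    reader = TubeFileReaderType.New()
    reader.SetFileName(tre_path)
    reader.Update()
    out_name = os.path.splitext(os.path.basename(tre_path))[0] + ".vtp"
    ttk.WriteTubesAsPolyData.New(Input=reader.GetGroup(), FileName=out_name).Update()
    print("Wrote", out_name)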
eggie5/UCSD-MAS-DSE230
hmwk2/HW-2.ipynb
mit
import findspark findspark.init() import pyspark sc = pyspark.SparkContext() # %install_ext https://raw.github.com/cpcloud/ipython-autotime/master/autotime.py %load_ext autotime def print_count(rdd): print 'Number of elements:', rdd.count() env="local" files='' path = "Data/hw2-files.txt" if env=="prod": path = '../Data/hw2-files-1gb.txt' with open(path) as f: files=','.join(f.readlines()).replace('\n','') rdd = sc.textFile(files).cache() print_count(rdd) """ Explanation: Homework 2 In this homework, we are going to play with Twitter data. The data is represented as rows of of JSON strings. It consists of tweets, messages, and a small amount of broken data (cannot be parsed as JSON). For this homework, we will only focus on tweets and ignore all other messages. UPDATES Announcement We changed the test files size and the corresponding file paths. In order to avoid long waiting queue, we decided to limit the input files size for the Playground submissions. Please read the following files to get the input file paths: * 1GB test: ../Data/hw2-files-1gb.txt * 5GB test: ../Data/hw2-files-5gb.txt * 20GB test: ../Data/hw2-files-20gb.txt We updated the json parsing section of this notebook. Python built-in json library is too slow. In our experiment, 70% of the total running time is spent on parsing tweets. Therefore we recommend using ujson instead of json. It is at least 15x faster than the built-in json library according to our tests. Important Reminders The tokenizer in this notebook contains UTF-8 characters. So the first line of your .py source code must be # -*- coding: utf-8 -*- to define its encoding. Learn more about this topic here. The input files (the tweets) contain UTF-8 characters. So you have to correctly encode your input with some function like lambda text: text.encode('utf-8'). ../Data/hw2-files-&lt;param&gt; may contain multiple lines, one line for one input file. You can use a single textFile call to read multiple files: sc.textFile(','.join(files)). The input file paths in ../Data/hw2-files-&lt;param&gt; contains trailing spaces (newline etc.), which may confuse HDFS if not removed. Your program will be killed if it cannot finish in 5 minutes. The running time of last 100 submissions (yours and others) can be checked at the "View last 100 jobs" tab. For your information, here is the running time of our solution: 1GB test: 53 seconds, 5GB test: 60 seconds, 20GB test: 114 seconds. Tweets A tweet consists of many data fields. Here is an example. You can learn all about them in the Twitter API doc. We are going to briefly introduce only the data fields that will be used in this homework. created_at: Posted time of this tweet (time zone is included) id_str: Tweet ID - we recommend using id_str over using id as Tweet IDs, becauase id is an integer and may bring some overflow problems. text: Tweet content user: A JSON object for information about the author of the tweet id_str: User ID name: User name (may contain spaces) screen_name: User screen name (no spaces) retweeted_status: A JSON object for information about the retweeted tweet (i.e. this tweet is not original but retweeteed some other tweet) All data fields of a tweet except retweeted_status entities: A JSON object for all entities in this tweet hashtags: An array for all the hashtags that are mentioned in this tweet urls: An array for all the URLs that are mentioned in this tweet Data source All tweets are collected using the Twitter Streaming API. 
Users partition Besides the original tweets, we will provide you with a Pickle file, which contains a partition over 452,743 Twitter users. It contains a Python dictionary {user_id: partition_id}. The users are partitioned into 7 groups. Part 0: Load data to a RDD The tweets data is stored on AWS S3. We have in total a little over 1 TB of tweets. We provide 10 MB of tweets for your local development. For the testing and grading on the homework server, we will use different data. Testing on the homework server In the Playground, we provide three different input sizes to test your program: 1 GB, 10 GB, and 100 GB. To test them, read files list from ../Data/hw2-files-1gb.txt, ../Data/hw2-files-5gb.txt, ../Data/hw2-files-20gb.txt, respectively. For final submission, make sure to read files list from ../Data/hw2-files-final.txt. Otherwise your program will receive no points. Local test For local testing, read files list from ../Data/hw2-files.txt. Now let's see how many lines there are in the input files. Make RDD from the list of files in hw2-files.txt. Mark the RDD to be cached (so in next operation data will be loaded in memory) call the print_count method to print number of lines in all these files It should print Number of elements: 2193 End of explanation """ import ujson json_example = ''' { "id": 1, "name": "A green door", "price": 12.50, "tags": ["home", "green"] } ''' json_obj = ujson.loads(json_example) json_obj """ Explanation: Part 1: Parse JSON strings to JSON objects Python has built-in support for JSON. UPDATE: Python built-in json library is too slow. In our experiment, 70% of the total running time is spent on parsing tweets. Therefore we recommend using ujson instead of json. It is at least 15x faster than the built-in json library according to our tests. End of explanation """ import ujson def safe_parse(raw_json): tweet={} try: tweet = ujson.loads(raw_json) except ValueError: pass return tweet #filter out rate limites {"limit":{"track":77,"timestamp_ms":"1457610531879"}} tweets = rdd.map(lambda json_str: safe_parse(json_str))\ .filter(lambda h: "text" in h)\ .map(lambda tweet: (tweet["user"]["id_str"], tweet["text"]))\ .map(lambda (x,y): (x, y.encode("utf-8"))).cache() """ Explanation: Broken tweets and irrelevant messages The data of this assignment may contain broken tweets (invalid JSON strings). So make sure that your code is robust for such cases. In addition, some lines in the input file might not be tweets, but messages that the Twitter server sent to the developer (such as limit notices). Your program should also ignore these messages. Hint: Catch the ValueError (1) Parse raw JSON tweets to obtain valid JSON objects. From all valid tweets, construct a pair RDD of (user_id, text), where user_id is the id_str data field of the user dictionary (read Tweets section above), text is the text data field. End of explanation """ def print_users_count(count): print 'The number of unique users is:', count print_users_count(tweets.map(lambda x:x[0]).distinct().count()) """ Explanation: (2) Count the number of different users in all valid tweets (hint: the distinct() method). 
It should print The number of unique users is: 2083 End of explanation """ import cPickle as pickle path = 'Data/users-partition.pickle' if env=="prod": path = '../Data/users-partition.pickle' partitions = pickle.load(open(path, 'rb')) #{user_Id, partition_id} - {'583105596': 6} partition_bc = sc.broadcast(partitions) """ Explanation: Part 2: Number of posts from each user partition Load the Pickle file ../Data/users-partition.pickle, you will get a dictionary which represents a partition over 452,743 Twitter users, {user_id: partition_id}. The users are partitioned into 7 groups. For example, if the dictionary is loaded into a variable named partition, the partition ID of the user 59458445 is partition["59458445"]. These users are partitioned into 7 groups. The partition ID is an integer between 0-6. Note that the user partition we provide doesn't cover all users appear in the input data. (1) Load the pickle file. End of explanation """ count = tweets.map(lambda x:partition_bc.value.get(x[0], 7)).countByValue().items() """ Explanation: (2) Count the number of posts from each user partition Count the number of posts from group 0, 1, ..., 6, plus the number of posts from users who are not in any partition. Assign users who are not in any partition to the group 7. Put the results of this step into a pair RDD (group_id, count) that is sorted by key. End of explanation """ def print_post_count(counts): for group_id, count in counts: print 'Group %d posted %d tweets' % (group_id, count) print print_post_count(count) """ Explanation: (3) Print the post count using the print_post_count function we provided. It should print Group 0 posted 81 tweets Group 1 posted 199 tweets Group 2 posted 45 tweets Group 3 posted 313 tweets Group 4 posted 86 tweets Group 5 posted 221 tweets Group 6 posted 400 tweets Group 7 posted 798 tweets End of explanation """ # %load happyfuntokenizing.py #!/usr/bin/env python """ This code implements a basic, Twitter-aware tokenizer. A tokenizer is a function that splits a string of text into words. In Python terms, we map string and unicode objects into lists of unicode objects. There is not a single right way to do tokenizing. The best method depends on the application. This tokenizer is designed to be flexible and this easy to adapt to new domains and tasks. The basic logic is this: 1. The tuple regex_strings defines a list of regular expression strings. 2. The regex_strings strings are put, in order, into a compiled regular expression object called word_re. 3. The tokenization is done by word_re.findall(s), where s is the user-supplied string, inside the tokenize() method of the class Tokenizer. 4. When instantiating Tokenizer objects, there is a single option: preserve_case. By default, it is set to True. If it is set to False, then the tokenizer will downcase everything except for emoticons. The __main__ method illustrates by tokenizing a few examples. I've also included a Tokenizer method tokenize_random_tweet(). If the twitter library is installed (http://code.google.com/p/python-twitter/) and Twitter is cooperating, then it should tokenize a random English-language tweet. Julaiti Alafate: I modified the regex strings to extract URLs in tweets. 
""" __author__ = "Christopher Potts" __copyright__ = "Copyright 2011, Christopher Potts" __credits__ = [] __license__ = "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License: http://creativecommons.org/licenses/by-nc-sa/3.0/" __version__ = "1.0" __maintainer__ = "Christopher Potts" __email__ = "See the author's website" ###################################################################### import re import htmlentitydefs ###################################################################### # The following strings are components in the regular expression # that is used for tokenizing. It's important that phone_number # appears first in the final regex (since it can contain whitespace). # It also could matter that tags comes after emoticons, due to the # possibility of having text like # # <:| and some text >:) # # Most imporatantly, the final element should always be last, since it # does a last ditch whitespace-based tokenization of whatever is left. # This particular element is used in a couple ways, so we define it # with a name: emoticon_string = r""" (?: [<>]? [:;=8] # eyes [\-o\*\']? # optional nose [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth | [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth [\-o\*\']? # optional nose [:;=8] # eyes [<>]? )""" # The components of the tokenizer: regex_strings = ( # Phone numbers: r""" (?: (?: # (international) \+?[01] [\-\s.]* )? (?: # (area code) [\(]? \d{3} [\-\s.\)]* )? \d{3} # exchange [\-\s.]* \d{4} # base )""" , # URLs: r"""http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+""" , # Emoticons: emoticon_string , # HTML tags: r"""<[^>]+>""" , # Twitter username: r"""(?:@[\w_]+)""" , # Twitter hashtags: r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)""" , # Remaining word types: r""" (?:[a-z][a-z'\-_]+[a-z]) # Words with apostrophes or dashes. | (?:[+\-]?\d+[,/.:-]\d+[+\-]?) # Numbers, including fractions, decimals. | (?:[\w_]+) # Words without apostrophes or dashes. | (?:\.(?:\s*\.){1,}) # Ellipsis dots. | (?:\S) # Everything else that isn't whitespace. """ ) ###################################################################### # This is the core tokenizing regex: word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE) # The emoticon string gets its own regex so that we can preserve case for them as needed: emoticon_re = re.compile(regex_strings[1], re.VERBOSE | re.I | re.UNICODE) # These are for regularizing HTML entities to Unicode: html_entity_digit_re = re.compile(r"&#\d+;") html_entity_alpha_re = re.compile(r"&\w+;") amp = "&amp;" ###################################################################### class Tokenizer: def __init__(self, preserve_case=False): self.preserve_case = preserve_case def tokenize(self, s): """ Argument: s -- any string or unicode object Value: a tokenize list of strings; conatenating this list returns the original string if preserve_case=False """ # Try to ensure unicode: try: s = unicode(s) except UnicodeDecodeError: s = str(s).encode('string_escape') s = unicode(s) # Fix HTML character entitites: s = self.__html2unicode(s) # Tokenize: words = word_re.findall(s) # Possible alter the case, but avoid changing emoticons like :D into :d: if not self.preserve_case: words = map((lambda x : x if emoticon_re.search(x) else x.lower()), words) return words def tokenize_random_tweet(self): """ If the twitter library is installed and a twitter connection can be established, then tokenize a random tweet. """ try: import twitter except ImportError: print "Apologies. 
The random tweet functionality requires the Python twitter library: http://code.google.com/p/python-twitter/" from random import shuffle api = twitter.Api() tweets = api.GetPublicTimeline() if tweets: for tweet in tweets: if tweet.user.lang == 'en': return self.tokenize(tweet.text) else: raise Exception("Apologies. I couldn't get Twitter to give me a public English-language tweet. Perhaps try again") def __html2unicode(self, s): """ Internal metod that seeks to replace all the HTML entities in s with their corresponding unicode characters. """ # First the digits: ents = set(html_entity_digit_re.findall(s)) if len(ents) > 0: for ent in ents: entnum = ent[2:-1] try: entnum = int(entnum) s = s.replace(ent, unichr(entnum)) except: pass # Now the alpha versions: ents = set(html_entity_alpha_re.findall(s)) ents = filter((lambda x : x != amp), ents) for ent in ents: entname = ent[1:-1] try: s = s.replace(ent, unichr(htmlentitydefs.name2codepoint[entname])) except: pass s = s.replace(amp, " and ") return s from math import log tok = Tokenizer(preserve_case=False) def get_rel_popularity(c_k, c_all): return log(1.0 * c_k / c_all) / log(2) def print_tokens(tokens, gid = None): group_name = "overall" if gid is not None: group_name = "group %d" % gid print '=' * 5 + ' ' + group_name + ' ' + '=' * 5 for t, n in tokens: print "%s\t%.4f" % (t, n) print """ Explanation: Part 3: Tokens that are relatively popular in each user partition In this step, we are going to find tokens that are relatively popular in each user partition. We define the number of mentions of a token $t$ in a specific user partition $k$ as the number of users from the user partition $k$ that ever mentioned the token $t$ in their tweets. Note that even if some users might mention a token $t$ multiple times or in multiple tweets, a user will contribute at most 1 to the counter of the token $t$. Please make sure that the number of mentions of a token is equal to the number of users who mentioned this token but NOT the number of tweets that mentioned this token. Let $N_t^k$ be the number of mentions of the token $t$ in the user partition $k$. Let $N_t^{all} = \sum_{i=0}^7 N_t^{i}$ be the number of total mentions of the token $t$. We define the relative popularity of a token $t$ in a user partition $k$ as the log ratio between $N_t^k$ and $N_t^{all}$, i.e. \begin{equation} p_t^k = \log \frac{N_t^k}{N_t^{all}}. \end{equation} You can compute the relative popularity by calling the function get_rel_popularity. (0) Load the tweet tokenizer. End of explanation """ # unique_tokens = tweets.flatMap(lambda tweet: tok.tokenize(tweet[1])).distinct() splitter = lambda x: [(x[0],t) for t in x[1]] unique_tokens = tweets.map(lambda tweet: (tweet[0], tok.tokenize(tweet[1])))\ .flatMap(lambda t: splitter(t))\ .distinct() ut1 = unique_tokens.map(lambda x: ((partition_bc.value.get(x[0],7), x[1]), 1)).cache() utr = ut1.reduceByKey(lambda x,y: x+y).cache() group_tokens = utr.map(lambda (x,y):(x[1],y)).reduceByKey(lambda x,y:x+y) ##format: (token, k_all) print_count(group_tokens) """ Explanation: (1) Tokenize the tweets using the tokenizer we provided above named tok. Count the number of mentions for each tokens regardless of specific user group. Call print_count function to show how many different tokens we have. 
It should print Number of elements: 8979 End of explanation """ # splitter = lambda x: [(x[0],t) for t in x[1]] # tokens = tweets.map(lambda tweet: (tweet[0], tok.tokenize(tweet[1])))\ # .flatMap(lambda t: splitter(t))\ # .distinct() popular_tokens = group_tokens.filter(lambda x: x[1]>100).cache() # .sortBy(lambda x: x[1], ascending=False).cache() print_count(popular_tokens) print_tokens(popular_tokens.top(20, lambda x:x[1])) """ Explanation: (2) Tokens that are mentioned by too few users are usually not very interesting. So we want to only keep tokens that are mentioned by at least 100 users. Please filter out tokens that don't meet this requirement. Call print_count function to show how many different tokens we have after the filtering. Call print_tokens function to show top 20 most frequent tokens. It should print Number of elements: 52 ===== overall ===== : 1386.0000 rt 1237.0000 . 865.0000 \ 745.0000 the 621.0000 trump 595.0000 x80 545.0000 xe2 543.0000 to 499.0000 , 489.0000 xa6 457.0000 a 403.0000 is 376.0000 in 296.0000 ' 294.0000 of 292.0000 and 287.0000 for 280.0000 ! 269.0000 ? 210.0000 End of explanation """ # i want to join the partion on the top100 tweets!, so ineed to get it in the form (uid, tweet) twg = sc.parallelize(partitions.items()).rightOuterJoin(tweets)\ .map(lambda (uid,(gid,tweet)): (uid,(7,tweet)) if gid<0 or gid>6 else (uid,(gid,tweet))).cache() def group_score(gid): group_counts = utr.filter(lambda (x,y): x[0]==gid).map(lambda (x,y): (x[1], y)) merged = group_counts.join(popular_tokens) group_scores = merged.map(lambda (token,(V,W)): (token, get_rel_popularity(V,W))) return group_scores for _gid in range(0,8): _rdd = group_score(_gid) print_tokens(_rdd.top(10, lambda a:a[1]), gid=_gid) """ Explanation: (3) For all tokens that are mentioned by at least 100 users, compute their relative popularity in each user group. Then print the top 10 tokens with highest relative popularity in each user group. In case two tokens have same relative popularity, break the tie by printing the alphabetically smaller one. Hint: Let the relative popularity of a token $t$ be $p$. The order of the items will be satisfied by sorting them using (-p, t) as the key. End of explanation """
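"""
Explanation: The hint above says ties should be broken with the (-p, t) sort key, but .top(10, lambda a:a[1]) orders by score only. A small sketch using takeOrdered applies the full key; it reuses group_score and print_tokens defined above.
End of explanation
"""
# Top 10 per group with alphabetical tie-breaking, as suggested by the (-p, t) hint.
for _gid in range(0, 8):
    _scores = group_score(_gid)
    top10 = _scores.takeOrdered(10, key=lambda x: (-x[1], x[0]))
    print_tokens(top10, gid=_gid)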
martinjrobins/hobo
examples/sampling/population-mcmc.ipynb
bsd-3-clause
import pints import pints.toy as toy import pints.plot import numpy as np import matplotlib.pyplot as plt # Load a multi-modal logpdf log_pdf = pints.toy.MultimodalGaussianLogPDF( [ [2, 2], [16, 12], [24, 24], ], [ [[1.2, 0.0], [0.0, 1.2]], [[0.8, 0.2], [0.2, 1.4]], [[1.0, -0.5], [-0.5, 1.0]], ] ) # Contour plot of pdf x = np.linspace(0, 32, 80) y = np.linspace(0, 32, 80) X, Y = np.meshgrid(x, y) Z = np.exp([[log_pdf([i, j]) for i in x] for j in y]) plt.contour(X, Y, Z) plt.xlabel('x') plt.ylabel('y') plt.show() """ Explanation: Inference: Population MCMC This example shows you how to use Population MCMC, also known as simulated tempering. It follows on from the first sampling example. First, we create a multi-modal distribution: End of explanation """ # Choose starting points for 3 mcmc chains xs = [[1, 1], [15, 13], [25, 23]] # Create mcmc routine mcmc = pints.MCMCController(log_pdf, 3, xs, method=pints.HaarioBardenetACMC) # Add stopping criterion mcmc.set_max_iterations(8000) # Disable logging mode mcmc.set_log_to_screen(False) # Run! print('Running...') chains = mcmc.run() print('Done!') # Show traces and histograms pints.plot.trace(chains) plt.show() # Discard warm up chains = chains[:, 2000:, :] # Check convergence and other properties of chains results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=['mean_x', 'mean_y']) print(results) """ Explanation: Exploration with adaptive MCMC Now let's try exploring this landscape with adaptive covariance MCMC. In this example we use three chains, each started off near one of the modes. End of explanation """ # Create mcmc routine mcmc = pints.MCMCController(log_pdf, 3, xs, method=pints.PopulationMCMC) # Add stopping criterion mcmc.set_max_iterations(8000) # Disable logging mode mcmc.set_log_to_screen(False) # Run! print('Running...') chains = mcmc.run() print('Done!') # Show traces and histograms pints.plot.trace(chains) # Discard warm up chains = chains[:, 2000:, :] # Look at distribution in chain 0 pints.plot.pairwise(chains[0], kde=True) # Show graphs plt.show() # Check convergence and other properties of chains results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=['mean_x', 'mean_y']) print(results) """ Explanation: In this run, each chain only explored its own mode! If you re-run, it can happen that one of the chains finds 2 or 3 modes, but the result shown above occurs quite often. Now, we try and do the same thing with population MCMC: End of explanation """
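"""
Explanation: A quick NumPy-only check (not part of the original example) of how the pooled post-warm-up samples are spread over the three modes, by assigning each sample to the nearest mode centre from the LogPDF defined above.
End of explanation
"""
# Fraction of samples closest to each of the three modes.
modes = np.array([[2, 2], [16, 12], [24, 24]])
samples = chains.reshape(-1, 2)                              # pool the chains (warm-up already discarded)
dists = np.linalg.norm(samples[:, None, :] - modes[None, :, :], axis=2)
nearest = dists.argmin(axis=1)
for i, mode in enumerate(modes):
    print('Mode {} at {}: {:.1%} of samples'.format(i, mode.tolist(), np.mean(nearest == i)))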
phobson/paramnormal
docs/tutorial/fitting.ipynb
mit
%matplotlib inline import warnings warnings.simplefilter('ignore') import numpy as np import matplotlib.pyplot as plt import seaborn import paramnormal clean_bkgd = {'axes.facecolor':'none', 'figure.facecolor':'none'} seaborn.set(style='ticks', rc=clean_bkgd) """ Explanation: Fitting distributions to data with paramnormal. In addition to explicitly creating distributions from known parameters, paramnormal.[dist].fit provides a similar interface to scipy.stats maximum-likelihood estimation methods. Again, we'll demonstrate with a lognormal distribution and compare parameter estimation with scipy. End of explanation """ np.random.seed(0) x = paramnormal.lognormal(mu=1.75, sigma=0.75).rvs(370) """ Explanation: Let's start by generating a reasonably-sized random dataset and plotting a histogram. The primary method of creating a distribution from named parameters is shown below. The call to paramnormal.lognormal translates the parameters to be compatible with scipy. We then chain a call to the rvs (random variates) method of the returned scipy distribution. End of explanation """ bins = np.logspace(-0.5, 1.75, num=25) fig, ax = plt.subplots() _ = ax.hist(x, bins=bins, normed=True) ax.set_xscale('log') ax.set_xlabel('$X$') ax.set_ylabel('Probability') seaborn.despine() fig """ Explanation: Here's a histogram to illustrate the distribution. End of explanation """ from scipy import stats print(stats.lognorm.fit(x)) """ Explanation: Pretending for a moment that we didn't generate this dataset with explicit distribution parameters, how would we go about estimating them? Scipy provides a maximum-likelihood estimation for estimating parameters: End of explanation """ params = paramnormal.lognormal.fit(x) print(params) """ Explanation: Unfortunately those parameters don't really make any sense based on what we know about our artificial dataset. That's where paramnormal comes in: End of explanation """ dist = paramnormal.lognormal.from_params(params) # theoretical PDF x_hat = np.logspace(-0.5, 1.75, num=100) y_hat = dist.pdf(x_hat) bins = np.logspace(-0.5, 1.75, num=25) fig, ax = plt.subplots() _ = ax.hist(x, bins=bins, normed=True, alpha=0.375) ax.plot(x_hat, y_hat, zorder=2, color='g') ax.set_xscale('log') ax.set_xlabel('$X$') ax.set_ylabel('Probability') seaborn.despine() """ Explanation: This matches well with our understanding of the distribution. The returned params variable is a namedtuple that we can easily use to create a distribution via the .from_params method. From there, we can create a nice plot of the probability distribution function with our histogram. End of explanation """ params = paramnormal.lognormal.fit(x) print(params) """ Explanation: Recap Fitting data End of explanation """ paramnormal.lognormal(mu=1.75, sigma=0.75, offset=0) """ Explanation: Creating distributions The manual way: End of explanation """ paramnormal.lognormal.from_params(params) """ Explanation: From fit parameters: End of explanation """
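One way to see why the raw scipy fit above looks so far off: by default scipy.stats.lognorm.fit also estimates a location (shift) parameter. The hedged sketch below regenerates comparable data directly with scipy (so it does not depend on paramnormal) and pins the location with floc=0, which recovers values close to mu=1.75 and sigma=0.75, essentially what paramnormal.lognormal.fit reports.

import numpy as np
from scipy import stats

np.random.seed(0)
# scipy's parametrisation: shape = sigma, scale = exp(mu)
x = stats.lognorm(0.75, scale=np.exp(1.75)).rvs(370)

shape, loc, scale = stats.lognorm.fit(x, floc=0)   # hold the location parameter at zero
print('sigma ~ %.3f' % shape)          # expected to be near 0.75
print('mu    ~ %.3f' % np.log(scale))  # expected to be near 1.75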
revspete/self-driving-car-nd
sem1/p3-behavioural-learning/P3-Behavioural-Cloning.ipynb
mit
import csv from PIL import Image import cv2 import numpy as np import h5py import os from random import shuffle import sklearn """ Explanation: Behavioral Cloning Notebook Overview This notebook contains project files for the Behavioral Cloning Project. In this project, I use my knowledge on deep neural networks and convolutional neural networks to clone driving behaviors. I train, validate and test a model using Keras. The model will output a steering angles for an autonomous vehicle given images collected from the car. Udacity has provided a car simulator where you can steer a car around a track for data collection. The image data and steering angles are used to train a neural network and then a trained model is used to drive the car autonomously around the track. Import Packages End of explanation """ samples = [] with open('data/driving_log.csv') as csvfile: reader = csv.reader(csvfile) # if we added headers to row 1 we better skip this line #iterlines = iter(reader) #next(iterlines) for line in reader: samples.append(line) """ Explanation: Read and store lines from driving log csv file End of explanation """ def gray_scale(image): return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) def clahe_normalise(image): # create a CLAHE object (Arguments are optional). clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(5,5)) return clahe.apply(image) def process_image(image): # do some pre processing on the image # TODO: Continue experimenting with colour, brightness adjustments #image = gray_scale(image) #image = clahe_normalise(image) return image # TODO: more testing with ImageDataGenerator # from keras.preprocessing.image import ImageDataGenerator # https://keras.io/preprocessing/image/ #train_datagen = ImageDataGenerator( # featurewise_center=True, # featurewise_std_normalization=True, # rotation_range=0, # width_shift_range=0.0, # height_shift_range=0.0, # horizontal_flip=True) """ Explanation: Image Processing End of explanation """ import random import matplotlib.pyplot as plt from PIL import Image index = random.randint(0, len(samples)) sample = samples[index] print(sample) print ("Sample Information") print("Centre Image Location: ", sample[0]) print("Centre Image Location: ", sample[1]) print("Centre Image Location: ", sample[2]) print("Steering Centre: ", sample[3]) print ("Throttle: ", sample[4]) path = "data/IMG/" # RGB img_center = np.asarray(Image.open(path+os.path.basename(sample[0]))) img_left = np.asarray(Image.open(path+os.path.basename(sample[1]))) img_right = np.asarray(Image.open(path+os.path.basename(sample[2]))) # Gray gray_img_center = gray_scale(img_center) gray_img_left = gray_scale(img_center) gray_img_right = gray_scale(img_center) # Flipped img_center_flipped = cv2.flip(gray_img_center,1) img_left_flipped = cv2.flip(gray_img_center,1) img_right_flipped = cv2.flip(gray_img_center,1) # Normalised img_center_flipped_normalised = clahe_normalise(img_center_flipped) img_left_flipped_normalised = clahe_normalise(img_left_flipped) img_right_flipped_normalised = clahe_normalise(img_right_flipped) # Crop img_center_cropped = img_center_flipped_normalised[65:160-22,0:320] img_left_cropped = img_left_flipped_normalised[65:160-22,0:320] img_right_cropped = img_right_flipped_normalised[65:160-22,0:320] steering_center = float(sample[3]) # steering measurement for centre image correction = 0.1 # steering offset for left and right images, tune this parameter steering_left = steering_center + correction steering_right = steering_center - correction # And print the results # RGB 
plt.figure(figsize=(20,20)) plt.subplot(4,3,1) plt.imshow(img_center) plt.axis('off') plt.title('Image Center', fontsize=10) plt.subplot(4,3,2) plt.imshow(img_left) plt.axis('off') plt.title('Image Left', fontsize=10) plt.subplot(4,3,3) plt.imshow(img_right) plt.axis('off') plt.title('Image Right', fontsize=10) ### Gray plt.subplot(4,3,4) plt.imshow(gray_img_center, cmap=plt.cm.gray) plt.axis('off') plt.title('Gray Center', fontsize=10) plt.subplot(4,3,5) plt.imshow(gray_img_left, cmap=plt.cm.gray) plt.axis('off') plt.title('Gray Left', fontsize=10) plt.subplot(4,3,6) plt.imshow(gray_img_right, cmap=plt.cm.gray) plt.axis('off') plt.title('Gray Right', fontsize=10) ### Flipped Images plt.subplot(4,3,7) plt.imshow(img_center_flipped_normalised, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Center Flipped', fontsize=10) plt.subplot(4,3,8) plt.imshow(img_left_flipped_normalised, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Left Flipped', fontsize=10) plt.subplot(4,3,9) plt.imshow(img_right_flipped_normalised, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Right Flipped', fontsize=10) ### Normalised Images plt.subplot(4,3,10) plt.imshow(img_center_flipped_normalised, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Center Flipped', fontsize=10) plt.subplot(4,3,11) plt.imshow(img_left_flipped_normalised, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Left Flipped', fontsize=10) plt.subplot(4,3,12) plt.imshow(img_right_flipped_normalised, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Right Flipped', fontsize=10) ### Normalised Images plt.subplot(4,3,10) plt.imshow(img_center_cropped, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Center Cropped', fontsize=10) plt.subplot(4,3,11) plt.imshow(img_left_cropped, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Left Cropped', fontsize=10) plt.subplot(4,3,12) plt.imshow(img_right_cropped, cmap=plt.cm.gray) plt.axis('off') plt.title('Image Right Cropped', fontsize=10) plt.subplots_adjust(wspace=0.2, hspace=0.2, top=0.5, bottom=0, left=0, right=0.5) plt.savefig('plots/data_visualisation.png') plt.show() """ Explanation: Data visualisation End of explanation """ from sklearn.model_selection import train_test_split train_samples, validation_samples = train_test_split(samples, test_size=0.2) """ Explanation: Take a validation set End of explanation """ from numpy import newaxis def generator(samples, batch_size=32): # Create empty arrays to contain batch of features and labels num_samples = len(samples) while True: shuffle(samples) for offset in range(0, num_samples, batch_size): batch_samples = samples[offset:offset+batch_size] batch_features = [] batch_labels = [] for batch_sample in batch_samples: path = "data/IMG/" img_center = process_image(np.asarray(Image.open(path+os.path.basename(batch_sample[0])))) img_left = process_image(np.asarray(Image.open(path+os.path.basename(batch_sample[1])))) img_right = process_image(np.asarray(Image.open(path+os.path.basename(batch_sample[2])))) #We now want to create adjusted steering measurement for the side camera images steering_center = float(batch_sample[3]) # steering measurement for centre image correction = 0.1 # steering offset for left and right images, tune this parameter steering_left = steering_center + correction steering_right = steering_center - correction # TODO: Add throttle information batch_features.extend([img_center, img_left, img_right, cv2.flip(img_center,1), cv2.flip(img_left,1), cv2.flip(img_right,1)]) batch_labels.extend([steering_center, steering_left, 
steering_right, steering_center*-1.0, steering_left*-1.0, steering_right*-1.0]) X_train = np.array(batch_features) # X_train = X_train[..., newaxis] # if converting to gray scale and normalising, may need to add another axis # Do some image processing on the data #train_datagen.fit(X_train) y_train = np.array(batch_labels) yield sklearn.utils.shuffle(X_train, y_train) # once we've got our processed batch send them off """ Explanation: Define Data Generator Refer to https://medium.com/@fromtheast/implement-fit-generator-in-keras-61aa2786ce98 for a good tutorial End of explanation """ train_generator = generator(train_samples, batch_size=32) validation_generator = generator(validation_samples, batch_size=32) # Imports to build the model Architecture import matplotlib.pyplot as plt from keras.models import Model from keras.models import Sequential from keras.layers import Flatten, Dense, Lambda from keras.layers import Conv2D from keras.layers.core import Dropout from keras.layers.noise import GaussianDropout from keras.layers.pooling import MaxPooling2D from keras.layers.convolutional import Cropping2D # In the architecture we add a crop layer crop_top = 65 crop_bottom = 22 # The input image dimensions input_height = 160 input_width = 320 new_height = input_height - crop_bottom - crop_top # Build the model architecture # Based on http://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf model = Sequential() model.add(Cropping2D(cropping=((crop_top,crop_bottom),(0,0)), input_shape=(input_height,input_width, 3))) model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(new_height,input_width,3))) model.add(Conv2D(24,kernel_size=5,strides=(2, 2),activation='relu')) model.add(Conv2D(36,kernel_size=5,strides=(2, 2),activation='relu')) model.add(Conv2D(48,kernel_size=5,strides=(2, 2),activation='relu')) model.add(Conv2D(64,kernel_size=3,strides=(1, 1),activation='relu')) model.add(Conv2D(64,kernel_size=3,strides=(1, 1),activation='relu')) model.add(MaxPooling2D()) model.add(Flatten()) model.add(Dense(1164)) model.add(Dense(100)) model.add(Dense(50)) model.add(Dense(10)) model.add(Dense(1)) model.compile(loss='mse', optimizer='adam') print("model summary: ", model.summary()) """ Explanation: Build Model Architecture End of explanation """ batch_size = 100 # Info: https://medium.com/@fromtheast/implement-fit-generator-in-keras-61aa2786ce98 history_object = model.fit_generator( train_generator, steps_per_epoch=len(train_samples)/batch_size, validation_data = validation_generator, validation_steps=len(validation_samples)/batch_size, epochs=5, verbose=1) model.save('model.h5') ### print the keys contained in the history object print(history_object.history.keys()) ### plot the training and validation loss for each epoch plt.plot(history_object.history['loss']) plt.plot(history_object.history['val_loss']) plt.title('model mean squared error loss') plt.ylabel('mean squared error loss') plt.xlabel('epoch') plt.legend(['training set', 'validation set'], loc='upper right') plt.show() """ Explanation: Train the model End of explanation """
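Once model.h5 is saved, the network can be queried for single-frame steering predictions. The sketch below is an assumption about how that would look outside the simulator (the image path is hypothetical); depending on the Keras version, the Lambda and Cropping2D layers may need the same Python environment used for training in order to deserialise.

import numpy as np
from PIL import Image
from keras.models import load_model

model = load_model('model.h5')
# hypothetical centre-camera frame; the network expects a 160x320x3 RGB array
img = np.asarray(Image.open('data/IMG/center_example.jpg'))
steering = float(model.predict(img[np.newaxis, ...], batch_size=1)[0][0])
print('predicted steering angle:', steering)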
mspinaci/deep-learning-examples
Ridiculously overfitting models... or maybe not.ipynb
mit
from __future__ import division, print_function from matplotlib import pyplot as plt %matplotlib inline import bcolz import numpy as np import pandas as pd import os import theano import keras from keras import backend as K from keras.models import Sequential from keras.layers.core import Dense, Dropout, Flatten, Lambda, Activation from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D from keras.optimizers import SGD from keras.preprocessing import image from keras.layers.core import Layer from keras.layers import merge from keras.callbacks import CSVLogger IMAGE_HEIGHT, IMAGE_WIDTH = 227, 227 """ Explanation: Here we'll try very hard to overfit, by using silly models. This notebook should be run after the preprocessing part in dogs_vs_cats_with_AlexNet has already been run. End of explanation """ def load_array(fname): return bcolz.open(fname)[:] img_mean = load_array('input/img_mean.bz') def center(img): return img - img_mean.astype(np.float32).transpose([2,0,1]) linear = Sequential([ Lambda(center, input_shape=(3, IMAGE_HEIGHT, IMAGE_WIDTH), output_shape=(3, IMAGE_HEIGHT, IMAGE_WIDTH)), Flatten(), Dense(2, activation='softmax') ]) linear.summary() """ Explanation: First, let's do a simple linear model. It should have enough parameters to learn all the images and overfit! End of explanation """ def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4, class_mode='categorical', target_size=(IMAGE_HEIGHT, IMAGE_WIDTH)): return gen.flow_from_directory(dirname, target_size=target_size, class_mode=class_mode, shuffle=shuffle, batch_size=batch_size) def fit_model(model, batches, val_batches, nb_epoch=1, verbose=1, callbacks=None): model.fit_generator(batches, batches.n//batches.batch_size, epochs=nb_epoch, callbacks=callbacks, validation_data=val_batches, validation_steps=val_batches.n//val_batches.batch_size, verbose=verbose) train_path = 'input/train' valid_path = 'input/valid' test_path = 'input/test' batches = get_batches(train_path, batch_size=2000) val_batches = get_batches(valid_path, batch_size=2000) sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) linear.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy']) csv_logger = CSVLogger('training_linear.log') # valid_batches and batches are wrongly named - inverted... fit_model(linear, batches, val_batches, nb_epoch=20, callbacks=[csv_logger], verbose=1) training_results = pd.read_csv('training_linear.log') plt.style.use('ggplot') plt.rcParams.update({'font.size': 22}) training_results['acc'].plot(figsize=(15,10)) plt.ylim([0, 1]) plt.xlabel('Epoch') plt.ylabel('Training Accuracy') """ Explanation: 309k parameters for 23k images. It shouldn't be too hard to learn them all... 
End of explanation """ two_layers = Sequential([ Lambda(center, input_shape=(3, IMAGE_HEIGHT, IMAGE_WIDTH), output_shape=(3, IMAGE_HEIGHT, IMAGE_WIDTH)), Flatten(), Dense(500, activation='relu'), Dense(2, activation='softmax') ]) two_layers.summary() sgd5 = SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=True) two_layers.compile(optimizer=sgd5, loss='categorical_crossentropy', metrics=['accuracy']) csv_logger = CSVLogger('training_two_layers.log') small_batches = get_batches(valid_path, batch_size=64) two_layers.fit_generator(small_batches, small_batches.n//small_batches.batch_size, epochs=20, callbacks = [csv_logger]) small_batches = get_batches(valid_path, batch_size=256) two_layers.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy']) two_layers.fit_generator(small_batches, small_batches.n//small_batches.batch_size, epochs=20, callbacks = [CSVLogger('training_two_layers_part_two.log')]) """ Explanation: At least this seemed to learn something... Still, one would expect overfit (even more since the batch size is very big compared to the total amount of images), which never happens. Add one more layer, hence more parameters... Also, let's be more aggressive, train only on the validation dataset (that has 2000 elements), to stimulate even more overfitting... End of explanation """ w1 = two_layers.layers[2].get_weights()[0] b1 = two_layers.layers[2].get_weights()[1] w2 = two_layers.layers[3].get_weights()[0] b2 = two_layers.layers[3].get_weights()[1] def save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w') c.flush() def create_dir(path): try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise create_dir('model_weights') save_array('model_weights/w1.bz', w1) save_array('model_weights/b1.bz', b1) save_array('model_weights/w2.bz', w2) save_array('model_weights/b2.bz', b2) w1 = load_array('model_weights/w1.bz') b1 = load_array('model_weights/b1.bz') w2 = load_array('model_weights/w2.bz') b2 = load_array('model_weights/b2.bz') two_layers.layers[2].set_weights((w1, b1)) two_layers.layers[3].set_weights((w2, b2)) small_batches = get_batches(valid_path, batch_size=500) sgd = SGD(lr=0.005, decay=1e-6, momentum=0.9, nesterov=True) two_layers.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy']) two_layers.fit_generator(small_batches, small_batches.n//small_batches.batch_size, epochs=200, callbacks = [CSVLogger('training_two_layers_part_three.log')]) training_results = pd.concat(( pd.read_csv('training_two_layers.log'), pd.read_csv('training_two_layers_part_two.log'), pd.read_csv('training_two_layers_part_three.log') )).reset_index(drop=True) print(training_results.shape) training_results.head() plt.style.use('ggplot') plt.rcParams.update({'font.size': 22}) training_results['acc'].plot(figsize=(15,10)) plt.ylim([0, 1]) plt.xlabel('Epoch') plt.ylabel('Training Accuracy') """ Explanation: For some reason, needs a restart. Save and reload weights... End of explanation """
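A quick, hedged check that could be appended here (it is not in the original run): evaluate the two-layer model both on the 2000 images it was fitted on and on the untouched training directory, reusing the notebook's own get_batches helper. A large gap between the two accuracies would be the overfitting we were hunting for; similar numbers confirm that even this model fails to memorise.

# accuracy on the data the model was fitted on (valid_path) vs. data it never saw (train_path)
fit_batches = get_batches(valid_path, shuffle=False, batch_size=256)
new_batches = get_batches(train_path, shuffle=False, batch_size=256)
fit_loss, fit_acc = two_layers.evaluate_generator(fit_batches, fit_batches.n // fit_batches.batch_size)
new_loss, new_acc = two_layers.evaluate_generator(new_batches, new_batches.n // new_batches.batch_size)
print('accuracy on fitted data: %.3f, on unseen data: %.3f' % (fit_acc, new_acc))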
ayushmaskey/ayushmaskey.github.io
jupyter/pandas_moving_window_functions.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt %pylab inline pylab.rcParams['figure.figsize'] = (19,6) import numpy as np import pandas as pd """ Explanation: Window function rolling window --> how did i do last three days --> check everyday expanding window --> all data equallu relevant --> old or new End of explanation """ ts = pd.Series(np.random.randn(20), pd.date_range('7/1/16', freq='D', periods=20)) # shift my one period --> here one day ts_lagged = ts.shift() plt.plot(ts, color='blue') plt.plot(ts_lagged, color='red') ts2 = pd.Series(np.random.randn(20), pd.date_range('7/1/16', freq='H', periods=20)) # shift my one period --> here hourly --> but shift 5 hour ts2_lagged = ts2.shift(5) plt.plot(ts2, color='blue') plt.plot(ts2_lagged, color='red') ts3_lagged = ts2.shift(-5) plt.plot(ts2, color='blue') plt.plot(ts2_lagged, color='red') plt.plot(ts3_lagged, color='black') """ Explanation: special thing about timeseries data points relate to one another not independent campare and relate them one way to do that is to look at how they change End of explanation """ df = pd.DataFrame( np.random.randn(600, 3), index = pd.date_range('5/1/16', freq='D', periods=600), columns = ['A', 'B', 'C']) df.head() df.plot() df.index # window averaging over 20 days coz freq days. # window is simply row count r = df.rolling(window=20) r df['A'].plot(color='red') r.mean()['A'].plot(color='blue') r.count().head() r.A.count().head() r.quantile(.5).plot() r.agg( [ 'sum', 'var' ])[15:25] """ Explanation: moving aggregate measures of time series window functions are like aggregate functions it can be used in conjuction with .resample() End of explanation """ df.rolling(window=10, center=False).apply(lambda x: x[1] / x[2])[1:30] """ Explanation: rolling custom function .apply() End of explanation """ ts_long = pd.Series(np.random.rand(200), pd.date_range('7/1/16', freq='D', periods=200)) ts_long.head() # rolling window of 3 month at a time ts_long.resample('M').mean().rolling(window=3).mean() ts_long.resample('M').mean().rolling(window=3).mean().plot() """ Explanation: generate rolling window function of monthly data from daily data End of explanation """ df.expanding(min_periods=1).mean()[1:5] df.expanding(min_periods=1).mean().plot() """ Explanation: expanding window End of explanation """ ts_ewma = pd.Series(np.random.rand(1000), pd.date_range('7/1/16', freq='D', periods=1000)) ts_ewma.ewm(span=60, freq='D', min_periods=0, adjust=True).mean().plot() ts_ewma.ewm(span=60, freq='D', min_periods=0, adjust=True).mean().plot() ts_ewma.rolling(window=60).mean().plot() """ Explanation: exponentially weight moving average End of explanation """
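One more window operation worth a brief aside (an addition, reusing the df frame defined above): rolling and expanding computations can also combine two columns, for example a rolling correlation.

# 20-day rolling correlation between columns A and B of the random frame above
roll_corr = df['A'].rolling(window=20).corr(df['B'])
roll_corr.plot(title='rolling 20-day correlation of A and B')

# the expanding counterpart uses every observation seen so far
df['A'].expanding(min_periods=20).corr(df['B']).plot()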
takanory/python-machine-learning
Chapter11.ipynb
mit
# 単純な2次元のデータセットを生成する from sklearn.datasets import make_blobs X, y = make_blobs(n_samples=150, # サンプル点の総数 n_features=2, # 特徴量の個数 centers=3, # クラスタの個数 cluster_std=0.5, # クラスタ内の標準偏差 shuffle=True, # サンプルをシャッフル random_state=0) # 乱数生成器の状態を指定 import matplotlib.pyplot as plt plt.scatter(X[:, 0], X[:, 1], c='white', marker='o', s=50) plt.grid() plt.show() """ Explanation: 第11章 クラスタ分析 - ラベルなしデータの分析 https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch11/ch11.ipynb よく知られている k-means(k平均法)を使って類似点の中心を見つけ出す 階層的クラスタ木をボトムアップ方式で構築する 密度に基づくクラスタリングアプローチを使ってオブジェクトの任意の形状を識別する 11.1 k-means 法を使った類似度によるオブジェクトのグループ化 k-means法(k-means algorithm) プロトタイプベース(prototype-based)クラスタリング セントロイド(centroid): 特徴量が連続値の場合に、類似する点の「中心」を表す メドイド(medoid): 特徴量がカテゴリ値の場合に、最も「代表的」または最も頻度の高い点を表す 階層的(hierarchical)クラスタリング 密度(density-based)ベースクラスタリング クラスタリングの品質を評価する エルボー法(elbow method) シルエット図(silhouette plot) End of explanation """ from sklearn.cluster import KMeans km = KMeans(n_clusters=3, # クラスタの個数 init='random', # セントロイドの初期値をランダムに選択 n_init=10, # 異なるセントロイドの初期値を用いた実行回数 max_iter=300, # 最大イテレーション回数 tol=1e-04, # 収束と判定するための相対的な許容誤差 random_state=0) # セントロイドの初期化に用いる乱数生成器の状態 y_km = km.fit_predict(X) # クラスタの中心の計算と各サンプルのインデックスの予測 """ Explanation: k-means法の手続き クラスタの中心の初期値として、サンプル点からk個のセントロイドをランダムに選び出す 各サンプルを最も近いセントロイドμ(i)に割り当てる セントロイドに割り当てられたサンプルの中心にセントロイドを移動する サンプル点へのクラスタの割り当てが変化しなくなるか、ユーザー定義の許容値またはイテレーションの最大回数に達するまで、ステップ2〜3を繰り返す End of explanation """ plt.scatter(X[y_km == 0, 0], # グラフのxの値 X[y_km == 0, 1], # グラフのyの値 s=50, # プロットのサイズ c='lightgreen', # プロットの色 marker='s', # マーカーの形 label='cluster 1') # ラベル名 plt.scatter(X[y_km == 1, 0], X[y_km == 1, 1], s=50, c='orange', marker='o', label='cluster 2') plt.scatter(X[y_km == 2, 0], X[y_km == 2, 1], s=50, c='lightblue', marker='v', label='cluster 3') plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], s=250, marker='*', c='red', label='centroids') plt.legend() plt.grid() plt.show() """ Explanation: 11.1.1 k-means++ 法 End of explanation """ # 歪み(SSE)の値を取得 print('Distortion: {:.2f}'.format(km.inertia_)) distortions = [] for i in range(1, 11): km = KMeans(n_clusters=i, # クラスタの個数 init='k-means++', # k-measn++ 法によりクラスタ中心を選択 n_init=10, # 異なるセントロイドの初期値を用いた実行回数 max_iter=300, # 最大イテレーション回数 random_state=0) km.fit(X) distortions.append(km.inertia_) plt.plot(range(1, 11), distortions, marker='o') plt.xlabel('Number of clusters') plt.ylabel('Distortion') plt.show() """ Explanation: 11.1.2 ハードクラスタリングとソフトクラスタリング 11.1.1 エルボー法を使ってクラスタの最適な個数を求める End of explanation """ km = KMeans(n_clusters=3, init='k-means++', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) import numpy as np from matplotlib import cm from sklearn.metrics import silhouette_samples cluster_labels = np.unique(y_km) # y_km の要素の中で重複をなくす n_clusters = cluster_labels.shape[0] # シルエット係数を計算 silhouette_vals = silhouette_samples(X, y_km, metric='euclidean') y_ax_lower, y_ax_upper = 0, 0 yticks = [] for i, c in enumerate(cluster_labels): c_silhouette_vals = silhouette_vals[y_km == c] c_silhouette_vals.sort() y_ax_upper += len(c_silhouette_vals) color = cm.jet(i / n_clusters) # 色の値をセット plt.barh(range(y_ax_lower, y_ax_upper), # 水平の棒グラフを描画 c_silhouette_vals, # 棒の幅 height=1.0, # 棒の高さ edgecolor='none', # 棒の端の色 color=color) # 棒の色 yticks.append((y_ax_lower + y_ax_upper) / 2) # クラスタラベルの表示位置を追加 y_ax_lower += len(c_silhouette_vals) silhouette_avg = np.mean(silhouette_vals) # シルエット係数の平均値 plt.axvline(silhouette_avg, color='red', linestyle='--') # 係数の平均値には線を引く plt.yticks(yticks, cluster_labels + 1) # クラスタラベルを表示 
plt.ylabel('Cluster') plt.xlabel('Silhouette coefficient') plt.show() km = KMeans(n_clusters=2, init='k-means++', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) plt.scatter(X[y_km == 0, 0], X[y_km == 0, 1], s=50, c='lightgreen', marker='s', label='cluster 1') plt.scatter(X[y_km == 1, 0], X[y_km == 1, 1], s=50, c='orange', marker='o', label='cluster 2') plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], s=250, marker='*', c='red', label='centroids') plt.legend() plt.grid() plt.show() import numpy as np from matplotlib import cm from sklearn.metrics import silhouette_samples cluster_labels = np.unique(y_km) # y_km の要素の中で重複をなくす n_clusters = cluster_labels.shape[0] # シルエット係数を計算 silhouette_vals = silhouette_samples(X, y_km, metric='euclidean') y_ax_lower, y_ax_upper = 0, 0 yticks = [] for i, c in enumerate(cluster_labels): c_silhouette_vals = silhouette_vals[y_km == c] c_silhouette_vals.sort() y_ax_upper += len(c_silhouette_vals) color = cm.jet(i / n_clusters) # 色の値をセット plt.barh(range(y_ax_lower, y_ax_upper), # 水平の棒グラフを描画 c_silhouette_vals, # 棒の幅 height=1.0, # 棒の高さ edgecolor='none', # 棒の端の色 color=color) # 棒の色 yticks.append((y_ax_lower + y_ax_upper) / 2) # クラスタラベルの表示位置を追加 y_ax_lower += len(c_silhouette_vals) silhouette_avg = np.mean(silhouette_vals) # シルエット係数の平均値 plt.axvline(silhouette_avg, color='red', linestyle='--') # 係数の平均値には線を引く plt.yticks(yticks, cluster_labels + 1) # クラスタラベルを表示 plt.ylabel('Cluster') plt.xlabel('Silhouette coefficient') plt.show() """ Explanation: 11.1.4 シルエット図を使ってクラスタリングの性能を数値化する シルエット分析(silhouette analysis) シルエット係数(silhouette coefficient) End of explanation """ import pandas as pd import numpy as np np.random.seed(123) variables = ['X', 'Y', 'Z'] size = 5 # データサイズ labels = ['ID_{}'.format(i) for i in range(size)] X = np.random.random_sample([size, 3]) * 10 # 5行3列のサンプルデータを生成 df = pd.DataFrame(X, columns=variables, index=labels) df """ Explanation: 11.2 クラスタを階層木として構成する 階層的クラスタリング(hierarchical clustering) 樹形図(dendrogram) 2つのアプローチ 凝集型(agglomerative) 単連結法(single linkage) 完全連結法(complete linkage) 分割型(divisive) End of explanation """ from scipy.spatial.distance import pdist, squareform # pdist で距離を計算、squareform で対称行列を作成 row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')), columns=labels, index=labels) row_dist from scipy.cluster.hierarchy import linkage row_clusters = linkage(df.values, method='complete', metric='euclidean') # 1、2列目は最も類似度が低いメンバー # distanceはそれらのメンバーの距離 # 4列目は各クラスタのメンバーの個数 pd.DataFrame(row_clusters, columns=['row label 1', 'row label 2', 'distance', 'no. 
of items in clus.'], index=['cluster {}'.format(i + 1) for i in range(row_clusters.shape[0])]) # 樹形図を表示 from scipy.cluster.hierarchy import dendrogram import matplotlib.pyplot as plt row_dendr = dendrogram(row_clusters, labels=labels) plt.ylabel('Euclidean distance') plt.show() """ Explanation: 11.2.1 距離行列で階層的クラスタリングを実行する End of explanation """ # plot row dendrogram fig = plt.figure(figsize=(8, 8), facecolor='white') axd = fig.add_axes([0.09, 0.1, 0.2, 0.6]) # note: for matplotlib < v1.5.1, please use orientation='right' row_dendr = dendrogram(row_clusters, orientation='left') # reorder data with respect to clustering df_rowclust = df.ix[row_dendr['leaves'][::-1]] axd.set_xticks([]) axd.set_yticks([]) # remove axes spines from dendrogram for i in axd.spines.values(): i.set_visible(False) # plot heatmap axm = fig.add_axes([0.23, 0.1, 0.6, 0.6]) # x-pos, y-pos, width, height cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r') fig.colorbar(cax) axm.set_xticklabels([''] + list(df_rowclust.columns)) axm.set_yticklabels([''] + list(df_rowclust.index)) # plt.savefig('./figures/heatmap.png', dpi=300) plt.show() """ Explanation: 11.2.2 樹形図をヒートマップと組み合わせる End of explanation """ from sklearn.datasets import make_moons X, y = make_moons(n_samples=200, noise=0.05, random_state=0) plt.scatter(X[:, 0], X[:, 1]) plt.show() f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3)) # k-means 法で分類してプロット km = KMeans(n_clusters=2, random_state=0) y_km = km.fit_predict(X) ax1.scatter(X[y_km == 0, 0], X[y_km == 0, 1], c='lightblue', marker='o', s=40, label='cluster 1') ax1.scatter(X[y_km == 1, 0], X[y_km == 1, 1], c='red', marker='s', s=40, label='cluster 2') ax1.set_title('K-means clustering') # 完全連結法法で分類してプロット from sklearn.cluster import AgglomerativeClustering ac = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='complete') y_ac = ac.fit_predict(X) ax2.scatter(X[y_ac == 0, 0], X[y_ac == 0, 1], c='lightblue', marker='o', s=40, label='cluster 1') ax2.scatter(X[y_ac == 1, 0], X[y_ac == 1, 1], c='red', marker='s', s=40, label='cluster 2') ax2.set_title('Agglomerative clustering') plt.legend() plt.show() from sklearn.cluster import DBSCAN db = DBSCAN(eps=0.2, min_samples=5, metric='euclidean') y_db = db.fit_predict(X) plt.scatter(X[y_db == 0, 0], X[y_db == 0, 1], c='lightblue', marker='o', s=40, label='cluster 1') plt.scatter(X[y_db == 1, 0], X[y_db == 1, 1], c='red', marker='s', s=40, label='cluster 2') plt.legend() plt.show() """ Explanation: 11.3 DBSCANを使って高密度の領域を特定する DBSCAN(Density-based Spatial Clustering of Applications with Noise) コア点(core point) ボーダー点(border point) ノイズ点(noise point) End of explanation """
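As a brief addendum (not from the book's code), DBSCAN's behaviour on the half-moon data depends strongly on eps and min_samples. The loop below counts the clusters and noise points found for a few settings, which is a cheap way to see that sensitivity; it reuses the moons array X generated above.

from sklearn.cluster import DBSCAN

for eps in (0.1, 0.2, 0.3):
    labels = DBSCAN(eps=eps, min_samples=5, metric='euclidean').fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # label -1 marks noise
    n_noise = int((labels == -1).sum())
    print('eps=%.1f: %d clusters, %d noise points' % (eps, n_clusters, n_noise))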
shawger/uc-dand
P2/UCDAND-P2.ipynb
gpl-3.0
# Start of code. This block is for imports, global variables, common functions and any setup needed for the investigation %matplotlib inline import pandas as pd import matplotlib import numpy as np import matplotlib.pyplot as plt import seaborn as sns #Set some common formatting matplotlib.rcParams.update({'font.size': 20}) matplotlib.rcParams.update({'figure.titlesize': 24}) matplotlib.rcParams.update({'axes.labelsize': 20}) matplotlib.rcParams.update({'figure.figsize': (18,12)}) #For some reason setting 'font.size' does not effect the ytick and xtick font sizes. matplotlib.rcParams.update({'ytick.labelsize': 20}) matplotlib.rcParams.update({'xtick.labelsize': 20}) #Set some color maps to keep common color schemes sexColors = ['limegreen','dodgerblue'] classColors = ['gold','silver','rosybrown'] survivedColors = ['lightcoral','plum'] # The following function is used to create counts and percentages in the pie def make_autopct(values): def my_autopct(pct): total = sum(values) val = int(round(pct*total/100.0)) return '{p:.2f}% ({v:d})'.format(p=pct,v=val) return my_autopct """ Explanation: Titanic Dataset Investigation By: Nick Shaw Date: 2016-07-01 Project: P2 from the Udacity Data Analyst Nano Degree 1. Introduction Data describing passengers on the Titanic will be used to investigate the following questions: How does sex effect passenger class? Does the age of a passenger have any effect on their survival? What effect does sex have? The data used is from the Kaggle Titanic Dataset and can be found here. Python with the help of pandas, numpy and matlibplot will be used for the investigation. This project has a github page here. 1.1 Code End of explanation """ # Open the csv and load into pandas dataframe df = pd.read_csv('train.csv') """ Explanation: 2. Question 1: How does sex effect passenger class? 2.1 Data Wrangling Data loaded from trian.csv. End of explanation """ #Use the pandas.isnull function to find any missing data nullSex = df[pd.isnull(df['Sex'])]['PassengerId'].count() nullClass = df[pd.isnull(df['Pclass'])]['PassengerId'].count() nullSibSp = df[pd.isnull(df['SibSp'])]['PassengerId'].count() print "Rows with no sex: %d\nRows with no pClass: %d\nRows with no SibSp: %d" % (nullSex,nullClass,nullSibSp) """ Explanation: Now that the dataset is loaded, check if any rows contain bad data for the variables we are looking at. Passenger sex Passenger class If the passenger has siblings or spouses (this comes later) End of explanation """ sexNumbers = df.groupby('Sex')['Sex'].count() sexNumbers.plot.pie(subplots=True, figsize=(8, 8), autopct = make_autopct(sexNumbers), title='Passengers on Titanic Sex Distribution', colors = sexColors) """ Explanation: No missing data found, so we don't need to worry about cleaning the data for this investigation. 2.2 1D Investigation For the question, 'How does sex effect passenger class?', independently explore the variables sex and passenger class. 2.2.1 Passenger Sex As sex is a boolean thing (at least in this example) the only useful question we can answer is male vs female: End of explanation """ classNumbers = df.groupby('Pclass')['Pclass'].count() classNumbers.plot.pie(subplots=True, figsize=(8, 8), autopct= make_autopct(classNumbers), title='Passengers on Titanic Class Distribution', labels = ['First Class', 'Second Class', 'Third Class'], colors=classColors) """ Explanation: About 2/3 of the passengers are male and the other 1/3 female. 
It might be interesting to compare this with passenger data from other ships in that era, or ships/trains/planes today. 2.2.2 Passenger Class There are 3 classes (1, 2, and 3) so lets see how many passengers in each group and what is that as a percent. End of explanation """ # Group passenegers into male and female, and then group by class and count the number of passengers in the groups femaleVsClass = df[df['Sex'] == 'female'].groupby(['Pclass'])['Pclass'].count() maleVsClass = df[df['Sex'] == 'male'].groupby(['Pclass'])['Pclass'].count() # Combine the male and female results (for better graphing) sexVsClass = pd.concat([femaleVsClass, maleVsClass], axis=1, keys=['females','males']) #Plot the results sexVsClass.plot.pie(subplots=True, figsize=(16, 8), autopct='%.2f%%', title='Passengers on Titanic Class Distribution for Males and Females', labels = ['First Class', 'Second Class', 'Third Class'], legend=None, colors=classColors) """ Explanation: 3rd Class makes up the majority of the passengers. There are a similar number of 1st and 2nd class passengers. It might be interesting to compare this with passenger data from other ships in that era, or ships/trains/planes today. 2.3 2D Investigation Investigate the relationship between passenger class and sex. 2.3.1 Class Vs Sex on the Titanic For this we can break the dataset into 2 groups (male and female) then look at how the makeup of the class is. End of explanation """ # Find the amount of males and females in all classes and group by the sibsp (sibblings or spouses on board) # Since there are a different number of males and females in all classes, compare the results using % of total male1AllClassTotal = df[(df['Sex'] == 'male')]['Pclass'].count() maleAllClass = df[(df['Sex'] == 'male')].groupby(['SibSp'])['SibSp'].count()/male1AllClassTotal * 100 femaleAllClassTotal = df[(df['Sex'] == 'female')]['Pclass'].count() femaleAllClass = df[(df['Sex'] == 'female')].groupby(['SibSp'])['SibSp'].count()/femaleAllClassTotal * 100 # Combine the males and females in all class to display on same graph sexVsAllClass = pd.concat([femaleAllClass, maleAllClass], axis=1, keys=['females %','males %']) # Find the amount of males and females in first class and group by the sibsp (sibblings or spouses on board) # Since there are a different number of males and females in first class, compare the results using % of total male1stClassTotal = df[(df['Sex'] == 'male') & (df['Pclass'] == 1)]['Pclass'].count() male1stClass = df[(df['Sex'] == 'male') & (df['Pclass'] == 1)].groupby(['SibSp'])['SibSp'].count()/male1stClassTotal * 100 female1stClassTotal = df[(df['Sex'] == 'female') & (df['Pclass'] == 1)]['Pclass'].count() female1stClass = df[(df['Sex'] == 'female') & (df['Pclass'] == 1)].groupby(['SibSp'])['SibSp'].count()/female1stClassTotal * 100 # Combine the males and females in first class to display on same graph sexVs1stClass = pd.concat([female1stClass, male1stClass], axis=1, keys=['females %','males %']) # Find the amount of males and females in second class and group by the sibsp (sibblings or spouses on board) # Since there are a different number of males and females in second class, compare the results using % of total male2ndClassTotal = df[(df['Sex'] == 'male') & (df['Pclass'] == 2)]['Pclass'].count() male2ndClass = df[(df['Sex'] == 'male') & (df['Pclass'] == 2)].groupby(['SibSp'])['SibSp'].count()/male2ndClassTotal * 100 female2ndClassTotal = df[(df['Sex'] == 'female') & (df['Pclass'] == 2)]['Pclass'].count() female2ndClass = df[(df['Sex'] == 'female') & 
(df['Pclass'] == 2)].groupby(['SibSp'])['SibSp'].count()/female2ndClassTotal * 100 # Combine the males and females in second class to display on same graph sexVs2ndClass = pd.concat([female2ndClass, male2ndClass], axis=1, keys=['females %','males %']) # Find the amount of males and females in third class and group by the sibsp (sibblings or spouses on board) # Since there are a different number of males and females in third class, compare the results using % of total male3rdClassTotal = df[(df['Sex'] == 'male') & (df['Pclass'] == 3)]['Pclass'].count() male3rdClass = df[(df['Sex'] == 'male') & (df['Pclass'] == 3)].groupby(['SibSp'])['SibSp'].count()/male3rdClassTotal * 100 female3rdClassTotal = df[(df['Sex'] == 'female') & (df['Pclass'] == 3)]['Pclass'].count() female3rdClass = df[(df['Sex'] == 'female') & (df['Pclass'] == 3)].groupby(['SibSp'])['SibSp'].count()/female3rdClassTotal * 100 # Combine the males and females in third class to display on same graph sexVs3rdClass = pd.concat([female3rdClass, male3rdClass], axis=1, keys=['females %','males %']) # Display the results print pd.concat([sexVsAllClass, sexVs1stClass,sexVs2ndClass,sexVs3rdClass], axis=1, keys=['All','First','Second','Third']) a1 = sexVsAllClass.plot.bar(color=sexColors) a1.set_title('All Passengers',fontsize=24) a1.set_xlabel('Number of Siblings or Spouses',fontsize=20) a1.set_ylabel('% Of Total',fontsize=20) a2 = sexVs1stClass.plot.bar(color=sexColors) a2.set_title('1st Class Passengers',fontsize=24) a2.set_xlabel('Number of Siblings or Spouses',fontsize=20) a2.set_ylabel('% Of Total',fontsize=20) a3 = sexVs2ndClass.plot.bar(color=sexColors) a3.set_title('2nd Class Passengers',fontsize=24) a3.set_xlabel('Number of Siblings or Spouses',fontsize=20) a3.set_ylabel('% Of Total',fontsize=20) a4 = sexVs3rdClass.plot.bar(color=sexColors) a4.set_title('3rd Class Passengers',fontsize=24) a4.set_xlabel('Number of Siblings or Spouses',fontsize=20) a4.set_ylabel('% Of Total',fontsize=20) """ Explanation: The biggest difference is that a higher pct of males are in 3rd class then the % of females in third class. One thought I have is that maybe there are more poor single men on the trip trying to get to America to start a new life. Let's see. 2.3.2 Passengers Class and Sex vs Spouses For this analysis take a look at how many males and females in each class have siblings or spouses onboard. Include the breakdown of all males and females with sibling or spouses in all class for reference. End of explanation """ #Use the pandas.isnull function to find any missing data nullSex = df[pd.isnull(df['Sex'])]['PassengerId'].count() nullAge = df[pd.isnull(df['Age'])]['PassengerId'].count() nullSurvived = df[pd.isnull(df['Survived'])]['PassengerId'].count() totalRows = df['PassengerId'].count() print "Rows with no Sex: %d\nRows with no Age: %d\nRows with no Survived: %d\nTotal: %d" % (nullSex,nullAge,nullSurvived,totalRows) """ Explanation: Class does not seem to make much of a difference when it comes to the amount of men and women with siblings or spouses aboard. The % of men aboard with no spouses of siblings aboard is higher then the % of women. It would suggest that men were more likely to travel alone then women regardless of class. One interesting thing (could be an out-lier) is that there are a few larger families in third class. 2.4 Discussion and Conclusions There were more men on the Titanic then women. The majority of passengers would be considered to be of the 3rd class. There is a higher % of men in 3rd class then women. 
The other classes are closer. The amount of siblings or spouses a passenger has does not seem to effect class or sex. This investigation does not take into account the fact that the number of men with spouses or siblings could be effected by the number of women with siblings and spouses and vs versa. Anything discussed in this section is based on the data in train.csv from the Kaggle website which only includes 891/2224 of the passengers. I can't find out which of the 891 passengers were selected so it is hard to know if there is any bias in the data (eg, was the crew included?). Therefor any conclusions only apply to the passengers included in the set. 3. Question 2: Does the age of a passenger have any effect on their survival? What effect does sex have? For the question, 'Does the age of a passenger have any effect on their survival? What effect does sex have?', the variables will be investigated independently, then see what effect they have on each other. 3.1 Data Wrangling Data loaded from trian.csv. The data has already been loaded in section 2.1. End of explanation """ # Remove rows with a null age (blank age) from the dataframe df = df[pd.notnull(df['Age'])] nullAge = df[pd.isnull(df['Age'])]['PassengerId'].count() totalRows = df['PassengerId'].count() print "Rows with no Age: %d\nTotal: %d" % (nullAge,totalRows) """ Explanation: It appears some rows with passenger age is missing. For this investigation, rows with missing age information will be discarded. End of explanation """ # Add a column called 'Live or Died' with a string representation of the Survivedd column (which is 0 or 1) d = {0: 'Died', 1: 'Lived'} df['Lived or Died'] = df['Survived'].map(d) """ Explanation: For readability, add a column in the dataframe call 'Lived or Died' which has a string representation of whether a passenger survived. End of explanation """ # Use group by to find numbers of passengers how survived. Break down into all, men and women survivedAllNumbers = df.groupby('Lived or Died')['Lived or Died'].count() survivedMenNumbers = df[df['Sex']=='male'].groupby('Lived or Died')['Lived or Died'].count() survivedWomenNumbers = df[df['Sex']=='female'].groupby('Lived or Died')['Lived or Died'].count() # Combine survival numbers for display survivedNumbers = pd.concat([survivedAllNumbers, survivedMenNumbers,survivedWomenNumbers], axis=1, keys=['All','Men','Women']) # Display the not print(survivedNumbers) survivedNumbers.plot.pie(subplots=True, figsize=(18, 6), autopct='%.2f%%', title='Passengers on Titanic Survival Rates', legend=None, colors=survivedColors) """ Explanation: 3.2 1D Investigation Passenger survival and passenger age will be investigated. To make it more interesting both will be investigated with the entire passenger population and then split into sexes. (Maybe not 1D, but splitting into sexes doesn't add a lot of complexity) 3.2.1 Passenger Survival End of explanation """ # Describe the datasets in one chart using a concat of the describes of all, males and females. 
print pd.concat([df['Age'].describe(), df[df['Sex'] == 'male']['Age'].describe(), df[df['Sex'] == 'female']['Age'].describe()],axis=1, keys=['All','Male',"Female"]) # Show histograms of the total population, then for males and females sepperatly df['Age'].hist() plt.title('Histogram of Passenger Age for all Passengers on the Titanic',fontsize=24) plt.xlabel("Age",fontsize=20) plt.ylabel("Frequency",fontsize=20) plt.show() df[df['Sex']=='female']['Age'].hist(color=sexColors[0]) plt.title('Histogram of Females on Titanic',fontsize=24) plt.xlabel("Age",fontsize=20) plt.ylabel("Frequency",fontsize=20) plt.show() df[df['Sex']=='male']['Age'].hist(color=sexColors[1]) plt.title('Histogram of Males on Titanic',fontsize=24) plt.xlabel("Age",fontsize=20) plt.ylabel("Frequency",fontsize=20) plt.show() """ Explanation: More died then were saved. If you were a man it was much more unfortunate as most perished, with women having a much better (but still not perfect) survival rate. 3.2.2 Passenger Age End of explanation """ #Use seaborn to create graphs that use logistic regression to predict the survival % at different ages #Show both population as a whole and split up males and females. sns.set_context("notebook", font_scale=3) sns.set_style("darkgrid") g = sns.lmplot(x='Age', y='Survived', data=df, y_jitter=.02, logistic=True, size=6, aspect=4) g.set(xlim=(0,80),title='Survival Rate of All Passengers Using Logistic Regression') g = sns.lmplot(x='Age', y='Survived', hue="Sex", data=df, y_jitter=.02, logistic=True, size=6, aspect=4,) g.set(xlim=(0,80),title='Survival Rate of Passengers, Seperated by Sex, Using Logistic Regression') """ Explanation: There are more males then females. The distribution looks pretty close with the average age of males is slightly higher, and there are a higher % of very young females then males. 3.3 2D Investigation 3.3.1 Passenger Age vs Survival Once again we will took at the whole population and then separate into males and females. Use logistic regression to estimate the survival chances of a person at a certain age. End of explanation """
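A hedged follow-up to the regression plots (not part of the original write-up): the same pattern can be tabulated directly with pandas by binning age and averaging the Survived column per sex and age band, since that mean is exactly the survival rate. The bin edges below are an arbitrary choice.

# survival rate by sex and age band, using the cleaned df (rows with missing Age already dropped)
age_band = pd.cut(df['Age'], bins=[0, 10, 20, 30, 40, 50, 60, 80])
survival_by_band = df.groupby(['Sex', age_band])['Survived'].mean().unstack()
print(survival_by_band)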
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_eco/TD2A_Eco_Web_Scraping_corrige.ipynb
mit
import urllib import bs4 import collections import pandas as pd # pour le site que nous utilisons, le user agent de python 3 n'est pas bien passé : # on le change donc pour celui de Mozilla req = urllib.request.Request('http://pokemondb.net/pokedex/national', headers={'User-Agent': 'Mozilla/5.0'}) html = urllib.request.urlopen(req).read() page = bs4.BeautifulSoup(html, "lxml") # récupérer la liste des noms de pokémon liste_pokemon =[] for pokemon in page.findAll('span', {'class': 'infocard-lg-img'}) : pokemon = pokemon.find('a').get('href').replace("/pokedex/",'') liste_pokemon.append(pokemon) """ Explanation: 2A.eco - Web-Scraping - correction Correction d'exercices sur le Web Scraping. Pour cet exercice, nous vous demandons d'obtenir 1) les informations personnelles des 721 pokemons sur le site internet pokemondb.net. Les informations que nous aimerions obtenir au final pour les pokemons sont celles contenues dans 4 tableaux : Pokédex data Training Breeding Base stats Pour exemple : Pokemon Database. 2) Nous aimerions que vous récupériez également les images de chacun des pokémons et que vous les enregistriez dans un dossier (indice : utilisez les modules request et shutil) pour cette question ci, il faut que vous cherchiez de vous même certains éléments, tout n'est pas présent dans le TD. End of explanation """ def get_page(pokemon_name): url_pokemon = 'http://pokemondb.net/pokedex/'+ pokemon_name req = urllib.request.Request(url_pokemon, headers = {'User-Agent' : 'Mozilla/5.0'}) html = urllib.request.urlopen(req).read() return bs4.BeautifulSoup(html, "lxml") def get_cara_pokemon(pokemon_name): page = get_page(pokemon_name) data = collections.defaultdict() # table Pokédex data, Training, Breeding, base Stats for table in page.findAll('table', { 'class' : "vitals-table"})[0:4] : table_body = table.find('tbody') for rows in table_body.findChildren(['tr']) : if len(rows) > 1 : # attention aux tr qui ne contiennent rien column = rows.findChild('th').getText() cells = rows.findChild('td').getText() cells = cells.replace('\t','').replace('\n',' ') data[column] = cells data['name'] = pokemon_name return dict(data) items = [] for e, pokemon in enumerate(liste_pokemon) : print(e, pokemon) item = get_cara_pokemon(pokemon) items.append(item) if e > 20: break df = pd.DataFrame(items) df.head() """ Explanation: Fonction pour obtenir les caractéristiques de pokemons End of explanation """ import shutil import requests for e, pokemon in enumerate(liste_pokemon) : print(e,pokemon) url = "https://img.pokemondb.net/artwork/{}.jpg".format(pokemon) response = requests.get(url, stream=True) # avec l'option stream, on ne télécharge pas l'objet de l'url with open('{}.jpg'.format(pokemon), 'wb') as out_file: shutil.copyfileobj(response.raw, out_file) if e > 20: break import os names = [name for name in os.listdir('.') if '.jpg' in name] names[:3] import matplotlib.pyplot as plt import skimage.io as imio fig, ax = plt.subplots(1, 3, figsize=(12,4)) for i, name in enumerate(names[:ax.shape[0]]): img = imio.imread(name) ax[i].imshow(img) ax[i].get_xaxis().set_visible(False) ax[i].get_yaxis().set_visible(False) """ Explanation: les images de pokemon End of explanation """
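Two small, hedged extensions of the corrigé once the "e > 20" test limits are removed: pause between requests so the site is not hammered, and persist the scraped table so the 721 pages only have to be fetched once. The file name and the delay are arbitrary choices; liste_pokemon and get_cara_pokemon are reused from the cells above.

import time

items = []
for e, pokemon in enumerate(liste_pokemon):
    items.append(get_cara_pokemon(pokemon))   # reuses the scraping function defined above
    time.sleep(0.5)                           # be polite to the server
pd.DataFrame(items).to_csv('pokemons.csv', index=False, encoding='utf-8')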
marioberges/F16-12-752
projects/chingiw and chengchm/Data building.ipynb
gpl-3.0
import pandas as pd import numpy as np import scipy from scipy import stats import matplotlib.pyplot as plt """ Explanation: Machine Learning Regression for Energy efficiency Name: Oscar Wang, Chengcheng Mao Id: chingiw, chengchm Introduction Heating load and Cooling load is a good indicator for building energy efficiency. In this notebook, we get the energy efficiency Data Set from the UCI Machine Learning Repository, implement machine learning model SVC and linear regression to train our datasets. Our goal is to find a pattern between the building shapes and energy efficiency, analyze the predicted result to improve our model. Dataset Description The dataset perform energy analysis using 12 different building shapes simulated in Ecotect. The buildings differ with respect to the glazing area, the glazing area distribution, and the orientation, amongst other parameters. The dataset comprises 768 samples and 8 features, aiming to predict two real valued responses. It can also be used as a multi-class classification problem if the response is rounded to the nearest integer. Continuous features X1 Relative Compactness X2 Surface Area X3 Wall Area X4 Roof Area X5 Overall Height X6 Orientation X7 Glazing Area X8 Glazing Area Distribution y1 Heating Load y2 Cooling Load End of explanation """ df = pd.read_csv('ENB2012_data.csv', na_filter=False) df = df.drop(['Unnamed: 10','Unnamed: 11'], axis=1) df['X1'] = pd.to_numeric(df['X1'], errors='coerce') df['X2'] = pd.to_numeric(df['X2'], errors='coerce') df['X3'] = pd.to_numeric(df['X3'], errors='coerce') df['X4'] = pd.to_numeric(df['X4'], errors='coerce') df['X5'] = pd.to_numeric(df['X5'], errors='coerce') df['X6'] = pd.to_numeric(df['X6'], errors='coerce') df['X7'] = pd.to_numeric(df['X7'], errors='coerce') df['X8'] = pd.to_numeric(df['X8'], errors='coerce') df['Y1'] = pd.to_numeric(df['Y1'], errors='coerce') df['Y2'] = pd.to_numeric(df['Y2'], errors='coerce') df = df.dropna() print (df.dtypes) print (df.head()) plt.show() plt.plot(df.values[:,8]) plt.show() plt.plot(df.values[:,9]) plt.close() """ Explanation: Input and output preparation First, load in the CSV file. Some of these columns are in fact integers or floats, and if you wish to run numerical functions on them (like numpy) you'll need to convert the columns to the correct type. End of explanation """ plt.scatter(df['Y1'], df['Y2']) plt.show() plt.close() """ Explanation: Analyze the two output by ploting In order to figure out the relation ship between two outputs. Use matlabplot to scatter plot the lable Y1 and Y2. The result plot looks like a linear relationship. End of explanation """ from sklearn import svm from sklearn.model_selection import train_test_split from sklearn import linear_model """ Explanation: Model selection In this problem, we are going to use two different machine learning model to train this datasets and compare the result of it. First, we implement the basic linear regression to this datasets. Next we implement the SVR (Support Vector Regression) to see the differece between linear regression model. Last, plot the result and compare the the true label to see whether the model assumption is robust or not. Support Vector Regression The method of Support Vector Classification can be extended to solve regression problems. This method is called Support Vector Regression. A (linear) support vector machine (SVM) solves the canonical machine learning optimization problem using hinge loss and linear hypothesis, plus an additional regularization term. 
Unlike least squares, we solve these optimization problems by using gradient descent to update the funtion loss. End of explanation """ train, test = train_test_split(df, test_size = 0.3) X_tr = train.drop(['Y1','Y2'], axis=1) y_tr = train['Y1'] test = test.sort_values('Y1') X_te = test.drop(['Y1','Y2'], axis=1) y_te = test['Y1'] reg_svr = svm.SVR() reg_svr.fit(X_tr, y_tr) reg_lin = linear_model.LinearRegression() reg_lin.fit(X_tr, y_tr) y_pre_svr = reg_svr.predict(X_te) y_lin_svr = reg_lin.predict(X_te) print ("Coefficient R^2 of the SVR prediction: " + str(reg_svr.score(X_tr, y_tr))) print ("Coefficient R^2 of the Linear Regression prediction:" + str(reg_lin.score(X_tr, y_tr))) """ Explanation: Prediction on the Heating load and Cooling load We did a simple holdout cross-validation by seperating the dataset into training set (70%) and validation set (30%). Drop the input label from datasets and create the label vector. Here we sort the validaton set by the label value in order to analyze the result by plot and implement two different model and predict by the validation set. End of explanation """ plt.plot(y_pre_svr, label="Prediction for SVR") plt.plot(y_te.values, label="Heating Load") plt.plot(y_lin_svr, label="Prediction for linear") plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode="expand", borderaxespad=0.) plt.show() train, test = train_test_split(df, test_size = 0.3) X_tr = train.drop(['Y1','Y2'], axis=1) y_tr = train['Y2'] test = test.sort_values('Y2') X_te = test.drop(['Y1','Y2'], axis=1) y_te = test['Y2'] reg_svr = svm.SVR() reg_svr.fit(X_tr, y_tr) reg_lin = linear_model.LinearRegression() reg_lin.fit(X_tr, y_tr) y_pre_svr = reg_svr.predict(X_te) y_lin_svr = reg_lin.predict(X_te) print ("Coefficient R^2 of the SVR prediction: " + str(reg_svr.score(X_tr, y_tr))) print ("Coefficient R^2 of the Linear Regression prediction: " + str(reg_lin.score(X_tr, y_tr))) plt.plot(y_pre_svr, label="Prediction for SVR") plt.plot(y_te.values, label="Cooling Load") plt.plot(y_lin_svr, label="Prediction for linear") plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode="expand", borderaxespad=0.) plt.show() # coefficients of linear model print (reg_lin.coef_) """ Explanation: Analyze the model The R^2 error for both model are pretty similar. The SVR model yield a better result because of lower R^2 rate. To show the difference between these two model compare with true label, we use matplotlib to plot the result of our predictions. End of explanation """
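One caveat worth adding: reg.score(X_tr, y_tr) above measures R^2 on the training split, so it says little about generalisation. A hedged sketch of a fairer comparison, reusing the held-out variables from the last cell (the Y2 / cooling-load split):

from sklearn.metrics import r2_score

# R^2 on the 30% hold-out set for both fitted models
print("Held-out R^2, SVR:               %.3f" % r2_score(y_te, y_pre_svr))
print("Held-out R^2, linear regression: %.3f" % r2_score(y_te, y_lin_svr))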
lfairchild/PmagPy
data_files/notebooks/Intro to MagicDataFrames.ipynb
bsd-3-clause
from pmagpy import contribution_builder as cb from pmagpy import ipmag import os import json import numpy as np import sys import pandas as pd from pandas import DataFrame from pmagpy import pmag working_dir = os.path.join("..", "3_0", "Osler") """ Explanation: This notebook demonstrates how to use the Python MagicDataFrame object. A MagicDataFrame contains the data from one MagIC-format table and provides functionality for accessing and editing that data. Getting started End of explanation """ reload(nb) #class MagicDataFrame(object): # """ # Each MagicDataFrame corresponds to one MagIC table. # The MagicDataFrame object consists of a pandas DataFrame, # and assorted methods for manipulating that DataFrame. # """ # def __init__(self, magic_file=None, columns=None, dtype=None): # """ # Provide either a magic_file name or a dtype. # List of columns is optional, # and will only be used if magic_file == None # """ fname = os.path.join("..", '3_0', 'Osler', 'sites.txt') # the MagicDataFrame object: site_container = cb.MagicDataFrame(magic_file=fname) # the actual pandas DataFrame: site_df = site_container.df # show the first 5 site records site_df[:5] # FAILS #print site_df.fillna.__doc__ #site_df.fillna(value=None) #FAILS #print site_df.replace.__doc__ #site_df.replace(np.nan, None) #FAILS #site_df[site_df.astype(str) == ""] = None #site_df[site_df.where(site_df.astype(str) == "").notnull()] = None # WORKS! #site_df.where(site_df.notnull(), None) site_df.head() # make an empty MagicDataFrame with 'Age' and 'Metadata' headers reload(nb) fname = os.path.join("..", '3_0', 'Osler', 'sites.txt') # the MagicDataFrame object: site_container = cb.MagicDataFrame(dtype='sites', groups=['Age', 'Metadata']) # the actual pandas DataFrame: site_df = site_container.df # show the (empty) dataframe site_df """ Explanation: Creating a MagicDataFrame End of explanation """ fname = os.path.join('..', '3_0', 'Osler', 'sites.txt') # the MagicDataFrame object: site_container = cb.MagicDataFrame(fname) # the actual pandas DataFrame: site_df = site_container.df # all sites with site_name (index) of '1' # will return a smaller DataFrame (or a Series if there is only 1 row with that index) site_container.df.ix['1'] # index by position (using an integer), will always return a single record as Series # in this case, get the second record site_container.df.iloc[1] # return all sites with the description column filled in cond = site_container.df['description'].notnull() site_container.df[cond].head() # get list of all sites with the same location_name name = site_df.iloc[0].location site_df[site_df['location'] == name][['location']] # grab out declinations & inclinations # get di block, providing the index (slicing the dataframe will be done in the function) print site_container.get_di_block(do_index=True, item_names=['1', '2'], tilt_corr='100') # get di block, providing a slice of the DataFrame print site_container.get_di_block(site_container.df.loc[['1', '2']]) # Get names of all sites with a particular method code # (returns a pandas Series with the site name and method code) site_container.get_records_for_code('DE-K', incl=True)['method_codes'].head() # Get names of all sites WITHOUT a particular method code site_container.get_records_for_code('DE-K', incl=False)['method_codes'].head() """ Explanation: Indexing and selecting data End of explanation """ # update all sites named '1' to have a 'bed_dip' of 22 (.loc works in place) site_df.loc['1', 'bed_dip'] = '22' site_df.loc['1'] # update any site's value for 
# update any site's value for 'conglomerate_test' to 25 if that value was previously null
site_container.df['conglomerate_test'] = np.where(site_container.df['conglomerate_test'].isnull(), 25, \
                                                  site_container.df['conglomerate_test'])
site_container.df[:5]
# contribution_builder function to update a row (by row number)
ind = 1
row_data = {"bed_dip": "new_value", "new_col": "new_value"}
site_container.update_row(ind, row_data)
site_df.head()[["bed_dip", "new_col", "site"]]
site_df.head()[['site', 'new_col', 'citations']]
# new builder function to update a record:
# finds the self.df row based on a condition,
# then updates that row with new_data,
# then deletes any other rows that also meet that condition
site_name = "1"
col_val = "new_value"
# data to add:
new_data = {"citations": "new citation"}
# condition to find the row
cond1 = site_df.index.str.contains(site_name) == True
cond2 = site_df['new_col'] == col_val
condition = (cond1 & cond2)
# update record
site_container.update_record(site_name, new_data, condition)
site_df.head()[["citations", "new_col"]]
# initialize a new site with a name but no values, add it to the site table
site_container.add_blank_row('blank_site')
site_container.df = site_container.df
site_container.df.tail()
# copy a site from the site DataFrame,
# change a few values,
# then add the new site to the site DataFrame
new_site = site_container.df.ix[2]
new_site['bed_dip'] = "other"
new_site.name = 'new_site'
site_container.df = site_container.df.append(new_site)
site_container.df.tail()
# remove a row
site_container.delete_row(3)  # this deletes the 4th row
site_df.head()
# get rid of all rows with index "1" or "2"
site_df.drop(["1", "2"])
""" Explanation: Changing values End of explanation """
reload(cb)
# create an empty MagicDataFrame with column names
cols = ['analyst_names', 'aniso_ftest', 'aniso_ftest12', 'aniso_ftest23', 'aniso_s', 'aniso_s_mean',
        'aniso_s_n_measurements', 'aniso_s_sigma', 'aniso_s_unit', 'aniso_tilt_correction', 'aniso_type',
        'aniso_v1', 'aniso_v2', 'aniso_v3', 'citations', 'description', 'dir_alpha95', 'dir_comp_name',
        'dir_dec', 'dir_inc', 'dir_mad_free', 'dir_n_measurements', 'dir_tilt_correction', 'experiment_names',
        'geologic_classes', 'geologic_types', 'hyst_bc', 'hyst_bcr', 'hyst_mr_moment', 'hyst_ms_moment',
        'int_abs', 'int_b', 'int_b_beta', 'int_b_sigma', 'int_corr', 'int_dang', 'int_drats', 'int_f',
        'int_fvds', 'int_gamma', 'int_mad_free', 'int_md', 'int_n_measurements', 'int_n_ptrm', 'int_q',
        'int_rsc', 'int_treat_dc_field', 'lithologies', 'meas_step_max', 'meas_step_min', 'meas_step_unit',
        'method_codes', 'sample_name', 'software_packages', 'specimen_name']
dtype = 'specimens'
data_container = cb.MagicDataFrame(dtype=dtype, columns=None)
df = data_container.df
# create fake specimen data
fake_data = {col: 1 for col in cols}
# include a new column name in the data
fake_data['new_one'] = '999'
# add one row of specimen data (any additional column headers will be added automatically)
data_container.add_row('name', fake_data)
# add another row
fake_data['other'] = 'cheese'
fake_data.pop('aniso_ftest')
data_container.add_row('name2', fake_data)
# now the dataframe has two new columns, 'new_one' and 'other'
df
""" Explanation: Starting from scratch -- making a blank table End of explanation """
# get the location DataFrame
fname = os.path.join('..', '3_0', 'Osler', 'locations.txt')
loc_container = cb.MagicDataFrame(fname)
loc_df = loc_container.df
loc_df.head()
# get all sites belonging to a particular location RECORD (i.e., what used to be a result)
# (different from getting all sites with the same location name)
name = loc_df.ix[1].name
loc_record = loc_df.ix[name].ix[1]
site_names = loc_record['site_names']
print "All sites belonging to {}:".format(name), loc_record['site_names']
site_names = site_names.split(":")
# fancy indexing
site_container.df.ix[site_names].head()
""" Explanation: Interactions between two MagicDataFrames End of explanation """
# first site
print site_container.df.ix[0][:5]
print '-'
# find site by index value
print site_container.df.ix['new_site'][:5]
print '-'
# return all sites' values for a column
site_container.df['bed_dip'][:5]
""" Explanation: Gotchas Can't do self.df = self.df.append(blah); must instead do self.df.loc[blah.name] = blah. Beware chained indexing: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy To make a real, independent copy of a DataFrame, use DataFrame.copy() To update in place: df.loc[:, 'col_name'] = 'blah' See also http://stackoverflow.com/questions/37175007/pandas-dataframe-logic-operations-with-nan Pandas indexing End of explanation """
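# A minimal sketch of the gotchas above, using a small stand-in DataFrame
# (the column names and values here are illustrative, not from the MagIC tables):
demo = pd.DataFrame({'bed_dip': [np.nan, '22'], 'citations': ['a', 'b']},
                    index=['site_a', 'site_b'])
independent = demo.copy()                     # a real, independent copy, not a view
demo.loc[:, 'citations'] = 'new citation'     # in-place update of a whole column
demo.loc['site_a', 'bed_dip'] = '0'           # in-place update of a single cell
# chained indexing such as demo['bed_dip']['site_a'] = '0' may act on a temporary
# copy and is best avoided; the single .loc call above is the safe form
new_row = demo.loc['site_b'].copy()           # a single row comes back as a Series
new_row.name = 'site_c'
demo.loc[new_row.name] = new_row              # add the row without DataFrame.append
demo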
jegibbs/phys202-2015-work
assignments/assignment05/MatplotlibEx03.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
""" Explanation: Matplotlib Exercise 3 Imports End of explanation """
def well2d(x, y, nx, ny, L=1.0):
    """Compute the 2d quantum well wave function."""
    # YOUR CODE HERE
    raise NotImplementedError()

psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
""" Explanation: Contour plots of 2d wavefunctions The wavefunction of a 2d quantum well is: $$ \psi_{n_x,n_y}(x,y) = \frac{2}{L} \sin{\left( \frac{n_x \pi x}{L} \right)} \sin{\left( \frac{n_y \pi y}{L} \right)} $$ This is a scalar field, and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well. Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays. End of explanation """
# YOUR CODE HERE
raise NotImplementedError()

assert True # use this cell for grading the contour plot
""" Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction: Use $n_x=3$, $n_y=2$ and $L=1.0$. Use the limits $[0,1]$ for the x and y axes. Customize your plot to make it effective and beautiful. Use a non-default colormap. Add a colorbar to your visualization. First make a plot using one of the contour functions: End of explanation """
# YOUR CODE HERE
raise NotImplementedError()

assert True # use this cell for grading the pcolor plot
""" Explanation: Next make a visualization using one of the pcolor functions: End of explanation """
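# One possible sketch for the exercises above (not the official solution). It
# defines a separately named helper so the graded well2d stub is left untouched,
# and assumes the parameters suggested in the instructions (nx=3, ny=2, L=1.0).
def well2d_sketch(x, y, nx, ny, L=1.0):
    """Vectorized 2d infinite-square-well wavefunction."""
    return (2.0 / L) * np.sin(nx * np.pi * x / L) * np.sin(ny * np.pi * y / L)

x = np.linspace(0, 1, 100)
y = np.linspace(0, 1, 100)
X, Y = np.meshgrid(x, y)                # 2d coordinate grids
PSI = well2d_sketch(X, Y, 3, 2, 1.0)    # wavefunction evaluated on the grid

# contour-style visualization with a non-default colormap and a colorbar
plt.contourf(X, Y, PSI, 20, cmap='cubehelix')
plt.colorbar(label=r'$\psi_{3,2}(x,y)$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('2d quantum well wavefunction (contourf)')
plt.show()

# pcolor-style visualization of the same field
plt.pcolormesh(X, Y, PSI, cmap='cubehelix')
plt.colorbar(label=r'$\psi_{3,2}(x,y)$')
plt.xlabel('x')
plt.ylabel('y')
plt.title('2d quantum well wavefunction (pcolormesh)')
plt.show()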
kit-cel/lecture-examples
ccgbc/ch6_LDPC_Final_Aspects/Repeat_Accumulate.ipynb
gpl-2.0
import numpy as np
import matplotlib.pyplot as plot
from ipywidgets import interactive
from scipy.optimize import fsolve
import ipywidgets as widgets
import math
%matplotlib inline
""" Explanation: Repeat Accumulate Codes on the BEC This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods. This code illustrates * Convergence analysis of repeat accumulate codes and regular LDPC codes on the binary erasure channel (BEC) End of explanation """
def threshold_LDPC(dv, dc):
    # binary search to find the largest epsilon without a nontrivial fixed point
    epsilon = 0.5
    delta_epsilon = 0.5
    while delta_epsilon > 0.00001:
        fp = lambda x : epsilon * (1 - (1-x)**(dc-1))**(dv-1) - x
        x_0,_,ier,mesg = fsolve(fp, epsilon, full_output=True)
        if x_0 > 1e-6 and ier == 1 and np.abs(fp(x_0)) < 1e-6:
            # a nontrivial fixed point exists -> decoding fails -> lower epsilon
            epsilon = epsilon - delta_epsilon/2
        else:
            epsilon = epsilon + delta_epsilon/2
        delta_epsilon = delta_epsilon/2
    return epsilon
""" Explanation: In this notebook, we look at the performance evaluation of regular repeat-accumulate codes and compare them with regular $[d_{\mathtt{v}},d_{\mathtt{c}}]$ LDPC codes. We first consider the fixed-point equation before looking at the evolution of the message erasure probability as a function of the erasures. We look at both LDPC and RA codes. This code evaluates the fixed point equation for regular $[d_{\mathtt{v}},d_{\mathtt{c}}]$ LDPC codes. The fixed point equation in this case reads $$f(\epsilon,\xi)-\xi \leq 0\quad \forall \xi \in (0,1]$$ with $$f(\epsilon,\xi) = \epsilon\left(1-(1-\xi)^{d_{\mathtt{c}}-1}\right)^{d_{\mathtt{v}}-1}$$ End of explanation """
def threshold_RA(dv, dc):
    # binary search to find the largest epsilon without a nontrivial fixed point
    epsilon = 0.5
    delta_epsilon = 0.5
    while delta_epsilon > 0.000001:
        fp = lambda x : epsilon * (1 - ((1-epsilon)/(1-epsilon*(1-x)**dc))**2 * (1-x)**(dc-1))**(dv-1) - x
        x_0,_,ier,mesg = fsolve(fp, epsilon, full_output=True)
        if x_0 > 1e-6 and ier == 1 and np.abs(fp(x_0)) < 1e-6:
            epsilon = epsilon - delta_epsilon/2
        else:
            epsilon = epsilon + delta_epsilon/2
        delta_epsilon = delta_epsilon/2
    return epsilon
""" Explanation: This code evaluates the fixed point equation for regular $[d_{\mathtt{v}},d_{\mathtt{c}}]$ repeat-accumulate codes. The fixed point equation in this case reads $$f(\epsilon,\xi)-\xi \leq 0\quad \forall \xi \in (0,1]$$ with $$f(\epsilon,\xi) = \epsilon\left(1-\left(\frac{1-\epsilon}{1-\epsilon(1-\xi)^{d_{\mathtt{c}}}}\right)^2(1-\xi)^{d_{\mathtt{c}}-1}\right)^{d_{\mathtt{v}}-1}$$ End of explanation """
dv_range = np.arange(2,12)
thresholds_LDPC = [threshold_LDPC(dv,2*dv) for dv in dv_range]
thresholds_RA = [threshold_RA(dv,dv) for dv in dv_range]

plot.figure(figsize=(10,5))
plot.plot(dv_range, thresholds_LDPC, '-s')
plot.plot(dv_range, thresholds_RA, '-o')
plot.xlabel(r'Variable node degree $d_v$')
plot.ylabel(r'Threshold $\epsilon^\star$')
plot.ylim(0.2,0.5)
plot.xlim(2,11)
plot.grid()
plot.show()
""" Explanation: The plot below shows the thresholds of several rate $1/2$ LDPC and repeat accumulate codes. For LDPC codes, check and variable node degrees are related as $d_{\mathtt{c}} = 2d_{\mathtt{v}}$, while for repeat-accumulate codes, we have $d_{\mathtt{c}}=d_{\mathtt{v}}$. End of explanation """
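# A small sanity check on the threshold routines above (not part of the original
# notebook). The (3,6)-regular LDPC ensemble is known to have a BP threshold of
# roughly 0.4294 on the BEC, and any rate-1/2 code is limited by the capacity
# bound epsilon = 0.5, so the computed values should respect both numbers.
eps_ldpc_36 = threshold_LDPC(3, 6)
eps_ra_33 = threshold_RA(3, 3)
print('(3,6) LDPC threshold : %.4f (expected about 0.4294)' % eps_ldpc_36)
print('(3,3) RA threshold   : %.4f' % eps_ra_33)
assert eps_ldpc_36 < 0.5 and eps_ra_33 < 0.5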