| repo_name (stringlengths 6-77) | path (stringlengths 8-215) | license (stringclasses, 15 values) | content (stringlengths 335-154k) |
|---|---|---|---|
DanielMcAssey/steamSummerMinigame
|
Analysis of WH spam strategy.ipynb
|
mit
|
%pylab inline
import matplotlib.pyplot as plt
n_wormholes = 10
n_games = 20
def calc(n_active, n_game, multiplier=1.0):
return n_wormholes * multiplier *(n_active/1000.0 + n_active/10000.0)**(n_game-1)
title = "WH spam stategy (starting w/ %d WHs for each player)" % n_wormholes
plt.figure(figsize=(8,4), dpi=72, facecolor='w')
plt.title(title)
x = range(1,n_games + 1)
for n_active in reversed([910] + range(1000,1600,100)):
y = map(lambda i: calc(n_active, i), x)
plt.plot(x, y, label='active players: %d' % n_active)
plt.yscale('log')
plt.ylabel("number of WHs at start")
plt.xlabel("N game")
plt.xticks(x)
plt.legend(bbox_to_anchor=(1, 1), loc=2)
plt.grid(True)
plt.show()
"""
Explanation: WH spam strategy
Each player spams Wormhole (WH) as fast as possible.
After all WHs have been used, all players move to a brand new game.
They should now be able to purchase more WHs than the previous game.
None of this takes into account levels gained from killing mobs.
The strategy exploits the following flaws:
* growth created by more than 1000 active players
* growth by jumping 10 levels at every 100th level
Both flaws give the player more badge points to spend in the next game, as a result of reaching a higher max level.
$$\text{badge points} = \text{max level} \div 10$$
$$\text{number of wormholes} = \lfloor\text{badge points} \div 100\rfloor$$
We can calculate the number of WHs we will have after N games, given the number of active players and the number of WHs per player at the start.
$$
\text{wormholes at start}
\times
(\frac{\text{number of active players}}{1000} + \frac{\text{number of active players}}{10000})^{\text{Nth game}-1} = \text{number of wormholes at Nth game}
$$
Let's calculate the number of WHs each player will have at the start of the second game, given 1000 active players and 10 WHs each in the first game.
$$
10
\times
(\frac{1000}{1000} + \frac{1000}{10000})^{2-1} = 11
$$
$$
10
\times
(1+0.1)^1 = 11
$$
Now with 1500 active players.
$$
10
\times
(\frac{1500}{1000} + \frac{1500}{10000})^{1} = 16.5
$$
What about 10th game?
$$
10
\times
(\frac{1500}{1000} + \frac{1500}{10000})^{10-1} = 906.47
$$
Let's plot this to visualize it better.
End of explanation
"""
plt.figure(figsize=(8,4), dpi=72, facecolor='w')
for n_active in reversed([910] + range(1000,1600,100)):
y = map(lambda i: (calc(n_active, i)) / 60, x)
total = sum(y)
for i, n in reversed(list(enumerate(y))):
total, y[i] = total - y[i], total
plt.plot(x, y, label='active players: %d' % n_active)
plt.title("60s on every WH use")
plt.ylabel('hours to N game')
plt.yscale('log')
plt.yticks(map(lambda x: 10**x, range(-1,6)), map(lambda x: 10**x, range(-1,6)))
plt.xlabel("N game")
plt.xticks(x)
plt.legend(bbox_to_anchor=(1, 1), loc=2)
plt.grid(True)
plt.show()
"""
Explanation: That's crazy and great, but remember that WH has a 60s cooldown.
End of explanation
"""
chance_one_per_tick = 0.99999
x2 = range(100,1600,50)
y2 = map(lambda n_actives: 1-((1-chance_one_per_tick) ** (1.0/n_actives)), x2)
title = "Achieving %0.3f%% chance to use at least one per server tick" % (chance_one_per_tick * 100.0)
title += "\nbased on number of active users\n"
plt.title(title)
plt.plot(x2, y2)
plt.ylabel("each player use chance per tick")
plt.xlabel("Active players")
plt.grid(True)
plt.show()
"""
Explanation: Nobody has that kind of time. Fortunately, there is the Like New (LN) ability, which resets all cooldowns in the current lane. If we spend, say, 10% of our badges on Like New (LN) and activate it based on a chance, we can remove a large portion of those 60s cooldowns on wormholes. This in turn should send us climbing at warp speed.
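Solving for the per-player chance $p$ needed so that at least one of $N$ active players uses the ability in a given server tick:
$$
1 - (1-p)^{N} = P_{\text{target}} \;\Longrightarrow\; p = 1 - (1 - P_{\text{target}})^{1/N},
$$
which is the expression computed in the cell above with $P_{\text{target}} = 0.99999$.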
End of explanation
"""
|
ClaudiaEsp/inet
|
Analysis/misc/Counting inhibitory connectivity motifs.ipynb
|
gpl-2.0
|
# loading python modules
import numpy as np
np.random.seed(0)
from matplotlib.pyplot import figure
from terminaltables import AsciiTable
import matplotlib.pyplot as plt
%matplotlib inline
from __future__ import division
# loading custom inet modules
from inet import DataLoader, __version__
from inet.motifs import iicounter, motifcounter
from inet.utils import chem_squarematrix, elec_squarematrix
print('Inet version {}'.format(__version__))
# use the dataset to create the null hypothesis
mydataset = DataLoader('../data/PV')
"""
Explanation: <H1>Are inhibitory connectivity motifs entirely random?</H1>
We will count the number of inhibitory connectivity motifs (electrical and chemical) and test whether this number is expected assuming an average connectivity found experimentally.
End of explanation
"""
# e.g. mydataset.PV[2].values will return the different configurations with 2 PV cells
nPV = range(9)
for i in nPV:
nPV[i] = np.sum(mydataset.IN[i].values())
for i, experiment in enumerate(nPV):
print('{:<3d} recordings with {:2d} PV-cells'.format(experiment, i))
# for the moment, we only count experiments with 2 or 3 PVs
# later we use mydataset.PV[2:]
PV2 = sum(mydataset.IN[2].values())
PV3 = sum(mydataset.IN[3].values())
PV2, PV3
"""
Explanation: <H2>Collect number of experiments containing PV(+) cells</H2>
End of explanation
"""
PC = mydataset.motif.ii_chem_found/mydataset.motif.ii_chem_tested
PE = mydataset.motif.ii_elec_found/mydataset.motif.ii_elec_tested
PC2 = mydataset.motif.ii_c2_found/mydataset.motif.ii_c2_tested
Pdiv = mydataset.motif.ii_div_found/mydataset.motif.ii_div_tested
Pcon = mydataset.motif.ii_con_found/mydataset.motif.ii_con_tested
Plin = mydataset.motif.ii_lin_found/mydataset.motif.ii_lin_tested
PC1E = mydataset.motif.ii_c1e_found/mydataset.motif.ii_c1e_tested
PC2E = mydataset.motif.ii_c2e_found/mydataset.motif.ii_c2e_tested
info = [
['key', 'Probability', 'Motif', 'Value'],
['ii_chem', 'P(C)', 'chemical synapse',PC ],
['ii_elec', 'P(E)', 'electrical synapse',PE ],
['','',''],
['ii_c2', 'P(C U C)','bidirectional chemical synapse',PC2],
['ii_con', 'Pcon', 'convergent inhibitory motifs', Pcon],
['ii_div', 'Pdiv', 'divergent inhibitory motifs', Pdiv],
['ii_lin', 'Plin', 'linear inhibitory motifs', Plin],
['',''],
['ii_c1e', 'P(C U E)', 'electrical and unidirectional chemical', PC1E],
['ii_c2e', 'P(2C U E):','electrical and bidirectional chemical', PC2E],
]
print(AsciiTable(info).table)
"""
Explanation: <H2> Calculate empirical probabilities </H2>
End of explanation
"""
def mychem_simulation():
"""
simulate inhibitory chemical connections of the dataset
Return
------
A iicounter object
"""
mycount = iicounter()
for _ in range(PV2):
mycount += iicounter(chem_squarematrix(size=2,prob = PC))
for _ in range(PV3):
mycount += iicounter(chem_squarematrix(size=3, prob = PC))
return(mycount)
print(mychem_simulation()) # one simulation; check the number of connections tested
# must contain the same number of tested connections
for key in mychem_simulation().keys():
print(key, mydataset.motif[key])
# simulate the whole data set 1,000 times
n_chem = list()
n_bichem = list()
n_div = list()
n_con = list()
n_chain = list()
for _ in range(1000):
syn_counter = mychem_simulation()
n_chem.append(syn_counter['ii_chem']['found']) # null hypothesis
n_bichem.append(syn_counter['ii_c2']['found'])
n_con.append(syn_counter['ii_con']['found'])
n_div.append(syn_counter['ii_div']['found'])
n_chain.append(syn_counter['ii_lin']['found'])
"""
Explanation: <H2> Simulate random chemical synapses</H2>
from a random distribution whose probability is adjusted to the empirical probability found in the recordings.
End of explanation
"""
np.mean(n_chem) # on average the same number of unidirectional connections
mydataset.motif['ii_chem']['found']
"""
Explanation: If the null hypothesis is correctly implemented, we should see almost the same
number of chemical synapses as in the experiments.
End of explanation
"""
np.mean(n_bichem) # on average the same number of bidirectional connections????
"""
Explanation: If we find a number that differs from the empirical one, we must revise our
null hypothesis.
End of explanation
"""
PC*PC*mydataset.motif['ii_c2']['tested'] # null hypothesis
mydataset.motif['ii_c2']['found'] # however, we found more empirically
# The number of divergent connections found should be very similar to the ones calculated
np.mean(n_div)
PC*PC*mydataset.motif['ii_div']['tested'] # null hypothesis
np.mean(n_con)
PC*PC*mydataset.motif['ii_con']['tested'] # null hypothesis
"""
Explanation: Define the null hypothesis analytically:
End of explanation
"""
def myelec_simulation():
"""
simulate inhibitory electrical connections of the dataset
Return
------
A iicounter object
"""
mycount = iicounter()
for _ in range(PV2):
mycount += iicounter(elec_squarematrix(size=2,prob = PE))
for _ in range(PV3):
mycount += iicounter(elec_squarematrix(size=3, prob = PE))
return(mycount)
print(myelec_simulation()) # one simulation; check the number of connections tested
# must contain the same number of tested connections
for key in myelec_simulation().keys():
print(key, mydataset.motif[key])
n_elec = list()
for _ in range(1000):
syn_elec = myelec_simulation()
n_elec.append(syn_elec['ii_elec']['found'])
"""
Explanation: <H2> Simulate random electrical synapses</H2>
from a random distribution whose probability is adjusted to the empirical probability found in the recordings.
End of explanation
"""
np.mean(n_elec)
mydataset.motif.ii_elec_found # voila!
"""
Explanation: Similarly, we should see almost the same
number of electrical connections as in the experiments.
End of explanation
"""
C = chem_squarematrix(size = 2, prob = PC)
E = elec_squarematrix(size = 2, prob = PE)
C + E # when a chemical (1) and an electrical (2) synapse add up, they produce the motif 3
def myii_simulation():
"""
simulate inhibitory electrical and chemical connections of the dataset
Return
------
A iicounter object
"""
mycount = iicounter()
for _ in range(PV2):
C = chem_squarematrix(size = 2, prob = PC)
E = elec_squarematrix(size = 2, prob = PE)
S = C + E
x, y = np.where(S==2) # test to eliminate '1' from the opposite direction
mycoor = zip(y,x)
for i,j in mycoor:
if S[i,j]==1:
S[i,j]=3
S[j,i]=0
mycount += iicounter( S )
for _ in range(PV3):
C = chem_squarematrix(size = 3, prob = PC)
E = elec_squarematrix(size = 3, prob = PE)
S = C + E
x, y = np.where(S==2) # test to eliminate '1' from the opposite direction
mycoor = zip(y,x)
for i,j in mycoor:
if S[i,j]==1:
S[i,j]=3
S[j,i]=0
mycount += iicounter( S )
return(mycount)
myii_simulation()# one simulation, again, test the number of connections tested
# must contain the same number of tested connections
for key in myii_simulation().keys():
print(key, mydataset.motif[key])
# simulate the whole data set 1,000 times
n_chem = list()
n_elec = list()
n_c1e = list()
n_c2e = list()
n_c2 = list()
n_div = list()
n_con = list()
n_chain = list()
for _ in range(10000):
syn_counter = myii_simulation()
n_chem.append( syn_counter['ii_chem']['found'] ) # null hypothesis
n_elec.append( syn_counter['ii_elec']['found'] ) # null hypothesis
n_c1e.append( syn_counter['ii_c1e']['found'] )
n_c2e.append( syn_counter['ii_c2e']['found'] )
n_c2.append( syn_counter['ii_c2']['found'])
n_con.append( syn_counter['ii_con']['found'])
n_div.append( syn_counter['ii_div']['found'])
n_chain.append( syn_counter['ii_lin']['found'])
info = [
['Syn Motif', 'Simulations', 'Empirical'],
['chemical', np.mean(n_chem), mydataset.motif['ii_chem']['found']],
['electrical', np.mean(n_elec), mydataset.motif['ii_elec']['found']],
[''],
['2 chem',np.mean(n_c2),mydataset.motif['ii_c2']['found']],
['convergent', np.mean(n_con), mydataset.motif['ii_con']['found']],
['divergent', np.mean(n_div), mydataset.motif['ii_div']['found']],
['chains', np.mean(n_chain), mydataset.motif['ii_lin']['found']],
[''],
['1 chem + elec', np.mean(n_c1e), mydataset.motif['ii_c1e']['found']],
['2 chem + elec', np.mean(n_c2e), mydataset.motif['ii_c2e']['found']],
]
print(AsciiTable(info).table)
"""
Explanation: <H2>Simulate electrical and chemical synapses independently</H2>
End of explanation
"""
mydataset.motif['ii_c1e']
PCE1 = mydataset.motif.ii_c1e_found /mydataset.motif.ii_c1e_tested
PCE1
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PC*PE)*mydataset.motif.ii_c1e_tested
"""
Explanation: Let's see if the connections found correspond to the theoretical values for the complex motifs.
<H3>A) Unidirectional chemical connections in the presence of an electrical synapse *ii_c1e*</H3>
End of explanation
"""
mydataset.motif['ii_c2e']
PCE2 = mydataset.motif.ii_c2e_found /mydataset.motif.ii_c2e_tested
PCE2
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PE*PC*PC)*mydataset.motif.ii_c2e_tested #
"""
Explanation: <H3>B) Bidirectional chemical connections in the presence of one electrical synapse *ii_c2e*</H3>
End of explanation
"""
mydataset.motif['ii_elec']
PE = mydataset.motif.ii_elec_found /mydataset.motif.ii_elec_tested
PE
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PE)*mydataset.motif.ii_elec_tested #
"""
Explanation: <H3>C) Only electrical synapses *ii_elec*</H3>
End of explanation
"""
mydataset.motif['ii_chem']
PC = mydataset.motif.ii_chem_found /mydataset.motif.ii_chem_tested
PC
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PC)*mydataset.motif.ii_chem_tested #
"""
Explanation: <H3>D) Unidirectional chemical only *ii_chem*</H3>
End of explanation
"""
mydataset.motif['ii_c2']
PC1 = mydataset.motif.ii_chem_found /mydataset.motif.ii_chem_tested
PC1
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PC1*PC1)*mydataset.motif.ii_c2_tested
# calculate alpha levels according to Zhao et al., 2001
alpha_rec = (PC1/(PC*PC))-1
print(alpha_rec)
"""
Explanation: <H3>E) Bidirectional chemical connections only *ii_c2*</H3>
End of explanation
"""
mydataset.motif['ii_div']
Pdiv = mydataset.motif.ii_div_found /mydataset.motif.ii_div_tested
Pdiv
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PC*PC)*mydataset.motif.ii_div_tested
# calculate alpha levels according to Zhao et al., 2001
alpha_div = (Pdiv/(PC*PC))-1
print(alpha_div)
"""
Explanation: <H3>F) Divergent inhibitory connections *ii_div*</H3>
End of explanation
"""
mydataset.motif['ii_con']
Pcon = mydataset.motif.ii_con_found / mydataset.motif.ii_con_tested
Pcon
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PC*PC)*mydataset.motif.ii_con_tested
# calculate alpha levels according to Zhao et al., 2001
alpha_con = (Pcon/(PC*PC))-1
print(alpha_con)
"""
Explanation: <H3>G) Convergent inhibitory connections *ii_con*</H3>
End of explanation
"""
mydataset.motif['ii_lin']
Pchain = mydataset.motif.ii_lin_found / mydataset.motif.ii_lin_tested
Pchain
# definition of the null hypothesis
# if this value is close to the simulation, we accept the null hypothesis
(PC*PC)*mydataset.motif.ii_lin_tested
# calculate alpha levels according to Zhao et al., 2001
alpha_chain = (Pchain/(PC*PC))-1
print(alpha_chain)
"""
Explanation: <H3>H) Chain (linear) inhibitory connections *ii_lin*</H3>
End of explanation
"""
# operate with NumPY arrays rather than with lists
n_chem = np.array(n_chem)
n_elec = np.array(n_elec)
n_c2 = np.array(n_c2)
n_con = np.array(n_con)
n_div = np.array(n_div)
n_chain = np.array(n_chain)
n_c1e = np.array(n_c1e)
n_c2e = np.array(n_c2e)
pii_chem = len(n_chem[n_chem>mydataset.motif.ii_chem_found]) / n_chem.size
pii_elec = len(n_elec[n_elec>mydataset.motif.ii_elec_found])/ n_elec.size
pii_c2 = len(n_c2[n_c2 > mydataset.motif.ii_c2_found])/ n_c2.size
pii_con = len(n_con[n_con < mydataset.motif.ii_con_found])/n_con.size # under-rep
pii_div = len(n_div[n_div > mydataset.motif.ii_div_found])/n_div.size
pii_chain = len(n_chain[n_chain < mydataset.motif.ii_lin_found])/n_chain.size # under-rep
pii_c1e = len(n_c1e[n_c1e > mydataset.motif.ii_c1e_found])/ n_c1e.size
pii_c2e = len(n_c2e[n_c2e > mydataset.motif.ii_c2e_found])/ n_c2e.size
info = [
['Syn Motif', 'Simulations', 'Empirical', 'P(Simulations)', 'alpha'],
['chemical', np.mean(n_chem), mydataset.motif.ii_chem_found, pii_chem],
['electrical', np.mean(n_elec), mydataset.motif.ii_elec_found, pii_elec],
[''],
['2 chem bidirect', np.mean(n_c2), mydataset.motif.ii_c2_found, pii_c2,alpha_rec],
['convergent', np.mean(n_con), mydataset.motif.ii_con_found, pii_con, alpha_con],
['divergent', np.mean(n_div), mydataset.motif.ii_div_found, pii_div, alpha_div],
['chain', np.mean(n_chain), mydataset.motif.ii_lin_found, pii_chain, alpha_chain],
[''],
['1 chem + elec', np.mean(n_c1e), mydataset.motif.ii_c1e_found, pii_c1e],
['2 chem + elec', np.mean(n_c2e), mydataset.motif.ii_c2e_found, pii_c2e],
]
print(AsciiTable(info).table)
from inet.plots import barplot
"""
Explanation: <H2>Calculating P-Values</H2>
End of explanation
"""
# This is our null hypothesis
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_chem, n_found = mydataset.motif.ii_chem_found, larger=1);
ax.set_title('Chemical synapses', size=20);
ax.set_ylim(ymax=40);
ax.tick_params(labelsize=20)
fig.savefig('ii_chem.pdf')
"""
Explanation: <H2> Plot chemical synapses alone</H2>
End of explanation
"""
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_elec, n_found = mydataset.motif.ii_elec_found, larger=1);
ax.set_title('Electrical synapses', size=20);
ax.set_ylim(ymax=40);
ax.tick_params(labelsize=20)
fig.savefig('ii_elec.pdf')
"""
Explanation: <H2>Plot electrical synapses alone </H2>
End of explanation
"""
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_c2, n_found = mydataset.motif.ii_c2_found, larger=1);
ax.set_title('Bidirectional chemical', size=20);
ax.set_ylim(ymax=10);
ax.tick_params(labelsize=20)
fig.savefig('ii_c2.pdf')
"""
Explanation: <H2>Plot bidirectional chemical synapses</H2>
End of explanation
"""
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_con, n_found = mydataset.motif.ii_con_found, larger=False);
ax.set_title('Convergent inhibitory', size=20);
ax.set_ylim(ymin=0, ymax=4);
ax.tick_params(labelsize=20)
fig.savefig('ii_con.pdf')
"""
Explanation: <H2>Plot convergent inhibitory</H2>
End of explanation
"""
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_div, n_found = mydataset.motif.ii_div_found, larger=1);
ax.set_title('Divergent inhibitory', size=20);
ax.set_ylim(ymin=0, ymax=5);
ax.tick_params(labelsize=20)
fig.savefig('ii_div.pdf')
"""
Explanation: <H2> Plot divergent inhibitory </H2>
End of explanation
"""
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_chain, n_found = mydataset.motif.ii_lin_found, larger=False);
ax.set_title('Linear chains', size=20);
ax.set_ylim(ymin=0, ymax=10);
ax.tick_params(labelsize=20)
fig.savefig('ii_chain.pdf')
#pii_chain # change this value in the plot!
"""
Explanation: <H2>Plot linear chains</H2>
End of explanation
"""
fig = figure()
ax = fig.add_subplot(111)
ax = barplot(simulation = n_c1e, n_found = mydataset.motif.ii_c1e_found, larger=1);
ax.set_title('Electrical and one chemical', size=20);
ax.set_ylim(ymax=25);
ax.tick_params(labelsize=20)
fig.savefig('ii_c1e.pdf')
"""
Explanation: <H2>Plot electrical and one chemical synapse alone </H2>
End of explanation
"""
fig = figure(5)
ax = fig.add_subplot(111)
ax = barplot(simulation = n_c2e, n_found = mydataset.motif.ii_c2e_found, larger=1);
ax.set_title('Electrical and two chemical', size=20);
ax.set_ylim(ymin = 0, ymax=10);
ax.tick_params(labelsize=20)
fig.savefig('ii_c2d.pdf')
"""
Explanation: <H2>Plot electrical and two chemical</H2>
End of explanation
"""
|
dnc1994/MachineLearning-UW
|
ml-regression/ridge-regression-l2.ipynb
|
mit
|
import graphlab
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
"""
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature.apply(lambda x: x ** power)
return poly_sframe
"""
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
"""
sales = sales.sort(['sqft_living','price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
l2_small_penalty = 1e-5
"""
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
"""
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features, l2_penalty=l2_small_penalty, validation_set = None)
model15.get('coefficients')
"""
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
End of explanation
"""
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
"""
Explanation: QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
End of explanation
"""
subdata_1 = polynomial_sframe(set_1['sqft_living'], 15)
features_1 = subdata_1.column_names() # get the name of the features
subdata_1['price'] = set_1['price'] # add price to the data since it's the target
model_1 = graphlab.linear_regression.create(subdata_1, target = 'price', features = features_1, l2_penalty=l2_small_penalty, validation_set = None, verbose=False)
model_1.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_1['power_1'],subdata_1['price'],'.',
subdata_1['power_1'], model_1.predict(subdata_1),'-')
subdata_2 = polynomial_sframe(set_2['sqft_living'], 15)
features_2 = subdata_2.column_names() # get the name of the features
subdata_2['price'] = set_2['price'] # add price to the data since it's the target
model_2 = graphlab.linear_regression.create(subdata_2, target = 'price', features = features_2, l2_penalty=l2_small_penalty, validation_set = None, verbose=False)
model_2.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_2['power_1'],subdata_2['price'],'.',
subdata_2['power_1'], model_2.predict(subdata_2),'-')
subdata_3 = polynomial_sframe(set_3['sqft_living'], 15)
features_3 = subdata_3.column_names() # get the name of the features
subdata_3['price'] = set_3['price'] # add price to the data since it's the target
model_3 = graphlab.linear_regression.create(subdata_3, target = 'price', features = features_3, l2_penalty=l2_small_penalty, validation_set = None, verbose=False)
model_3.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_3['power_1'],subdata_3['price'],'.',
subdata_3['power_1'], model_3.predict(subdata_3),'-')
subdata_4 = polynomial_sframe(set_4['sqft_living'], 15)
features_4 = subdata_4.column_names() # get the name of the features
subdata_4['price'] = set_4['price'] # add price to the data since it's the target
model_4 = graphlab.linear_regression.create(subdata_4, target = 'price', features = features_4, l2_penalty=l2_small_penalty, validation_set = None, verbose=False)
model_4.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_4['power_1'],subdata_4['price'],'.',
subdata_4['power_1'], model_4.predict(subdata_4),'-')
"""
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
"""
subdata_1 = polynomial_sframe(set_1['sqft_living'], 15)
features_1 = subdata_1.column_names() # get the name of the features
subdata_1['price'] = set_1['price'] # add price to the data since it's the target
model_1 = graphlab.linear_regression.create(subdata_1, target = 'price', features = features_1, l2_penalty=1e5, validation_set = None, verbose=False)
model_1.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_1['power_1'],subdata_1['price'],'.',
subdata_1['power_1'], model_1.predict(subdata_1),'-')
subdata_2 = polynomial_sframe(set_2['sqft_living'], 15)
features_2 = subdata_2.column_names() # get the name of the features
subdata_2['price'] = set_2['price'] # add price to the data since it's the target
model_2 = graphlab.linear_regression.create(subdata_2, target = 'price', features = features_2, l2_penalty=1e5, validation_set = None, verbose=False)
model_2.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_2['power_1'],subdata_2['price'],'.',
subdata_2['power_1'], model_2.predict(subdata_2),'-')
subdata_3 = polynomial_sframe(set_3['sqft_living'], 15)
features_3 = subdata_3.column_names() # get the name of the features
subdata_3['price'] = set_3['price'] # add price to the data since it's the target
model_3 = graphlab.linear_regression.create(subdata_3, target = 'price', features = features_3, l2_penalty=1e5, validation_set = None, verbose=False)
model_3.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_3['power_1'],subdata_3['price'],'.',
subdata_3['power_1'], model_3.predict(subdata_3),'-')
subdata_4 = polynomial_sframe(set_4['sqft_living'], 15)
features_4 = subdata_4.column_names() # get the name of the features
subdata_4['price'] = set_4['price'] # add price to the data since it's the target
model_4 = graphlab.linear_regression.create(subdata_4, target = 'price', features = features_4, l2_penalty=1e5, validation_set = None, verbose=False)
model_4.get('coefficients').print_rows(num_rows=16)
plt.plot(subdata_4['power_1'],subdata_4['price'],'.',
subdata_4['power_1'], model_4.predict(subdata_4),'-')
"""
Explanation: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the extra parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
"""
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
"""
Explanation: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
"""
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
"""
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
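For example, with $n = 100$ and $k = 10$, segment 3 starts at index $100\times3/10 = 30$ and ends at index $100\times4/10 - 1 = 39$.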
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
"""
train_valid_shuffled[0:10] # rows 0 to 9
"""
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use a colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
"""
k = 10
def get_st_ed(i):
start = (n*i)/k
end = (n*(i+1))/k-1
return start, end
st, ed = get_st_ed(3)
validation4 = train_valid_shuffled[st:ed+1]
"""
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
End of explanation
"""
print int(round(validation4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
"""
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
"""
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
"""
print int(round(train4['price'].mean(), 0))
"""
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
"""
def get_st_ed(k, i):
start = (n*i)/k
end = (n*(i+1))/k-1
return start, end
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
total_RSS = 0
for i in range(k):
st, ed = get_st_ed(k, i)
validation_set = data[st:ed+1]
training_set = data[:st].append(data[ed+1:])
model = graphlab.linear_regression.create(training_set, target = output_name, features = features_list,
l2_penalty=l2_penalty, validation_set = None, verbose=False)
predictions = model.predict(validation_set)
residuals = predictions - validation_set['price']
RSS = (residuals * residuals).sum()
total_RSS += RSS
return total_RSS / k
"""
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
"""
import numpy as np
penalty_list = np.logspace(1, 7, num=13)
data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
features_list = data.column_names()
data['price'] = train_valid_shuffled['price']
sort_table = []
for penalty in penalty_list:
avg_RSS = k_fold_cross_validation(k=10, l2_penalty=penalty, data=data, output_name='price', features_list=features_list)
print 'penalty ', penalty
print 'avg_RSS', avg_RSS
sort_table.append((avg_RSS, penalty))
print sorted(sort_table)[0]
"""
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
End of explanation
"""
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
errors = [x[0] for x in sort_table]
plt.xscale('log')
plt.plot(penalty_list, errors, '-')
"""
Explanation: QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
"""
final_data = polynomial_sframe(train_valid['sqft_living'], 15)
final_features_list = final_data.column_names()
final_data['price'] = train_valid['price']
final_model = graphlab.linear_regression.create(final_data, target = 'price', features = final_features_list,
l2_penalty=penalty_list[3], validation_set = None, verbose=False)
"""
Explanation: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
End of explanation
"""
tdata = polynomial_sframe(test['sqft_living'], 15)
tdata['price'] = test['price']
predictions = final_model.predict(tdata)
residuals = predictions - tdata['price']
RSS = (residuals * residuals).sum()
print RSS
"""
Explanation: QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
End of explanation
"""
|
unnati-xyz/intro-python-data-science
|
onion/3-Refine.ipynb
|
mit
|
# Import the two library we need, which is Pandas and Numpy
import pandas as pd
import numpy as np
# Read the csv file of Month Wise Market Arrival data that has been scraped.
df = pd.read_csv('MonthWiseMarketArrivals.csv')
df.head()
df.tail()
"""
Explanation: 2. Refine the Data
"Data is messy"
We will be performing the following operations on our onion price data to refine it
- Remove e.g. remove redundant data from the data frame
- Derive e.g. State and City from the market field
- Parse e.g. extract date from year and month column
Other stuff you may need to do to refine are...
- Missing e.g. Check for missing or incomplete data
- Quality e.g. Check for duplicates, accuracy, unusual data
- Convert e.g. free text to coded value
- Calculate e.g. percentages, proportion
- Merge e.g. first and surname for full name
- Aggregate e.g. rollup by year, cluster by area
- Filter e.g. exclude based on location
- Sample e.g. extract a representative data
- Summary e.g. show summary stats like mean
End of explanation
"""
df.dtypes
# Delete the last row from the dataframe
df.tail(1)
# Delete a row from the dataframe
df.drop(df.tail(1).index, inplace = True)
df.tail()
df.dtypes
df.iloc[:,4:7].head()
df.iloc[:,2:7] = df.iloc[:,2:7].astype(int)
df.dtypes
df.head()
df.describe()
"""
Explanation: Remove the redundant data
End of explanation
"""
df.market.value_counts().head()
df['state'] = df.market.str.split('(').str[-1]
df.head()
df['city'] = df.market.str.split('(').str[0]
df.head()
df.state.unique()
df['state'] = df.state.str.split(')').str[0]
df.state.unique()
dfState = df.groupby(['state', 'market'], as_index=False).count()
dfState.market.unique()
state_now = ['PB', 'UP', 'GUJ', 'MS', 'RAJ', 'BANGALORE', 'KNT', 'BHOPAL', 'OR',
'BHR', 'WB', 'CHANDIGARH', 'CHENNAI', 'bellary', 'podisu', 'UTT',
'DELHI', 'MP', 'TN', 'Podis', 'GUWAHATI', 'HYDERABAD', 'JAIPUR',
'WHITE', 'JAMMU', 'HR', 'KOLKATA', 'AP', 'LUCKNOW', 'MUMBAI',
'NAGPUR', 'KER', 'PATNA', 'CHGARH', 'JH', 'SHIMLA', 'SRINAGAR',
'TRIVENDRUM']
state_new =['PB', 'UP', 'GUJ', 'MS', 'RAJ', 'KNT', 'KNT', 'MP', 'OR',
'BHR', 'WB', 'CH', 'TN', 'KNT', 'TN', 'UP',
'DEL', 'MP', 'TN', 'TN', 'ASM', 'AP', 'RAJ',
'MS', 'JK', 'HR', 'WB', 'AP', 'UP', 'MS',
'MS', 'KER', 'BHR', 'HR', 'JH', 'HP', 'JK',
'KEL']
df.state = df.state.replace(state_now, state_new)
df.state.unique()
"""
Explanation: Extracting the states from market names
End of explanation
"""
df.index
pd.to_datetime('January 2012')
df['date'] = df['month'] + '-' + df['year'].map(str)
df.head()
index = pd.to_datetime(df.date)
df.index = pd.PeriodIndex(df.date, freq='M')
df.columns
df.index
df.head()
df.to_csv('MonthWiseMarketArrivals_Clean.csv', index = False)
"""
Explanation: Getting the Dates
End of explanation
"""
|
srcole/qwm
|
burrito/u/UNFINISHED_Burrito_correlations.ipynb
|
mit
|
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
"""
Explanation: San Diego Burrito Analytics
Scott Cole
23 April 2016
This notebook contains analyses on the burrito ratings in San Diego, including:
* How each metric correlates with one another.
* Linear model of how each dimension contributes to the overall rating
Default imports
End of explanation
"""
filename="burrito_current.csv"
df = pd.read_csv(filename)
N = df.shape[0]
"""
Explanation: Load data
End of explanation
"""
# Identify california burritos
def caliburritoidx(x):
import re
idx = []
for b in range(len(x)):
re4str = re.compile('.*cali.*', re.IGNORECASE)
if re4str.match(x[b]) is not None:
idx.append(b)
return idx
caliidx = caliburritoidx(df.Burrito)
Ncaliidx = np.arange(len(df))
Ncaliidx = np.delete(Ncaliidx,caliidx)
met_Cali = ['Hunger','Volume','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
for k in met_Cali:
Mcali = df[k][caliidx].dropna()
MNcali = df[k][Ncaliidx].dropna()
print k
print sp.stats.ttest_ind(Mcali,MNcali)
"""
Explanation: Cali burritos vs. other burritos
End of explanation
"""
df_Scott = df[df.Reviewer=='Scott']
idx_Scott = df_Scott.index.values
idx_NScott = np.arange(len(df))
idx_NScott = np.delete(idx_NScott,idx_Scott)
burritos_Scott = df.loc[df_Scott.index.values]['Burrito']
dfScorr = df_Scott.corr()
metricscorr = ['Yelp','Google','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(metricscorr)
Mcorrmat = np.zeros((M,M))
Mpmat = np.zeros((M,M))
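# NOTE: pearsonp is not defined anywhere in this notebook. A standard two-sided
# p-value for a Pearson correlation r computed from N samples would be:
def pearsonp(r, N):
    t = r * np.sqrt((N - 2) / (1.0 - r ** 2))
    return 2 * sp.stats.t.sf(np.abs(t), N - 2)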
for m1 in range(M):
for m2 in range(M):
if m1 != m2:
Mcorrmat[m1,m2] = dfScorr[metricscorr[m1]][metricscorr[m2]]
Mpmat[m1,m2] = pearsonp(Mcorrmat[m1,m2],N)
clim1 = (-1,1)
plt.figure(figsize=(10,10))
cax = plt.pcolor(range(M+1), range(M+1), Mcorrmat, cmap=plt.cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
#plt.axis([2, M+1, floall[0],floall[-1]+10])
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(metricscorr,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(metricscorr,size=9)
plt.tight_layout()
# Try to argue that me sampling a bunch of burritos is equivalent to a bunch of people sampling burritos
# you would not be able to tell if a rated burrito was by me or someone else.
# Tests:
# 1. Means of each metric are the same
# 2. Metric correlations are the same (between each quality and overall)
# 3. Do I like Cali burritos more than other people?
# 1. Metric means are the same: I give my meat and meat:filling lower ratings
met_Scott = ['Hunger','Volume','Cost','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
for k in met_Scott:
Msc = df[k][idx_Scott].dropna()
MNsc = df[k][idx_NScott].dropna()
print k
print sp.stats.ttest_ind(Msc,MNsc)
"""
Explanation: Independence of each dimension
End of explanation
"""
|
steven-murray/halomod
|
docs/examples/beyond_galaxy.ipynb
|
mit
|
from halomod import TracerHaloModel
import numpy as np
from matplotlib import pyplot as plt
hm = TracerHaloModel(hod_model="Constant", transfer_model='EH')
hm.central_occupation
plt.plot(np.log10(hm.m),hm.satellite_occupation)
"""
Explanation: Going beyond galaxies as tracers with halomod
halomod is written in a way that is most native to applications of halo models of galaxies. Therefore, modifications and extensions in the context of galaxy clustering (as well as HI assuming HI is trivially related to galaxies) are very straightforward. However, it may not be as straightforward when dealing with other tracers. In this tutorial, we use the flux density power spectrum of arxiv:0906.3020 to demonstrate how to fully utilise the flexibility of halomod.
The flux density power spectrum can modelled as (see Sec 2.3 of arxiv:0906.3020):
$$
P_{1h}(k) = |u_J(k)|^2 \int_{M_{\rm min}}^{\infty} {\rm d}m\, n(m) \bigg(\frac{m}{\bar{\rho}_{\rm gal}}\bigg)^2
$$
$$
P_{2h}(k)=|u_J(k)|^2\bigg[\int_{M_{\rm min}}^{\infty}{\rm d}m\,n(m)b(m)\Big(\frac{m}{\bar{\rho}_{\rm gal}}\Big)\bigg]^2 P_{\rm lin}(k)
$$
where $u_J(k)={\rm arctan}(k\lambda_{\rm mfp})/(k\lambda_{\rm mfp})$
HOD
Once we have the expression of the power spectrum we want, we should try to identify the halo model components. Comparing it to the standard halo model formalism, it's easy to see that it effectively means:
$$
\langle M_{\rm cen}\rangle \equiv 0
$$
$$
\langle M_{\rm sat}\rangle \equiv A_{\rm sat}
$$
where $A_{\rm sat}$ is a constant so that the total satellite occupation is equal to the mean mass density of galaxies:
$$
\int_{M_{\rm min}}^{\infty} {\rm d}m\, n(m)\,A_{\rm sat} = \bar{\rho}_{\rm gal}
$$
This HOD has already been defined within halomod by the Constant HOD class:
End of explanation
"""
from halomod.concentration import CMRelation
from hmf.halos.mass_definitions import SOMean
class CMFlux(CMRelation):
_defaults = {'c_0': 4}
native_mdefs = (SOMean(),)
def cm(self,m,z):
return self.params['c_0']*(m*10**(-11))**(1/3)
hm = TracerHaloModel(
halo_concentration_model = CMFlux,
halo_profile_model = "PowerLawWithExpCut",
halo_profile_params = {"b":2.0,"a":1.0},
hod_model = "Constant",
transfer_model='EH',
)
plt.plot(np.log10(hm.k_hm),hm.tracer_profile.u(hm.k_hm,m=1e12), label='$m = 10^{12}$')
plt.plot(np.log10(hm.k_hm),hm.tracer_profile.u(hm.k_hm,m=1e13), label='$m = 10^{13}$')
plt.plot(np.log10(hm.k_hm),hm.tracer_profile.u(hm.k_hm,m=1e14), label='$m = 10^{14}$')
plt.legend();
"""
Explanation: Density Profile
The density profile from arxiv:0906.3020 is already included as PowerLawWithExpCut:
$$
\rho(r) = \rho_s \big(r/r_s \big)^{-b}{\rm exp}\big[-a r/r_s\big]
$$
and in this specific case we have $b=2$.
However, the native way of defining density profile in halomod is to relate it to the characteristic scale $r_s$, which is related to the concentration parameter. Therefore, for each halo of different mass the shape of the density profile is different. But in this case we want to keep the shape of the profile the same for all halos. Although halomod does not provide a readily available solution, note:
$$
m \sim r_s^3c^3(m,z)
$$
$$
r_s \sim m^{1/3}c^{-1}(m,z)
$$
Therefore, we only need to define a special concentration-mass relation to keep $r_s$ constant. Suppose we construct a C-M relation:
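A relation of the form
$$
c(m,z) = c_0\Big(\frac{m}{10^{11}}\Big)^{1/3}
$$
(with $m$ in the code's native mass units) makes $r_s \sim m^{1/3}/c(m,z)$ independent of mass, which is exactly what the CMFlux class in this notebook implements.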
End of explanation
"""
rhoc = hm.cosmo.critical_density0.to("Msun/Mpc^3").value*hm.cosmo.h**2
hm.mean_tracer_den/rhoc
"""
Explanation: One can see that the density profile is now indeed independent of halo mass.
Tuning parameters
So far the parameters have been set arbitrarily, without clear physical meaning. We can easily tune these parameters to desired physical values.
Suppose we want the mass density of galaxies to be $10^{-2}$ of the total critical density:
End of explanation
"""
-np.log10(hm.mean_tracer_den/rhoc)
"""
Explanation: That means the parameter logA for the HOD should be changed to:
End of explanation
"""
hm.hod_params = {"logA":-np.log10(hm.mean_tracer_den/rhoc)}
hm.mean_tracer_den/rhoc
"""
Explanation: We can simply set this on the existing model (everything that's dependent on it will be auto-updated):
End of explanation
"""
rs = hm.halo_profile.scale_radius(1e11)
print(rs)
"""
Explanation: The density profile should satisfy $r_s/a = \lambda_{\rm mfp}$. $r_s$ can be obtained as:
End of explanation
"""
hm.halo_profile.scale_radius(1e12)
"""
Explanation: Just to make sure, we calculate $r_s$ for a different halo mass:
End of explanation
"""
hm.halo_profile_params = {"a":rs/10}
"""
Explanation: in the units of Mpc/h. Assume we want $\lambda_{\rm mfp} = 10$Mpc/h:
End of explanation
"""
plt.plot(hm.k_hm,hm.tracer_profile.u(hm.k_hm,m=1e12))
plt.xlabel("Scale [h/Mpc]")
plt.ylabel("Normalized Fourier Density")
plt.xscale('log')
"""
Explanation: Check the density profile to see the cut-off:
End of explanation
"""
plt.plot(hm.k_hm,hm.power_1h_auto_tracer, ls='--', color='C0', label='1halo')
plt.plot(hm.k_hm,hm.power_2h_auto_tracer, ls=':', color='C0', label='2halo')
plt.plot(hm.k_hm,hm.power_auto_tracer, color='C0', label='full')
plt.legend()
plt.xscale('log')
plt.yscale('log')
plt.ylim(1e-5,)
plt.xlabel("Fourier Scale, $k$")
plt.ylabel("Auto Power Spectrum")
"""
Explanation: You can see the cut-off is indeed around 0.1 $h$ Mpc$^{-1}$
Finally we can see the power spectrum:
End of explanation
"""
|
lit-mod-viz/middlemarch-critical-histories
|
old/e1/e1a-analysis.ipynb
|
gpl-3.0
|
import pandas as pd
%matplotlib inline
from ast import literal_eval
import numpy as np
import re
import json
from nltk.corpus import names
from collections import Counter
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = [16, 6]
with open('../txt/e1a.json') as f:
rawData = f.read()
df = pd.read_json(rawData)
df['Decade'] = df['year'] - (df['year'] % 10)
df.head()
df['year'].hist()
textALength = 1793449
df['Locations in A'] = df['matches'].apply(lambda x: x[1])
def diachronicAnalysis(df, decades=(1950, 2020)):
decades = np.arange(decades[0], decades[1], 10)
# Make a dictionary of decades.
# Values are a list of locations.
decadeDict = {}
for i, row in df.iterrows():
decade = row['Decade']
locations = row['Locations in A']
if decade not in decadeDict:
decadeDict[decade] = locations
else:
decadeDict[decade] += locations
# Grab the beginnings of quotes.
decadeStarts = {decade: [item[0] for item in loc] for decade, loc in decadeDict.items()}
decadesBinned = {decade:
np.histogram(locations, bins=50, range=(0, textALength))[0]
for decade, locations in decadeStarts.items() if decade in decades}
decadesDF = pd.DataFrame(decadesBinned).T
#Normalize
decadesDF = decadesDF.div(decadesDF.max(axis=1), axis=0)
return decadesDF
def plotDiachronicAnalysis(decadesDF):
ylabels = [str(int(decade)) for decade in decadesDF.index] + ['2020']
plt.pcolor(decadesDF, cmap='gnuplot')
plt.yticks(np.arange(len(decadesDF.index)+1), ylabels)
plt.gca().invert_yaxis()
plt.ylabel('Decade')
plt.xlabel('Novel Segment')
# plt.title("Frequency of Quotations from George Eliot's Middlemarch in Criticism, By Decade")
plt.colorbar(ticks=[])
plt.show()
def plotSynchronicAnalysis(decadesDF):
ax = decadesDF.sum().plot(kind='bar')
return ax
decadesDF = diachronicAnalysis(df)
plotDiachronicAnalysis(decadesDF)
"""
Explanation: Experiment 1-A
This experiment used the full corpus of 6K+ texts scraped from JSTOR.
End of explanation
"""
maleNames, femaleNames = names.words('male.txt'), names.words('female.txt')
maleNames = [name.lower() for name in maleNames]
femaleNames = [name.lower() for name in femaleNames]
def guessGender(name):
name = name.split()[0].lower() # Grab the first name.
if name in maleNames and name in femaleNames:
return 'A' #Ambiguous
elif name in maleNames:
return 'M'
elif name in femaleNames:
return 'F'
else:
return 'U'
def averageGender(names):
if type(names) != list:
return 'U'
genderGuesses = [guessGender(name) for name in names]
stats = Counter(genderGuesses).most_common()
if len(stats) == 1:
# Only one author. We can just use that's author's gender guess.
return stats[0][0]
elif stats[0][1] == stats[1][1]: # There's a tie.
return 'A' # Ambiguous.
else:
return stats[0][0] # Return the most common gender.
df['gender'] = df['author'].apply(averageGender)
dfF = df.loc[df['gender'] == 'F']
dfM = df.loc[df['gender'] == 'M']
decadesDFM, decadesDFF = diachronicAnalysis(dfM), diachronicAnalysis(dfF)
# Differences in citations between genders.
decadesGenderDiff = decadesDFM - decadesDFF
plotSynchronicAnalysis(decadesGenderDiff)
"""
Explanation: By (Guessed) Gender of Author
End of explanation
"""
def getFirst(row):
if type(row) == list:
return row[0]
else:
return row
topPublishers = df['publisher_name'].apply(getFirst).value_counts()
publishers = topPublishers[:80].index
publishers = publishers.tolist()
def getCountry(publisher):
brits = ['Oxford University Press', 'Cambridge University Press', 'Modern Humanities Research Association', \
'BMJ', 'Taylor & Francis, Ltd.', 'Edinburgh University Press', \
'Royal Society for the Encouragement of Arts, Manufactures and Commerce']
canadians = ['Victorian Studies Association of Western Canada']
if type(publisher) != list:
return 'Unknown'
publisher = publisher[0]
if publisher in brits:
return 'Britain'
elif publisher in canadians or 'Canada' in publisher:
return 'Canada'
elif 'GmbH' in publisher:
return 'Germany'
elif 'estudios' in publisher:
return 'Spain'
elif 'France' in publisher:
return 'France'
elif 'Ireland' in publisher:
return 'Ireland'
else:
return 'US'
df['country'] = df['publisher_name'].apply(getCountry)
df['country'].value_counts()
dfBrits = df.loc[df['country'] == 'Britain']
dfYanks = df.loc[df['country'] == 'US']
dfCanadians = df.loc[df['country'] == 'Canada']
decadesDFBrits, decadesDFYanks = diachronicAnalysis(dfBrits), diachronicAnalysis(dfYanks)
plotSynchronicAnalysis(decadesDFYanks-decadesDFBrits)
"""
Explanation: By (Guessed) Country of Publication
End of explanation
"""
# Look at the top journals.
df['journal'].value_counts()[:10]
"""
Explanation: By Journal
End of explanation
"""
geJournals = df.loc[df['journal'] == 'George Eliot - George Henry Lewes Studies']
otherJournals = df.loc[df['journal'] != 'George Eliot - George Henry Lewes Studies']
ax = plotSynchronicAnalysis(diachronicAnalysis(geJournals) - diachronicAnalysis(otherJournals))
"""
Explanation: Compare the specialist journal, "George Eliot - George Henry Lewes Studies," with all other journals.
End of explanation
"""
|
simonsfoundation/CaImAn
|
demos/notebooks/demo_OnACID_mesoscope.ipynb
|
gpl-2.0
|
try:
if __IPYTHON__:
# this is used for debugging purposes only. allows to reload classes when changed
get_ipython().magic('load_ext autoreload')
get_ipython().magic('autoreload 2')
except NameError:
pass
import logging
import numpy as np
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.INFO)
import caiman as cm
from caiman.source_extraction import cnmf
from caiman.utils.utils import download_demo
import matplotlib.pyplot as plt
import bokeh.plotting as bpl
bpl.output_notebook()
"""
Explanation: Example of online analysis using OnACID
Complete pipeline for online processing using CaImAn Online (OnACID).
The demo demonstrates the analysis of a sequence of files using the CaImAn online
algorithm. The steps include i) motion correction, ii) tracking current
components, iii) detecting new components, iv) updating of spatial footprints.
The script demonstrates how to construct and use the params and online_cnmf
objects required for the analysis, and presents the various parameters that
can be passed as options. A plot of the processing time for the various steps
of the algorithm is also included.
@author: Eftychios Pnevmatikakis @epnev
Special thanks to Andreas Tolias and his lab at Baylor College of Medicine
for sharing the data used in this demo.
End of explanation
"""
fld_name = 'Mesoscope' # folder inside ./example_movies where files will be saved
fnames = []
fnames.append(download_demo('Tolias_mesoscope_1.hdf5',fld_name))
fnames.append(download_demo('Tolias_mesoscope_2.hdf5',fld_name))
fnames.append(download_demo('Tolias_mesoscope_3.hdf5',fld_name))
print(fnames) # your list of files should look something like this
"""
Explanation: First download the data
The function download_demo will look for the datasets Tolias_mesoscope_*.hdf5 in your caiman_data folder inside the subfolder specified by the variable fld_name and will download the files if they do not exist.
End of explanation
"""
fr = 15 # frame rate (Hz)
decay_time = 0.5 # approximate length of transient event in seconds
gSig = (4,4) # expected half size of neurons
p = 1 # order of AR indicator dynamics
min_SNR = 1 # minimum SNR for accepting new components
rval_thr = 0.90 # correlation threshold for new component inclusion
ds_factor = 1 # spatial downsampling factor (increases speed but may lose some fine structure)
gnb = 2 # number of background components
gSig = tuple(np.ceil(np.array(gSig)/ds_factor).astype('int')) # recompute gSig if downsampling is involved
mot_corr = True # flag for online motion correction
pw_rigid = False # flag for pw-rigid motion correction (slower but potentially more accurate)
max_shifts_online = np.ceil(10./ds_factor).astype('int') # maximum allowed shift during motion correction
sniper_mode = True # flag using a CNN to detect new neurons (o/w space correlation is used)
init_batch = 200 # number of frames for initialization (presumably from the first file)
expected_comps = 500 # maximum number of expected components used for memory pre-allocation (exaggerate here)
dist_shape_update = True # flag for updating shapes in a distributed way
min_num_trial = 10 # number of candidate components per frame
K = 2 # initial number of components
epochs = 2 # number of passes over the data
show_movie = False # show the movie with the results as the data gets processed
params_dict = {'fnames': fnames,
'fr': fr,
'decay_time': decay_time,
'gSig': gSig,
'p': p,
'min_SNR': min_SNR,
'rval_thr': rval_thr,
'ds_factor': ds_factor,
'nb': gnb,
'motion_correct': mot_corr,
'init_batch': init_batch,
'init_method': 'bare',
'normalize': True,
'expected_comps': expected_comps,
'sniper_mode': sniper_mode,
'dist_shape_update' : dist_shape_update,
'min_num_trial': min_num_trial,
'K': K,
'epochs': epochs,
'max_shifts_online': max_shifts_online,
'pw_rigid': pw_rigid,
'show_movie': show_movie}
opts = cnmf.params.CNMFParams(params_dict=params_dict)
"""
Explanation: Set up some parameters
Here we set up some parameters for running OnACID. We use the same params object as in batch processing with CNMF.
End of explanation
"""
cnm = cnmf.online_cnmf.OnACID(params=opts)
cnm.fit_online()
"""
Explanation: Now run the CaImAn online algorithm (OnACID).
The first init_batch frames are used for initialization purposes. The initialization method chosen here, bare, will only search for a small number of neurons and is mostly used to initialize the background components. Initialization with the full CNMF can also be used by choosing cnmf.
We first create an OnACID object located in the module online_cnmf and we pass the parameters similarly to the case of batch processing. We then run the algorithm using the fit_online method.
End of explanation
"""
logging.info('Number of components: ' + str(cnm.estimates.A.shape[-1]))
Cn = cm.load(fnames[0], subindices=slice(0,500)).local_correlations(swap_dim=False)
cnm.estimates.plot_contours(img=Cn)
"""
Explanation: Optionally save results and do some plotting
End of explanation
"""
cnm.estimates.nb_view_components(img=Cn, denoised_color='red');
"""
Explanation: View components
Now inspect the components extracted by OnACID. Note that if a single pass was used, several components would be non-zero only for part of the time interval, indicating that they were detected online by OnACID.
Note that if you get data rate error you can start Jupyter notebooks using:
'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10'
End of explanation
"""
T_motion = 1e3*np.array(cnm.t_motion)
T_detect = 1e3*np.array(cnm.t_detect)
T_shapes = 1e3*np.array(cnm.t_shapes)
T_online = 1e3*np.array(cnm.t_online) - T_motion - T_detect - T_shapes
plt.figure()
plt.stackplot(np.arange(len(T_motion)), T_motion, T_online, T_detect, T_shapes)
plt.legend(labels=['motion', 'process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [ms]')
plt.ylim([0,140])
"""
Explanation: Plot timing
The plot below shows the time spent on each part of the algorithm (motion correction, tracking of current components, detect new components, update shapes) for each frame. Note that if you displayed a movie while processing the data (show_movie=True) the time required to generate this movie will be included here.
End of explanation
"""
|
jorgemauricio/INIFAP_Course
|
ejercicios/Pandas/Ejercicios de Visualizacion con Pandas-Solucion.ipynb
|
mit
|
import pandas as pd
import matplotlib.pyplot as plt
df3 = pd.read_csv('../data/df3')
%matplotlib inline
df3.plot.scatter(x='a',y='b',c='red',s=50)
df3.info()
df3.head()
"""
Explanation: Pandas Data Visualization Exercise - Solutions
This is a short exercise to review the different plots that Pandas lets us generate.
* NOTE: Use the df3 file located in the data folder
End of explanation
"""
df3.plot.scatter(x='a',y='b',c='red',s=50,figsize=(12,3))
"""
Explanation: Recreate the following scatter plot of b against a.
End of explanation
"""
df3['a'].plot.hist()
"""
Explanation: Create a histogram of column 'a'.
End of explanation
"""
plt.style.use('ggplot')
df3['a'].plot.hist(alpha=0.5,bins=25)
"""
Explanation: The plots look good, but we want them to look a bit more professional, so use the 'ggplot' style sheet and generate the histogram again; also look into how to add more bins.
End of explanation
"""
df3[['a','b']].plot.box()
"""
Explanation: Create a box plot comparing columns 'a' and 'b'.
End of explanation
"""
df3['d'].plot.kde()
"""
Explanation: Create a KDE plot of column 'd'.
End of explanation
"""
df3.loc[0:30].plot.area(alpha=0.4)
"""
Explanation: Create an area plot for all columns, using up to 30 rows (tip: use .loc).
End of explanation
"""
|
chetan51/nupic.research
|
projects/dynamic_sparse/notebooks/ExperimentAnalysis-ImprovedMagvsSETcomparison.ipynb
|
gpl-3.0
|
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
"""
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET).
Motivation.
Check if results are consistently above baseline.
Conclusion
No significant difference between both models
No support for early stopping
End of explanation
"""
exps = ['improved_magpruning_eval1', ]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace NaNs in the prune percentage columns with 0
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
"""
Explanation: Load and check data
End of explanation
"""
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
"""
Explanation: Analysis
Experiment Details
End of explanation
"""
agg(['model'])
agg(['on_perc', 'model'])
agg(['weight_prune_perc', 'model'])
agg(['on_perc', 'pruning_early_stop', 'model'])
"""
Explanation: Does improved weight pruning outperforms regular SET
End of explanation
"""
agg(['pruning_early_stop'])
agg(['model', 'pruning_early_stop'])
agg(['on_perc', 'pruning_early_stop'])
"""
Explanation: No significant difference between the two approaches
What is the impact of early stopping?
End of explanation
"""
|
whitead/numerical_stats
|
unit_12/lectures/lecture_2.ipynb
|
gpl-3.0
|
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import scipy.linalg as linalg
import matplotlib
"""
Explanation: Nonlinear Least Squares
Unit 12, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, April 17 2018
Goals:
Be able to apply the same analysis from 1D/ND OLS to NLS
Compute the F-matrix and understand its use in error analysis
Distinguish between when linearized OLS is possible and NLS is required
End of explanation
"""
#NOT PART OF REGRESSION!
#Make up an equation and create data from it
x = np.linspace(0, 10, 20)
y = 2 * np.exp(-x**2 * 0.1) + scipy.stats.norm.rvs(size=len(x), loc=0, scale=0.2)
#END
plt.plot(x,y, 'o', label='data')
plt.plot(x, 2 * np.exp(-x**2 * 0.1), '-', label='exact solution')
plt.legend(loc='upper right')
plt.show()
#Now we compute the least squares solution
x_mat = np.column_stack( (np.ones(len(x)), -x**2) )
#Any negative y-values will not work, since log of a negative number is undefined
y_clean = []
for yi in y:
if yi < 0:
y_clean.append(0.0000001)
else:
y_clean.append(yi)
lin_y = np.log(y_clean)
#recall that the *_ means put all the other
#return values into the _ variable, which we
#don't need
lin_beta, *_ = linalg.lstsq(x_mat, lin_y)
print(lin_beta)
beta_0 = np.exp(lin_beta[0])
beta_1 = lin_beta[1]
print(beta_0, 2)
print(beta_1, 0.1)
plt.plot(x,y, 'o', label='data')
plt.plot(x, 2 * np.exp(-x**2 * 0.1), '-', label='exact solution')
plt.plot(x, beta_0 * np.exp(-x**2 * beta_1), '-', label='linearized least squares')
plt.legend(loc='upper right')
plt.show()
"""
Explanation: Linearizing An Exponential
We previously learned how to linearize polynomials into OLS-ND. What about a more complex equation, like:
$$y = \beta_0 e^{-\beta_1 x^2} + \epsilon $$
Well, you could of course just do this:
$$\ln y = \ln\beta_0 - \beta_1 x^2 + \epsilon $$
and then choose our $x$ matrix to be $[1,-x^2]$
What is wrong with this?
We changed our assumption of noise! To do the math I just above, it should have been that our original equation was:
$$y = \beta_0 e^{-\beta_1 x^2}\times e^{\epsilon} $$ so that after taking the log, we ended up with that above. But that equation doesn't match our assumption that we have normally distributed 0-centered noise.
Can we neglect our assumption of linear normal noise?
End of explanation
"""
resids = y - beta_0 * np.exp(-x**2 * beta_1)
scipy.stats.shapiro(resids)
"""
Explanation: NO, we cannot linearize and ignore the impact on noise
Checking if Linearization is Valid
The way to check if the linearization is valid is to see if the residuals are normally distributed. Since you are assuming the noise is normal after linearization, you can check that assumption:
End of explanation
"""
#Create an objective function, that takes in 1 D-dimensional argument and outputs a measure of the goodness of fit (SSR)
def obj(beta, x, y):
beta_0 = beta[0] #<- extract the elements of the beta vector
beta_1 = beta[1]
yhat = beta_0 * np.exp(-beta_1 * x**2) # <- This is our model equation
resids = yhat - y #<- compute residuals
SSR = np.sum(resids**2) #<- square and sum them
return SSR
#Use the minimize (BGFS) function, with starting points
result = scipy.optimize.minimize(obj, x0=[1,1], args=(x, y))
beta_opt = result.x #<- remember, we get out a whole bunch of extra stuff from optimization
print(result)
plt.plot(x,y, 'o', label='data')
plt.plot(x, 2 * np.exp(-x**2 * 0.1), '-', label='exact solution')
plt.plot(x, beta_0 * np.exp(-x**2 * beta_1), '-', label='linearized least squares')
plt.plot(x, beta_opt[0] * np.exp(-x**2 * beta_opt[1]), '-', label='Nonlinear least squares')
plt.legend(loc='upper right')
plt.show()
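# Added sketch (not in the original lecture): since we fit without transforming y,
# the residuals of the nonlinear fit should be consistent with the assumption of
# normally distributed, 0-centered noise. We can check with the same Shapiro-Wilk test.
nls_resids = y - beta_opt[0] * np.exp(-x**2 * beta_opt[1])
print(scipy.stats.shapiro(nls_resids))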
"""
Explanation: The Shapiro-Wilk test says they are definitely not normal, as we can see from the graph.
Nonlinear Multidimensional Least-Squares
To treat things like the exponential distribution from above, we need to use an objective function and minimization. We'll minimize the SSR directly:
$$SSR = \sum_i (\hat{y}_i - y_i)^2 $$
$$SSR = \sum_i (\beta_0 e^{-\beta_1 x^2} - y_i)^2 $$
So our SSR is a function which takes in $(\beta_0, \beta_1)$
We're going to use a new technique this time. Instead of relying on the data being defined, we'll have our SSR take that as extra arguments and we'll tell the minimizer about these extra args.
End of explanation
"""
def build_F(beta, x):
#Compute the individual partials for each data point
beta_0_vec = np.exp(-beta[1] * x**2)
beta_1_vec = -beta[0] * x**2 * np.exp(-beta[1] * x**2)
#Now stack them together
return np.column_stack( (beta_0_vec, beta_1_vec) )
print(build_F(beta_opt, x))
"""
Explanation: Error Analysis in Nonlinear Least Squares
Just like for the one-dimensional and multidimensional case, there exists an expression to go from standard error in the residuals to standard error for the fit. That expression is:
$$y = f(\beta, x) + \epsilon$$
$$ F_{ij} = \frac{\partial f(\hat{\beta}, x_i)}{\partial \hat{\beta_j}}$$
$$S^2_{\beta_{ij}} = S^2_{\epsilon}\left(\mathbf{F}^T\mathbf{F}\right)^{-1}$$
where again, the standard error for the $i$th fit parameter is $S^2_{\beta_{ii}}$
Take a close look at the partial derivatives. $x_i$ can be a vector here and remember that you're taking the partial with respect to the fit parameters, not $x$
Sketch of Derivation of Error Terms
Let's try to understand this equation. It is a generalization of the OLS in N-D, so our derivation will apply to all cases we learned in lecture 1 too.
Consider the $\mathbf{F}^T\mathbf{F}$ term. You can see by expanding the terms that it is
$$
\mathbf{F}^T\mathbf{F} = \sum_k^N \frac{\partial f(\hat{\beta}, x_k)}{\partial \hat{\beta_i}} \frac{\partial f(\hat{\beta}, x_k)}{\partial \hat{\beta_j}}
$$
where $i$ is the row and $j$ is the index. This expression is approximately:
$$
\mathbf{F}^T\mathbf{F} \approx \frac{\sum_k^N
\left(\Delta f(\hat{\beta}, x_k)\right)^2}{\Delta\hat{\beta_i} \Delta \hat{\beta_j}}
$$
So the diagonal of $\mathbf{F}^T\mathbf{F}$ is the total change in the value of the squared function per change in the square of the value of a fit coefficient.
$$
\mathrm{diag}\left[\mathbf{F}^T\mathbf{F}\right] \approx \frac{\sum_k^N
\left(\Delta f(\hat{\beta}, x_k)\right)^2}{\Delta\hat{\beta_i}^2}
$$
We can loosely think of $\mathbf{F}^T\mathbf{F}$ as a function, or operator, that goes from a change in the fit coefficients squared to a change in the squared value of the function.
We also know that our residual, $f(\hat{\beta}, x) - y$, has the same derivative wrt the fit coefficients as the function itself. Thus $\mathbf{F}^T\mathbf{F}$ allows us to compute a change in the residual given a change in the fit coefficients (this is a simplification).
Therefore the inverse, $\left(\mathbf{F}^T\mathbf{F}\right)^{-1}$, goes from a change in the residual squared to a change in the fit coefficients squared.
Now we can see that:
$$ S^2_{\epsilon}\left(\mathbf{F}^T\mathbf{F}\right)^{-1} $$
is the product of a change in the residual squared with our operator that translate that into a squared change in the fit coefficient squared.
Example: Error Analysis for Exponential Fit in Nonlinear Least Squares
To do the analysis from our example above, we must first compute the partial derivatives. Recall the model is:
$$y = \beta_0 e^{-\beta_1 x^2} + \epsilon $$
So that
$$f(\beta, x) = \beta_0 e^{-\beta_1 x^2}$$
$$\frac{\partial f}{\partial \beta_0} = e^{-\beta_1 x^2}$$
$$\frac{\partial f}{\partial \beta_1} = -\beta_0 x^2 e^{-\beta_1 x^2}$$
To compute $\mathbf{F}$, we'll need to compute these functions at various points, so it's best to create a python function
End of explanation
"""
#The code below is our normal way of computing the standard error in the noise
resids = y - beta_opt[0] * np.exp(-x**2 * beta_opt[1])
SSR = np.sum(resids**2)
s2_epsilon = SSR / (len(x) - len(beta_opt))
print(s2_epsilon)
#Using our F, compute the standard error in beta
F = build_F(beta_opt, x)
s2_beta = s2_epsilon * linalg.inv(F.transpose() @ F)
print(s2_beta)
#We have standard error and can now compute a confidence interval
T = scipy.stats.t.ppf(0.975, len(x) - len(beta_opt))
c0_width = T * np.sqrt(s2_beta[0,0])
print('95% confidence interval for beta_0 is {:.3} +/- {:.2f}'.format(beta_opt[0], c0_width))
c1_width = T * np.sqrt(s2_beta[1,1])
print('95% confidence interval for beta_1 is {:.3} +/- {:.2}'.format(beta_opt[1], c1_width))
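# Added sketch (not in the original lecture): a two-sided t-test of the null hypothesis
# that beta_1 equals 0.1, the value used to generate the synthetic data above.
T_stat = (beta_opt[1] - 0.1) / np.sqrt(s2_beta[1, 1])
dof = len(x) - len(beta_opt)
p_value = 2 * (1 - scipy.stats.t.cdf(abs(T_stat), dof))
print('T = {:.3f}, p-value = {:.3f}'.format(T_stat, p_value))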
"""
Explanation: Now to actually compute the standard error:
End of explanation
"""
data = np.genfromtxt('spectrum.txt')
plt.plot(data[:,0], data[:,1])
plt.show()
"""
Explanation: Of course, we could continue on and do hypothesis tests
Multidimensional Non-Linear Regression
These notes are not on HW or tests; they just state how to do regression with a multidimensional $y$.
When we have multiple dimensions in $y$, the dependent variable, a few things need to change. Our model equation becomes:
$$
\vec{y} = f(x, \beta) + \vec{\epsilon}
$$
where the noise is now a multivariate normal distribution. Multivariate normals are like multiple normal distributions, except that they have both different parameters for each dimension and they also may have correlation between dimensions. We will assume here that there is no correlation between the noise.
Here we derive the $F$-matrix from above using its actual derivation.
$$
\mathcal{F} = \sum_k -(y_k - \hat{y})_k^2) / 2\sigma_k^2 - \log \sqrt{2\pi\sigma_k^2}
$$
The best estimaet for $\sigma_k^2$ is
$$
\sigma_k^2 = \frac{1}{N - D}\sum_k (y_k - \hat{y})_k^2
$$
First, we need to redefine the SSR to be:
$$
\textrm{SSR} = \sum_i (y_i - \hat{y}_i)\cdot (y_i- \hat{y}_i)
$$
This quantity should be minimized to find the regression parameters.
The error analysis becomes:
$$
S^2_e = \frac{SSR}{N - D}
$$
$$
\mathcal{F} = E\left[\frac{\partial^2}{\partial \beta_i \partial \beta_j} SSR \right]
$$
where $E$ means taking the average over the observed data
A Complete & Complex Example - Deconvolution of Spectrum
In spectroscopy and chromatography, often we have a spectrum that is a mixture of peaks and we'd like to separate them out. For example, here's a plot of a spectrum
End of explanation
"""
def peak(x, a, b, c):
'''computes the peak equation for given parameters'''
return a / np.sqrt(2 * np.pi * c) * np.exp(-(x - b)**2 / c)
def spectrum(x, a_array, b_array, c_array):
'''Takes in the x data and parameters for a set of peaks. Computes spectrum'''
yhat = np.zeros(np.shape(x))
for i in range(len(a_array)):
yhat += peak(x, a_array[i], b_array[i], c_array[i])
return yhat
"""
Explanation: You can see that there are probably 3 peaks in here. What we'd like to find out is what percentage each peak contributes. For example, this would tell us the amount of absorbance contributed by each of these three bonds or perhaps the amount of each compound in chromatography.
The equation for each peak is:
$$ f(x, a, b, c) = \frac{a}{\sqrt{2 c \pi}}\,e^{-(x - b)^2 / c} $$
and the total spectrum is
$$ f(x_i, \vec{a}, \vec{b}, \vec{c}) = \sum_j^M \frac{a_j}{\sqrt{2 \pi c_j}} e^{-(x_i - b_j)^2 / c_j} $$
where $j$ is the index of peak and runs from $1$ to $M$ where $M$ is the number of peaks.
Let's start by writing an equation that can predict a spectrum given some parameters
End of explanation
"""
x = np.linspace(0, 10, 100)
y = peak(x, 1, 5, 1)
plt.plot(x,y)
plt.show()
y = spectrum(x, [1, 2], [3, 5], [1,1])
plt.plot(x,y)
plt.show()
"""
Explanation: It's always good to test your functions, so let's do that
End of explanation
"""
spec_x = data[:,0]
spec_y = data[:,1]
scipy.stats.spearmanr(spec_x, spec_y)
"""
Explanation: Ok! Now let's do the regression
Justifying a regression
Let's first test if there is a correlation
End of explanation
"""
def spec_ssr(params, data, M):
'''Compute SSR given the parameters, data, and number of desired peaks.'''
x = data[:,0]
y = data[:,1]
a_array = params[:M]
b_array = params[M:2*M]
c_array = params[2*M:3 * M]
yhat = spectrum(x, a_array, b_array, c_array)
return np.sum((yhat - y)**2)
def obj(params):
return spec_ssr(params, data=data, M=3)
"""
Explanation: Looks like there is a correlation, as indicated by the $p$-value.
Computing the SSR
Let's write our SSR function
End of explanation
"""
import scipy.optimize as opt
result = opt.basinhopping(obj, x0=[100, 100, 100, 600, 650, 700, 100, 100, 100], niter=100)
print(result.x)
"""
Explanation: Optimizing
Now we need to think about whether this is non-convex! It is in fact non-convex, because there are many local minima. This means we'll have to be smart about our starting parameters and/or run for many iterations
End of explanation
"""
def spec_yhat(params, data, M):
'''compute the yhats for the spectrum problem'''
x = data[:,0]
a_array = params[:M]
b_array = params[M:2*M]
c_array = params[2*M:3 * M]
return spectrum(x, a_array, b_array, c_array)
plt.plot(spec_x, spec_y, label='data')
plt.plot(spec_x, spec_yhat(result.x, data, 3), label='fit')
plt.legend()
plt.show()
"""
Explanation: Let's see if 100 iterations gave us good data!
End of explanation
"""
for i in range(3):
plt.plot(spec_x, peak(spec_x, result.x[i], result.x[i + 3], result.x[i + 6]))
"""
Explanation: What a bad fit! Let's try plotting the individual peaks
End of explanation
"""
#constraints follow the order above:
constraints = [{'type': 'ineq', 'fun': lambda params: params[3] - 600},
{'type': 'ineq', 'fun': lambda params: 630 - params[3]},
{'type': 'ineq', 'fun': lambda params: params[4] - 630},
{'type': 'ineq', 'fun': lambda params: 650 - params[4]},
{'type': 'ineq', 'fun': lambda params: params[5] - 650},
{'type': 'ineq', 'fun': lambda params: 690 - params[5]}]
minimizer_kwargs = {'constraints': constraints}
result = opt.basinhopping(obj, x0=[100, 100, 100, 600, 650, 700, 100, 100, 100], niter=350, minimizer_kwargs=minimizer_kwargs)
print(result.x)
plt.plot(spec_x, spec_y, label='data')
plt.plot(spec_x, spec_yhat(result.x, data, 3), label='fit')
for i in range(3):
plt.plot(spec_x, peak(spec_x, result.x[i], result.x[i + 3], result.x[i + 6]), label='peak {}'.format(i))
plt.legend()
plt.show()
"""
Explanation: Wow, that is really wrong! We can hit on particular peaks, but they usually have no meaning. Let's try to add more info. What info do we have? The peak centers. Let's try adding some constraints describing this info:
Adding constraints describing what we know:
$$ 600 < b_1 < 630 $$
$$ 630 < b_2 < 650 $$
$$ 650 < b_3 < 690 $$
End of explanation
"""
resids = spec_y - spec_yhat(result.x, data, 3)
plt.hist(resids)
plt.show()
scipy.stats.shapiro(resids)
"""
Explanation: Checking residuals
End of explanation
"""
def peak_partials(x, a, b, c):
'''Returns partial derivatives of peak functions with respect to parameters as a tuple'''
return (1 / (np.sqrt(2 * np.pi * c)) * np.exp(-(x - b)**2 / c), \
2 * a * (x - b) / c / np.sqrt(2 * np.pi * c) * np.exp(-(x - b)**2 / c),\
a / np.sqrt( 2 * np.pi) * np.exp(-(x - b)**2 / c) * ((x - b)**2 / c**(5 / 2) - 1 / 2 / c**(3/2)))
"""
Explanation: Looks like they are normal
Computing Standard Error
We need the partials:
$$\frac{\partial f}{\partial a} = \frac{1}{\sqrt{2 \pi c}}e^{-(x - b)^2 / c}$$
$$\frac{\partial f}{\partial b} = \frac{2a(x - b)}{c\sqrt{2 \pi c}} e^{-(x - b)^2 / c}$$
$$\frac{\partial f}{\partial c} = \frac{a}{\sqrt{2 \pi}}e^{-(x - b)^2 / c}\left[\frac{(x - b)^2}{c^{5/2}} - \frac{1}{2c^{3/2}}\right]$$
End of explanation
"""
def spectrum_partials(x, a_array, b_array, c_array):
'''Takes in the x data and parameters for a set of peaks. Computes partial derivatives and returns as matrix'''
result = np.empty( (len(x), len(a_array) * 3 ) )
for i in range(len(a_array)):
a_p, b_p, c_p = peak_partials(x, a_array[i], b_array[i], c_array[i])
result[:, i] = a_p
result[:, i + 3] = b_p
result[:, i + 6] = c_p
return result
M = 3
F = spectrum_partials(spec_x, result.x[:M], result.x[M:2*M], result.x[2*M:3*M])
print(F)
"""
Explanation: We have to decide how we want to build the ${\mathbf F}$ matrix. I want to build it as
$$\left[\frac{\partial f}{\partial a_1}, \frac{\partial f}{\partial a_2}, \frac{\partial f}{\partial a_3}, \frac{\partial f}{\partial b_1}, \ldots\right]$$
End of explanation
"""
SSR = np.sum(resids**2)
s2_epsilon = SSR / (len(spec_x) - len(result.x))
s2_beta = np.diag(s2_epsilon * linalg.inv(F.transpose() @ F))
ci = np.sqrt(s2_beta) * scipy.stats.norm.ppf(0.975)
for pi, c in zip(result.x, ci):
print('{} +/- {}'.format(pi, c))
"""
Explanation: Now we compute all the confidence intervals
End of explanation
"""
for pi, c in zip(result.x[:3], ci[:3]):
print('{:%} +/- {:%}'.format(pi / np.sum(result.x[:3]), c / np.sum(result.x[:3])))
"""
Explanation: The relative populations, the integrated peaks, are just the $a$ values. I'll normalize them into percent:
End of explanation
"""
|
mmathioudakis/moderndb
|
2017/spark_tutorial.ipynb
|
mit
|
#On windows
#import findspark
#findspark.init(spark_home="C:/Users/me/software/spark-1.6.3-bin-hadoop2.6/")
import pyspark
import numpy as np # we'll be using numpy for some numeric operations
sc = pyspark.SparkContext(master="local[*]", appName="tour")
# sc.stop()  # stop the context only when you are done; the rest of the notebook assumes sc is still running
"""
Explanation: Lecture 7: Spark Programming
In what follows, you can find pyspark code for the examples we saw in class.
Many of the examples follow examples found in Learning Spark: Lightning-Fast Big Data Analysis, by Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia, which you can also find at Aalto's library.
Further information also available here: https://spark.apache.org/docs/1.6.0/programming-guide.html
Setup
These instructions should work for Mac and Linux. We'll assume you'll be using python3.
To run the following on your computer, make sure that pyspark is in your PYTHONPATH variable.
You can do that by downloading a zipped file with Spark, extracting it into its own folder (e.g., spark-1.6.0-bin-hadoop2.6/) and then executing the following commands in bash.
export PYSPARK_PYTHON=python3
export SPARK_HOME=/path/to/spark-1.6.0-bin-hadoop2.6/
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
On Windows, the easiest way to use pyspark is to use Anaconda and then install Jupyter and findspark
End of explanation
"""
# To try the SparkContext with other masters first stop the one that is already running
# sc.stop()
"""
Explanation: local: Run Spark locally with one worker thread (i.e. no parallelism at all).
local[K]: Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).
local[*]: Run Spark locally with as many worker threads as logical cores on your machine.
spark://HOST:PORT: Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default.
mesos://HOST:PORT: Connect to the given Mesos cluster. The port must be whichever one your cluster is configured to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://.... To submit with --deploy-mode cluster, the HOST:PORT should be configured to connect to the MesosClusterDispatcher.
yarn: Connect to a YARN cluster in client or cluster mode depending on the value of --deploy-mode. The cluster location will be found based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable.
yarn-client: Equivalent to yarn with --deploy-mode client; the yarn form is preferred over yarn-client
yarn-cluster: Equivalent to yarn with --deploy-mode cluster; the yarn form is preferred over yarn-cluster
End of explanation
"""
f = lambda line: 'Spark' in line
f("we are learning park")
def f(line):
return 'Spark' in line
f("we are learning Spark")
"""
Explanation: Lambda functions
Lambda expressions are an easy way to write short functions in Python.
End of explanation
"""
my_list = [0,1,2,3,4,5,6,7,8,9]
data = sc.parallelize(my_list) # create RDD from Python collection
data_squared = data.map(lambda num: num ** 2) # transformation
data_squared.collect()
"""
Explanation: Creating RDDS
We saw that we can create RDDs by loading files from disk. We can also create RDDs from Python collections or transforming other RDDs.
End of explanation
"""
data = sc.parallelize([0,1,2,3,4,5,6,7,8,9]) # creation of RDD
data_squared = data.map(lambda num: num ** 2) # transformation
data_squared.collect() # action
"""
Explanation: RDD operations
There are two types of RDD operations in Spark: transformations and actions. Transfromations create new RDDs from other RDDs. Actions extract information from RDDs and return it to the driver program.
End of explanation
"""
text = sc.textFile("myfile.txt",1) # load data
text = sc.textFile("myfile.txt") # load data
# count only lines that mention "Spark"
spark_lines = text.filter(lambda line: 'Spark' in line)
spark_lines.collect()
"""
Explanation:
End of explanation
"""
def is_prime(num):
""" return True if num is prime, False otherwise"""
if num < 1 or num % 1 != 0:
raise Exception("invalid argument")
for d in range(2, int(np.sqrt(num) + 1)):
if num % d == 0:
return False
return True
numbersRDD = sc.parallelize(range(1, 1000000)) # creation of RDD
primesRDD = numbersRDD.filter(is_prime) # transformation
# primesRDD has not been materialized until this point
primes = primesRDD.collect() # action
type(primesRDD)
print(primes[0:15])
print(primesRDD.take(15))
"""
Explanation: Lazy evaluation
RDDs are evaluated lazily. This means that Spark will not materialize an RDD until it has to perform an action. In the example below, primesRDD is not evaluated until action collect() is performed on it.
End of explanation
"""
primesRDD.persist() # we're asking Spark to keep this RDD in memory. Note that cache() is the same as persist() with the default memory-only storage level; persist() lets you choose other storage levels.
print("Found", primesRDD.count(), "prime numbers") # first action -- causes primesRDD to be materialized
print("Here are some of them:")
print(primesRDD.take(20)) # second action - RDD is already in memory
"""
Explanation: Persistence
RDDs are ephemeral by default, i.e. there is no guarantee they will remain in memory after they are materialized. If we want them to persist in memory, possibly to query them repeatedly or use them in multiple operations, we can ask Spark to do this by calling persist() on them.
End of explanation
"""
primesRDD.unpersist()
"""
Explanation: If we do not need primesRDD in memory anymore, we can tell Spark to discard it.
End of explanation
"""
%%timeit
primes = primesRDD.collect()
"""
Explanation: How long does it take to collect primesRDD? Let's time the operation.
End of explanation
"""
primesRDD.persist()
%%timeit
primes = primesRDD.collect()
"""
Explanation: When I ran the above on my laptop, it took about more than 10s. That's because Spark had to evaluate primesRDD before performing collect on it.
How long would it take if primesRDD was already in memory?
End of explanation
"""
data = sc.parallelize(range(10))
squares = data.map(lambda x: x**2)
squares.collect()
def f(x):
""" return the square of a number"""
return x**2
data = sc.parallelize(range(10))
squares = data.map(f)
squares.collect()
"""
Explanation: When I ran the above on my laptop, it took about 1s to collect primesRDD - that's almost $10$ times faster compared to when the RDD had to be recomputed from scratch.
Passing functions
When we pass a function as a parameter to an RDD operation, the function can be specified either as a lambda function or as a reference to a function defined elsewhere.
End of explanation
"""
class SearchFunctions(object):
def __init__(self, query):
        self.query = query
def is_match(self, s):
return self.query in s
def get_matches_in_rdd_v1(self, rdd):
return rdd.filter(self.is_match) # the function is an object method
def get_matches_in_rdd_v2(self, rdd):
return rdd.filter(lambda x: self.query in x) # the function references an object field
"""
Explanation: Be careful, though: if the function that you pass as argument to an RDD operation
* is an object method, or
* references an object field,
then Spark will ship the entire object to the cluster nodes along with the function.
This is demonstrated in the piece of code below.
End of explanation
"""
class SearchFunctions(object):
def __init__(self, query):
        self.query = query
def is_match(self, s):
return self.query in s
def get_matches_in_rdd(self, rdd):
query = self.query
return rdd.filter(lambda x: query in x)
"""
Explanation: The following is a better way to implement the two methods above (get_matches_in_rdd_v1 and get_matches_in_rdd_v2), if we want to avoid sending a SearchFunctions object for computation to the cluster.
End of explanation
"""
phrases = sc.parallelize(["hello world", "terve terve", "how are you"])
words_map = phrases.map(lambda phrase: phrase.split(" "))
words_map.collect() # This returns a list of lists
phrases = sc.parallelize(["hello world", "terve terve", "how are you"])
words_flatmap = phrases.flatMap(lambda phrase: phrase.split(" "))
words_flatmap.collect() # This returns a single flat list with the combined elements of all the lists
# We can use the flatmap to make a word count
words_flatmap.map(
lambda x: (x,1)
).reduceByKey(
lambda x,y: x+y
).collect()
"""
Explanation: map and flatmap
End of explanation
"""
oneRDD = sc.parallelize([1, 1, 1, 2, 3, 3, 4, 4])
oneRDD.persist()
otherRDD = sc.parallelize([1, 4, 4, 7])
otherRDD.persist()
unionRDD = oneRDD.union(otherRDD)
unionRDD.persist()
oneRDD.subtract(otherRDD).collect()
oneRDD.distinct().collect()
oneRDD.intersection(otherRDD).collect() # removes duplicates
oneRDD.cartesian(otherRDD).collect()[:5]
"""
Explanation: (Pseudo) set operations
End of explanation
"""
np.sum([1,43,62,23,52])
data = sc.parallelize([1,43,62,23,52])
data.reduce(lambda x, y: x + y)
data.reduce(lambda x, y: x * y)
data.reduce(lambda x, y: x**2 + y**2) # this does NOT compute the sum of squares of RDD elements
((((1 ** 2 + 43 ** 2) ** 2 + 62 ** 2) **2 + 23 ** 2) **2 + 52 **2)
data.reduce(lambda x, y: np.sqrt(x**2 + y**2)) ** 2
np.sum(np.array([1,43,62,23,52]) ** 2)
"""
Explanation: reduce
End of explanation
"""
help(data.aggregate)
def seq(x,y):
return x[0] + y, x[1] + 1
def comb(x,y):
print(x,y,"comb")
return x[0] + y[0], x[1] + y[1]
data = sc.parallelize([1,43,62,23,52], 1) # Try different levels of paralellism
aggr = data.aggregate(zeroValue = (0,0),
seqOp = seq, #
combOp = comb)
aggr
aggr[0] / aggr[1] # average value of RDD elements
"""
Explanation: aggregate
End of explanation
"""
pairRDD = sc.parallelize([('$APPL', 100.64),
                          ('$APPL', 100.52),
                          ('$GOOG', 706.2),
                          ('$AMZN', 552.32),
                          ('$AMZN', 552.32) ])
help(pairRDD.reduceByKey)
pairRDD.reduceByKey(lambda x,y: x + y).collect() # sum of values per key
"""
Explanation: reduceByKey
End of explanation
"""
help(pairRDD.combineByKey)
pairRDD = sc.parallelize([ ('$APPL', 100.64), ('$GOOG', 706.2), ('$AMZN', 552.32), ('$APPL', 100.52), ('$AMZN', 552.32) ])
aggr = pairRDD.combineByKey(createCombiner = lambda x: (x, 1),
mergeValue = lambda x,y: (x[0] + y, x[1] + 1),
mergeCombiners = lambda x,y: (x[0] + y[0], x[1] + y[1]))
aggr.collect()
aggr.map(lambda x: (x[0], x[1][0]/x[1][1])).collect() # average value per key
"""
Explanation: From https://github.com/vaquarkhan/vk-wiki-notes/wiki/reduceByKey--vs-groupBykey-vs-aggregateByKey-vs-combineByKey
reduceByKey aggregates values by key within each partition before shuffling, while groupByKey shuffles all of the key-value pairs first (see the diagrams at the link above and the short comparison sketch after this explanation).
combineByKey
End of explanation
"""
course_a = sc.parallelize([ ("Antti", 8), ("Tuukka", 10), ("Leena", 9)])
course_b = sc.parallelize([ ("Leena", 10), ("Tuukka", 10)])
help(course_a.join)
result = course_a.join(course_b)
result.collect()
"""
Explanation: (inner) join
End of explanation
"""
text = sc.textFile("myfile.txt")
long_lines = sc.accumulator(0) # create accumulator
def line_len(line):
global long_lines # to reference an accumulator, declare it as global variable
length = len(line)
if length > 30:
long_lines += 1 # update the accumulator
return length
llengthRDD = text.map(line_len)
llengthRDD.count()
long_lines.value # this is how we obtain the value of the accumulator in the driver program
help(long_lines)
long_lines.value # this is how we obtain the value of the accumulator in the driver program
"""
Explanation: Accumulators
This example demonstrates how to use accumulators.
The map operation creates an RDD that contains the length of each line in the text file - and while the RDD is materialized, an accumulator keeps track of how many lines are long (longer than $30$ characters).
End of explanation
"""
text = sc.textFile("myfile.txt")
long_lines_2 = sc.accumulator(0)
def line_len(line):
global long_lines_2
length = len(line)
if length > 30:
long_lines_2 += 1
text.foreach(line_len)
long_lines_2.value
"""
Explanation: Warning
In the example above, we update the value of an accumulator within a transformation (map). This is not recommended, except for debugging purposes! The reason is that, if there are failures during the materialization of llengthRDD, some of its partitions will be re-computed, possibly causing the accumulator to double-count some of the long lines.
It is advisable to use accumulators within actions - and particularly with the foreach action, as demonstrated below.
End of explanation
"""
def load_address_table():
return {"Anu": "Chem. A143", "Karmen": "VTT, 74", "Michael": "OIH, B253.2",
"Anwar": "T, B103", "Orestis": "T, A341", "Darshan": "T, A325"}
address_table = sc.broadcast(load_address_table())
def find_address(name):
res = None
if name in address_table.value:
res = address_table.value[name]
return res
people = sc.parallelize(["Anwar", "Michael", "Orestis", "Darshan"])
pairRDD = people.map(lambda name: (name, find_address(name))) # first operation that uses the address table
print(pairRDD.collectAsMap())
other_people = sc.parallelize(["Karmen", "Michael", "Anu"])
pairRDD = other_people.map(lambda name: (name, find_address(name))) # second operation that uses the address table
print(pairRDD.collectAsMap())
"""
Explanation: Broadcast variable
We use broadcast variables when many operations depend on the same large static object - e.g., a large lookup table that does not change but provides information for other operations. In such cases, we can make a broadcast variable out of the object and thus make sure that the object will be shipped to the cluster only once - and not for each of the operations we'll be using it for.
The example below demonstrates the usage of broadcast variables. In this case, we make a broadcast variable out of a dictionary that represents an address table. The tablke is shipped to cluster nodes only once across multiple operations.
End of explanation
"""
sc.stop()
"""
Explanation: Stopping
Call stop() on the SparkContext object to shut it down.
End of explanation
"""
import random
NUM_SAMPLES = 10000000
def inside(p):
x, y = random.random(), random.random()
return x*x + y*y < 1
count = sc.parallelize(range(0, NUM_SAMPLES)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
"""
Explanation: Example: Estimating Pi
End of explanation
"""
# Details of the algorithm can be found here: http://www.cs.princeton.edu/~chazelle/courses/BIB/pagerank.htm
iterations = 5
def computeContribs(urls, rank):
"""Calculates URL contributions to the rank of other URLs."""
num_urls = len(urls)
for url in urls:
yield (url, rank / num_urls)
def parseNeighbors(urls):
"""Parses a urls pair string into urls pair."""
parts = urls.split(',')
return parts[0], parts[1]
# Read the lines
lines = sc.textFile("higgs-mention_network.txt").persist()
lines.collect()
# Loads all URLs from input file and initialize their neighbors.
links = lines.map(lambda urls: parseNeighbors(urls)).distinct().groupByKey().cache()
links.collect()
# Initialize the rank of each URL to one.
ranks = links.map(lambda url_neighbors: (url_neighbors[0], 1.0))
ranks.collect()
# Calculates and updates URL ranks continuously using PageRank algorithm.
for iteration in range(iterations):
# Calculates URL contributions to the rank of other URLs.
contribs = links.join(ranks).flatMap(
lambda url_urls_rank: computeContribs(url_urls_rank[1][0], url_urls_rank[1][1]))
# Re-calculates URL ranks based on neighbor contributions.
ranks = contribs.reduceByKey(lambda x,y: x+y).mapValues(lambda rank: rank * 0.85 + 0.15)
# Collects all URL ranks and dump them to console.
for (link, rank) in ranks.collect():
print("%s has rank: %s." % (link, rank))
"""
Explanation: Example: Computing PageRank
End of explanation
"""
|
compsocialscience/summer-institute
|
2018/materials/boulder/day2-digital-trace-data/Day 2 - Case Study - Web Scraping.ipynb
|
mit
|
import requests
from bs4 import BeautifulSoup
import pandas as pd

pet_pages = ["https://www.boulderhumane.org/animals/adoption/dogs",
"https://www.boulderhumane.org/animals/adoption/cats",
"https://www.boulderhumane.org/animals/adoption/adopt_other"]
r = requests.get(pet_pages[0])
html = r.text
print(html[:500]) # Print the first 500 characters of the HTML. Notice how it's the same as the screenshot above.
"""
Explanation: <b>Acknowledgements:</b> The code below is very much inspired by Chris Bail's "Screen-Scraping in R". Thanks Chris!
Collecting Digital Trace Data: Web Scraping
Web scraping (also sometimes called "screen-scraping") is a method for extracting data from the web. There are many techniques which can be used for web scraping — ranging from requiring human involvement (“human copy-paste”) to fully automated systems. For research questions where you need to visit many webpages, and collect essentially very similar information from each, web scraping can be a great tool.
The typical web scraping program:
<ol>
<li> Loads the address of a webpage to be scraped from your list of webpages</li>
<li> Downloads the HTML or XML of that website</li>
<li> Extracts any desired information</li>
<li> Saves that information in a convenient format (e.g. CSV, JSON, etc.)</li>
</ol>
<img src="https://raw.githubusercontent.com/compsocialscience/summer-institute/master/2018/materials/day2-digital-trace-data/screenscraping/rmarkdown/Screen-Scraping.png"></img>
<em>From Chris Bail's "Screen-Scraping in R": <a href="https://cbail.github.io/SICSS_Screenscraping_in_R.html">https://cbail.github.io/SICSS_Screenscraping_in_R.html</a></em>
Legality & Politeness
When the internet was young, web scraping was a common and legally acceptable practice for collecting data on the web. But with the rise of online platforms, some of which rely heavily on user-created content (e.g. Craigslist), the data made available on these sites has become recognized by their companies as highly valuable. Furthermore, from a website developer's perspective, web crawlers are able request many pages from your site in rapid succession, increasing server loads, and generally being a nuisance.
Thus many websites, especially large sites (e.g. Yelp, AllRecipes, Instagram, The New York Times, etc.), have now forbidden "crawlers" / "robots" / "spiders" from harvesting their data in their "Terms of Service" (TOS). From Yelp's <a href="https://www.yelp.com/static?p=tos">Terms of Service</a>:
<img src="https://user-images.githubusercontent.com/6633242/45270118-b87a2580-b456-11e8-9d26-826f44bf5243.png"></img>
Before embarking on a research project that will involve web scraping, it is important to understand the TOS of the site you plan on collecting data from.
If the site does allow web scraping (and you've consulted your legal professional), many websites have a robots.txt file that tells search engines and web scrapers, written by researchers like you, how to interact with the site "politely" (i.e. the number of requests that can be made, pages to avoid, etc.).
Requesting a Webpage in Python
When you visit a webpage, your web browser renders an HTML document with CSS and Javascript to produce a visually appealing page. For example, to us, the Boulder Humane Society's listing of dogs available for adoption looks something like what's displayed at the top of the browser below:
<img src="https://user-images.githubusercontent.com/6633242/45270123-c760d800-b456-11e8-997e-580508e862e7.png"></img>
But to your web browser, the page actually looks like the HTML source code (basically instructions for what text and images to show and how to do so) shown at the bottom of the page. To see the source code of a webpage, in Safari, go to Develop > Show Page Source or in Chrome, go to Developer > View Source.
To request the HTML for a page in Python, you can use the Python package requests, as such:
End of explanation
"""
soup = BeautifulSoup(html, 'html.parser')
pet = soup.select("#block-system-main > div > div > div.view-content > div.views-row.views-row-1.views-row-odd.views-row-first.On.Hold")
print(pet)
"""
Explanation: Parsing HTML with BeautifulSoup
Now that we've downloaded the HTML of the page, we next need to parse it. Let's say we want to extract all of the names, ages, and breeds of the dogs, cats, and small animals currently up for adoption at the Boulder Humane Society.
Actually, navigating to the location of those attributes in the page can be somewhat tricky. Luckily HTML has a tree-structure, as shown below, where tags fit inside other tags. For example, the title of the document is located within its head, and within the larger html document (<html> <head> <title> </title> ... </head>... </html>).
<img src="https://raw.githubusercontent.com/compsocialscience/summer-institute/master/2018/materials/day2-digital-trace-data/screenscraping/rmarkdown/html_tree.png"></img>
<em>From Chris Bail's "Screen-Scraping in R": <a href="https://cbail.github.io/SICSS_Screenscraping_in_R.html">https://cbail.github.io/SICSS_Screenscraping_in_R.html</a></em>
To find the first pet on the page, we'll find that HTML element's "CSS selector". Within Safari, hover your mouse over the image of the first pet and then control+click on the image. This should highlight the section of HTML where the object you are trying to parse is found. Sometimes you may need to move your mouse through the HTML to find the exact location of the object (see GIF).
<img src="https://user-images.githubusercontent.com/6633242/45270125-dc3d6b80-b456-11e8-80ae-4947dd667d30.png"></img>
(You can also go to 'Develop > Show Page Source' and then click 'Elements'. Hover your mouse over sections of the HTML until the object you are trying to find is highlighted within your browser.)
BeautifulSoup is a Python library for parsing HTML. We'll pass the CSS selector that we just copied to BeautifulSoup to grab that object. Notice below how select-ing on that pet, shows us all of its attributes.
End of explanation
"""
name = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-animalname'})
primary_breed = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-primarybreed'})
secondary_breed = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-secondarybreed'})
age = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-age'})
# We can call `get_text()` on those objects to print them nicely.
print({
"name": name.get_text(strip = True),
"primary_breed": primary_breed.get_text(strip = True),
"secondary_breed": secondary_breed.get_text(strip = True),
"age": age.get_text(strip=True)
})
"""
Explanation: Furthermore, we can select the name, breeds, age, and gender of the pet by find-ing the div tags which contain this information. Notice how the div tag has the attribute (attrs) class with value "views-field views-field-field-pp-animalname".
End of explanation
"""
all_pets = soup.find_all('div', {'class': 'views-row'})
for pet in all_pets:
name = pet.find('div', {'class': 'views-field views-field-field-pp-animalname'}).get_text(strip=True)
primary_breed = pet.find('div', {'class': 'views-field views-field-field-pp-primarybreed'}).get_text(strip=True)
secondary_breed = pet.find('div', {'class': 'views-field views-field-field-pp-secondarybreed'}).get_text(strip=True)
age = pet.find('div', {'class': 'views-field views-field-field-pp-age'}).get_text(strip=True)
print([name, primary_breed, secondary_breed, age])
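# Added sketch: one way to finish the scraping workflow by saving the records to CSV
# (the last step of the typical scraping program described earlier). The column names
# and output filename are our own choices, not part of the original notebook.
import pandas as pd

records = []
for pet in all_pets:
    records.append({
        'name': pet.find('div', {'class': 'views-field views-field-field-pp-animalname'}).get_text(strip=True),
        'primary_breed': pet.find('div', {'class': 'views-field views-field-field-pp-primarybreed'}).get_text(strip=True),
        'secondary_breed': pet.find('div', {'class': 'views-field views-field-field-pp-secondarybreed'}).get_text(strip=True),
        'age': pet.find('div', {'class': 'views-field views-field-field-pp-age'}).get_text(strip=True),
    })
pets_df = pd.DataFrame(records)
pets_df.to_csv('boulder_humane_pets.csv', index=False)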
"""
Explanation: Now to get at the HTML object for each pet, we could find the CSS selector for each. Or, we can exploit the fact that every pet lives in a similar HTML structure for each pet. That is, we can find all of the div tags with the class attribute which contain the string views-row. We'll print out their attributes like we just did.
End of explanation
"""
table = pd.read_html("https://en.wikipedia.org/wiki/List_of_sandwiches", header=0)[0]
#table.to_csv("filenamehere.csv") # Write table to CSV
table.head(20)
"""
Explanation: This may seem like a fairly silly example of webscraping, but one could imagine several research questions using this data. For example, if we collected this data over time (e.g. using Wayback Machine), could we identify what features of pets -- names, breeds, ages -- make them more likely to be adopted? Are there certain names that are more common for certain breeds? Or maybe your research question is something even wackier.
Aside: Read Tables from Webpages
Pandas has really neat functionality in read_html where you can download an HTML table directly from a webpage, and load it into a dataframe.
End of explanation
"""
import selenium.webdriver

driver = selenium.webdriver.Safari() # This command opens a window in Safari
# driver = selenium.webdriver.Chrome(executable_path = "<path to chromedriver>") # This command opens a window in Chrome
# driver = selenium.webdriver.Firefox(executable_path = "<path to geckodriver>") # This command opens a window in Firefox
# Get the xkcd website
driver.get("https://xkcd.com/")
# Let's find the 'random' buttom
element = driver.find_element_by_xpath('//*[@id="middleContainer"]/ul[1]/li[3]/a')
element.click()
# Find an attribute of this page - the title of the comic.
element = driver.find_element_by_xpath('//*[@id="comic"]/img')
element.get_attribute("title")
# Continue clicking throught the comics
driver.find_element_by_xpath('//*[@id="middleContainer"]/ul[1]/li[3]/a').click()
driver.quit() # Always remember to close your browser!
"""
Explanation: Requesting a Webpage with Selenium
Sometimes our interactions with webpages involve rendering Javascript. For example, think of visiting a webpage with a search box, typing in a query, pressing search, and viewing the result. Or visiting a webpage that requires a login, or clicking through pages in a list. To handle pages like these we'll use a package in Python called Selenium.
Installing Selenium can be a little tricky. You'll want to follow the directions as best you can here. Requirements (one of the below):
- Firefox + geckodriver (https://github.com/mozilla/geckodriver/releases)
- Chrome + chromedriver (https://sites.google.com/a/chromium.org/chromedriver/)
First a fairly simple example: let's visit xkcd and click through the comics.
End of explanation
"""
driver = selenium.webdriver.Safari() # This command opens a window in Safari
# driver = selenium.webdriver.Chrome(executable_path = "<path to chromedriver>") # This command opens a window in Chrome
# driver = selenium.webdriver.Firefox(executable_path = "<path to geckodriver>") # This command opens a window in Firefox
driver.get('https://www.boxofficemojo.com')
"""
Explanation: We'll now walk through how we can use Selenium to navigate the website to navigate a open source site called <a href="https://www.boxofficemojo.com">"Box Office Mojo"</a>.
<img src="https://user-images.githubusercontent.com/6633242/45270131-f1b29580-b456-11e8-81fd-3f5361161e7f.png"></img>
End of explanation
"""
# Type in the search bar, and click 'Search'
driver.find_element_by_xpath('//*[@id="leftnav"]/li[2]/form/input[1]').send_keys('Avengers: Infinity War')
driver.find_element_by_xpath('//*[@id="leftnav"]/li[2]/form/input[2]').click()
"""
Explanation: Let's say I wanted to know which movie has been more lucrative: 'Wonder Woman', 'Black Panther', or 'Avengers: Infinity War'. I could type 'Avengers: Infinity War' into the search bar on the upper left.
End of explanation
"""
# This is what the table looks like
table = driver.find_element_by_xpath('//*[@id="body"]/table[2]')
# table.get_attribute('innerHTML').strip()
pd.read_html(table.get_attribute('innerHTML').strip(), header=0)[2]
# Find the link to more details about the Avengers movie and click it
driver.find_element_by_xpath('//*[@id="body"]/table[2]/tbody/tr/td/table[2]/tbody/tr[2]/td[1]/b/font/a').click()
"""
Explanation: Now, I can parse the returned table's HTML using pandas' read_html.
End of explanation
"""
driver.quit() # Always remember to close your browser!
"""
Explanation: <img src="https://user-images.githubusercontent.com/6633242/45270140-03943880-b457-11e8-9d27-660a7cc4f2eb.png"></img>
Now, we can do the same for the remaining movies: 'Wonder Woman', and 'Black Panther' ...
End of explanation
"""
|
UltronAI/Deep-Learning
|
CS231n/assignment1/.ipynb_checkpoints/features-checkpoint.ipynb
|
mit
|
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
End of explanation
"""
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
"""
Explanation: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
End of explanation
"""
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
"""
Explanation: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
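As a rough sketch (not part of the assignment code), the feature vector for one image img is conceptually just the concatenation of each feature function's output:
single_image_feats = np.concatenate([fn(img) for fn in feature_fns])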
End of explanation
"""
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
"""
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
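One way to fill in the tuning TODO is a simple grid search (a sketch only, assuming the LinearSVM train/predict interface from the earlier SVM exercise; the iteration count is illustrative):
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(y_train == svm.predict(X_train_feats))
        val_acc = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm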
End of explanation
"""
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
"""
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
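As a sketch of one possible approach (assuming the TwoLayerNet train/predict interface from the earlier exercise; the hyperparameter values below are only illustrative, not tuned):
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=2000, batch_size=200, learning_rate=0.5,
                  learning_rate_decay=0.95, reg=0.001, verbose=True)
val_acc = (net.predict(X_val_feats) == y_val).mean()
best_net = net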
End of explanation
"""
|
GoogleCloudPlatform/bigquery-notebooks
|
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/00_prep_bq_and_datastore.ipynb
|
apache-2.0
|
!pip install -q -U apache-beam[gcp]
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Import the sample data into BigQuery and Datastore
This notebook is the first of two notebooks that guide you through completing the prerequisites for running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to complete the following tasks:
Importing the playlist table from the public BigQuery dataset to your BigQuery dataset.
Creating the vw_item_groups view that contains the item data used to compute item co-occurrence.
Exporting song title and artist information to Datastore to make it available for lookup when making similar song recommendations.
Before starting this notebook, you must set up the GCP environment.
After completing this notebook, run the 00_prep_bq_procedures notebook to complete the solution prerequisites.
Setup
Install the required Python packages, configure the environment variables, and authenticate your GCP account.
End of explanation
"""
import os
from datetime import datetime
import apache_beam as beam
from apache_beam.io.gcp.datastore.v1new.datastoreio import WriteToDatastore
"""
Explanation: Import libraries
End of explanation
"""
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucketName" # Change to the bucket you created.
BQ_REGION = "yourBigQueryRegion" # Change to your BigQuery region.
DF_REGION = "yourDataflowRegion" # Change to your Dataflow region.
BQ_DATASET_NAME = "recommendations"
BQ_TABLE_NAME = "playlist"
DS_KIND = "song"
!gcloud config set project $PROJECT_ID
"""
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
BQ_REGION: The region to use for the BigQuery dataset.
DF_REGION: The region to use for the Dataflow job. Choose the same region that you used for the BQ_REGION variable to avoid issues around reading/writing in different locations.
End of explanation
"""
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
"""
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
"""
!bq mk --dataset \
--location={BQ_REGION} \
--project_id={PROJECT_ID} \
--headless=True \
{PROJECT_ID}:{BQ_DATASET_NAME}
"""
Explanation: Copy the public playlist data into your BigQuery dataset
Create the BigQuery dataset
End of explanation
"""
def run_copy_bq_data_pipeline(args):
schema = "list_Id:INT64, track_Id:INT64, track_title:STRING, track_artist:STRING"
query = """
SELECT
id list_Id,
tracks_data_id track_Id,
tracks_data_title track_title,
tracks_data_artist_name track_artist
FROM `bigquery-samples.playlists.playlist`
WHERE tracks_data_title IS NOT NULL AND tracks_data_id > 0
GROUP BY list_Id, track_Id, track_title, track_artist;
"""
pipeline_options = beam.options.pipeline_options.PipelineOptions(**args)
with beam.Pipeline(options=pipeline_options) as pipeline:
_ = (
pipeline
| "ReadFromBigQuery"
>> beam.io.Read(
beam.io.BigQuerySource(
project=PROJECT_ID, query=query, use_standard_sql=True
)
)
| "WriteToBigQuery"
>> beam.io.WriteToBigQuery(
table=BQ_TABLE_NAME,
dataset=BQ_DATASET_NAME,
project=PROJECT_ID,
schema=schema,
create_disposition="CREATE_IF_NEEDED",
write_disposition="WRITE_TRUNCATE",
)
)
"""
Explanation: Define the Dataflow pipeline
The pipeline selects songs where the tracks_data_title field isn't NULL and the tracks_data_id field is greater than 0.
End of explanation
"""
DATASET = "playlist"
RUNNER = "DataflowRunner"
job_name = f'copy-bigquery-{datetime.utcnow().strftime("%y%m%d%H%M%S")}'
args = {
"job_name": job_name,
"runner": RUNNER,
"project": PROJECT_ID,
"temp_location": f"gs://{BUCKET}/dataflow_tmp",
"region": DF_REGION,
}
print("Pipeline args are set.")
print("Running pipeline...")
%time run_copy_bq_data_pipeline(args)
print("Pipeline is done.")
"""
Explanation: Run the Dataflow pipeline
This pipeline takes approximately 15 minutes to run.
End of explanation
"""
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE VIEW `recommendations.vw_item_groups`
AS
SELECT
list_Id AS group_Id,
track_Id AS item_Id
FROM
`recommendations.playlist`
"""
Explanation: Create the vw_item_groups view
Create the recommendations.vw_item_groups view to focus on song and playlist data.
To adapt this view to your own data, you would need to map your item identifier, for example product SKU, to item_Id, and your context identifier, for example purchase order number, to group_Id.
End of explanation
"""
def create_entity(song_info, kind):
from apache_beam.io.gcp.datastore.v1new.types import Entity, Key
track_Id = song_info.pop("track_Id")
key = Key([kind, track_Id])
song_entity = Entity(key)
song_entity.set_properties(song_info)
return song_entity
def run_export_to_datatore_pipeline(args):
query = f"""
SELECT
track_Id,
MAX(track_title) track_title,
MAX(track_artist) artist
FROM
`{BQ_DATASET_NAME}.{BQ_TABLE_NAME}`
GROUP BY track_Id
"""
pipeline_options = beam.options.pipeline_options.PipelineOptions(**args)
with beam.Pipeline(options=pipeline_options) as pipeline:
_ = (
pipeline
| "ReadFromBigQuery"
>> beam.io.Read(
beam.io.BigQuerySource(
project=PROJECT_ID, query=query, use_standard_sql=True
)
)
| "ConvertToDatastoreEntity" >> beam.Map(create_entity, DS_KIND)
| "WriteToDatastore" >> WriteToDatastore(project=PROJECT_ID)
)
"""
Explanation: Export song information to Datastore
Export data from the track_title and artist fields to Datastore.
Define the Dataflow pipeline
End of explanation
"""
import os
from datetime import datetime
DATASET = "playlist"
RUNNER = "DataflowRunner"
job_name = f'load-datastore-{datetime.utcnow().strftime("%y%m%d%H%M%S")}'
args = {
"job_name": job_name,
"runner": RUNNER,
"project": PROJECT_ID,
"temp_location": f"gs://{BUCKET}/dataflow_tmp",
"region": DF_REGION,
}
print("Pipeline args are set.")
print("Running pipeline...")
%time run_export_to_datatore_pipeline(args)
print("Pipeline is done.")
"""
Explanation: Run the Dataflow pipeline
This pipeline takes approximately 15 minutes to run.
End of explanation
"""
|
jasonding1354/PRML_Notes
|
1.PROBABILITY_DISTRIBUTIONS/1.1 Binary_Variables.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Probability theory plays an important role in solving pattern recognition problems; it is the building block of more complex models.
One use of a probability distribution is to model the distribution p(x) of a random variable x given a finite set of observations x1, . . . , xN. This problem is known as density estimation. Choosing a suitable distribution is related to the problem of model selection, which is a central issue in pattern recognition.
Binary variables
End of explanation
"""
from scipy.stats import bernoulli
"""
Explanation: 1. Bernoulli distribution
Consider a binary random variable $x\in\{0,1\}$. The probability of $x=1$ is denoted by the parameter $\mu$, so that $p(x=1|\mu)=\mu$ and $p(x=0|\mu)=1-\mu$, where $0\leq\mu\leq1$.
The probability distribution of x can be written as $Bern(x|\mu)=\mu^x(1-\mu)^{1-x}$, which is called the Bernoulli distribution.
The mean and variance of the Bernoulli distribution are $\mathbb{E}[x]=\mu$ and $var[x]=\mu(1-\mu)$.
Suppose we have a data set $\mathbb{D}=\{x_1,...,x_N\}$ of observed values of x. Assuming each observation is drawn independently from $p(x|\mu)$, the likelihood function of $\mu$ is $p(\mathbb{D}|\mu)=\prod\limits_{n=1}^N p(x_n|\mu)=\prod\limits_{n=1}^N \mu^{x_n}(1-\mu)^{1-x_n}$.
The maximum likelihood estimate is $\mu_{ML}=\frac{1}{N}\sum\limits_{n=1}^N x_n=\frac{m}{N}$, where m is the number of observations with $x=1$ in the data set.
Thus, in the maximum likelihood framework, the probability of x=1 is simply the fraction of observations in the data set with x=1. Suppose we flip a coin 5 times and it happens to land heads all 5 times; then $\mu_{ML}=1$, and maximum likelihood predicts that every future flip will be heads. This is an extreme example of overfitting under maximum likelihood. A more sensible answer is obtained by introducing a prior distribution over $\mu$.
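A small numerical check of this estimate (the coin-flip data below is made up for illustration):
flips = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # 1 = heads
mu_ml = flips.mean()                        # equals m/N
print(mu_ml)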
End of explanation
"""
u = 0.6
x = np.arange(2)
bern = bernoulli.pmf(x, u)
plt.bar(x, bern, facecolor='green', alpha=0.5)
plt.xlabel('x')
plt.xlim(-0.2, 2)
plt.title('Histogram of the Bernoulli distribution')
plt.show()
"""
Explanation: Histogram of the Bernoulli distribution with u=0.6
End of explanation
"""
from scipy.stats import binom
N = 10
u = 0.25
m = np.arange(N+1)
binData = binom.pmf(m, N, u)
binData
"""
Explanation: 2. Binomial distribution
Given a data set of size N, the distribution of the number m of observations with x = 1 is called the binomial distribution.
$Bin(m|N, \mu)=\binom{N}{m}\mu^m(1-\mu)^{N-m}$, where $\binom{N}{m}=\frac{N!}{(N-m)!m!}$.
Its mean and variance are $\mathbb{E}[m]=\sum\limits_{m=0}^{N}mBin(m|N,\mu)=N\mu$ and $var[m]=\sum\limits_{m=0}^{N}(m-\mathbb{E}[m])^2Bin(m|N,\mu)=N\mu(1-\mu)$.
End of explanation
"""
plt.bar(m, binData, facecolor='green', alpha=0.5)
plt.xlabel('m')
plt.xlim(-0.2, 10.5)
plt.title('Histogram of the binomial distribution')
plt.show()
"""
Explanation: Histogram of the binomial distribution as a function of m, with N=10 and u=0.25
End of explanation
"""
from scipy.stats import beta
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(221)
x = np.linspace(0, 1, 100)
pbeta = beta.pdf(x, 0.1, 0.1)
ax.plot(x, pbeta, label='a=0.1\nb=0.1',lw=2)
plt.ylim(0,3)
plt.xlabel('$\mu$')
plt.grid(True)
plt.legend(loc="best")
ax = fig.add_subplot(222)
x = np.linspace(0, 1, 100)
pbeta = beta.pdf(x, 1, 1)
ax.plot(x, pbeta, 'g', label='a=1\nb=1',lw=2)
plt.xlabel('$\mu$')
plt.ylim(0,3)
plt.grid(True)
plt.legend(loc="best")
ax = fig.add_subplot(223)
x = np.linspace(0, 1, 100)
pbeta = beta.pdf(x, 2, 3)
ax.plot(x, pbeta, 'r', label='a=2\nb=3',lw=2)
plt.xlabel('$\mu$')
plt.ylim(0,3)
plt.grid(True)
plt.legend(loc="best")
ax = fig.add_subplot(224)
x = np.linspace(0, 1, 100)
pbeta = beta.pdf(x, 8, 4)
ax.plot(x, pbeta, 'm', label='a=8\nb=4',lw=2)
plt.xlabel('$\mu$')
plt.ylim(0,3)
plt.grid(True)
plt.legend(loc="best")
plt.show()
"""
Explanation: 3. Beta distribution
The Bernoulli likelihood function has the form of a factor multiplied by powers of $\mu$ and $(1-\mu)$. If we choose a prior distribution proportional to powers of $\mu$ and $(1-\mu)$, then the posterior distribution (proportional to the product of the prior and the likelihood) will have the same functional form as the prior. This property is called conjugacy.
The Beta distribution is defined as:
$$Beta(\mu|a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\mu^{a-1}(1-\mu)^{b-1}$$
where $\Gamma(x)$ is the Gamma function, which ensures that the Beta distribution is normalized.
The mean and variance of the Beta distribution are $$\mathbb{E}[\mu]=\frac{a}{a+b},\quad var[\mu]=\frac{ab}{(a+b)^2(a+b+1)}$$
The parameters a and b are called hyperparameters because they control the distribution of the parameter $\mu$.
End of explanation
"""
from scipy.misc import comb
pbinom = lambda m, N, p: comb(N, m) * p**m * (1-p)**(N-m)
"""
Explanation: 4. Posterior distribution
Multiplying the Beta prior by the binomial likelihood gives a posterior distribution of the form:
$$p(\mu|m,l,a,b)\propto\mu^{m+a-1}(1-\mu)^{l+b-1}$$
where $l=N-m$ is the number of observations that came up tails.
Including the normalization coefficient explicitly:
$$p(\mu|m,l,a,b)=\frac{\Gamma(m+a+l+b)}{\Gamma(m+a)\Gamma(l+b)}\mu^{m+a-1}(1-\mu)^{l+b-1}$$
We see that if the data set contains m observations of x=1 and l observations of x=0, then going from the prior to the posterior increases a by m and b by l.
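A small numerical sketch of this update with scipy (the counts and hyperparameters below are arbitrary illustrative values):
from scipy.stats import beta
a, b = 2, 2                                   # prior hyperparameters
m, l = 3, 1                                   # observed counts of x=1 and x=0
mu_grid = np.linspace(0, 1, 100)
posterior = beta.pdf(mu_grid, a + m, b + l)   # Beta(mu | a+m, b+l)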
End of explanation
"""
fig = plt.figure(figsize=(15,3))
ax = fig.add_subplot(131)
x = np.linspace(0, 1, 100)
pbeta = beta.pdf(x, 2, 2)
ax.plot(x, pbeta,lw=2)
plt.xlim(0,1)
plt.ylim(0,2)
plt.xlabel('$\mu$')
plt.grid(True)
plt.text(0.05, 1.7, "prior", fontsize=15)
ax = fig.add_subplot(132)
x = np.linspace(0, 1, 100)
pbin = pbinom(1, 1, x)
ax.plot(x, pbin,lw=2)
plt.xlim(0,1)
plt.ylim(0,2)
plt.xlabel('$\mu$')
plt.grid(True)
plt.text(0.05, 1.7, "likelihood function", fontsize=15)
ax = fig.add_subplot(133)
x = np.linspace(0, 1, 100)
post = pbeta * x
ax.plot(x, post,lw=2)
ax.plot(x, beta.pdf(x, 3, 2), lw=2)
plt.xlim(0,1)
plt.ylim(0,2)
plt.xlabel('$\mu$')
plt.grid(True)
plt.text(0.05, 1.7, "posterior", fontsize=15)
plt.show()
"""
Explanation: The code above is the binomial probability mass function $Bin(m|N, p)=\binom{N}{m}p^m(1-p)^{N-m}$; the likelihood function has the same form as this distribution, except that its parameter p is unknown.
End of explanation
"""
|
cvxgrp/cvxpylayers
|
examples/torch/data_poisoning_attack.ipynb
|
apache-2.0
|
import cvxpy as cp
import matplotlib.pyplot as plt
import numpy as np
import torch
from cvxpylayers.torch import CvxpyLayer
"""
Explanation: Data poisoning attack
In this notebook, we use a convex optimization layer to perform a data poisoning attack; i.e., we show how to perturb the data used to train a logistic regression classifier so as to maximally increase the test loss. This example is also presented in section 6.1 of the paper Differentiable convex optimization layers.
End of explanation
"""
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
torch.manual_seed(0)
np.random.seed(0)
n = 2
N = 60
X, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
torch.from_numpy, [Xtrain, Xtest, ytrain, ytest])
Xtrain.requires_grad_(True)
m = Xtrain.shape[0]
a = cp.Variable((n, 1))
b = cp.Variable((1, 1))
X = cp.Parameter((m, n))
Y = ytrain.numpy()[:, np.newaxis]
log_likelihood = (1. / m) * cp.sum(
cp.multiply(Y, X @ a + b) - cp.logistic(X @ a + b)
)
regularization = - 0.1 * cp.norm(a, 1) - 0.1 * cp.sum_squares(a)
prob = cp.Problem(cp.Maximize(log_likelihood + regularization))
fit_logreg = CvxpyLayer(prob, [X], [a, b])
"""
Explanation: We are given training data $(x_i, y_i)_{i=1}^{N}$,
where $x_i\in\mathbf{R}^n$ are feature vectors and $y_i\in\{0,1\}$ are the labels.
Suppose we fit a model for this classification problem by solving
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N \ell(\theta; x_i, y_i) + r(\theta),
\end{array}
\label{eq:trainlinear}
\end{equation}
where the loss function $\ell(\theta; x_i, y_i)$ is convex in $\theta \in \mathbf{R}^n$ and $r(\theta)$ is a convex
regularizer. We hope that the test loss $\mathcal{L}^{\mathrm{test}}(\theta) =
\frac{1}{M}\sum_{i=1}^M \ell(\theta; \tilde x_i, \tilde y_i)$ is small, where
$(\tilde x_i, \tilde y_i)_{i=1}^{M}$ is our test set. In this example, we use the logistic loss
\begin{equation}
\ell(\theta; x_i, y_i) = \log(1 + \exp(\beta^Tx_i + b)) - y_i(\beta^Tx_i + b)
\end{equation}
with elastic net regularization
\begin{equation}
r(\theta) = 0.1\|\beta\|_1 + 0.1\|\beta\|_2^2.
\end{equation}
End of explanation
"""
from sklearn.linear_model import LogisticRegression
a_tch, b_tch = fit_logreg(Xtrain)
loss = 300 * torch.nn.BCEWithLogitsLoss()((Xtest @ a_tch + b_tch).squeeze(), ytest*1.0)
loss.backward()
Xtrain_grad = Xtrain.grad
"""
Explanation: Assume that our training data is subject to a data poisoning attack,
before it is supplied to us. The adversary has full knowledge of our modeling
choice, meaning that they know the form of the optimization problem above, and seeks
to perturb the data to maximally increase our loss on the test
set, to which they also have access. The adversary is permitted to apply an
additive perturbation $\delta_i \in \mathbf{R}^n$ to each of the training points $x_i$,
with the perturbations satisfying $\|\delta_i\|_\infty \leq 0.01$.
Let $\theta^\star$ be optimal.
The gradient of
the test loss with respect to a training data point, $\nabla_{x_i}
\mathcal{L}^{\mathrm{test}}(\theta^\star)$, gives the direction
in which the point should be moved to achieve the greatest
increase in test loss. Hence, one reasonable adversarial policy is to set $x_i
:= x_i +
.01\mathrm{sign}(\nabla_{x_i}\mathcal{L}^{\mathrm{test}}(\theta^\star))$. The
quantity $0.01\sum_{i=1}^N \|\nabla_{x_i}
\mathcal{L}^{\mathrm{test}}(\theta^\star)\|_1$ is the predicted increase in
our test loss due to the poisoning.
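A minimal sketch of applying that policy once Xtrain_grad has been computed (the perturbed data is not used further in this notebook):
delta = 0.01 * torch.sign(Xtrain_grad)
Xtrain_poisoned = Xtrain.detach() + delta
predicted_increase = 0.01 * Xtrain_grad.abs().sum()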
End of explanation
"""
lr = LogisticRegression(solver='lbfgs')
lr.fit(Xtest.numpy(), ytest.numpy())
beta_train = a_tch.detach().numpy().flatten()
beta_test = lr.coef_.flatten()
b_train = b_tch.squeeze().detach().numpy()
b_test = lr.intercept_[0]
hyperplane = lambda x, beta, b: - (b + beta[0] * x) / beta[1]
Xtrain_np = Xtrain.detach().numpy()
Xtrain_grad_np = Xtrain_grad.numpy()
ytrain_np = ytrain.numpy().astype(np.bool)
plt.figure()
plt.scatter(Xtrain_np[ytrain_np, 0], Xtrain_np[ytrain_np, 1], s=25, marker='+')
plt.scatter(Xtrain_np[~ytrain_np, 0], Xtrain_np[~ytrain_np, 1], s=25, marker='*')
for i in range(m):
plt.arrow(Xtrain_np[i, 0], Xtrain_np[i, 1],
Xtrain_grad_np[i, 0], Xtrain_grad_np[i, 1])
plt.xlim(-8, 8)
plt.ylim(-8, 8)
plt.plot(np.linspace(-8, 8, 100),
[hyperplane(x, beta_train, b_train)
for x in np.linspace(-8, 8, 100)], '--', color='red', label='train')
plt.plot(np.linspace(-8, 8, 100),
[hyperplane(x, beta_test, b_test)
for x in np.linspace(-8, 8, 100)], '-', color='blue', label='test')
plt.legend()
plt.savefig("data_poisoning.pdf")
plt.show()
"""
Explanation: Below, we plot the gradient of the test loss with respect to the training data points. The blue and orange points are training data, belonging to different classes. The red line is the hyperplane learned by fitting the model, while the blue line is the hyperplane that minimizes the test loss. The gradients are visualized as black lines, attached to the data points. Moving the points in the gradient directions torques the learned hyperplane away from the optimal hyperplane for the test set.
End of explanation
"""
|
ppik/playdata
|
Kaggle-Expedia/Expedia Hotel Recommendations - Logistic Regression.ipynb
|
mit
|
import collections
import itertools
import operator
import random
import heapq
import matplotlib.pyplot as plt
import ml_metrics as metrics
import numpy as np
import pandas as pd
import sklearn
import sklearn.decomposition
import sklearn.linear_model
import sklearn.preprocessing
%matplotlib notebook
"""
Explanation: Expedia Hotel Recommendations Kaggle competition
Peeter Piksarv (piksarv .at. gmail.com)
The latest version of this Jupyter notebook is available at https://github.com/ppik/playdata/tree/master/Kaggle-Expedia
Here I'll try to test some machine learning techniques on this dataset.
End of explanation
"""
traincols = ['date_time', 'site_name', 'posa_continent', 'user_location_country',
'user_location_region', 'user_location_city', 'orig_destination_distance',
'user_id', 'is_mobile', 'is_package', 'channel', 'srch_ci', 'srch_co',
'srch_adults_cnt', 'srch_children_cnt', 'srch_rm_cnt', 'srch_destination_id',
'srch_destination_type_id', 'is_booking', 'cnt', 'hotel_continent',
'hotel_country', 'hotel_market', 'hotel_cluster']
testcols = ['id', 'date_time', 'site_name', 'posa_continent', 'user_location_country',
'user_location_region', 'user_location_city', 'orig_destination_distance',
'user_id', 'is_mobile', 'is_package', 'channel', 'srch_ci', 'srch_co',
'srch_adults_cnt', 'srch_children_cnt', 'srch_rm_cnt', 'srch_destination_id',
'srch_destination_type_id', 'hotel_continent', 'hotel_country', 'hotel_market']
"""
Explanation: Data import
Defining a list of available data columns:
End of explanation
"""
def read_csv(filename, cols, nrows=None):
datecols = ['date_time', 'srch_ci', 'srch_co']
dateparser = lambda x: pd.to_datetime(x, format='%Y-%m-%d %H:%M:%S', errors='coerce')
dtypes = {
'id': np.uint32,
'site_name': np.uint8,
'posa_continent': np.uint8,
'user_location_country': np.uint16,
'user_location_region': np.uint16,
'user_location_city': np.uint16,
'orig_destination_distance': np.float32,
'user_id': np.uint32,
'is_mobile': bool,
'is_package': bool,
'channel': np.uint8,
'srch_adults_cnt': np.uint8,
'srch_children_cnt': np.uint8,
'srch_rm_cnt': np.uint8,
'srch_destination_id': np.uint32,
'srch_destination_type_id': np.uint8,
'is_booking': bool,
'cnt': np.uint64,
'hotel_continent': np.uint8,
'hotel_country': np.uint16,
'hotel_market': np.uint16,
'hotel_cluster': np.uint8,
}
df = pd.read_csv(
filename,
nrows=nrows,
usecols=cols,
dtype=dtypes,
parse_dates=[col for col in datecols if col in cols],
date_parser=dateparser,
)
if 'date_time' in df.columns:
df['month'] = df['date_time'].dt.month.astype(np.uint8)
df['year'] = df['date_time'].dt.year.astype(np.uint16)
if 'srch_ci' and 'srch_co' in df.columns:
df['srch_ngt'] = (df['srch_co'] - df['srch_ci']).astype('timedelta64[h]')
if 'srch_children_cnt' in df.columns:
df['is_family'] = np.array(df['srch_children_cnt'] > 0)
return df
train = read_csv('data/train.csv.gz', nrows=None, cols=traincols)
"""
Explanation: Convenience function for reading the data in:
End of explanation
"""
train_ids = set(train.user_id.unique())
len(train_ids)
"""
Explanation: Getting a list of all user_ids in the sample.
End of explanation
"""
sel_user_ids = sorted(random.sample(train_ids, 12000))
sel_train = train[train.user_id.isin(sel_user_ids)]
"""
Explanation: Pick a subset of users for testing and validation
End of explanation
"""
cv_train = sel_train[sel_train.year == 2013]
cv_test = sel_train[sel_train.year == 2014]
"""
Explanation: Create new test and training sets, using bookings from 2013 as training data and 2014 as test data.
End of explanation
"""
cv_test = cv_test[cv_test.is_booking == True]
"""
Explanation: Remove click events from cv_test as in original test data.
End of explanation
"""
most_common_clusters = list(cv_train.hotel_cluster.value_counts().head().index)
"""
Explanation: Model 0: Most common clusters
Public solutions to the competition (the Dataquest tutorial by Vik Paruchuri and the leakage solution by ZFTurbo) use the most common clusters in the following groups:
* srch_destination_id
* user_location_city, orig_destination_distance (data leak)
* srch_destination_id, hotel_country, hotel_market (for year 2014)
* srch_destination_id
* hotel_country
Finding the most common overall clusters
End of explanation
"""
match_cols = ['srch_destination_id']
match_cols = ['srch_destination_id', 'hotel_country', 'hotel_market']
groups = cv_train.groupby(match_cols + ['hotel_cluster'])
top_clusters = {}
for name, group in groups:
bookings = group['is_booking'].sum()
clicks = len(group) - bookings
score = bookings + .15*clicks
clus_name = name[:len(match_cols)]
if clus_name not in top_clusters:
top_clusters[clus_name] = {}
top_clusters[clus_name][name[-1]] = score
"""
Explanation: Predicting the most common clusters in groups of srch_destination_id, hotel_country, hotel_market.
End of explanation
"""
cluster_dict = {}
for n in top_clusters:
tc = top_clusters[n]
top = [l[0] for l in sorted(tc.items(), key=operator.itemgetter(1), reverse=True)[:5]]
cluster_dict[n] = top
"""
Explanation: This dictionary has a key of srch_destination_id, hotel_country, hotel_market and each value is another dictionary, with hotel clusters as keys and scores as values.
Finding the top 5 for each destination.
End of explanation
"""
preds = []
for index, row in cv_test.iterrows():
key = tuple([row[m] for m in match_cols])
pred = cluster_dict.get(key, most_common_clusters)
preds.append(pred)
cv_target = [[l] for l in cv_test['hotel_cluster']]
metrics.mapk(cv_target, preds, k=5)
"""
Explanation: Making predictions based on destination
End of explanation
"""
clf = sklearn.linear_model.SGDClassifier(loss='log', n_jobs=4)
"""
Explanation: srch_destination_id, is_booking: 0.212
srch_destination_id, hotel_country, hotel_market: 0.214
Model 1: Logistic regression
One-vs-all classification using stochastic gradient descent and forward stepwise feature selection.
End of explanation
"""
cv_train_data = pd.DataFrame()
for elem in cv_train['srch_destination_id'].unique():
cv_train_data[str(elem)] = cv_train['srch_destination_id'] == elem
cv_test_data = pd.DataFrame()
for elem in cv_train_data.columns:
cv_test_data[elem] = cv_test['srch_destination_id'] == int(elem)
# cv_train_data['is_booking'] = cv_train['is_booking']
# cv_test_data['is_booking'] = np.ones(len(cv_test_data), dtype=bool)
clf.fit(cv_train_data, cv_train['hotel_cluster'])
result = clf.predict_proba(cv_test_data)
preds = [heapq.nlargest(5, clf.classes_, row.take) for row in result]
metrics.mapk(cv_target, preds, k=5)
"""
Explanation: Make dummy variables from categorical features. Pandas has get_dummies(), but at the time of writing it returns only float64 columns, which tends to be rather memory-hungry and slow. See #8725.
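Note: newer pandas releases accept a dtype argument here (an assumption about later versions, not something used in this notebook), which avoids the float64 blow-up:
dummies = pd.get_dummies(cv_train['srch_destination_id'], dtype=bool)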
End of explanation
"""
dest = pd.read_csv(
'data/destinations.csv.gz',
index_col = 'srch_destination_id',
)
pca = sklearn.decomposition.PCA(n_components=10)
dest_small = pca.fit_transform(dest[['d{}'.format(i) for i in range(1,150)]])
dest_small = pd.DataFrame(dest_small, index=dest.index)
cv_train_data = pd.DataFrame({key: cv_train[key] for key in ['srch_destination_id']})
cv_train_data = cv_train_data.join(dest_small, on=['srch_destination_id'], how='left')
cv_train_data = cv_train_data.fillna(dest_small.mean())
cv_test_data = pd.DataFrame({key: cv_test[key] for key in ['srch_destination_id']})
cv_test_data = cv_test_data.join(dest_small, on='srch_destination_id', how='left', rsuffix='dest')
cv_test_data = cv_test_data.fillna(dest_small.mean())
clf = sklearn.linear_model.SGDClassifier(loss='log', n_jobs=4)
clf.fit(cv_train_data, cv_train['hotel_cluster'])
result = clf.predict_proba(cv_test_data)
preds = [heapq.nlargest(5, clf.classes_, row.take) for row in result]
metrics.mapk(cv_target, preds, k=5)
"""
Explanation: I would say that's not bad at all (compared to the random forest classifier in the Dataquest tutorial).
Using destination latent features from the destination description data file.
End of explanation
"""
features = [
'site_name', 'posa_continent', 'user_location_country',
'user_location_region', 'user_location_city',
'is_mobile', 'is_package',
'channel', 'srch_adults_cnt', 'srch_destination_id',
'srch_destination_type_id', 'is_booking', 'cnt',
'hotel_continent', 'hotel_country', 'hotel_market',
'month', 'year', 'is_family',
]
def fit_features(features, train, test):
# Data manipulation - split categorical features
train_data = pd.DataFrame()
test_data = pd.DataFrame()
for feature in features:
if train[feature].dtype == np.dtype('bool'):
train_data[feature] = train[feature]
test_data[feature] = test[feature]
else:
for elem in train[feature].unique():
train_data['{}_{}'.format(feature, elem)] = train[feature] == elem
test_data['{}_{}'.format(feature, elem)] = test[feature] == elem
# Fitting
clf = sklearn.linear_model.SGDClassifier(loss='log', n_jobs=4)
clf.fit(train_data, train['hotel_cluster'])
# Cross-validate the fit
result = clf.predict_proba(test_data)
preds = [heapq.nlargest(5, clf.classes_, row.take) for row in result]
target = [[l] for l in test['hotel_cluster']]
return metrics.mapk(target, preds, k=5)
cv_results = {}
for feature in features:
cv_results[feature] = fit_features([feature], cv_train, cv_test)
print('{}: {}'.format(feature, cv_results[feature]))
sorted(cv_results.items(), key=operator.itemgetter(1), reverse=True)
"""
Explanation: => the destination latent features don't seem to add much predictive value here?!
End of explanation
"""
features2 = [['hotel_market'] + [f] for f in features if f not in ['hotel_market']]
cv_results2 = {}
for feature in features2:
cv_results2[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results2[tuple(feature)]))
sorted(cv_results2.items(), key=operator.itemgetter(1), reverse=True)[:3]
features3 = [['hotel_market', 'srch_destination_id'] + [f] for f in features if f not in ['hotel_market', 'srch_destination_id']]
cv_results3 = {}
for feature in features3:
cv_results3[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results3[tuple(feature)]))
sorted(cv_results3.items(), key=operator.itemgetter(1), reverse=True)[:3]
features4 = [['hotel_market', 'srch_destination_id', 'hotel_country'] + [f] for f in features if f not in ['hotel_market', 'srch_destination_id', 'hotel_country']]
cv_results4 = {}
for feature in features4:
cv_results4[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results4[tuple(feature)]))
sorted(cv_results4.items(), key=operator.itemgetter(1), reverse=True)[:3]
sel_features = ['hotel_market', 'srch_destination_id', 'hotel_country', 'is_package']
features5 = [sel_features + [f] for f in features if f not in sel_features]
cv_results5 = {}
for feature in features5:
cv_results5[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results5[tuple(feature)]))
sorted(cv_results5.items(), key=operator.itemgetter(1), reverse=True)[:3]
sel_features = ['hotel_market', 'srch_destination_id', 'hotel_country', 'is_package', 'is_booking']
features6 = [sel_features + [f] for f in features if f not in sel_features]
cv_results6 = {}
for feature in features6:
cv_results6[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results6[tuple(feature)]))
sorted(cv_results6.items(), key=operator.itemgetter(1), reverse=True)[:3]
sel_features = ['hotel_market', 'srch_destination_id', 'hotel_country', 'is_package', 'is_booking', 'posa_continent']
features7 = [sel_features + [f] for f in features if f not in sel_features]
cv_results7 = {}
for feature in features7:
cv_results7[tuple(feature)] = fit_features(feature, cv_train, cv_test)
print('{}: {}'.format(feature, cv_results7[tuple(feature)]))
sorted(cv_results7.items(), key=operator.itemgetter(1), reverse=True)[:3]
"""
Explanation: The best single predictor of a hotel cluster seems to be hotel_market.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session03/Day3/MapReduce.ipynb
|
mit
|
import numpy as np
from functools import reduce  # reduce is a builtin in Python 2 but lives in functools in Python 3
def mapper(arr):
return np.sum(arr)
def reducer(x, y):
return x + y
a = [1, 12, 3]
b = [4, 12, 6, 3]
c = [8, 1, 12, 11, 12, 2]
inputData = [a, b, c]
# Find the sum of all the numbers:
intermediate = map(mapper, inputData)
reduce(reducer, intermediate)
"""
Explanation: Data Management Part 2: Map Reduce
Version 0.1
Problem 2 has been adapted from a homework developed by Bill Howe at the University of Washington department of Computer Science and Engineering. He says:
In this assignment, you will be designing and implementing MapReduce algorithms for a variety of common data processing tasks. The MapReduce programming model (and a corresponding system) was proposed in a 2004 paper from a team at Google as a simpler abstraction for processing very large datasets in parallel. The goal of this assignment is to give you experience “thinking in MapReduce.” We will be using small datasets that you can inspect directly to determine the correctness of your results and to internalize how MapReduce works.
On Friday, we'll do a demo of a MapReduce-based system to process the large datasets for which it was designed.
Problem 1:
python builtins, map, reduce, and filter
Recall yesterday's challenge problem, we define a function that returned true if a triangle was smaller than some threshold and False otherwise. We filtered the triangles as follows:
idx = [isTriangleLargerThan(triangle) for triangle in triangles]
onlySmallTriangles = triangles[idx]
You could also do this with the map function:
idx = map(isTriangleLargerThan, triangles)
onlySmallTriangles = triangles[idx]
or filter:
onlySmallTriangles = filter(isTriangleLargerThan, triangles)
The following code example is how we'd use them to compute a sum of 3 partitions. Pretend that the 3 lists are on different nodes. :)
_Note 1) this is operating on a set of values rather than key/value pairs (which we'll introduce in Problem 2)._
Note 2) Yes, this is contrived. In real life, you wouldn't go through this trouble to compute a simple sum, but it is a warm up for Problem 2
End of explanation
"""
def mapper(arr):
# COMPLETE
def reducer(x, y):
# COMPLETE
intermediate = map(mapper, inputData)
reduce(reducer, intermediate)
"""
Explanation: Problem 1a) Re-write the mapper and reducer to return the maximum number in all 3 lists.
End of explanation
"""
DATA_DIR = './data' # Set your path to the data files
import json
import os
import sys
class MapReduce:
def __init__(self):
self.intermediate = {}
self.result = []
def emit_intermediate(self, key, value):
self.intermediate.setdefault(key, [])
self.intermediate[key].append(value)
def emit(self, value):
self.result.append(value)
def execute(self, data, mapper, reducer):
for line in data:
record = json.loads(line)
mapper(record)
for key in self.intermediate:
reducer(key, self.intermediate[key])
jenc = json.JSONEncoder()
for item in self.result:
print(jenc.encode(item))
"""
Explanation: Problem 1b)
How would you use this to compute the MEAN of the input data.
Problem 1c)
Think about how you would adapt this this to compute the MEDIAN of the input data. Do not implement it today! If it seems hard, it is because it is.
What special properties do SUM, MAX, MEAN have that make it trivial to represent in MapReduce?
Problem 2)
Let's go through a more complete example. The following MapReduce class faithfully implements the MapReduce programming model, but it executes entirely on one processor -- it does not involve parallel computation.
Setup
First, download the data:
$ curl -O https://lsst-web.ncsa.illinois.edu/~yusra/escience_mr/books.json
$ curl -O https://lsst-web.ncsa.illinois.edu/~yusra/escience_mr/records.json
End of explanation
"""
# Part 1
mr = MapReduce()
# Part 2
def mapper(record):
# key: document identifier
# value: document contents
key = record[0]
value = record[1]
words = value.split()
for w in words:
mr.emit_intermediate(w, 1)
# Part 3
def reducer(key, list_of_values):
# key: word
# value: list of occurrence counts
total = 0
for v in list_of_values:
total += v
mr.emit((key, total))
# Part 4
inputdata = open(os.path.join(DATA_DIR, "books.json"))
mr.execute(inputdata, mapper, reducer)
"""
Explanation: Here is the word count example discussed in class implemented as a MapReduce program using the framework:
End of explanation
"""
mr = MapReduce()
def mapper(record):
# COMPELTE
def reducer(key, list_of_values):
# COMPLETE
inputdata = open(os.path.join(DATA_DIR, "books.json"))
mr.execute(inputdata, mapper, reducer)
"""
Explanation: Probelm 2a)
Create an Inverted index. Given a set of documents, an inverted index is a dictionary where each word is associated with a list of the document identifiers in which that word appears.
Mapper Input
The input is a 2 element list: [document_id, text], where document_id is a string representing a document identifier and text is a string representing the text of the document. The document text may have words in upper or lower case and may contain punctuation. You should treat each token as if it was a valid word; that is, you can just use value.split() to tokenize the string.
Reducer Output
The output should be a (word, document ID list) tuple where word is a String and document ID list is a list of Strings like:
["all", ["milton-paradise.txt", "blake-poems.txt", "melville-moby_dick.txt"]]
["Rossmore", ["edgeworth-parents.txt"]]
["Consumptive", ["melville-moby_dick.txt"]]
["forbidden", ["milton-paradise.txt"]]
["child", ["blake-poems.txt"]]
["eldest", ["edgeworth-parents.txt"]]
["four", ["edgeworth-parents.txt"]]
["Caesar", ["shakespeare-caesar.txt"]]
["winds", ["whitman-leaves.txt"]]
["Moses", ["bible-kjv.txt"]]
["children", ["edgeworth-parents.txt"]]
["seemed", ["chesterton-ball.txt", "austen-emma.txt"]]
etc...
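A sketch of one possible mapper/reducer pair (assuming the same mr = MapReduce() scaffolding as in the word count example):
def mapper(record):
    document_id, text = record
    for word in text.split():
        mr.emit_intermediate(word, document_id)
def reducer(key, list_of_values):
    doc_ids = []
    for doc_id in list_of_values:
        if doc_id not in doc_ids:   # keep each document only once
            doc_ids.append(doc_id)
    mr.emit((key, doc_ids))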
End of explanation
"""
mr = MapReduce()
def mapper(record):
# COMPLETE
def reducer(key, list_of_values):
# COMPLETE
inputdata = open(os.path.join(DATA_DIR, "records.json"))
mr.execute(inputdata, mapper, reducer)
"""
Explanation: Challenge Problem
Implement a relational join as a MapReduce query
Consider the following query:
SELECT *
FROM Orders, LineItem
WHERE Order.order_id = LineItem.order_id
Your MapReduce query should produce the same result as this SQL query executed against an appropriate database. You can consider the two input tables, Order and LineItem, as one big concatenated bag of records that will be processed by the map function record by record.
Map Input
Each input record is a list of strings representing a tuple in the database. Each list element corresponds to a different attribute of the table
The first item (index 0) in each record is a string that identifies the table the record originates from. This field has two possible values:
"line_item" indicates that the record is a line item.
"order" indicates that the record is an order.
The second element (index 1) in each record is the order_id. <--- JOIN ON THIS ELEMENT
LineItem records have 17 attributes including the identifier string.
Order records have 10 elements including the identifier string.
Reduce Output
The output should be a joined record: a single list of length 27 that contains the attributes from the order record followed by the fields from the line item record. Each list element should be a string like
["order", "32", "130057", "O", "208660.75", "1995-07-16", "2-HIGH", "Clerk#000000616", "0", "ise blithely bold, regular requests. quickly unusual dep", "line_item", "32", "82704", "7721", "1", "28", "47227.60", "0.05", "0.08", "N", "O", "1995-10-23", "1995-08-27", "1995-10-26", "TAKE BACK RETURN", "TRUCK", "sleep quickly. req"]
["order", "32", "130057", "O", "208660.75", "1995-07-16", "2-HIGH", "Clerk#000000616", "0", "ise blithely bold, regular requests. quickly unusual dep", "line_item", "32", "197921", "441", "2", "32", "64605.44", "0.02", "0.00", "N", "O", "1995-08-14", "1995-10-07", "1995-08-27", "COLLECT COD", "AIR", "lithely regular deposits. fluffily "]
["order", "32", "130057", "O", "208660.75", "1995-07-16", "2-HIGH", "Clerk#000000616", "0", "ise blithely bold, regular requests. quickly unusual dep", "line_item", "32", "44161", "6666", "3", "2", "2210.32", "0.09", "0.02", "N", "O", "1995-08-07", "1995-10-07", "1995-08-23", "DELIVER IN PERSON", "AIR", " express accounts wake according to the"]
["order", "32", "130057", "O", "208660.75", "1995-07-16", "2-HIGH", "Clerk#000000616", "0", "ise blithely bold, regular requests. quickly unusual dep", "line_item", "32", "2743", "7744", "4", "4", "6582.96", "0.09", "0.03", "N", "O", "1995-08-04", "1995-10-01", "1995-09-03", "NONE", "REG AIR", "e slyly final pac"]
End of explanation
"""
|
BrownDwarf/ApJdataFrames
|
notebooks/Chapman2009.ipynb
|
mit
|
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: ApJdataFrames Chapman 2009
Title: THE MID-INFRARED EXTINCTION LAW IN THE OPHIUCHUS, PERSEUS, AND SERPENS MOLECULAR CLOUDS
Authors: Nicholas L. Chapman, Lee G Mundy, Shih-Ping Lai, and Neal J Evans
Data is from this paper:
http://iopscience.iop.org/0004-637X/690/1/496/
End of explanation
"""
import pandas as pd
tbl1 = pd.read_csv("http://iopscience.iop.org/0004-637X/690/1/496/suppdata/apj291883t3_ascii.txt",
skiprows=[0,1,2,4], sep='\t', header=0, na_values=' ... ', skipfooter=6)
del tbl1["Unnamed: 6"]
tbl1
"""
Explanation: Table 1- Measured Quantities for PMS Candidates with Observed Spectra
End of explanation
"""
|
robertoalotufo/ia898
|
src/isolines.ipynb
|
mit
|
import numpy as np
def isolines(f, nc=10, n=1):
from colormap import colormap
from applylut import applylut
maxi = int(np.ceil(f.max()))
mini = int(np.floor(f.min()))
d = int(np.ceil(1.*(maxi-mini)/nc))
m = np.zeros((d,1))
m[0:n,:] = 1
m = np.resize(m, (maxi-mini, 1))
m = np.concatenate((np.zeros((mini,1)), m))
m = np.concatenate((m, np.zeros((256-maxi,1))))
m = np.concatenate((m,m,m), 1)
ct = m*colormap('hsv') + (1-m)*colormap('gray')
g = applylut(f, ct)
return g
"""
Explanation: Function isolines
Synopse
Isolines of a grayscale image.
g = isolines(f, nc=10, n=1)
g: Image.
f: Image. Input image.
nc: Double. Number of colors.
n: Double. Number of pixels per isoline.
Description
Shows lines where the pixels have the same intensity, each drawn with a unique color.
End of explanation
"""
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python isolines.ipynb
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Examples
End of explanation
"""
if testing:
f = ia.normalize(ia.bwlp([150,150], 4, 1), [0,255])
f = f.astype('uint8')
g = ia.isolines(f, 10, 3)
g = g.astype('uint8')
ia.adshow(f)
ia.adshow(g)
"""
Explanation: Example 1
End of explanation
"""
|
vinitsamel/udacitydeeplearning
|
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
|
mit
|
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
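For reference, one possible body for build_model (a sketch only; the layer sizes and learning rate are arbitrary choices, not the required answer) is:
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 32, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')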
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
"""
|
wittawatj/kernel-gof
|
ipynb/gof_me_test.ipynb
|
mit
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import freqopttest.tst as tst
import kgof
import kgof.data as data
import kgof.density as density
import kgof.goftest as gof
import kgof.intertst as tgof
import kgof.kernel as ker
import kgof.util as util
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 18
}
plt.rc('font', **font)
plt.rc('lines', linewidth=2)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
"""
Explanation: A notebook to test and demonstrate the METest of Jitkrittum et al., 2016 (NIPS 2016) used as a goodness-of-fit test
End of explanation
"""
# true p
seed = 20
d = 1
# sample
n = 800
alpha = 0.05
# number of test locations to use
J = 2
mean = np.zeros(d)
variance = 1
p = density.IsotropicNormal(mean, variance)
q_mean = mean.copy()
q_variance = variance
# q_mean[0] = 1
# ds = data.DSIsotropicNormal(q_mean, q_variance)
q_means = np.array([ [0], [0]])
q_variances = np.array([0.01, 1])
ds = data.DSIsoGaussianMixture(q_means, q_variances, pmix=[0.2, 0.8])
# ds = data.DSIsoGaussianMixture(p_means, p_variances)
dat = ds.sample(n, seed=seed+2)
tr, te = dat.split_tr_te(tr_proportion=0.5, seed=2)
# Test
Xtr = tr.data()
sig2 = util.meddistance(Xtr, subsample=1000)**2
# random test locations
V0 = util.fit_gaussian_draw(Xtr, J, seed=seed+1)
me_rand = tgof.GaussMETest(p, sig2, V0, alpha=alpha, seed=seed)
me_rand_result = me_rand.perform_test(te)
me_rand_result
#kstein.compute_stat(dat)
"""
Explanation: Test with random test locations
End of explanation
"""
op = {'n_test_locs': J, 'seed': seed+5, 'max_iter': 200,
'batch_proportion': 1.0, 'locs_step_size': 1.0,
'gwidth_step_size': 0.1, 'tol_fun': 1e-4}
# optimize on the training set
me_opt = tgof.GaussMETestOpt(p, n_locs=J, tr_proportion=0.5, alpha=alpha, seed=seed+1)
# Give the ME test the full data. Internally the data are divided into tr and te.
me_opt_result = me_opt.perform_test(dat, op)
me_opt_result
"""
Explanation: Test with optimized test locations
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization
|
Machine Learning Foundations: A Case Study Approach/Assignment_three/.ipynb_checkpoints/Document retrieval-checkpoint.ipynb
|
mit
|
import graphlab
graphlab.product_key.set_product_key("7348-CE53-3B3E-DBED-152B-828E-A99E-F303")
"""
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
"""
people = graphlab.SFrame('people_wiki.gl/people_wiki.gl')
"""
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
"""
people.head()
len(people)
"""
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
"""
Explanation: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
End of explanation
"""
clooney = people[people['name'] == 'George Clooney']
clooney['text']
"""
Explanation: Exploring the entry for actor George Clooney
End of explanation
"""
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
"""
Explanation: Get the word counts for Obama article
End of explanation
"""
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
"""
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
"""
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
"""
Explanation: Sorting the word counts to show most common words at the top
End of explanation
"""
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
"""
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
End of explanation
"""
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
"""
Explanation: Examine the TF-IDF for the Obama article
End of explanation
"""
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
"""
Explanation: Words with the highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
"""
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
"""
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
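For intuition, here is a minimal numpy sketch of the same quantity, independent of GraphLab (the helper name is hypothetical, not part of the assignment's API):
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity; assumes both vectors are non-zero
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# vectors pointing the same way -> distance ~0 (up to floating point);
# orthogonal vectors -> distance 1
print(cosine_distance(np.array([1.0, 2.0, 0.0]), np.array([2.0, 4.0, 0.0])))
print(cosine_distance(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))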
End of explanation
"""
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
"""
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
"""
knn_model.query(obama)
"""
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
"""
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton = people[people['name'] == 'Elton John']
elton['word_count'] = graphlab.text_analytics.count_words(elton['text'])
print elton['word_count']
elton_word_count_table = elton[['word_count']].stack('word_count', new_column_name = ['word','count'])
elton_word_count_table.sort('count',ascending=False)
elton[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
victoria = people[people['name'] == 'Victoria Beckham']
graphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0])
paul = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0],paul['tfidf'][0])
knn_model.query(elton, k=None).print_rows(num_rows=30)
kwc_model = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name')
kwc_model.query(elton)
"""
Explanation: As we can see, president Obama's article is closest to the one about his vice president, Biden, and to those of other politicians.
Other examples of document retrieval
End of explanation
"""
|
asurve/arvind-sysml
|
scripts/staging/SystemML-NN/examples/Example - MNIST Softmax Classifier.ipynb
|
apache-2.0
|
# Create a SystemML MLContext object
from systemml import MLContext, dml
ml = MLContext(sc)
"""
Explanation: Quick Setup
End of explanation
"""
%%sh
mkdir -p data/mnist/
cd data/mnist/
curl -O http://pjreddie.com/media/files/mnist_train.csv
curl -O http://pjreddie.com/media/files/mnist_test.csv
"""
Explanation: Download Data - MNIST
The MNIST dataset contains labeled images of handwritten digits, where each example is a 28x28 pixel image of grayscale values in the range [0,255] stretched out as 784 pixels, and each label is one of 10 possible digits in [0,9]. Here, we download 60,000 training examples, and 10,000 test examples, where the format is "label, pixel_1, pixel_2, ..., pixel_n".
End of explanation
"""
training = """
source("mnist_softmax.dml") as mnist_softmax
# Read training data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
images = data[,2:ncol(data)]
labels = data[,1]
# Scale images to [0,1], and one-hot encode the labels
images = images / 255.0
labels = table(seq(1, n), labels+1, n, 10)
# Split into training (55,000 examples) and validation (5,000 examples)
X = images[5001:nrow(images),]
X_val = images[1:5000,]
y = labels[5001:nrow(images),]
y_val = labels[1:5000,]
# Train
[W, b] = mnist_softmax::train(X, y, X_val, y_val)
"""
script = dml(training).input("$data", "data/mnist/mnist_train.csv").output("W", "b")
W, b = ml.execute(script).get("W", "b")
"""
Explanation: SystemML Softmax Model
1. Train
End of explanation
"""
testing = """
source("mnist_softmax.dml") as mnist_softmax
# Read test data
data = read($data, format="csv")
n = nrow(data)
# Extract images and labels
X_test = data[,2:ncol(data)]
y_test = data[,1]
# Scale images to [0,1], and one-hot encode the labels
X_test = X_test / 255.0
y_test = table(seq(1, n), y_test+1, n, 10)
# Eval on test set
probs = mnist_softmax::predict(X_test, W, b)
[loss, accuracy] = mnist_softmax::eval(probs, y_test)
print("Test Accuracy: " + accuracy)
"""
script = dml(testing).input("$data", "data/mnist/mnist_test.csv", W=W, b=b)
ml.execute(script)
"""
Explanation: 2. Compute Test Accuracy
End of explanation
"""
W_df = W.toDF()
b_df = b.toDF()
W_df, b_df
"""
Explanation: 3. Extract Model Into Spark DataFrames For Future Use
End of explanation
"""
|
kyleabeauchamp/mdtraj
|
examples/ramachandran-plot.ipynb
|
lgpl-2.1
|
traj = md.load('ala2.h5')
atoms, bonds = traj.topology.to_dataframe()
atoms
"""
Explanation: Let's load up the trajectory that we simulated in a previous example
End of explanation
"""
psi_indices, phi_indices = [6, 8, 14, 16], [4, 6, 8, 14]
angles = md.compute_dihedrals(traj, [phi_indices, psi_indices])
"""
Explanation: Because alanine dipeptide is a little nonstandard in the sense that it's basically dominated by the ACE and NME capping residues, we need to find the indices of the atoms involved in the phi and psi angles somewhat manually. For standard cases, see compute_phi() and compute_psi() for easier solutions that don't require you to manually find the indices of each dihedral angle.
In this case, we're just specifying the four atoms that together parameterize the phi and psi dihedral angles.
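As a hedged aside, the convenience functions mentioned above can be used directly for standard residues (assuming mdtraj is imported as md earlier in this notebook; exact return shapes may vary with the mdtraj version):
# Sketch only: each call returns (atom_indices, angles_in_radians)
phi_atom_indices, phi_angles = md.compute_phi(traj)
psi_atom_indices, psi_angles = md.compute_psi(traj)
print(phi_angles.shape)
print(psi_angles.shape)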
End of explanation
"""
from pylab import *
from math import pi
figure()
title('Dihedral Map: Alanine dipeptide')
scatter(angles[:, 0], angles[:, 1], marker='x', c=traj.time)
cbar = colorbar()
cbar.set_label('Time [ps]')
xlabel(r'$\Phi$ Angle [radians]')
xlim(-pi, pi)
ylabel(r'$\Psi$ Angle [radians]')
ylim(-pi, pi)
"""
Explanation: Let's plot our dihedral angles in a scatter plot using matplotlib. What conformational states of alanine dipeptide did we sample?
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.13.0/examples/notebooks/generated/exponential_smoothing.ipynb
|
bsd-3-clause
|
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt
%matplotlib inline
data = [
446.6565,
454.4733,
455.663,
423.6322,
456.2713,
440.5881,
425.3325,
485.1494,
506.0482,
526.792,
514.2689,
494.211,
]
index = pd.date_range(start="1996", end="2008", freq="A")
oildata = pd.Series(data, index)
data = [
17.5534,
21.86,
23.8866,
26.9293,
26.8885,
28.8314,
30.0751,
30.9535,
30.1857,
31.5797,
32.5776,
33.4774,
39.0216,
41.3864,
41.5966,
]
index = pd.date_range(start="1990", end="2005", freq="A")
air = pd.Series(data, index)
data = [
263.9177,
268.3072,
260.6626,
266.6394,
277.5158,
283.834,
290.309,
292.4742,
300.8307,
309.2867,
318.3311,
329.3724,
338.884,
339.2441,
328.6006,
314.2554,
314.4597,
321.4138,
329.7893,
346.3852,
352.2979,
348.3705,
417.5629,
417.1236,
417.7495,
412.2339,
411.9468,
394.6971,
401.4993,
408.2705,
414.2428,
]
index = pd.date_range(start="1970", end="2001", freq="A")
livestock2 = pd.Series(data, index)
data = [407.9979, 403.4608, 413.8249, 428.105, 445.3387, 452.9942, 455.7402]
index = pd.date_range(start="2001", end="2008", freq="A")
livestock3 = pd.Series(data, index)
data = [
41.7275,
24.0418,
32.3281,
37.3287,
46.2132,
29.3463,
36.4829,
42.9777,
48.9015,
31.1802,
37.7179,
40.4202,
51.2069,
31.8872,
40.9783,
43.7725,
55.5586,
33.8509,
42.0764,
45.6423,
59.7668,
35.1919,
44.3197,
47.9137,
]
index = pd.date_range(start="2005", end="2010-Q4", freq="QS-OCT")
aust = pd.Series(data, index)
"""
Explanation: Exponential smoothing
Let us consider chapter 7 of the excellent treatise on the subject of Exponential Smoothing By Hyndman and Athanasopoulos [1].
We will work through all the examples in the chapter as they unfold.
[1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2014.
Loading data
First we load some data. We have included the R data in the notebook for expedience.
End of explanation
"""
ax = oildata.plot()
ax.set_xlabel("Year")
ax.set_ylabel("Oil (millions of tonnes)")
print("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.")
"""
Explanation: Simple Exponential Smoothing
Let's use Simple Exponential Smoothing to forecast the oil data below.
End of explanation
"""
fit1 = SimpleExpSmoothing(oildata, initialization_method="heuristic").fit(
smoothing_level=0.2, optimized=False
)
fcast1 = fit1.forecast(3).rename(r"$\alpha=0.2$")
fit2 = SimpleExpSmoothing(oildata, initialization_method="heuristic").fit(
smoothing_level=0.6, optimized=False
)
fcast2 = fit2.forecast(3).rename(r"$\alpha=0.6$")
fit3 = SimpleExpSmoothing(oildata, initialization_method="estimated").fit()
fcast3 = fit3.forecast(3).rename(r"$\alpha=%s$" % fit3.model.params["smoothing_level"])
plt.figure(figsize=(12, 8))
plt.plot(oildata, marker="o", color="black")
plt.plot(fit1.fittedvalues, marker="o", color="blue")
(line1,) = plt.plot(fcast1, marker="o", color="blue")
plt.plot(fit2.fittedvalues, marker="o", color="red")
(line2,) = plt.plot(fcast2, marker="o", color="red")
plt.plot(fit3.fittedvalues, marker="o", color="green")
(line3,) = plt.plot(fcast3, marker="o", color="green")
plt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name])
"""
Explanation: Here we run three variants of simple exponential smoothing:
1. In fit1 we do not use the auto optimization but instead choose to explicitly provide the model with the $\alpha=0.2$ parameter
2. In fit2 as above we choose an $\alpha=0.6$
3. In fit3 we allow statsmodels to automatically find an optimized $\alpha$ value for us. This is the recommended approach.
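For intuition, a hand-rolled sketch of the recursion these fits estimate, $\ell_t = \alpha y_t + (1-\alpha)\ell_{t-1}$, assuming for simplicity that the initial level is the first observation (statsmodels' heuristic/estimated initializations are more involved, so the fitted values will not match exactly):
def ses_sketch(y, alpha):
    # one-step-ahead simple exponential smoothing with l_0 = y_0 (simplifying assumption)
    level = y[0]
    fitted = []
    for obs in y:
        fitted.append(level)               # forecast made before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return fitted, level

_, last_level = ses_sketch(oildata.values, alpha=0.2)
print(last_level)  # the flat forecast used for every future horizon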
End of explanation
"""
fit1 = Holt(air, initialization_method="estimated").fit(
smoothing_level=0.8, smoothing_trend=0.2, optimized=False
)
fcast1 = fit1.forecast(5).rename("Holt's linear trend")
fit2 = Holt(air, exponential=True, initialization_method="estimated").fit(
smoothing_level=0.8, smoothing_trend=0.2, optimized=False
)
fcast2 = fit2.forecast(5).rename("Exponential trend")
fit3 = Holt(air, damped_trend=True, initialization_method="estimated").fit(
smoothing_level=0.8, smoothing_trend=0.2
)
fcast3 = fit3.forecast(5).rename("Additive damped trend")
plt.figure(figsize=(12, 8))
plt.plot(air, marker="o", color="black")
plt.plot(fit1.fittedvalues, color="blue")
(line1,) = plt.plot(fcast1, marker="o", color="blue")
plt.plot(fit2.fittedvalues, color="red")
(line2,) = plt.plot(fcast2, marker="o", color="red")
plt.plot(fit3.fittedvalues, color="green")
(line3,) = plt.plot(fcast3, marker="o", color="green")
plt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name])
"""
Explanation: Holt's Method
Let's take a look at another example.
This time we use air pollution data and Holt's method.
We will fit three examples again.
1. In fit1 we again choose not to use the optimizer and provide explicit values for $\alpha=0.8$ and $\beta=0.2$
2. In fit2 we do the same as in fit1 but choose to use an exponential model rather than a Holt's additive model.
3. In fit3 we use a damped version of Holt's additive model but allow the damping parameter $\phi$ to be optimized while fixing the values for $\alpha=0.8$ and $\beta=0.2$
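For reference, the additive (linear trend) recursions that these parameters control are, in the usual notation (the exponential and damped variants modify the trend term accordingly):
$$\ell_t = \alpha y_t + (1-\alpha)(\ell_{t-1} + b_{t-1}), \qquad b_t = \beta(\ell_t - \ell_{t-1}) + (1-\beta)\,b_{t-1}, \qquad \hat{y}_{t+h|t} = \ell_t + h\,b_t$$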
End of explanation
"""
fit1 = SimpleExpSmoothing(livestock2, initialization_method="estimated").fit()
fit2 = Holt(livestock2, initialization_method="estimated").fit()
fit3 = Holt(livestock2, exponential=True, initialization_method="estimated").fit()
fit4 = Holt(livestock2, damped_trend=True, initialization_method="estimated").fit(
damping_trend=0.98
)
fit5 = Holt(
livestock2, exponential=True, damped_trend=True, initialization_method="estimated"
).fit()
params = [
"smoothing_level",
"smoothing_trend",
"damping_trend",
"initial_level",
"initial_trend",
]
results = pd.DataFrame(
index=[r"$\alpha$", r"$\beta$", r"$\phi$", r"$l_0$", "$b_0$", "SSE"],
columns=["SES", "Holt's", "Exponential", "Additive", "Multiplicative"],
)
results["SES"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Holt's"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Exponential"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Additive"] = [fit4.params[p] for p in params] + [fit4.sse]
results["Multiplicative"] = [fit5.params[p] for p in params] + [fit5.sse]
results
"""
Explanation: Seasonally adjusted data
Let's look at some seasonally adjusted livestock data. We fit five Holt's models.
The below table allows us to compare results when we use exponential versus additive and damped versus non-damped.
Note: fit4 does not allow the parameter $\phi$ to be optimized by providing a fixed value of $\phi=0.98$
End of explanation
"""
for fit in [fit2, fit4]:
pd.DataFrame(np.c_[fit.level, fit.trend]).rename(
columns={0: "level", 1: "slope"}
).plot(subplots=True)
plt.show()
print(
"Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method."
)
"""
Explanation: Plots of Seasonally Adjusted Data
The following plots allow us to evaluate the level and slope/trend components of the above table's fits.
End of explanation
"""
fit1 = SimpleExpSmoothing(livestock2, initialization_method="estimated").fit()
fcast1 = fit1.forecast(9).rename("SES")
fit2 = Holt(livestock2, initialization_method="estimated").fit()
fcast2 = fit2.forecast(9).rename("Holt's")
fit3 = Holt(livestock2, exponential=True, initialization_method="estimated").fit()
fcast3 = fit3.forecast(9).rename("Exponential")
fit4 = Holt(livestock2, damped_trend=True, initialization_method="estimated").fit(
damping_trend=0.98
)
fcast4 = fit4.forecast(9).rename("Additive Damped")
fit5 = Holt(
livestock2, exponential=True, damped_trend=True, initialization_method="estimated"
).fit()
fcast5 = fit5.forecast(9).rename("Multiplicative Damped")
ax = livestock2.plot(color="black", marker="o", figsize=(12, 8))
livestock3.plot(ax=ax, color="black", marker="o", legend=False)
fcast1.plot(ax=ax, color="red", legend=True)
fcast2.plot(ax=ax, color="green", legend=True)
fcast3.plot(ax=ax, color="blue", legend=True)
fcast4.plot(ax=ax, color="cyan", legend=True)
fcast5.plot(ax=ax, color="magenta", legend=True)
ax.set_ylabel("Livestock, sheep in Asia (millions)")
plt.show()
print(
"Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods."
)
"""
Explanation: Comparison
Here we plot a comparison of Simple Exponential Smoothing and Holt's methods for various additive, exponential and damped combinations. All of the models' parameters will be optimized by statsmodels.
End of explanation
"""
fit1 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="add",
use_boxcox=True,
initialization_method="estimated",
).fit()
fit2 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
use_boxcox=True,
initialization_method="estimated",
).fit()
fit3 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="add",
damped_trend=True,
use_boxcox=True,
initialization_method="estimated",
).fit()
fit4 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
damped_trend=True,
use_boxcox=True,
initialization_method="estimated",
).fit()
results = pd.DataFrame(
index=[r"$\alpha$", r"$\beta$", r"$\phi$", r"$\gamma$", r"$l_0$", "$b_0$", "SSE"]
)
params = [
"smoothing_level",
"smoothing_trend",
"damping_trend",
"smoothing_seasonal",
"initial_level",
"initial_trend",
]
results["Additive"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Additive Dam"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Multiplica Dam"] = [fit4.params[p] for p in params] + [fit4.sse]
ax = aust.plot(
figsize=(10, 6),
marker="o",
color="black",
title="Forecasts from Holt-Winters' multiplicative method",
)
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit1.fittedvalues.plot(ax=ax, style="--", color="red")
fit2.fittedvalues.plot(ax=ax, style="--", color="green")
fit1.forecast(8).rename("Holt-Winters (add-add-seasonal)").plot(
ax=ax, style="--", marker="o", color="red", legend=True
)
fit2.forecast(8).rename("Holt-Winters (add-mul-seasonal)").plot(
ax=ax, style="--", marker="o", color="green", legend=True
)
plt.show()
print(
"Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality."
)
results
"""
Explanation: Holt-Winters Seasonal
Finally we are able to run full Holt-Winters seasonal exponential smoothing, including a trend component and a seasonal component.
statsmodels allows for all of these combinations, as shown in the examples below:
1. fit1 additive trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.
1. fit2 additive trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.
1. fit3 additive damped trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.
1. fit4 additive damped trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.
The plot shows the results and forecast for fit1 and fit2.
The table allows us to compare the results and parameterizations.
End of explanation
"""
fit1 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="add",
initialization_method="estimated",
).fit()
fit2 = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
initialization_method="estimated",
).fit()
df = pd.DataFrame(
np.c_[aust, fit1.level, fit1.trend, fit1.season, fit1.fittedvalues],
columns=[r"$y_t$", r"$l_t$", r"$b_t$", r"$s_t$", r"$\hat{y}_t$"],
index=aust.index,
)
df.append(fit1.forecast(8).rename(r"$\hat{y}_t$").to_frame(), sort=True)
df = pd.DataFrame(
np.c_[aust, fit2.level, fit2.trend, fit2.season, fit2.fittedvalues],
columns=[r"$y_t$", r"$l_t$", r"$b_t$", r"$s_t$", r"$\hat{y}_t$"],
index=aust.index,
)
df.append(fit2.forecast(8).rename(r"$\hat{y}_t$").to_frame(), sort=True)
"""
Explanation: The Internals
It is possible to get at the internals of the Exponential Smoothing models.
Here we show some tables that allow you to view side by side the original values $y_t$, the level $l_t$, the trend $b_t$, the season $s_t$ and the fitted values $\hat{y}_t$. Note that these values only have meaningful values in the space of your original data if the fit is performed without a Box-Cox transformation.
End of explanation
"""
states1 = pd.DataFrame(
np.c_[fit1.level, fit1.trend, fit1.season],
columns=["level", "slope", "seasonal"],
index=aust.index,
)
states2 = pd.DataFrame(
np.c_[fit2.level, fit2.trend, fit2.season],
columns=["level", "slope", "seasonal"],
index=aust.index,
)
fig, [[ax1, ax4], [ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12, 8))
states1[["level"]].plot(ax=ax1)
states1[["slope"]].plot(ax=ax2)
states1[["seasonal"]].plot(ax=ax3)
states2[["level"]].plot(ax=ax4)
states2[["slope"]].plot(ax=ax5)
states2[["seasonal"]].plot(ax=ax6)
plt.show()
"""
Explanation: Finally, let's look at the levels, slopes/trends and seasonal components of the models.
End of explanation
"""
fit = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
initialization_method="estimated",
).fit()
simulations = fit.simulate(8, repetitions=100, error="mul")
ax = aust.plot(
figsize=(10, 6),
marker="o",
color="black",
title="Forecasts and simulations from Holt-Winters' multiplicative method",
)
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit.fittedvalues.plot(ax=ax, style="--", color="green")
simulations.plot(ax=ax, style="-", alpha=0.05, color="grey", legend=False)
fit.forecast(8).rename("Holt-Winters (add-mul-seasonal)").plot(
ax=ax, style="--", marker="o", color="green", legend=True
)
plt.show()
"""
Explanation: Simulations and Confidence Intervals
By using a state space formulation, we can perform simulations of future values. The mathematical details are described in Hyndman and Athanasopoulos [2] and in the documentation of HoltWintersResults.simulate.
Similar to the example in [2], we use the model with additive trend, multiplicative seasonality, and multiplicative error. We simulate 8 steps into the future and perform 100 repetitions. As can be seen in the figure below, the simulated paths match the forecast values quite well.
[2] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice, 2nd edition. OTexts, 2018.
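As a sketch of how the simulated paths can be turned into approximate prediction intervals (assuming, as appears to be the case above, that simulations is a DataFrame of shape (horizon, repetitions)):
# Empirical 95% prediction intervals from the simulated paths
lower = simulations.quantile(0.025, axis=1).rename("2.5%")
upper = simulations.quantile(0.975, axis=1).rename("97.5%")
print(pd.concat([lower, upper], axis=1))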
End of explanation
"""
fit = ExponentialSmoothing(
aust,
seasonal_periods=4,
trend="add",
seasonal="mul",
initialization_method="estimated",
).fit()
simulations = fit.simulate(
16, anchor="2009-01-01", repetitions=100, error="mul", random_errors="bootstrap"
)
ax = aust.plot(
figsize=(10, 6),
marker="o",
color="black",
title="Forecasts and simulations from Holt-Winters' multiplicative method",
)
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit.fittedvalues.plot(ax=ax, style="--", color="green")
simulations.plot(ax=ax, style="-", alpha=0.05, color="grey", legend=False)
fit.forecast(8).rename("Holt-Winters (add-mul-seasonal)").plot(
ax=ax, style="--", marker="o", color="green", legend=True
)
plt.show()
"""
Explanation: Simulations can also be started at different points in time, and there are multiple options for choosing the random noise.
End of explanation
"""
|
AtmaMani/pyChakras
|
udemy_ml_bootcamp/Machine Learning Sections/Natural-Language-Processing/NLP (Natural Language Processing) with Python.ipynb
|
mit
|
# ONLY RUN THIS CELL IF YOU NEED
# TO DOWNLOAD NLTK AND HAVE CONDA
# WATCH THE VIDEO FOR FULL INSTRUCTIONS ON THIS STEP
# Uncomment the code below and run:
# !conda install nltk #This installs nltk
# import nltk # Imports the library
# nltk.download() #Download the necessary datasets
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
NLP (Natural Language Processing) with Python
This is the notebook that goes along with the NLP video lecture!
In this lecture we will discuss a higher level overview of the basics of Natural Language Processing, which basically consists of combining machine learning techniques with text, and using math and statistics to get that text in a format that the machine learning algorithms can understand!
Once you've completed this lecture you'll have a project using some Yelp Text Data!
Requirements: You will need to have NLTK installed, along with the downloaded corpus for stopwords. To download everything with a conda installation, run the cell below, or refer to the full video lecture.
End of explanation
"""
messages = [line.rstrip() for line in open('smsspamcollection/SMSSpamCollection')]
print(len(messages))
"""
Explanation: Get the Data
We'll be using a dataset from the UCI datasets! This dataset is already located in the folder for this section.
The file we are using contains a collection of more than 5 thousand SMS phone messages. You can check out the readme file for more info.
Let's go ahead and use rstrip() plus a list comprehension to get a list of all the lines of text messages:
End of explanation
"""
for message_no, message in enumerate(messages[:10]):
print(message_no, message)
print('\n')
"""
Explanation: A collection of texts is also sometimes called a "corpus". Let's print the first ten messages and number them using enumerate:
End of explanation
"""
import pandas as pd
"""
Explanation: Due to the spacing we can tell that this is a TSV ("tab separated values") file, where the first column is a label saying whether the given message is a normal message (commonly known as "ham") or "spam". The second column is the message itself. (Note our numbers aren't part of the file, they are just from the enumerate call).
Using these labeled ham and spam examples, we'll train a machine learning model to learn to discriminate between ham/spam automatically. Then, with a trained model, we'll be able to classify arbitrary unlabeled messages as ham or spam.
From the official SciKit Learn documentation, we can visualize our process:
<img src='http://www.astroml.org/sklearn_tutorial/_images/plot_ML_flow_chart_3.png' width=600/>
Instead of parsing TSV manually using Python, we can just take advantage of pandas! Let's go ahead and import it!
End of explanation
"""
messages = pd.read_csv('smsspamcollection/SMSSpamCollection', sep='\t',
names=["label", "message"])
messages.head()
"""
Explanation: We'll use read_csv and make note of the sep argument, we can also specify the desired column names by passing in a list of names.
End of explanation
"""
messages.describe()
"""
Explanation: Exploratory Data Analysis
Let's check out some of the stats with some plots and the built-in methods in pandas!
End of explanation
"""
messages.groupby('label').describe()
"""
Explanation: Let's use groupby to use describe by label, this way we can begin to think about the features that separate ham and spam!
End of explanation
"""
messages['length'] = messages['message'].apply(len)
messages.head()
"""
Explanation: As we continue our analysis we want to start thinking about the features we are going to be using. This goes along with the general idea of feature engineering. The better your domain knowledge on the data, the better your ability to engineer more features from it. Feature engineering is a very large part of spam detection in general. I encourage you to read up on the topic!
Let's make a new column to detect how long the text messages are:
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
messages['length'].plot(bins=50, kind='hist')
"""
Explanation: Data Visualization
Let's visualize this! Let's do the imports:
End of explanation
"""
messages.length.describe()
"""
Explanation: Play around with the bin size! Looks like text length may be a good feature to think about! Let's try to explain why the x-axis goes all the way to 1000ish: this must mean that there is some really long message!
End of explanation
"""
messages[messages['length'] == 910]['message'].iloc[0]
"""
Explanation: Woah! 910 characters, let's use masking to find this message:
End of explanation
"""
messages.hist(column='length', by='label', bins=50,figsize=(12,4))
"""
Explanation: Looks like we have some sort of Romeo sending texts! But let's focus back on the idea of trying to see if message length is a distinguishing feature between ham and spam:
End of explanation
"""
import string
mess = 'Sample message! Notice: it has punctuation.'
# Check characters to see if they are in punctuation
nopunc = [char for char in mess if char not in string.punctuation]
# Join the characters again to form the string.
nopunc = ''.join(nopunc)
"""
Explanation: Very interesting! Through just basic EDA we've been able to discover a trend that spam messages tend to have more characters. (Sorry Romeo!)
Now let's begin to process the data so we can eventually use it with SciKit Learn!
Text Pre-processing
Our main issue with our data is that it is all in text format (strings). The classification algorithms that we've learned about so far will need some sort of numerical feature vector in order to perform the classification task. There are actually many methods to convert a corpus to a vector format. The simplest is the bag-of-words approach, where each unique word in a text will be represented by one number.
In this section we'll convert the raw messages (sequence of characters) into vectors (sequences of numbers).
As a first step, let's write a function that will split a message into its individual words and return a list. We'll also remove very common words, ('the', 'a', etc..). To do this we will take advantage of the NLTK library. It's pretty much the standard library in Python for processing text and has a lot of useful features. We'll only use some of the basic ones here.
Let's create a function that will process the string in the message column, then we can just use apply() in pandas to process all the text in the DataFrame.
First removing punctuation. We can just take advantage of Python's built-in string library to get a quick list of all the possible punctuation:
End of explanation
"""
from nltk.corpus import stopwords
stopwords.words('english')[0:10] # Show some stop words
nopunc.split()
# Now just remove any stopwords
clean_mess = [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
clean_mess
"""
Explanation: Now let's see how to remove stopwords. We can import a list of English stopwords from NLTK (check the documentation for more languages and info).
End of explanation
"""
def text_process(mess):
"""
Takes in a string of text, then performs the following:
1. Remove all punctuation
2. Remove all stopwords
3. Returns a list of the cleaned text
"""
# Check characters to see if they are in punctuation
nopunc = [char for char in mess if char not in string.punctuation]
# Join the characters again to form the string.
nopunc = ''.join(nopunc)
# Now just remove any stopwords
return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
"""
Explanation: Now let's put both of these together in a function to apply it to our DataFrame later on:
End of explanation
"""
messages.head()
"""
Explanation: Here is the original DataFrame again:
End of explanation
"""
# Check to make sure its working
messages['message'].head(5).apply(text_process)
# Show original dataframe
messages.head()
"""
Explanation: Now let's "tokenize" these messages. Tokenization is just the term used to describe the process of converting the normal text strings in to a list of tokens (words that we actually want).
Let's see an example output on one column:
Note:
We may get some warnings or errors for symbols we didn't account for or that weren't in Unicode (like a British pound symbol)
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
"""
Explanation: Continuing Normalization
There are a lot of ways to continue normalizing this text. Such as Stemming or distinguishing by part of speech.
NLTK has lots of built-in tools and great documentation on a lot of these methods. Sometimes they don't work well for text messages due to the way a lot of people tend to use abbreviations or shorthand. For example:
'Nah dawg, IDK! Wut time u headin to da club?'
versus
'No dog, I don't know! What time are you heading to the club?'
Some text normalization methods will have trouble with this type of shorthand and so I'll leave you to explore those more advanced methods through the NLTK book online.
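Purely as an illustration of the stemming idea mentioned above, a minimal sketch with NLTK's PorterStemmer (not used in the rest of this notebook):
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
# maps inflected forms to a (sometimes non-word) common stem
print([stemmer.stem(w) for w in ['running', 'runs', 'easily', 'messages']])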
For now we will just focus on using what we have to convert our list of words to an actual vector that SciKit-Learn can use.
Vectorization
Currently, we have the messages as lists of tokens (also known as lemmas) and now we need to convert each of those messages into a vector that SciKit Learn's algorithm models can work with.
Now we'll convert each message, represented as a list of tokens (lemmas) above, into a vector that machine learning models can understand.
We'll do that in three steps using the bag-of-words model:
Count how many times a word occurs in each message (known as term frequency)
Weigh the counts, so that frequent tokens get lower weight (inverse document frequency)
Normalize the vectors to unit length, to abstract from the original text length (L2 norm)
Let's begin the first step:
Each vector will have as many dimensions as there are unique words in the SMS corpus. We will first use SciKit Learn's CountVectorizer. This model will convert a collection of text documents to a matrix of token counts.
We can imagine this as a 2-dimensional matrix, where one dimension is the entire vocabulary (1 row per word) and the other dimension is the actual documents, in this case a column per text message.
For example:
<table border="1">
<tr>
<th></th> <th>Message 1</th> <th>Message 2</th> <th>...</th> <th>Message N</th>
</tr>
<tr>
<td><b>Word 1 Count</b></td><td>0</td><td>1</td><td>...</td><td>0</td>
</tr>
<tr>
<td><b>Word 2 Count</b></td><td>0</td><td>0</td><td>...</td><td>0</td>
</tr>
<tr>
<td><b>...</b></td> <td>1</td><td>2</td><td>...</td><td>0</td>
</tr>
<tr>
<td><b>Word N Count</b></td> <td>0</td><td>1</td><td>...</td><td>1</td>
</tr>
</table>
Since there are so many messages, we can expect a lot of zero counts for the presence of that word in that document. Because of this, SciKit Learn will output a Sparse Matrix.
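To make the matrix idea concrete, here is a tiny sketch on three toy "messages" using the default tokenizer (our real transformer below plugs in the custom text_process analyzer instead; note that newer scikit-learn versions expose get_feature_names_out() rather than get_feature_names()):
from sklearn.feature_extraction.text import CountVectorizer

toy = ['free prize now', 'call me now', 'free free call']
cv = CountVectorizer()
print(cv.fit_transform(toy).toarray())  # documents in rows, vocabulary in columns
print(cv.get_feature_names())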
End of explanation
"""
# Might take awhile...
bow_transformer = CountVectorizer(analyzer=text_process).fit(messages['message'])
# Print total number of vocab words
print(len(bow_transformer.vocabulary_))
"""
Explanation: There are a lot of arguments and parameters that can be passed to the CountVectorizer. In this case we will just specify the analyzer to be our own previously defined function:
End of explanation
"""
message4 = messages['message'][3]
print(message4)
"""
Explanation: Let's take one text message and get its bag-of-words counts as a vector, putting to use our new bow_transformer:
End of explanation
"""
bow4 = bow_transformer.transform([message4])
print(bow4)
print(bow4.shape)
"""
Explanation: Now let's see its vector representation:
End of explanation
"""
print(bow_transformer.get_feature_names()[4073])
print(bow_transformer.get_feature_names()[9570])
"""
Explanation: This means that there are seven unique words in message number 4 (after removing common stop words). Two of them appear twice, the rest only once. Let's go ahead and check and confirm which ones appear twice:
End of explanation
"""
messages_bow = bow_transformer.transform(messages['message'])
print('Shape of Sparse Matrix: ', messages_bow.shape)
print('Amount of Non-Zero occurences: ', messages_bow.nnz)
sparsity = (100.0 * messages_bow.nnz / (messages_bow.shape[0] * messages_bow.shape[1]))
print('sparsity: {}'.format(round(sparsity)))
"""
Explanation: Now we can use .transform on our Bag-of-Words (bow) transformed object and transform the entire DataFrame of messages. Let's go ahead and check out how the bag-of-words counts for the entire SMS corpus is a large, sparse matrix:
End of explanation
"""
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer().fit(messages_bow)
tfidf4 = tfidf_transformer.transform(bow4)
print(tfidf4)
"""
Explanation: After the counting, the term weighting and normalization can be done with TF-IDF, using scikit-learn's TfidfTransformer.
So what is TF-IDF?
TF-IDF stands for term frequency-inverse document frequency, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.
One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking functions are variants of this simple model.
Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), i.e. the number of times a word appears in a document divided by the total number of words in that document; the second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents where the specific term appears.
TF: Term Frequency, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear much more times in long documents than shorter ones. Thus, the term frequency is often divided by the document length (aka. the total number of terms in the document) as a way of normalization:
TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).
IDF: Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However, it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:
IDF(t) = log_e(Total number of documents / Number of documents with term t in it).
See below for a simple example.
Example:
Consider a document containing 100 words wherein the word cat appears 3 times.
The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4. Thus, the Tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12.
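The same arithmetic in plain Python (log base 10 to match the numbers quoted; scikit-learn's TfidfTransformer uses a natural log with smoothing, so its values differ slightly):
import math

tf = 3 / 100.0
idf = math.log10(10000000 / 1000.0)
print(tf, idf, tf * idf)  # 0.03, 4.0, 0.12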
Let's go ahead and see how we can do this in SciKit Learn:
End of explanation
"""
print(tfidf_transformer.idf_[bow_transformer.vocabulary_['u']])
print(tfidf_transformer.idf_[bow_transformer.vocabulary_['university']])
"""
Explanation: We'll go ahead and check the IDF (inverse document frequency) of the word "u" and of the word "university".
End of explanation
"""
messages_tfidf = tfidf_transformer.transform(messages_bow)
print(messages_tfidf.shape)
"""
Explanation: To transform the entire bag-of-words corpus into TF-IDF corpus at once:
End of explanation
"""
from sklearn.naive_bayes import MultinomialNB
spam_detect_model = MultinomialNB().fit(messages_tfidf, messages['label'])
"""
Explanation: There are many ways the data can be preprocessed and vectorized. These steps involve feature engineering and building a "pipeline". I encourage you to check out SciKit Learn's documentation on dealing with text data as well as the expansive collection of available papers and books on the general topic of NLP.
Training a model
With messages represented as vectors, we can finally train our spam/ham classifier. Now we can actually use almost any sort of classification algorithms. For a variety of reasons, the Naive Bayes classifier algorithm is a good choice.
We'll be using scikit-learn here, choosing the Naive Bayes classifier to start with:
End of explanation
"""
print('predicted:', spam_detect_model.predict(tfidf4)[0])
print('expected:', messages.label[3])
"""
Explanation: Let's try classifying our single random message and checking how we do:
End of explanation
"""
all_predictions = spam_detect_model.predict(messages_tfidf)
print(all_predictions)
"""
Explanation: Fantastic! We've developed a model that can attempt to predict spam vs ham classification!
Part 6: Model Evaluation
Now we want to determine how well our model will do overall on the entire dataset. Let's begin by getting all the predictions:
End of explanation
"""
from sklearn.metrics import classification_report
print (classification_report(messages['label'], all_predictions))
"""
Explanation: We can use SciKit Learn's built-in classification report, which returns precision, recall, f1-score, and a column for support (meaning how many cases supported that classification). Check out the links for more detailed info on each of these metrics and the figure below:
<img src='https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/700px-Precisionrecall.svg.png' width=400 />
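In terms of true/false positives and negatives, the reported quantities are:
$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F_1 = 2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$$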
End of explanation
"""
from sklearn.model_selection import train_test_split
msg_train, msg_test, label_train, label_test = \
train_test_split(messages['message'], messages['label'], test_size=0.2)
print(len(msg_train), len(msg_test), len(msg_train) + len(msg_test))
"""
Explanation: There are quite a few possible metrics for evaluating model performance. Which one is the most important depends on the task and the business effects of decisions based off of the model. For example, the cost of mis-predicting "spam" as "ham" is probably much lower than mis-predicting "ham" as "spam".
In the above "evaluation",we evaluated accuracy on the same data we used for training. You should never actually evaluate on the same dataset you train on!
Such evaluation tells us nothing about the true predictive power of our model. If we simply remembered each example during training, the accuracy on training data would trivially be 100%, even though we wouldn't be able to classify any new messages.
A proper way is to split the data into a training/test set, where the model only ever sees the training data during its model fitting and parameter tuning. The test data is never used in any way. This is then our final evaluation on test data is representative of true predictive performance.
Train Test Split
End of explanation
"""
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)), # strings to token integer counts
('tfidf', TfidfTransformer()), # integer counts to weighted TF-IDF scores
('classifier', MultinomialNB()), # train on TF-IDF vectors w/ Naive Bayes classifier
])
"""
Explanation: The test size is 20% of the entire dataset (1115 messages out of a total of 5572), and the training set is the rest (4457 out of 5572). Note that scikit-learn's default split would have been 25/75.
Creating a Data Pipeline
Let's run our model again and then predict off the test set. We will use SciKit Learn's pipeline capabilities to store our workflow as a pipeline. This will allow us to set up all the transformations that we will apply to the data for future use. Let's see an example of how it works:
End of explanation
"""
pipeline.fit(msg_train,label_train)
predictions = pipeline.predict(msg_test)
print(classification_report(predictions,label_test))
"""
Explanation: Now we can directly pass message text data and the pipeline will do our pre-processing for us! We can treat it as a model/estimator API:
End of explanation
"""
|
gfeiden/Notebook
|
Projects/ngc2516_spots/cool_spot_colors.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import LinearNDInterpolator
"""
Explanation: BC Grid Extrapolation
Testing errors generated by grid extrapolation for extremely cool spot bolometric corrections. A first test of this will be to use a more extensive Phoenix color grid to explore effects that may be missing from MARCS (aka: condensates).
End of explanation
"""
phx_col_dir = '/Users/grefe950/Projects/starspot/starspot/color/tab/phx/CIFIST15'
opt_table = np.genfromtxt('{0}/colmag.BT-Settl.server.COUSINS.Vega'.format(phx_col_dir), comments='!')
nir_table = np.genfromtxt('{0}/colmag.BT-Settl.server.2MASS.Vega'.format(phx_col_dir), comments='!')
"""
Explanation: Setting up Phoenix Grid Interpolation
First, load required color tables (1 optical, 1 NIR).
End of explanation
"""
opt_surface = LinearNDInterpolator(opt_table[:, :2], opt_table[:, 4:8])
nir_surface = LinearNDInterpolator(nir_table[:, :2], nir_table[:, 4:7])
"""
Explanation: Generate (linear) interpolation surfaces as a function of $\log g$ and $T_{\rm eff}$.
End of explanation
"""
iso = np.genfromtxt('data/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
"""
Explanation: BT-Settl Colorize a Dartmouth Isochrone
Load a standard isochrone, with MARCS colors.
End of explanation
"""
phx_opt_mags = opt_surface(10.0**iso[:, 1], iso[:, 2])
phx_nir_mags = nir_surface(10.0**iso[:, 1], iso[:, 2])
"""
Explanation: Compute colors using the Phoenix BT-Settl models with the CIFIST 2015 color tables. Colors were shown to be compatible with MARCS colors in another note.
End of explanation
"""
for i in range(phx_opt_mags.shape[1]):
phx_opt_mags[:, i] = phx_opt_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
for i in range(phx_nir_mags.shape[1]):
phx_nir_mags[:, i] = phx_nir_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
"""
Explanation: Convert from surface magnitudes to absolute magnitudes.
End of explanation
"""
phx_iso = np.column_stack((iso[:, :6], phx_opt_mags)) # stack props with BVRI
phx_iso = np.column_stack((phx_iso, phx_nir_mags)) # stack props/BVRI with JHK
"""
Explanation: Stack colors with stellar properties to form a new isochrone.
End of explanation
"""
orig_iso = np.genfromtxt('/Users/grefe950/Projects/starspot/models/age_120.0+z_0.00/isochrone_120.0myr_z+0.00_a+0.00_marcs.iso')
spot_mags = np.genfromtxt('/Users/grefe950/Projects/starspot/models/age_120.0+z_0.00/sts/mag_zet+0.62_eps+1.00_rho+0.40_pi+0.50.dat')
spot_prop = np.genfromtxt('/Users/grefe950/Projects/starspot/models/age_120.0+z_0.00/sts/spots_zet+0.62_eps+1.00_rho+0.40_pi+0.50.dat')
"""
Explanation: Load Spotted Isochrone(s)
There are two types of spotted isochrones: one contains magnitudes and colors along with average surface properties; the other has more detailed information about spot temperatures and luminosities.
End of explanation
"""
phx_opt_phot = opt_surface(10**spot_prop[:, 1], spot_mags[:, 2])
phx_opt_spot = opt_surface(10**spot_prop[:, 2], spot_mags[:, 2])
phx_nir_phot = nir_surface(10**spot_prop[:, 1], spot_mags[:, 2])
phx_nir_spot = nir_surface(10**spot_prop[:, 2], spot_mags[:, 2])
"""
Explanation: Compute colors for photospheric and spot components.
End of explanation
"""
for i in range(phx_opt_phot.shape[1]):
phx_opt_phot[:, i] = phx_opt_phot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
phx_opt_spot[:, i] = phx_opt_spot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
for i in range(phx_nir_phot.shape[1]):
phx_nir_phot[:, i] = phx_nir_phot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
phx_nir_spot[:, i] = phx_nir_spot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
"""
Explanation: Convert surface magnitudes to absolute magnitudes.
End of explanation
"""
L_spot = 10**spot_prop[:, 4]/10**orig_iso[:, 3]
L_phot = 10**spot_prop[:, 3]/10**orig_iso[:, 3]
"""
Explanation: Compute luminosity fractions for spots and photosphere for use in combining the two contributions.
End of explanation
"""
phx_opt_spot_mags = np.empty(phx_opt_phot.shape)
phx_nir_spot_mags = np.empty(phx_nir_phot.shape)
for i in range(phx_opt_phot.shape[1]):
phx_opt_spot_mags[:,i] = -2.5*np.log10(0.6*10**(-phx_opt_phot[:,i]/2.5)
+ 0.4*10**(-phx_opt_spot[:,i]/2.5))
for i in range(phx_nir_phot.shape[1]):
phx_nir_spot_mags[:,i] = -2.5*np.log10(0.6*10**(-phx_nir_phot[:,i]/2.5)
+ 0.4*10**(-phx_nir_spot[:,i]/2.5))
"""
Explanation: Now combine spot properties with the photospheric properties to derive properties for spotted stars.
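Explicitly, the combination above works in flux space with fixed surface-coverage weights (0.6 for the photosphere and 0.4 for the spots, presumably the $\rho = 0.40$ covering fraction encoded in the input file names):
$$m_{\rm spotted} = -2.5\,\log_{10}\!\left[(1-\rho)\,10^{-m_{\rm phot}/2.5} + \rho\,10^{-m_{\rm spot}/2.5}\right]$$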
End of explanation
"""
spt_iso = np.column_stack((spot_mags[:, :6], phx_opt_spot_mags))
spt_iso = np.column_stack((spt_iso, phx_nir_spot_mags))
"""
Explanation: Stack with average surface properties to form a spotted isochrone.
End of explanation
"""
fig, ax = plt.subplots(1, 3, figsize=(18., 8.), sharey=True)
for axis in ax:
axis.grid(True)
axis.set_ylim(17., 2.)
axis.tick_params(which='major', axis='both', labelsize=16., length=15.)
# V/(B-V)
ax[0].set_xlim(-0.5, 2.0)
ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c='#b22222')
ax[0].plot(spot_mags[:, 7] - spot_mags[:, 8], spot_mags[:, 8], dashes=(20., 5.), lw=3, c='#b22222')
ax[0].plot(phx_iso[:, 6] - phx_iso[:, 7], phx_iso[:, 7], lw=3, c='#555555')
ax[0].plot(spt_iso[:, 6] - spt_iso[:, 7], spt_iso[:, 7], dashes=(20., 5.), lw=3, c='#555555')
# V/(V-Ic)
ax[1].set_xlim(0.0, 4.0)
ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c='#b22222')
ax[1].plot(spot_mags[:, 8] - spot_mags[:,10], spot_mags[:, 8], dashes=(20., 5.), lw=3, c='#b22222')
ax[1].plot(phx_iso[:, 7] - phx_iso[:, 9], phx_iso[:, 7], lw=3, c='#555555')
ax[1].plot(spt_iso[:, 7] - spt_iso[:, 9], spt_iso[:, 7], dashes=(20., 5.), lw=3, c='#555555')
# V/(V-K)
ax[2].set_xlim(0.0, 7.0)
ax[2].plot(iso[:, 7] - iso[:,10], iso[:, 7], lw=3, c='#b22222')
ax[2].plot(spot_mags[:, 8] - spot_mags[:,13], spot_mags[:, 8], dashes=(20., 5.), lw=3, c='#b22222')
ax[2].plot(phx_iso[:, 7] - phx_iso[:,12], phx_iso[:, 7], lw=3, c='#555555')
ax[2].plot(spt_iso[:, 7] - spt_iso[:,12], spt_iso[:, 7], dashes=(20., 5.), lw=3, c='#555555')
"""
Explanation: Isochrone Comparisons
We may now compare morphologies of spotted isochrones computed using Phoenix and MARCS color tables.
End of explanation
"""
fig, ax = plt.subplots(1, 3, figsize=(18., 8.), sharey=True)
for axis in ax:
axis.grid(True)
axis.set_ylim(10., 2.)
axis.tick_params(which='major', axis='both', labelsize=16., length=15.)
# K/(Ic-K)
ax[0].set_xlim(0.0, 3.0)
ax[0].plot(iso[:, 8] - iso[:, 10], iso[:, 10], lw=3, c='#b22222')
ax[0].plot(spot_mags[:, 10] - spot_mags[:, 13], spot_mags[:, 13], dashes=(20., 5.), lw=3, c='#b22222')
ax[0].plot(phx_iso[:, 9] - phx_iso[:, 12], phx_iso[:, 12], lw=3, c='#555555')
ax[0].plot(spt_iso[:, 9] - spt_iso[:, 12], spt_iso[:, 12], dashes=(20., 5.), lw=3, c='#555555')
# K/(J-K)
ax[1].set_xlim(0.0, 1.0)
ax[1].plot(iso[:, 9] - iso[:, 10], iso[:, 10], lw=3, c='#b22222')
ax[1].plot(spot_mags[:, 11] - spot_mags[:,13], spot_mags[:, 13], dashes=(20., 5.), lw=3, c='#b22222')
ax[1].plot(phx_iso[:, 10] - phx_iso[:, 12], phx_iso[:, 12], lw=3, c='#555555')
ax[1].plot(spt_iso[:, 10] - spt_iso[:, 12], spt_iso[:, 12], dashes=(20., 5.), lw=3, c='#555555')
# K/(V-K)
ax[2].set_xlim(0.0, 7.0)
ax[2].plot(iso[:, 7] - iso[:,10], iso[:, 10], lw=3, c='#b22222')
ax[2].plot(spot_mags[:, 8] - spot_mags[:,13], spot_mags[:, 13], dashes=(20., 5.), lw=3, c='#b22222')
ax[2].plot(phx_iso[:, 7] - phx_iso[:,12], phx_iso[:, 12], lw=3, c='#555555')
ax[2].plot(spt_iso[:, 7] - spt_iso[:,12], spt_iso[:, 12], dashes=(20., 5.), lw=3, c='#555555')
"""
Explanation: Optical CMDs appear to be in good order, even though some of the spot properties may extend beyond the formal MARCS grid. At high temperatures, the Phoenix models cut out before the MARCS models, with the maximum temperature in the Phoenix models at 7000 K.
Now we may check NIR CMDs.
End of explanation
"""
10**spot_prop[0, 1], phx_opt_phot[0], 10**spot_prop[0, 2], phx_opt_spot[0]
10**spot_prop[0, 1], phx_nir_phot[0], 10**spot_prop[0, 2], phx_nir_spot[0]
"""
Explanation: Things look good!
Sanity Checks
Before moving on and accepting that, down to $\varpi = 0.50$, our models produce reliable results, with a possible difference in $(J-K)$ colors below the M dwarf boundary, we should confirm that all stars have actual values for their spot colors.
End of explanation
"""
|
rflamary/POT
|
notebooks/plot_UOT_1D.ipynb
|
mit
|
# Author: Hicham Janati <hicham.janati@inria.fr>
#
# License: MIT License
import numpy as np
import matplotlib.pylab as pl
import ot
import ot.plot
from ot.datasets import make_1D_gauss as gauss
"""
Explanation: 1D Unbalanced optimal transport
This example illustrates the computation of Unbalanced Optimal transport
using a Kullback-Leibler relaxation.
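Concretely, the ot.unbalanced.sinkhorn_unbalanced call later in this example solves, for a transport plan $\gamma \ge 0$,
$$\min_{\gamma}\; \langle \gamma, M\rangle_F + \mathrm{reg}\cdot\Omega(\gamma) + \mathrm{reg_m}\cdot\mathrm{KL}(\gamma\mathbf{1}, a) + \mathrm{reg_m}\cdot\mathrm{KL}(\gamma^\top\mathbf{1}, b)$$
with $\Omega(\gamma)=\sum_{i,j}\gamma_{i,j}\log\gamma_{i,j}$ the entropic term; in the code, epsilon plays the role of reg and alpha the role of reg_m (this follows the formulation in the POT documentation; exact conventions may vary slightly between versions).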
End of explanation
"""
#%% parameters
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a = gauss(n, m=20, s=5) # m= mean, s= std
b = gauss(n, m=60, s=10)
# make distributions unbalanced
b *= 5.
# loss matrix
M = ot.dist(x.reshape((n, 1)), x.reshape((n, 1)))
M /= M.max()
"""
Explanation: Generate data
End of explanation
"""
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
pl.plot(x, a, 'b', label='Source distribution')
pl.plot(x, b, 'r', label='Target distribution')
pl.legend()
# plot distributions and loss matrix
pl.figure(2, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, M, 'Cost matrix M')
"""
Explanation: Plot distributions and loss matrix
End of explanation
"""
# Sinkhorn
epsilon = 0.1 # entropy parameter
alpha = 1. # Unbalanced KL relaxation parameter
Gs = ot.unbalanced.sinkhorn_unbalanced(a, b, M, epsilon, alpha, verbose=True)
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, Gs, 'UOT matrix Sinkhorn')
pl.show()
"""
Explanation: Solve Unbalanced Sinkhorn
End of explanation
"""
|
Cyb3rWard0g/ThreatHunter-Playbook
|
docs/notebooks/windows/06_credential_access/WIN-191030201010.ipynb
|
gpl-3.0
|
from openhunt.mordorutils import *
spark = get_spark()
"""
Explanation: Remote Interactive Task Manager LSASS Dump
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/10/30 |
| modification date | 2020/09/20 |
| playbook related | ['WIN-1904101010'] |
Hypothesis
Adversaries might be RDPing to computers in my environment and interactively dumping the memory contents of LSASS with task manager.
Technical Context
None
Offensive Tradecraft
The Windows Task Manager may be used to dump the memory space of lsass.exe to disk for processing with a credential access tool such as Mimikatz.
This is performed by launching Task Manager as a privileged user, selecting lsass.exe, and clicking “Create dump file”.
This saves a dump file to disk with a deterministic name that includes the name of the process being dumped.
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/06_credential_access/SDWIN-191027055035.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/rdp_interactive_taskmanager_lsass_dump.zip |
Analytics
Initialize Analytics Engine
End of explanation
"""
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/rdp_interactive_taskmanager_lsass_dump.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
"""
Explanation: Download & Process Mordor Dataset
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, TargetFilename, ProcessGuid
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 11
AND Image LIKE "%taskmgr.exe"
AND lower(TargetFilename) RLIKE ".*lsass.*\.dmp"
'''
)
df.show(10,False)
"""
Explanation: Analytic I
Look for taskmgr creating files whose name contains the string lsass and whose extension is .dmp.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SourceImage, TargetImage, GrantedAccess
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 10
AND lower(SourceImage) LIKE "%taskmgr.exe"
AND lower(TargetImage) LIKE "%lsass.exe"
AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*")
'''
)
df.show(10,False)
"""
Explanation: Analytic II
Look for Task Manager accessing lsass with functions from the dbgcore.dll or dbghelp.dll libraries
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 |
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SourceImage, TargetImage, GrantedAccess
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 10
AND lower(TargetImage) LIKE "%lsass.exe"
AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*")
'''
)
df.show(10,False)
"""
Explanation: Analytic III
Look for any process accessing lsass with functions from the dbgcore.dll or dbghelp.dll libraries
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 |
End of explanation
"""
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.Image, o.LogonId, o.ProcessGuid, a.SourceProcessGUID, o.CommandLine
FROM mordorTable o
INNER JOIN (
SELECT Hostname,SourceProcessGUID
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 10
AND lower(TargetImage) LIKE "%lsass.exe"
AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*")
) a
ON o.ProcessGuid = a.SourceProcessGUID
WHERE o.Channel = "Microsoft-Windows-Sysmon/Operational"
AND o.EventID = 1
'''
)
df.show(10,False)
"""
Explanation: Analytic IV
Look for combinations of process access and process creation events to get more context around a potential LSASS dump from Task Manager or other binaries.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 |
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
End of explanation
"""
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.SessionName, o.AccountName, o.ClientName, o.ClientAddress, a.Image, a.CommandLine
FROM mordorTable o
INNER JOIN (
SELECT LogonId, Image, CommandLine
FROM (
SELECT o.Image, o.LogonId, o.CommandLine
FROM mordorTable o
INNER JOIN (
SELECT Hostname,SourceProcessGUID
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 10
AND lower(TargetImage) LIKE "%lsass.exe"
AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*")
) a
ON o.ProcessGuid = a.SourceProcessGUID
WHERE o.Channel = "Microsoft-Windows-Sysmon/Operational"
AND o.EventID = 1
)
) a
ON o.LogonID = a.LogonId
WHERE lower(o.Channel) = "security"
AND o.EventID = 4778
'''
)
df.show(10,False)
"""
Explanation: Analytic V
Look for binaries accessing lsass.exe that are running under the same logon context as a user connected over an RDP session.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 |
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4778 |
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/end-to-end-structured/solutions/5b_deploy_keras_ai_platform_babyweight.ipynb
|
apache-2.0
|
import os
"""
Explanation: LAB 5b: Deploy and predict with Keras model on Cloud AI Platform.
Learning Objectives
Setup up the environment
Deploy trained Keras model to Cloud AI Platform
Online predict from model on Cloud AI Platform
Batch predict from model on Cloud AI Platform
Introduction
In this notebook, we'll deploy our Keras model to Cloud AI Platform and make predictions with it.
We will set up the environment, deploy the trained Keras model to Cloud AI Platform, and then run both online and batch predictions against the deployed model.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Import necessary libraries.
End of explanation
"""
%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project ID is: "$PROJECT
# Change these to try this notebook out
PROJECT = "asl-ml-immersion" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.3"
%%bash
gcloud config set ai_platform/region $REGION
"""
Explanation: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project ID for our bucket, so you only need to change your project and region.
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model_tuned
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model_tuned/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
"""
Explanation: Check our trained model files
Let's check the directory structure of the trained model outputs in the folder we exported the model to in the last lab. We want to deploy the saved_model.pb in the tuned model's directory, along with the variable values in its variables folder, so we need the path of that directory so that everything within it can be found by Cloud AI Platform's model deployment service.
End of explanation
"""
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model_tuned/2* \
| tail -1 | tr -d '[:space:]')
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.3 \
--python-version=3.7
"""
Explanation: Lab Task #2: Deploy trained model
Deploying the trained model to act as a REST web service is a simple gcloud call.
End of explanation
"""
import json
import requests
from oauth2client.client import GoogleCredentials
MODEL_NAME = "babyweight"
MODEL_VERSION = "ml_on_gcp"
token = (
GoogleCredentials.get_application_default().get_access_token().access_token
)
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict".format(
PROJECT, MODEL_NAME, MODEL_VERSION
)
headers = {"Authorization": "Bearer " + token}
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39,
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38,
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39,
},
{
"is_male": "Unknown",
"mother_age": 29.0,
"plurality": "Multiple(2+)",
"gestation_weeks": 38,
},
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
"""
Explanation: Use model to make online prediction.
Python API
We can use the Python API to send a JSON request to the service endpoint and have it predict a baby's weight. The order of the responses matches the order of the instances.
End of explanation
"""
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
"""
Explanation: The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).
gcloud shell API
Alternatively, we can use the gcloud command-line interface. Create a newline-delimited JSON file with one instance per line and submit it using gcloud.
End of explanation
"""
%%bash
gcloud ai-platform predict \
--model=babyweight \
--json-instances=inputs.json \
--version=ml_on_gcp
"""
Explanation: Now call gcloud ai-platform predict using the JSON we just created and point to our deployed model and version.
End of explanation
"""
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight \
--version=ml_on_gcp
"""
Explanation: Use model to make batch prediction.
Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction.
End of explanation
"""
|
GoogleCloudPlatform/professional-services
|
examples/e2e-home-appliance-status-monitoring/notebook/EnergyDisaggregationEDA.ipynb
|
apache-2.0
|
# @title Upload files (skip this if this is run locally)
# Use this cell to update the following files
# 1. requirements.txt
from google.colab import files
uploaded = files.upload()
# @title Install missing packages
# run this cell to install packages if some are missing
!pip install -r requirements.txt
# @title Import libraries
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import gcsfs
import sklearn.metrics
%matplotlib inline
"""
Explanation: License
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
"""
def load_main_energy_data(path):
"""Load main energy data from the specified file.
Load main energy data from the specified file.
Args:
path - string. Path to the data file.
Returns:
pd.DataFrame - Main energy data in the household.
Raises:
ValueError. Raised when the specified file does not exist.
"""
if not os.path.exists(path):
raise ValueError('File {} does not exist.'.format(path))
with open(path, 'r') as f:
data = pd.read_csv(f,
delimiter=' ',
header=None,
names=['time',
'main_watts',
'main_va',
'main_RMS'])
data.time = data.time.apply(lambda x: datetime.fromtimestamp(x))
data.set_index('time', drop=True, inplace=True)
data.index = data.index.floor('S')
return data
def load_appliance_energy_data(path, appliance_name):
"""Load appliance energy data from file.
Load energy data from the specified file.
Args:
path - string. Path to the data file.
appliance_name - string. Name of the appliance.
Returns:
pd.DataFrame. A 2-column dataframe.
The 1st column is the timestamp in UTC, and the 2nd is the appliance's power reading in watts.
Raises:
ValueError. Raised when the specified file does not exist.
"""
if not os.path.exists(path):
raise ValueError('File {} does not exist.'.format(path))
with open(path, 'r') as f:
df = pd.read_csv(f,
delimiter=' ',
header=None,
names=['time', appliance_name])
df.time = df.time.apply(lambda x: datetime.fromtimestamp(x))
df.set_index('time', drop=True, inplace=True)
df.index = df.index.floor('S')
return df
def load_energy_data(data_dir, house_id, load_main=False):
"""Load all appliances energy data.
Load all appliances energy data collected in a specified household.
Args:
data_dir - string. Path to the directory of data.
house_id - int. Household id.
load_main - bool. Whether to load mains.dat.
Returns:
pd.DataFrame - Energy data in the household.
Raises:
ValueError. Raised when the specified directory or household does not exist.
"""
house_data_dir = os.path.join(data_dir, 'house_{}'.format(house_id))
if not os.path.exists(house_data_dir):
raise ValueError('{} does not exist.'.format(house_data_dir))
if load_main:
main_file = os.path.join(house_data_dir, 'mains.dat')
data = load_main_energy_data(main_file)
label_file = os.path.join(house_data_dir, 'labels.dat')
with open(label_file, 'r') as f:
labels = pd.read_csv(f,
delimiter=' ',
header=None,
index_col=0,
names=['appliance'])
appliance_files = filter(lambda x: re.match(r'channel_\d+\.dat', x),
os.listdir(house_data_dir))
ll = [data,] if load_main else []
for f in appliance_files:
appliance_id = int(f.split('.')[0].split('_')[1])
appliance_name = labels.loc[appliance_id, 'appliance']
ll.append(load_appliance_energy_data(os.path.join(house_data_dir, f),
appliance_name))
if load_main:
data = pd.concat(ll, axis=1, join_axes=[data.index])
else:
data = pd.concat(ll, axis=1)
return data
"""
Explanation: Data inspection
Utility for loading and transforming raw data
End of explanation
"""
GOOGLE_CLOUD_PROJECT = 'your-google-project-id' #@param
GOOGLE_APPLICATION_CREDENTIALS = 'e2e_demo_credential.json' #@param
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = GOOGLE_APPLICATION_CREDENTIALS
os.environ['GOOGLE_CLOUD_PROJECT'] = GOOGLE_CLOUD_PROJECT
# If raw data is used, please make sure raw_data_dir is correctly set
use_raw = False #@param
selected_hid = 2 #@param
raw_data_dir = 'ukdale data directory' #@param
selected_house_dir = os.path.join(raw_data_dir, 'house_{}'.format(selected_hid))
%%time
if not use_raw:
print("Download processed sample file for house 2 from GCS")
fs = gcsfs.GCSFileSystem(project=os.environ['GOOGLE_CLOUD_PROJECT'])
with fs.open('gs://gcp_blog/e2e_demo/processed_h2_appliance.csv') as f:
energy_data = pd.read_csv(f,
index_col=0,
parse_dates=True)
else:
# load energy data from raw downloaded ukdale data directory
energy_data = load_energy_data(raw_data_dir, selected_hid)
energy_data.head()
"""
Explanation: Data Loading
End of explanation
"""
print(energy_data.shape)
energy_data.describe()
energy_data.index.min(), energy_data.index.max()
cutoff_st = '2013-06-01 00:00:00'
cutoff_et = '2013-09-30 23:59:59'
energy_data = energy_data.loc[cutoff_st:cutoff_et]
print('{}, {}'.format(energy_data.index.min(), energy_data.index.max()))
energy_data.describe()
energy_data = energy_data.fillna(method='ffill').fillna(method='bfill')
energy_data = energy_data.asfreq(freq='6S', method='ffill')
print(energy_data.shape)
energy_data.describe()
energy_data.head()
energy_data = energy_data.astype(int)
energy_data.info()
energy_data.describe()
if 'aggregate' in energy_data.columns:
energy_data = energy_data.drop('aggregate', axis=1)
energy_data['gross'] = energy_data.sum(axis=1)
energy_data.describe()
appliance_cols = ['running_machine', 'washing_machine', 'dish_washer',
'microwave', 'toaster', 'kettle', 'rice_cooker', 'cooker']
print(appliance_cols)
energy_data['app_sum'] = energy_data[appliance_cols].sum(axis=1)
energy_data.describe()
st = '2013-07-04 00:00:00'
et = '2013-07-05 00:00:00'
sub_df = energy_data.loc[st:et]
print(sub_df.shape)
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
ax = sub_df[['gross', 'app_sum']].plot(ax=ax)
ax.grid(True)
ax.set_title('House {}'.format(selected_hid))
ax.set_ylabel('Power consumption in watts')
nrow = int(np.ceil(np.sqrt(len(appliance_cols))))
ncol = int(np.ceil(1.0 * len(appliance_cols) / nrow))
fig, axes = plt.subplots(nrow, ncol, figsize=(5*ncol, 3*nrow))
axes[-1, -1].axis('off')
for i, app in enumerate(appliance_cols):
row_ix = i // 3
col_ix = i % 3
ax = axes[row_ix][col_ix]
lb = energy_data[app].std()
ub = energy_data[app].max() - lb
energy_data[app + '_on'] = energy_data[app].apply(
lambda x: 1 if x > lb else 0)
energy_data[app][(energy_data[app] > lb) &
(energy_data[app] < ub)].plot.hist(bins=20, ax=ax)
ax.set_title(app)
ax.grid(True)
plt.tight_layout()
energy_data.mean(axis=0)
train_st = '2013-06-01 00:00:00'
train_et = '2013-07-31 23:59:59'
train_data = energy_data.loc[train_st:train_et]
print(train_data.shape)
valid_st = '2013-08-01 00:00:00'
valid_et = '2013-08-31 23:59:59'
valid_data = energy_data.loc[valid_st:valid_et]
print(valid_data.shape)
test_st = '2013-09-01 00:00:00'
test_et = '2013-09-30 23:59:59'
test_data = energy_data.loc[test_st:test_et]
print(test_data.shape)
train_file = os.path.join(raw_data_dir, 'house_{}/train.csv'.format(selected_hid))
valid_file = os.path.join(raw_data_dir, 'house_{}/valid.csv'.format(selected_hid))
test_file = os.path.join(raw_data_dir, 'house_{}/test.csv'.format(selected_hid))
with open(train_file, 'w') as f:
train_data.to_csv(f)
print('train_data saved.')
with open(valid_file, 'w') as f:
valid_data.to_csv(f)
print('valid_data saved.')
with open(test_file, 'w') as f:
test_data.to_csv(f)
print('test_data saved.')
"""
Explanation: EDA
End of explanation
"""
train_file = os.path.join(raw_data_dir, 'house_{}/train.csv'.format(selected_hid))
valid_file = os.path.join(raw_data_dir, 'house_{}/valid.csv'.format(selected_hid))
test_file = os.path.join(raw_data_dir, 'house_{}/test.csv'.format(selected_hid))
# @title Peek at the input file
with open(train_file, 'r') as f:
train_data = pd.read_csv(f, index_col=0)
print(pd.Series(train_data.columns))
train_data.head()
appliance_cols = [x for x in train_data.columns if '_on' in x]
print(train_data[appliance_cols].mean())
with open(test_file, 'r') as f:
test_data = pd.read_csv(f, index_col=0)
print(test_data.shape)
print(test_data[appliance_cols].mean())
ss = ['2013-09-{0:02d} 00:00:00'.format(i+1) for i in range(30)]
ee = ['2013-09-{0:02d} 23:59:59'.format(i+1) for i in range(30)]
fig, axes = plt.subplots(30, 1, figsize=(15, 120))
for i, (s, e) in enumerate(zip(ss, ee)):
test_data.loc[s:e].gross.plot(ax=axes[i])
axes[i].set_title(s[:10])
plt.tight_layout()
"""
Explanation: Split data inspection
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/miroc/cmip6/models/miroc6/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc6', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC6
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: CMIP5:MIROC5
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
DOC.set_value("AGCM")
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
DOC.set_value("hydrostatic")
DOC.set_value("primitive equations")
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("12 min")
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("3 hours")
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
DOC.set_value("present day")
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
DOC.set_value("spectral")
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
DOC.set_value("leap frog")
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
DOC.set_value("Other: vapour/solid/liquid")
DOC.set_value("clouds")
DOC.set_value("surface pressure")
DOC.set_value("temperature")
DOC.set_value("wind components")
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
DOC.set_value("radiation boundary condition")
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Free")
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Damp")
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Bi-harmonic diffusion")
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
DOC.set_value("Other: ffsl+ppm")
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
DOC.set_value("finite volume")
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
DOC.set_value("Other: dry air, specific humidity and aerosol tracers")
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
DOC.set_value("conservation fixer")
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
DOC.set_value("Other: pdf variance and skewness")
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
DOC.set_value("conservation fixer")
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
DOC.set_value("BC (black carbon / soot)")
DOC.set_value("POM (particulate organic matter)")
DOC.set_value("SOA (secondary organic aerosols)")
DOC.set_value("dust")
DOC.set_value("organic")
DOC.set_value("sea salt")
DOC.set_value("sulphate")
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
DOC.set_value("wide-band model")
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(15)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
DOC.set_value("wide-band model")
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
DOC.set_value("two-stream")
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(14)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
DOC.set_value("Mellor-Yamada")
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
DOC.set_value("TKE prognostic")
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(2)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Chikira-Sugiyama")
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
DOC.set_value("mass-flux")
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
DOC.set_value("convective momentum transport")
DOC.set_value("detrainment")
DOC.set_value("entrainment")
DOC.set_value("radiative effect of anvils")
DOC.set_value("updrafts")
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Hybrid Prognostic Cloud (HPC) Watanabe et al. 2009")
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
DOC.set_value("graupel")
DOC.set_value("hail")
DOC.set_value("liquid rain")
DOC.set_value("snow")
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Willson-Ballard")
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
DOC.set_value("cloud droplets")
DOC.set_value("cloud ice")
DOC.set_value("ice nucleation")
DOC.set_value("mixed phase")
DOC.set_value("water vapour deposition")
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
DOC.set_value("Other: nucleation, deposition, evaporation/sublimation, acretion, melting, condensation")
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("N/A")
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
DOC.set_value("IR brightness")
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
DOC.set_value("highest altitude level")
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
DOC.set_value("Inline")
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(32768)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(140)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(40)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(94.0)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
DOC.set_value("space borne")
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
DOC.set_value("ice spheres")
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
DOC.set_value("max")
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
DOC.set_value("effect on drag")
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
DOC.set_value("statistical sub-grid scale variance")
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
DOC.set_value("linear theory")
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
DOC.set_value("spectral")
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
DOC.set_value("transient")
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("SOLARIS")
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
DOC.set_value("fixed")
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(1950)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
DOC.set_value("Berger 1978")
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
DOC.set_value("Other: via stratospheric aerosols optical thickness")
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
mspieg/dynamical-systems
|
LorenzEquationsDerivation.ipynb
|
cc0-1.0
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mpl_toolkits.mplot3d import Axes3D
"""
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman, Based on ipython notebook by Kyle Mandli from his course [Introduction to numerical methods](https://github.com/mandli/intro-numerical-methods)</td>
</table>
End of explanation
"""
# show Ra vs a
a = np.linspace(0.01,2.)
Rc = np.pi**4*(1. + a**2)**3/a**2
plt.figure()
plt.semilogy(a,Rc)
amin = np.sqrt(1./2.)
Rcmin = np.pi**4*(1. + amin**2)**3/amin**2
plt.semilogy(amin,Rcmin,'ro')
plt.xlabel('a')
plt.ylabel('Ra$_c$')
plt.title('Critical Rayleigh Number')
plt.show()
"""
Explanation: Deriving the Lorenz Equations
The Lorenz Equations are a 3-D dynamical system that is a simplified model of Rayleigh-Benard thermal convection. They are derived and described in detail in Edward Lorenz' 1963 paper Deterministic Nonperiodic Flow in the Journal of Atmospheric Science.
Here we will just sketch out the key points of the derivation. A more complete derivation can be found here
The key idea is that the Lorenz Equations result from a severely truncated spectral approximation to the 2-D equations for incompressible thermal convection in stream-function/vorticity form. These equations govern the flow of a buoyant incompressible fluid with a temperature-dependent density in a layer of depth $h$ that is heated from below and cooled from the top.
(Insert a movie here?)
The full coupled set of scaled PDE's describe the coupling of incompressible Navier Stokes flow with an advection-diffusion equation for temperature, and can be written in dimensionless form as,
$$
\frac{1}{\mathrm{Pr}}\left[ \frac{\partial \omega}{\partial t} + \vec{v}\cdot\nabla\omega\right] = \nabla^2\omega + \mathrm{Ra}\frac{\partial T}{\partial x}
$$
$$
\nabla^2 \psi = -\omega
$$
$$
\frac{\partial T}{\partial t} + \vec{v}\cdot\nabla T = \nabla^2 T
$$
Where
$$
\vec{v}=(u,w) = \nabla\times\psi\hat{k}=(\frac{\partial\psi}{\partial y},- \frac{\partial\psi}{\partial x})
$$
is the fluid velocity field (which in this form is exactly incompressible with $\nabla\cdot\vec{v}=0$). $\psi$ is the "Streamfunction" whose contours are tangent to the fluid trajectories at all times. The vorticity,
$$
\omega\hat{k} = \nabla\times\vec{v}
$$
measures the local rate of rotation of the fluid, and is driven by horizontal variations in temperature (actually density).
Boundary conditions for Temperature are $T=1$ on the bottom of the layer and $T=0$ on the top. In the absence of any fluid motion ($\omega=\vec{v}=0$), the temperature field is just a steady conductive ramp with
$$
T = 1 - y
$$
Thus we can also solve for the perturbation away from this steady state by substituting
$$
T = 1 - y +\theta(x,y,t)
$$
into the energy equation to solve for the perturbed temperature using
$$
\frac{\partial \theta}{\partial t} + \vec{v}\cdot\nabla \theta = \nabla^2\theta + w
$$
Parameters
In dimensionless form, these equations have two important dimensionless numbers that control the structure and behavior of the convection.
The Prandtl Number
The first is the "Prandtl Number", $\mathrm{Pr} = \frac{\nu}{\kappa}$ which is the ratio of the fluid viscosity $\nu$ to the thermal diffusivitiy $\kappa$. Since both vorticity and Temperature both obey advection diffusion equations (and viscosity acts to diffuse momentum/vorticity), the Prandtl number is a measure of whether momemtum or energy is more dissipative.
The Rayleigh Number
The second key parameter is the Rayleigh number
$$
\mathrm{Ra} = \frac{g\alpha(T_1 - T_0)h^3}{\nu\kappa}
$$
which measures the ratio of the forces that drive convection (buoyancy arising from temperature differences) to those that damp it (viscosity and thermal diffusivity). Systems with large Rayleigh numbers are prone to vigorous convection. However, it was shown by Rayleigh that there is a critical value of the Rayleigh Number $\mathrm{Ra}_c$ below which there is no convection. This value depends on the size of the convection cell and the boundary conditions for stress on the fluid. For the simplest case of a layer with stress-free (free-slip) top and bottom boundaries and a cell with aspect ratio $a=h/L$ (with $h$ the layer depth and $L$ the width of the convection cell), the critical Rayleigh number is
$$
\mathrm{Ra}_c = \pi^4 (1 + a^2)^3/a^2
$$
which has a minimum value for $a^2=1/2$.
End of explanation
"""
a = np.sqrt(0.5)
x = np.linspace(0,1./a)
y = np.linspace(0.,1.)
X,Y = np.meshgrid(x,y)
psi = np.sin(a*np.pi*X)*np.sin(np.pi*Y)
theta0 = np.cos(a*np.pi*X)*np.sin(np.pi*Y)
theta1 = -np.sin(2.*np.pi*Y)
plt.figure()
plt.subplot(2,2,1)
plt.contourf(X,Y,psi)
plt.title('$\psi$')
plt.subplot(2,2,3)
plt.contourf(X,Y,theta0)
plt.title('$\\theta_0$')
plt.subplot(2,2,4)
plt.contourf(X,Y,theta1)
plt.title('$\\theta_1$')
plt.show()
"""
Explanation: Spectral decomposition
We next expand the streamfunction and Temperature fields in terms of a highly truncated Fourier Series where
the streamfunction contains one cellular mode
$$
\psi(x,y,t) = X(t)\sin(a\pi x)\sin(\pi y)
$$
and the temperature perturbation $\theta$ has two modes
$$
\theta(x,y,t) = Y(t)\cos(a\pi x)\sin(\pi y) - Z(t)\sin(2\pi y)
$$
where $X(t)$, $Y(t)$ and $Z(t)$ are the time-dependent amplitudes of each mode. The spatial components of each mode look like
End of explanation
"""
plt.figure()
plt.subplot(1,2,1)
plt.contourf(X,Y,2.*psi)
plt.gca().set_aspect('equal')
plt.title('$\psi$')
plt.subplot(1,2,2)
plt.contourf(X,Y,3*theta0 + 4*theta1)
plt.gca().set_aspect('equal')
plt.title('Temperature $\\theta$')
plt.show()
"""
Explanation: and for our initial condition $X(0) = 2$, $Y(0) = 3$, $Z(0) = 4$, the streamfunction and Temperature fields would look like
End of explanation
"""
|
kellerberrin/OSM-QSAR
|
Notebooks/OSM_Results/OSM Prelim Results.ipynb
|
mit
|
from IPython.display import display
import pandas as pd
print("Meta Results")
meta_results = pd.read_csv("./meta_results.csv")
display(meta_results)
"""
Explanation: OSM COMPETITION: A Meta Model that optimally combines the outputs of other models.
The aim of the competition is to develop a computational model that predicts which molecules will block the malaria parasite's ion pump, PfATP4.
Submitted by James McCulloch - james.duncan.mcculloch@gmail.com
Final Results. The DNN meta model combines probability maps of molecular structure and EC50 classifiers and has a predictive score of AUC = 0.89 on the test molecules. This model ("osm" in the software) is selected for the competition.
Other "off-the-shelf" meta models from SKLearn have predictive scores of AUC [0.81, 0.85] (see below) and support the results obtained from the meta DNN.
What is a Meta Model?
Each predictive model based on fingerprints or another SMILES-based descriptor vector such as DRAGON brings a certain amount of predictive power to the task of assessing likely molecular activity against PfATP4.
What the meta model does is combine the predictive power of each model in an optimal way to produce a more predictive composite model.
It does this by taking as its input the probability maps (the outputs) of other classifiers.
The two models chosen as inputs to the meta model are:
A Neural Network model that uses the DRAGON molecular descriptor to estimate molecular PfATP4 ion activity directly. This model had modest predictive power of AUC=0.77. See the first notebook for details.
A logistic classifier that uses the Morgan fingerprints (mol radius = 5) to predict the EC50 <= 500 nMol class. This model was discussed in notebook 2 and has a predictive power of AUC=0.93 for the test molecules. Crucially, this predictive power is for EC50 only, not PfATP4 ion activity. For the test set, EC50 and PfATP4 ion activity are closely correlated because these molecules have similar structures and were designed to be active against PfATP4. However, other molecules from the training set with different structures have different sites of activity and membership of the EC50 <= 500 nMol class is not predictive of PfATP4 ion activity.
Finding the Optimal Final Model.
A DNN and a variety of SKLearn classifiers were trained as Meta Models against the probability maps of the two models described above, and the resulting Area Under Curve (AUC) statistics for the test molecules are tabulated below. Note that the meta model is a binary classifier [ACTIVE, INACTIVE] for ion activity; it does not attempt to classify molecules as [PARTIAL].
Results Summary
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from pylab import *
from sklearn.preprocessing import minmax_scale
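# Note: train_results (a DataFrame of per-molecule results) and all_active (a list of
# its [ACTIVE] probability-map column names) are assumed to have been defined in earlier
# cells of the original notebook; they are not created in this excerpt.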
def sort_map(column):
array = minmax_scale(train_results[column])
return array[np.argsort(-array)]
scale = 1.0
fig = plt.figure(num=None, figsize=(8 * scale, 6 * scale), dpi=80, facecolor='w', edgecolor='k')
for column in all_active: plt.plot(sort_map(column), label=column)
xlabel("molecules")
ylabel("normalized probability")
title(" Training Set [ACTIVE] Probability Maps")
legend(loc=1); # upper right corner
def mol_label_list(data_frame): # Function to produce rdkit mols and associated molecular labels
id = data_frame["ID"].tolist()
klass = data_frame["ACTUAL_500"].tolist()
potency = data_frame["EC50"].tolist()
ion_activity = data_frame["ION_ACTIVITY"].tolist()
map_prob = data_frame["M5_500_250"].tolist()
labels = []
for idx in range(len(id)):
labels.append("{} {} {} {} {:5.0f} ({:5.4f})".format(idx+1, id[idx],
klass[idx][0], ion_activity[idx][0],
potency[idx]*1000, map_prob[idx]))
smiles = data_frame["SMILE"].tolist()
mols = [Chem.MolFromSmiles(smile) for smile in smiles]
return mols, labels
from rdkit import Chem
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
from rdkit import rdBase
IPythonConsole.ipython_useSVG=True
"""
Explanation: Where the META MODELs are as follows:
DNN - A Deep Neural Network classifier [16, 32, 32, 16, 2] from the Keras toolkit. Cross-entropy loss function.
NBC - A Naive Bayes Classifier
SVMC - A support vector machine classifier.
LOGC - A Logistic classifier.
Modelling.
The Meta Models run on Linux and Windows under Python 2.7 and 3.5 (Mac untested):
Download (follow the readme setup) the entire directory tree from google drive here: https://github.com/kellerberrin/OSM-QSAR. Detailed instructions will also be posted so that the withheld molecules can be tested against the optimal model with minimum hassle. The pre-trained DRAGON classifier "ION_DRAGON_625.krs" must be in the model directories. In addition, for the "osm" model, the pre-trained meta model "ION_META_40.krs" should be in the "osm" model directory. The software should give sensible error messages if they are missing.
Make sure you have setup and activated the python anaconda environment as described in "readme.md".
For the optimal OSM meta model (--help for flag descriptions) the following cmd was used (the clean flag is optional it removes previous results from the model directory):
$python OSM_QSAR.py --classify osm --load ION_META --epoch 40 --train 0 [--clean]
For the svmc SKLearn meta model (--help for flag descriptions) the following cmd was used (the clean flag is optional it removes previous results from the model directory):
$python OSM_QSAR.py --classify osm_sk [--clean]
We visualize the training set probability maps by normalizing them to the unit interval [0, 1] and sorting them in descending order.
End of explanation
"""
ion_active = ec50_200_active.loc[train_results["ION_ACTIVITY"] == "ACTIVE"].sort_values("EC50")
mols, labels = mol_label_list(ion_active)
Draw.MolsToGridImage(mols,legends=labels,molsPerRow=4)
"""
Explanation: ION_ACTIVITY [ACTIVE] in EC50_200
End of explanation
"""
sorted_results = test_results.sort_values("M5_500_250", ascending=False)
mols, labels = mol_label_list(sorted_results)
Draw.MolsToGridImage(mols,legends=labels,molsPerRow=4)
"""
Explanation: ION_ACTIVITY [ACTIVE] Exemplar molecules that were added to the training set when moving from EC50_200 to EC50_500
Commentary
These molecules have the same Triazole arm as we noticed in the previous notebook when trying to classify the molecular ION_ACTIVITY using D840_ACTIVE (DRAGON). This structure is also well represented in the test molecules.
ION_ACTIVITY [ACTIVE] Exemplar molecules that were added to the training set when moving from EC50_500 to EC50_1000
The results of the EC50_500 classification of the test molecules.
End of explanation
"""
|
pfschus/fission_bicorrelation
|
methods/build_plot_bhp_e.ipynb
|
mit
|
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
"""
Explanation: Goal: Build and plot bhp_e
P. Schuster, University of Michigan
June 21, 2018
Load bhm_e
Build a function to sum across custom pairs for bhp_e
Plot it
Plot slices
End of explanation
"""
%load_ext autoreload
%autoreload 2
import os
import sys
sys.path.append('../scripts/')
import numpy as np
import bicorr_e
import bicorr
import bicorr_plot
import bicorr_math
import matplotlib.pyplot as plt
import matplotlib.colors
import seaborn as sns
sns.set(style='ticks')
"""
Explanation: <div id="toc"></div>
End of explanation
"""
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')
print(bhm_e.shape)
print(e_bin_edges.shape)
print(note)
"""
Explanation: Step 1) Load bhm_e
End of explanation
"""
det_df = bicorr.load_det_df()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
"""
Explanation: Load detector pair dictionaries.
End of explanation
"""
help(bicorr_e.build_bhp_e)
bhp_e, norm_factor = bicorr_e.build_bhp_e(bhm_e,e_bin_edges)
"""
Explanation: Step 2) Produce bhp_e
End of explanation
"""
bhp_e.shape
"""
Explanation: Look at subsets of pairs later. For now I'll assume it's going to work...
End of explanation
"""
help(bicorr_plot.bhp_e_plot)
bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, title='Plot of bhp_e', show_flag=True)
"""
Explanation: Step 3) Plot it
I'm going to make a function called bhp_e_plot based off of bhp_plot.
End of explanation
"""
bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, vmin=10,vmax=1e4,title='Plot of bhp_e', show_flag=True)
"""
Explanation: Try out the vmin and vmax input parameters.
End of explanation
"""
num_fissions = 2194651200.00
bhp_e, norm_factor = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,num_fissions=num_fissions)
bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, title='Normalized plot of bhp_e', show_flag=True)
"""
Explanation: Try adding num_fissions.
End of explanation
"""
zoom_range = [0,6]
bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, zoom_range=zoom_range, title='Normalized plot of bhp_e', show_flag=True)
zoom_range = [0,0.5]
bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, zoom_range=zoom_range, title='Normalized plot of bhp_e', show_flag=True)
"""
Explanation: Looks pretty good. Zoom in on the central area to see what's going on there.
End of explanation
"""
|
LDSSA/learning-units
|
units/07-data-diagnostics/examples/Diagnosing data problems example .ipynb
|
mit
|
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
"""
Explanation: Diagnosing the data issues:
End of explanation
"""
data = pd.read_csv('all_data.csv')
data.head(10)
"""
Explanation: The data you'll be exploring:
End of explanation
"""
duplicated_data = data.duplicated()
duplicated_data.head()
"""
Explanation: Duplicated data:
We seem to have a problem with some duplicated data. We can find them using Pandas' duplicated method.
End of explanation
"""
data[duplicated_data]
"""
Explanation: So this is actually a mask. We can now ask for the data where the mask applies:
End of explanation
"""
heights = data['height']
ages = data['age']
gender = data['gender']
"""
Explanation: Missing data:
End of explanation
"""
missing_height = heights.isnull()
missing_height.head()
"""
Explanation: How much missing data do we have for heights?
Make a mask of the missing entries using isnull.
End of explanation
"""
missing_height.sum()
"""
Explanation: In python, False evaluates to 0, and True to 1. So we can count the number of missing by doing:
End of explanation
"""
data[missing_height]
"""
Explanation: As before, we can use that mask on our original dataset, and see it:
End of explanation
"""
missing_ages = ages.isnull()
data[missing_ages]
"""
Explanation: How about age?
End of explanation
"""
gender.value_counts(dropna=False)
missing_gender = data['gender'].isnull()
data[missing_gender]
"""
Explanation: And gender?
Here we're going to do something clever. We're going to get the value_counts, but we're going to change the parameter dropna (drop nulls) to false, so that we keep them.
This would not be very useful with numerical data, but given that we know that age is categorical, we might as well:
End of explanation
"""
gender.value_counts(dropna=False).plot(kind='bar', rot=0)
"""
Explanation: But wait, we have another problem. We seem to have male and MALE:
End of explanation
"""
heights.hist(bins=40, figsize=(16,4))
plt.xlabel('Height')
plt.ylabel('Count')
"""
Explanation: Outliers:
What is the distribution of the heights?
Note: pyplot is used here to add the axis labels, which the plot would otherwise lack.
End of explanation
"""
def print_analysis(series):
for nr in range(1, 4):
upper_limit = series.mean() + (nr * series.std())
lower_limit = series.mean() - (nr * series.std())
over_range = series > upper_limit
percent_over_range = over_range.sum() / len(series) * 100
under_range = series < lower_limit
percent_under_range = under_range.sum() / len(series) * 100
in_range = (series < upper_limit) & (series > lower_limit)
percent_in_range = in_range.sum() / len(series) * 100
print('\nFor the range of %0.0f standard deviations:' % nr)
print(' Lower limit: %0.0f' % lower_limit)
print(' Percent under range: %0.1f%%' % percent_under_range)
print(' Upper limit: %0.0f' % upper_limit)
print(' Percent over range: %0.1f%%' % percent_over_range)
print(' Percent within range: %0.1f%%' % percent_in_range)
heights.hist(bins=20)
plt.xlabel('Height')
plt.ylabel('Count')
print_analysis(heights)
"""
Explanation: This was useful, we can see that there are some really tall people, and some quite small ones. The distribution also looks close to normal
Who is outside of 2 standard deviations?
Let's make a quick function to deal with this...
End of explanation
"""
heights[heights < heights.mean() - 2*heights.std()]
"""
Explanation: Looking at a few of these outliers:
End of explanation
"""
heights[heights > heights.mean() + 2*heights.std()]
"""
Explanation: Over:
End of explanation
"""
heights[heights < heights.mean() - 3*heights.std()]
heights[heights > heights.mean() + 3*heights.std()]
"""
Explanation: Note: the 131cm and 208cm actually seem quite plausible.
And outside 3 standard deviations?
Under:
End of explanation
"""
ages.hist(bins=40, figsize=(16,4))
plt.xlabel('Age')
plt.ylabel('Count')
print_analysis(ages)
"""
Explanation: How about the ages?
End of explanation
"""
ages.max()
"""
Explanation: Well, this is quite useless. The reason is that using standard deviations assumes that the distribution is normal, and we can clearly see it isn't.
Let's try to solve this with other means...
What's the biggest outlier we have?
End of explanation
"""
extreme_value = .99
ages.dropna().quantile(extreme_value)
"""
Explanation: What if we used percentiles?
End of explanation
"""
# under_extreme_value = ages, where ages is smaller than the extreme value:
under_extreme_value = ages[ages < ages.dropna().quantile(extreme_value)]
under_extreme_value.hist(bins=40, figsize=(16,4))
plt.xlabel('Age')
plt.ylabel('Count')
"""
Explanation: Note: we had to use .dropna() there, as otherwise pandas would raise a runtime error (try it!)
Now, let's take a look at what our data is under this extreme value:
End of explanation
"""
non_babies = under_extreme_value[under_extreme_value > 10]
non_babies.hist(bins=40, figsize=(16,4))
plt.xlabel('Age')
plt.ylabel('Count')
"""
Explanation: Now that looks a lot better. We can clearly identify that almost everyone is an adult, except the point on the extreme left.
End of explanation
"""
non_babies.max()
"""
Explanation: Starting to look better, what about the point on the far right?
End of explanation
"""
# Source: daniel-koehn/Theory-of-seismic-waves-II | 00_Intro_Python_Jupyter_notebooks/4_NumPy_Arrays_and_Plotting.ipynb | gpl-3.0
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi
End of explanation
"""
import numpy
"""
Explanation: Play with NumPy Arrays
Welcome to Lesson 4 of the first course module in "Engineering Computations." You have come a long way!
Remember, this course assumes no coding experience, so the first three lessons were focused on creating a foundation with Python programming constructs using essentially no mathematics. The previous lessons are:
Lesson 1: Interacting with Python
Lesson 2: Play with data in Jupyter
Lesson 3: Strings and lists in action
In engineering applications, most computing situations benefit from using arrays: they are sequences of data all of the same type. They behave a lot like lists, except for the constraint in the type of their elements. There is a huge efficiency advantage when you know that all elements of a sequence are of the same type—so equivalent methods for arrays execute a lot faster than those for lists.
The Python language is expanded for special applications, like scientific computing, with libraries. The most important library in science and engineering is NumPy, providing the n-dimensional array data structure (a.k.a, ndarray) and a wealth of functions, operations and algorithms for efficient linear-algebra computations.
In this lesson, you'll start playing with NumPy arrays and discover their power. You'll also meet another widely loved library: Matplotlib, for creating two-dimensional plots of data.
Importing libraries
First, a word on importing libraries to expand your running Python session. Because libraries are large collections of code and are for special purposes, they are not loaded automatically when you launch Python (or IPython, or Jupyter). You have to import a library using the import command. For example, to import NumPy, with all its linear-algebra goodness, we enter:
python
import numpy
Once you execute that command in a code cell, you can call any NumPy function using the dot notation, prepending the library name. For example, some commonly used functions are:
numpy.linspace()
numpy.ones()
numpy.zeros()
numpy.empty()
numpy.copy()
Follow the links to explore the documentation for these very useful NumPy functions!
Warning:
You will find a lot of sample code online that uses a different syntax for importing. They will do:
python
import numpy as np
All this does is create an alias for numpy with the shorter string np, so you then would call a NumPy function like this: np.linspace(). This is just an alternative way of doing it, for lazy people that find it too long to type numpy and want to save 3 characters each time. For the not-lazy, typing numpy is more readable and beautiful.
We like it better like this:
End of explanation
"""
numpy.array([3, 5, 8, 17])
"""
Explanation: Creating arrays
To create a NumPy array from an existing list of (homogeneous) numbers, we call numpy.array(), like this:
End of explanation
"""
numpy.ones(5)
numpy.zeros(3)
"""
Explanation: NumPy offers many ways to create arrays in addition to this. We already mentioned some of them above.
Play with numpy.ones() and numpy.zeros(): they create arrays full of ones and zeros, respectively. We pass as an argument the number of array elements we want.
End of explanation
"""
numpy.arange(4)
numpy.arange(2, 6)
numpy.arange(2, 6, 2)
numpy.arange(2, 6, 0.5)
"""
Explanation: Another useful one: numpy.arange() gives an array of evenly spaced values in a defined interval.
Syntax:
numpy.arange(start, stop, step)
where start by default is zero, stop is not inclusive, and the default
for step is one. Play with it!
End of explanation
"""
numpy.linspace(2.0, 3.0)
len(numpy.linspace(2.0, 3.0))
numpy.linspace(2.0, 3.0, 6)
numpy.linspace(-1, 1, 9)
"""
Explanation: numpy.linspace() is similar to numpy.arange(), but uses number of samples instead of a step size. It returns an array with evenly spaced numbers over the specified interval.
Syntax:
numpy.linspace(start, stop, num)
stop is included by default (it can be removed, read the docs), and num by default is 50.
End of explanation
"""
x_array = numpy.linspace(-1, 1, 9)
"""
Explanation: Array operations
Let's assign some arrays to variable names and perform some operations with them.
End of explanation
"""
y_array = x_array**2
print(y_array)
"""
Explanation: Now that we've saved it with a variable name, we can do some computations with the array. E.g., take the square of every element of the array, in one go:
End of explanation
"""
z_array = numpy.sqrt(y_array)
print(z_array)
"""
Explanation: We can also take the square root of a positive array, using the numpy.sqrt() function:
End of explanation
"""
add_array = x_array + y_array
print(add_array)
"""
Explanation: Now that we have different arrays x_array, y_array and z_array, we can do more computations, like add or multiply them. For example:
End of explanation
"""
mult_array = x_array * z_array
print(mult_array)
"""
Explanation: Array addition is defined element-wise, like when adding two vectors (or matrices). Array multiplication is also element-wise:
End of explanation
"""
x_array / y_array
"""
Explanation: We can also divide arrays, but you have to be careful about dividing by zero. Dividing zero by zero results in a nan, which stands for Not a Number (dividing a non-zero value by zero gives inf). NumPy will still perform the division, but will warn us about the problem with a RuntimeWarning.
Let's see how this might look:
End of explanation
"""
array_2d = numpy.array([[1, 2], [3, 4]])
print(array_2d)
"""
Explanation: Multidimensional arrays
2D arrays
NumPy can create arrays of N dimensions. For example, a 2D array is like a matrix, and is created from a nested list as follows:
End of explanation
"""
X = numpy.array([[1, 2], [3, 4]])
Y = numpy.array([[1, -1], [0, 1]])
"""
Explanation: 2D arrays can be added, subtracted, and multiplied:
End of explanation
"""
X + Y
"""
Explanation: The addition of these two matrices works exactly as you would expect:
End of explanation
"""
X * Y
"""
Explanation: What if we try to multiply arrays using the '*' operator?
End of explanation
"""
X @ Y
"""
Explanation: The multiplication using the '*' operator is element-wise. If we want to do matrix multiplication we use the '@' operator:
End of explanation
"""
numpy.dot(X, Y)
"""
Explanation: Or equivalently we can use numpy.dot():
End of explanation
"""
a = numpy.arange(24)
a_3D = numpy.reshape(a, (2, 3, 4))
print(a_3D)
"""
Explanation: 3D arrays
Let's create a 3D array by reshaping a 1D array. We can use numpy.reshape(), where we pass the array we want to reshape and the shape we want to give it, i.e., the number of elements in each dimension.
Syntax
numpy.reshape(array, newshape)
For example:
End of explanation
"""
numpy.shape(a_3D)
"""
Explanation: We can check for the shape of a NumPy array using the function numpy.shape():
End of explanation
"""
X
# Grab the element in the 1st row and 1st column
X[0, 0]
# Grab the element in the 1st row and 2nd column
X[0, 1]
"""
Explanation: Visualizing the dimensions of the a_3D array can be tricky, so here is a diagram that will help you to understand how the dimensions are assigned: each dimension is shown as a coordinate axis. For a 3D array, on the "x axis", we have the sub-arrays that themselves are two-dimensional (matrices). We have two of these 2D sub-arrays, in this case; each one has 3 rows and 4 columns. Study this sketch carefully, while comparing with how the array a_3D is printed out above.
<img src="images/3d_array_sketch.png" style="width: 400px;"/>
When we have multidimensional arrays, we can access slices of their elements by slicing on each dimension. This is one of the advantages of using arrays: we cannot do this with lists.
Let's access some elements of our 2D array called X.
End of explanation
"""
# Grab the 1st column
X[:, 0]
"""
Explanation: Exercises:
From the X array:
Grab the 2nd element in the 1st column.
Grab the 2nd element in the 2nd column.
Play with slicing on this array:
End of explanation
"""
# Grab the 1st row
X[0, :]
"""
Explanation: When we don't specify the start and/or end point in the slicing, the symbol ':' means "all". In the example above, we are telling NumPy that we want every element along the first dimension (all the rows) and only the 0-th index in the second dimension, i.e., the first column.
End of explanation
"""
a_3D
"""
Explanation: Exercises:
From the X array:
Grab the 2nd column.
Grab the 2nd row.
Let's practice with a 3D array.
End of explanation
"""
a_3D[:, :, 0]
"""
Explanation: If we want to grab the first column of both matrices in our a_3D array, we do:
End of explanation
"""
a_3D[:, 0:2, 0]
"""
Explanation: The line above is telling NumPy that we want:
first ':' : from the first dimension, grab all the elements (2 matrices).
second ':': from the second dimension, grab all the elements (all the rows).
'0' : from the third dimension, grab the first element (first column).
If we want the first 2 elements of the first column of both matrices:
End of explanation
"""
a_3D[0, 1, 1:3]
"""
Explanation: Below, from the first matrix in our a_3D array, we will grab the two middle elements (5,6):
End of explanation
"""
#import random library
import random
lst_1 = random.sample(range(100), 100)
lst_2 = random.sample(range(100), 100)
#print first 10 elements
print(lst_1[0:10])
print(lst_2[0:10])
"""
Explanation: Exercises:
From the array named a_3D:
Grab the two middle elements (17, 18) from the second matrix.
Grab the last row from both matrices.
Grab the elements of the 1st matrix that exclude the first row and the first column.
Grab the elements of the 2nd matrix that exclude the last row and the last column.
NumPy == Fast and Clean!
When we are working with numbers, arrays are a better option because the NumPy library has built-in functions that are optimized, and therefore faster than vanilla Python. Especially if we have big arrays. Besides, using NumPy arrays and exploiting their properties makes our code more readable.
For example, if we wanted to add the elements of 2 lists element-wise, we would need a for statement. If we want to add two NumPy arrays, we just use the addition '+' symbol!
Below, we will add two lists and two arrays (with random elements) and we'll compare the time it takes to compute each addition.
Element-wise sum of a Python list
Using the Python library random, we will generate two lists with 100 pseudo-random elements in the range [0,100), with no numbers repeated.
End of explanation
"""
%%time
res_lst = []
for i in range(100):
res_lst.append(lst_1[i] + lst_2[i])
print(res_lst[0:10])
"""
Explanation: We need to write a for statement, appending the result of the element-wise sum into a new list we call res_lst.
For timing, we can use the IPython "magic" %%time. Writing the command %%time at the beginning of a code cell will give us the time it takes to execute all the code in that cell.
End of explanation
"""
arr_1 = numpy.random.randint(0, 100, size=100)
arr_2 = numpy.random.randint(0, 100, size=100)
#print first 10 elements
print(arr_1[0:10])
print(arr_2[0:10])
"""
Explanation: Element-wise sum of NumPy arrays
In this case, we generate arrays with random integers using the NumPy function numpy.random.randint(). The arrays we generate with this function are not going to be like the lists: in this case we'll have 100 elements in the range [0, 100) but they can repeat. Our goal is to compare the time it takes to compute addition of a list or an array of numbers, so all that matters is that the arrays and the lists are of the same length and type (integers).
End of explanation
"""
%%time
arr_res = arr_1 + arr_2
"""
Explanation: Now we can use the %%time cell magic, again, to see how long it takes NumPy to compute the element-wise sum.
End of explanation
"""
xarray = numpy.linspace(0, 2, 41)
print(xarray)
pow2 = xarray**2
pow3 = xarray**3
pow_half = numpy.sqrt(xarray)
"""
Explanation: Notice that in the case of arrays, the code not only is more readable (just one line of code), but it is also faster than with lists. This time advantage will be larger with bigger arrays/lists.
(Your timing results may vary from the ones we show in this notebook, because you will be computing on a different machine.)
Exercise
Try the comparison between lists and arrays, using bigger arrays; for example, of size 10,000.
Repeat the analysis, but now computing the operation that raises each element of an array/list to the power two. Use arrays of 10,000 elements.
Time to Plot
You will love the Python library Matplotlib! You'll learn here about its module pyplot, which makes line plots.
We need some data to plot. Let's define a NumPy array, compute derived data using its square, cube and square root (element-wise), and plot these values with the original array in the x-axis.
End of explanation
"""
from matplotlib import pyplot
%matplotlib inline
"""
Explanation: To plot the resulting arrays as a function of the original one (xarray) in the x-axis, we need to import the module pyplot from Matplotlib.
End of explanation
"""
#Plot x^2
pyplot.plot(xarray, pow2, color='k', linestyle='-', label='square')
#Plot x^3
pyplot.plot(xarray, pow3, color='k', linestyle='--', label='cube')
#Plot sqrt(x)
pyplot.plot(xarray, pow_half, color='k', linestyle=':', label='square root')
#Plot the legends in the best location
pyplot.legend(loc='best')
"""
Explanation: The command %matplotlib inline is there to get our plots inside the notebook (instead of a pop-up window, which is the default behavior of pyplot).
We'll use the pyplot.plot() function, specifying the line color ('k' for black) and line style ('-', '--' and ':' for continuous, dashed and dotted line), and giving each line a label. Note that the values for color, linestyle and label are given in quotes.
End of explanation
"""
#Plot x^2
pyplot.plot(xarray, pow2, color='red', linestyle='-', label='$x^2$')
#Plot x^3
pyplot.plot(xarray, pow3, color='green', linestyle='-', label='$x^3$')
#Plot sqrt(x)
pyplot.plot(xarray, pow_half, color='blue', linestyle='-', label=r'$\sqrt{x}$')
#Plot the legends in the best location
pyplot.legend(loc='best');
"""
Explanation: To illustrate other features, we will plot the same data, but varying the colors instead of the line style. We'll also use LaTeX syntax to write formulas in the labels. If you want to know more about LaTeX syntax, there is a quick guide to LaTeX available online.
Adding a semicolon (';') to the last line in the plotting code block prevents that ugly output, like <matplotlib.legend.Legend at 0x7f8c83cc7898>. Try it.
End of explanation
"""
# Source: turbomanage/training-data-analyst | courses/fast-and-lean-data-science/04_Keras_Flowers_transfer_learning_playground.ipynb | apache-2.0
import os, sys, math
import numpy as np
from matplotlib import pyplot as plt
if 'google.colab' in sys.modules: # Colab-only Tensorflow version selector
%tensorflow_version 2.x
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
AUTO = tf.data.experimental.AUTOTUNE
"""
Explanation: Training on GPU will be fine for transfer learning as it is not a very demanding process.
Imports
End of explanation
"""
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'
IMAGE_SIZE = [192, 192]
BATCH_SIZE = 64 # 128 works on GPU too but comes very close to the memory limit of the Colab GPU
EPOCHS = 5
VALIDATION_SPLIT = 0.19
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)
# splitting data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames)))
# 3670 is the total number of images in the flowers dataset; steps are derived from the train/validation file split
validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps))
#@title display utilities [RUN ME]
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break;
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', shoud be ' if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image)
plt.title(title, fontsize=16, color='red' if red else 'black')
return subplot+1
def display_9_images_from_dataset(dataset):
subplot=331
plt.figure(figsize=(13,13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[labels[i]]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break;
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot=331
plt.figure(figsize=(13,13))
classes = np.argmax(predictions, axis=-1)
for i, image in enumerate(images):
title, correct = title_from_label_and_target(classes[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break;
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
"""
Explanation: Configuration
End of explanation
"""
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example['image'], channels=3)
image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range
image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU
class_label = example['class']
return image, class_label
def load_dataset(filenames):
# read from TFRecords. For optimal performance, read from multiple
# TFRecord files at once and set the option experimental_deterministic = False
# to allow order-altering optimizations.
option_no_order = tf.data.Options()
option_no_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)
dataset = dataset.with_options(option_no_order)
dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)
return dataset
display_9_images_from_dataset(load_dataset(training_filenames))
"""
Explanation: Read images and labels from TFRecords
End of explanation
"""
def get_batched_dataset(filenames, train=False):
dataset = load_dataset(filenames)
dataset = dataset.cache() # This dataset fits in RAM
if train:
# Best practices for Keras:
# Training dataset: repeat then batch
# Evaluation dataset: do not repeat
dataset = dataset.repeat()
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)
# should shuffle too but this dataset was well shuffled on disk already
return dataset
# source: Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
# instantiate the datasets
training_dataset = get_batched_dataset(training_filenames, train=True)
validation_dataset = get_batched_dataset(validation_filenames, train=False)
"""
Explanation: training and validation datasets
End of explanation
"""
pretrained_model = tf.keras.applications.MobileNetV2(input_shape=[*IMAGE_SIZE, 3], include_top=False)
#pretrained_model = tf.keras.applications.VGG16(weights='imagenet', include_top=False ,input_shape=[*IMAGE_SIZE, 3])
#pretrained_model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, input_shape=[*IMAGE_SIZE, 3])
#pretrained_model = tf.keras.applications.MobileNet(weights='imagenet', include_top=False, input_shape=[*IMAGE_SIZE, 3])
pretrained_model.trainable = False
model = tf.keras.Sequential([
#
# YOUR CODE HERE
#
])
model.compile(
#
# YOUR CODE HERE
#
)
model.summary()
"""
Explanation: Model [WORK REQUIRED]
Start with a dummy single-layer model using one dense layer:
Use a tf.keras.Sequential model. The constructor takes a list of layers.
First, Flatten() the pixel values of the input image to a 1D vector so that a dense layer can consume it:<br/>
tf.keras.layers.Flatten(input_shape=[*IMAGE_SIZE, 3]) # the first layer must also specify input shape
Add a single tf.keras.layers.Dense layer with softmax activation and the correct number of units (hint: 5 classes of flowers):<br/>
tf.keras.layers.Dense(5, activation='softmax')
add the last bits and pieces with model.compile(). For a classifier, you need 'sparse_categorical_crossentropy' loss, 'accuracy' in metrics and you can use the 'adam' optimizer.
==>Train this model: not very good... but all the plumbing is in place.
Instead of trying to figure out a better architecture, we will adapt a pretrained model to our data. Please remove all your layers to restart from scratch.
Instantiate a pre-trained model from tf.keras.applications.*
You do not need its final softmax layer (include_top=False) because you will be adding your own. This code is already written in the cell below.<br/>
Use pretrained_model as your first "layer" in your Sequential model.
Follow with tf.keras.layers.Flatten() or tf.keras.layers.GlobalAveragePooling2D() to turn the data from the pretrained model into a flat 1D vector.
Add your tf.keras.layers.Dense layer with softmax activation and the correct number of units (hint: 5 classes of flowers).
==>Train the model: you should be able to reach above 75% accuracy by training for 10 epochs
You can try adding a second dense layer. Use 'relu' activation on all dense layers but the last one which must be 'softmax'. An additional layer adds trainable weights. It is unlikely to do much good here though, because our dataset is too small.
This technique is called "transfer learning". The pretrained model has been trained on a different dataset but its layers have still learned to recognize bits and pieces of images that can be useful for flowers. You are retraining the last layer only, the pretrained weights are frozen. With far fewer weights to adjust, it works with less data.
End of explanation
"""
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset)
print(history.history.keys())
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
"""
Explanation: Training
End of explanation
"""
# random input: execute multiple times to change results
flowers, labels = dataset_to_numpy_util(load_dataset(validation_filenames).skip(np.random.randint(300)), 9)
predictions = model.predict(flowers, steps=1)
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
display_9_images_with_predictions(flowers, predictions, labels)
"""
Explanation: Predictions
End of explanation
"""
# Source: ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/hadgem3-gc31-ll/atmos.ipynb | gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-LL
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
spencer2211/deep-learning
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
    word_counts = Counter(text) # Outputs a dict {'word': #occurrences}
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(sorted_vocab, 0)}
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab, 0)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {'.': "||Period||",
',': "||Comma||",
'"': "||Quotation_Mark||",
';': "||Semicolon||",
                  '!': "||Exclamation_Mark||",
'?': "||Question_Mark||",
'(': "||Left_Parentheses||",
')': "||Right_Parentheses||",
'--': "||Dash||",
'\n': "||Return||"}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
print(len(int_to_vocab))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build an RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
Input = tf.placeholder(tf.int32, [None, None], name='input')
Targets = tf.placeholder(tf.int32, [None, None], name='targets')
LearningRate = tf.placeholder(tf.float32, name='learning_rate')
return Input, Targets, LearningRate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
Cell = tf.contrib.rnn.MultiRNNCell([lstm])
Initial_State = tf.identity(Cell.zero_state(batch_size, tf.float32), name='initial_state')
return Cell, Initial_State
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
Embedded = tf.nn.embedding_lookup(embedding, input_data)
return Embedded
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
    Create an RNN using an RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
Outputs, Final_State = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
Final_State = tf.identity(Final_State, name='final_state')
return Outputs, Final_State
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
rnn_outputs, FinalState = build_rnn(cell, embed)
Logits = tf.contrib.layers.fully_connected(rnn_outputs,
vocab_size,
activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(mean=0.0,
stddev=0.01),
biases_initializer=tf.zeros_initializer())
return Logits, FinalState
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length])
ydata = np.append(ydata, xdata[0])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
Batches = np.array(list(zip(x_batches, y_batches)))
return Batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
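One hedged way to see the same wrap-around in code (a small sketch, not part of the original implementation) is to roll the input array left by one position to obtain the targets:
```
import numpy as np
xdata = np.array([1, 2, 3, 4, 5, 6])
ydata = np.roll(xdata, -1)
print(ydata)  # [2 3 4 5 6 1] -- the last target wraps around to the first input value
```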
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
with loaded_graph.as_default() as g:
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
picker = np.random.choice(len(int_to_vocab), 1, p=probabilities)[0]
picked_word = int_to_vocab.get(picker)
return picked_word
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
darioflute/CS4A
|
Lecture-python_intro.ipynb
|
gpl-3.0
|
%%writefile hello-world.py
#!/usr/bin/env python
print ("Hello world!")
ls hello-world*.py
cat hello-world.py
!python hello-world.py
"""
Explanation: Introduction to Python programming
Python program files
Python code is usually stored in text files with the file ending ".py":
myprogram.py
Every line in a Python program file is assumed to be a Python statement, or part thereof.
The only exception is comment lines, which start with the character #, and blocks of text enclosed between two series of triple quotes (''').
To run our Python program from the command line we use:
$ python myprogram.py
On UNIX systems it is common to define the path to the interpreter on the first line of the program (note that this is a comment line as far as the Python interpreter is concerned):
#!/usr/bin/env python
If we do, and if we additionally make the script file executable (chmod +x myprogram.py), we can run the program like this:
$ myprogram.py
Example:
End of explanation
"""
%%writefile hello-world-in-german.py
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
print(" Meine Völker !")
!python hello-world-in-german.py
"""
Explanation: Character encoding
The standard character encoding is ASCII, but we can use any other encoding, for example UTF-8. To specify that UTF-8 is used we include the special line
# -*- coding: UTF-8 -*-
at the top of the file.
End of explanation
"""
import math
"""
Explanation: Other than these two optional lines in the beginning of a Python code file, no additional code is required for initializing a program.
Modules
Most of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
References
The Python Language Reference: http://docs.python.org/2/reference/index.html
The Python Standard Library: http://docs.python.org/2/library/
To use a module in a Python program it first has to be imported. A module can be imported using the import statement. For example, to import the module math, which contains many standard mathematical functions, we can do:
End of explanation
"""
import math
x = math.cos(2 * math.pi)
print(x)
"""
Explanation: This includes the whole module and makes it available for use later in the program. For example, we can do:
End of explanation
"""
from math import *
x = cos(2 * pi)
print(x)
"""
Explanation: Alternatively, we can choose to import all symbols (functions and variables) from a module into the current namespace, so that we don't need to use the prefix "math." every time we use something from the math module:
End of explanation
"""
from math import cos, pi
x = cos(2 * pi)
print(x)
"""
Explanation: This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This would eliminate potentially confusing problems with namespace collisions.
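A related pattern (a small sketch, not one of the examples in this lesson) is to import a module under a short alias, which keeps the separate namespace while reducing typing:
import math as m
x = m.cos(2 * m.pi)
print(x)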
As a third alternative, we can choose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character *:
End of explanation
"""
import math
print(dir(math))
"""
Explanation: Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the dir function:
End of explanation
"""
help(math.log)
log(10)
log(10, 2)
"""
Explanation: Using the help function we can get a description of each function (well, almost: not every function has a docstring, as these descriptions are technically called, but the vast majority of functions are documented this way).
End of explanation
"""
# variable assignments
x = 1.0
my_variable = 12.2
"""
Explanation: We can also use the help function directly on modules: Try
help(math)
Some very useful modules from the Python standard library are os, sys, math, shutil, re, subprocess, multiprocessing, threading.
Complete lists of standard modules for Python 2 and Python 3 are available at http://docs.python.org/2/library/ and http://docs.python.org/3/library/, respectively.
Variables and types
Symbol names
Variable names in Python can contain alphanumerical characters a-z, A-Z, 0-9 and some special characters such as _. Normal variable names must start with a letter.
By convention, variable names start with a lower-case letter, and Class names start with a capital letter.
In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Note: Be aware of the keyword lambda, which could easily be a natural variable name in a scientific program. But being a keyword, it cannot be used as a variable name.
Assignment
The assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable:
End of explanation
"""
type(x)
"""
Explanation: Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
End of explanation
"""
x = 1
type(x)
"""
Explanation: If we assign a new value to a variable, its type can change.
End of explanation
"""
print(y)
"""
Explanation: If we try to use a variable that has not yet been defined, we get a NameError:
End of explanation
"""
# integers
x = 1
type(x)
# float
x = 1.0
type(x)
# boolean
b1 = True
b2 = False
type(b1)
# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)
print(x)
print(x.real, x.imag)
"""
Explanation: Fundamental types
End of explanation
"""
import types
# print all types defined in the `types` module
print(dir(types))
x = 1.0
# check if the variable x is a float
type(x) is float
# check if the variable x is an int
type(x) is int
"""
Explanation: Type utility functions
The module types contains a number of type name definitions that can be used to test if variables are of certain types:
End of explanation
"""
isinstance(x, float)
"""
Explanation: We can also use the isinstance method for testing types of variables:
End of explanation
"""
x = 1.5
print(x, type(x))
x = int(x)
print(x, type(x))
z = complex(x)
print(z, type(z))
x = float(z)
"""
Explanation: Type casting
End of explanation
"""
y = bool(z.real)
print(z.real, " -> ", y, type(y))
y = bool(z.imag)
print(z.imag, " -> ", y, type(y))
"""
Explanation: Complex variables cannot be cast to floats or integers. We need to use z.real or z.imag to extract the part of the complex number we want:
End of explanation
"""
1 + 2, 1 - 2, 1 * 2, 1 / 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
# Integer division of float numbers
3.0 // 2.0
# Note! The power operator in Python isn't ^, but **
2 ** 2
"""
Explanation: Operators and comparisons
Most operators and comparisons in Python work as one would expect:
Arithmetic operators +, -, *, /, // (integer division), '**' power
End of explanation
"""
True and False
not False
True or False
"""
Explanation: Note: The / operator always performs a floating point division in Python 3.x.
This is not true in Python 2.x, where the result of / is always an integer if the operands are integers.
To be more specific, 1/2 = 0.5 (float) in Python 3.x, and 1/2 = 0 (int) in Python 2.x (but 1.0/2 = 0.5 in Python 2.x).
The boolean operators are spelled out as the words and, not, or.
End of explanation
"""
2 > 1, 2 < 1
2 > 2, 2 < 2
2 >= 2, 2 <= 2
# equality
[1,2] == [1,2]
# objects identical?
l1 = l2 = [1,2]
l1 is l2
"""
Explanation: Comparison operators >, <, >= (greater or equal), <= (less or equal), == equality, is identical.
End of explanation
"""
s = "Hello world"
type(s)
# length of the string: the number of characters
len(s)
# replace a substring in a string with something else
s2 = s.replace("world", "test")
print(s2)
"""
Explanation: Compound types: Strings, List and dictionaries
Strings
Strings are the variable type that is used for storing text messages.
End of explanation
"""
s[0]
"""
Explanation: We can index a character in a string using []:
End of explanation
"""
s[0:5]
s[4:5]
"""
Explanation: We can extract a part of a string using the syntax [start:stop], which extracts characters between index start and stop -1 (the character at index stop is not included):
End of explanation
"""
s[:5]
s[6:]
s[:]
"""
Explanation: If we omit either (or both) of start or stop from [start:stop], the default is the beginning and the end of the string, respectively:
End of explanation
"""
s[::1]
s[::2]
"""
Explanation: We can also define the step size using the syntax [start:end:step] (the default value for step is 1, as we saw above):
End of explanation
"""
print("str1", "str2", "str3") # The print statement concatenates strings with a space
print("str1", 1.0, False, -1j) # The print statements converts all arguments to strings
print("str1" + "str2" + "str3") # strings added with + are concatenated without space
print("value = %f" % 1.0) # we can use C-style string formatting
# this formatting creates a string
s2 = "value1 = %.2f. value2 = %d" % (3.1415, 1.5)
print(s2)
# alternative, more intuitive way of formatting a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)
print(s3)
"""
Explanation: This technique is called slicing. Read more about the syntax here: http://docs.python.org/release/2.7.3/library/functions.html?highlight=slice#slice
Python has a very rich set of functions for text processing. See for example http://docs.python.org/2/library/string.html for more information.
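As a quick, hedged illustration (these particular calls are just examples, not taken from the original text), a few commonly used string methods:
s = "Hello world"
print(s.upper())        # 'HELLO WORLD'
print(s.split())        # ['Hello', 'world']
print(s.find("world"))  # 6, the index where the substring starts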
String formatting examples
End of explanation
"""
l = [1,2,3,4]
print(type(l))
print(l)
"""
Explanation: List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is [...]:
End of explanation
"""
print(l)
print(l[1:3])
print(l[::2])
"""
Explanation: We can use the same slicing techniques to manipulate lists as we could use on strings:
End of explanation
"""
l[0]
"""
Explanation: Indexing starts at 0!
End of explanation
"""
l = [1, 'a', 1.0, 1-1j]
print(l)
"""
Explanation: Elements in a list do not all have to be of the same type:
End of explanation
"""
nested_list = [1, [2, [3, [4, [5]]]]]
nested_list
"""
Explanation: Python lists can be inhomogeneous and arbitrarily nested:
End of explanation
"""
start = 10
stop = 30
step = 2
range(start, stop, step)
# in python 3 range generates an iterator, which can be converted to a list using 'list(...)'.
# It has no effect in python 2
list(range(start, stop, step))
list(range(-10, 10))
s
# convert a string to a list by type casting:
s2 = list(s)
s2
# sorting lists
s2.sort()
print(s2)
"""
Explanation: Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function:
End of explanation
"""
# create a new empty list
l = []
# add an elements using `append`
l.append("A")
l.append("d")
l.append("d")
print(l)
"""
Explanation: Indexing in figures
Adding, inserting, modifying, and removing elements from lists
End of explanation
"""
l[1] = "p"
l[2] = "p"
print(l)
l[1:3] = ["d", "d"]
print(l)
"""
Explanation: We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
End of explanation
"""
l.insert(0, "i")
l.insert(1, "n")
l.insert(2, "s")
l.insert(3, "e")
l.insert(4, "r")
l.insert(5, "t")
print(l)
"""
Explanation: Insert an element at a specific index using insert:
End of explanation
"""
l.remove("A")
print(l)
"""
Explanation: Remove the first element with a specific value using remove:
End of explanation
"""
del l[7]
del l[6]
print(l)
"""
Explanation: Remove an element at a specific location using del:
End of explanation
"""
point = (10, 20)
print(point, type(point))
point = 10, 20
print(point, type(point))
"""
Explanation: See help(list) for more details, or read the online documentation
Tuples
Tuples are like lists, except that they cannot be modified once created, that is they are immutable.
Tuples are used a lot as results of functions when several variables are returned.
In Python, tuples are created using the syntax (..., ..., ...), or even ..., ...:
End of explanation
"""
x, y = point
print("x =", x)
print("y =", y)
"""
Explanation: We can unpack a tuple by assigning it to a comma-separated list of variables:
End of explanation
"""
point[0] = 20
"""
Explanation: If we try to assign a new value to an element in a tuple we get an error:
End of explanation
"""
params = {"parameter1" : 1.0,
"parameter2" : 2.0,
"parameter3" : 3.0,}
print(type(params))
print(params)
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
params["parameter1"] = "A"
params["parameter2"] = "B"
# add a new entry
params["parameter4"] = "D"
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
print("parameter4 = " + str(params["parameter4"]))
"""
Explanation: Dictionaries
Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is {key1 : value1, ...}:
End of explanation
"""
statement1 = False
statement2 = False
if statement1:
print("statement1 is True")
elif statement2:
print("statement2 is True")
else:
print("statement1 and statement2 are False")
"""
Explanation: Control Flow
Conditional statements: if, elif, else
The Python syntax for conditional execution of code uses the keywords if, elif (else if), else:
End of explanation
"""
statement1 = statement2 = True
if statement1:
if statement2:
print("both statement1 and statement2 are True")
# Bad indentation!
if statement1:
if statement2:
print("both statement1 and statement2 are True") # this line is not properly indented
statement1 = False
if statement1:
print("printed if statement1 is True")
print("still inside the if block")
if statement1:
print("printed if statement1 is True")
print("now outside the if block")
"""
Explanation: For the first time, here we encounter a peculiar and unusual aspect of the Python programming language: program blocks are defined by their indentation level.
Compare to the equivalent C code:
if (statement1)
{
printf("statement1 is True\n");
}
else if (statement2)
{
printf("statement2 is True\n");
}
else
{
printf("statement1 and statement2 are False\n");
}
In C, blocks are defined by the enclosing curly brackets { and }, and the level of indentation (white space before the code statements) does not matter (it is completely optional).
But in Python, the extent of a code block is defined by the indentation level (usually a tab or four spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors.
Examples:
End of explanation
"""
for x in [1,2,3]:
print(x)
"""
Explanation: Loops
In Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is:
for loops:
End of explanation
"""
for x in range(4): # by default range start at 0
print(x)
"""
Explanation: The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. For example:
End of explanation
"""
for x in range(-3,3):
print(x)
for word in ["scientific", "computing", "with", "python"]:
print(word)
"""
Explanation: Note: range(4) does not include 4 !
End of explanation
"""
for key, value in params.items():
print(key + " = " + str(value))
"""
Explanation: To iterate over key-value pairs of a dictionary:
End of explanation
"""
for idx, x in enumerate(range(-3,3)):
print(idx, x)
"""
Explanation: Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this:
End of explanation
"""
l1 = [x**2 for x in range(0,5)]
print(l1)
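# An extra hedged sketch (not from the original lesson): a comprehension can
# also filter elements with an optional if clause.
l2 = [x**2 for x in range(0,10) if x % 2 == 0]
print(l2)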
"""
Explanation: List comprehensions: Creating lists using for loops:
A convenient and compact way to initialize lists:
End of explanation
"""
i = 0
while i < 5:
print(i)
i = i + 1
print("done")
"""
Explanation: while loops:
End of explanation
"""
def func0():
print("test")
func0()
"""
Explanation: Note that the print("done") statement is not part of the while loop body because of the difference in indentation.
Functions
A function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon :. The following code, with one additional level of indentation, is the function body.
End of explanation
"""
def func1(s):
"""
Print a string 's' and tell how many characters it has
"""
print(s + " has " + str(len(s)) + " characters")
help(func1)
func1("test")
"""
Explanation: Optionally, but highly recommended, we can define a so-called "docstring", which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
End of explanation
"""
def square(x):
"""
Return the square of x.
"""
return x ** 2
square(4)
"""
Explanation: Functions that return a value use the return keyword:
End of explanation
"""
def powers(x):
"""
Return a few powers of x.
"""
return x ** 2, x ** 3, x ** 4
powers(3)
x2, x3, x4 = powers(3)
print(x3)
"""
Explanation: We can return multiple values from a function using tuples (see above):
End of explanation
"""
def myfunc(x, p=2, debug=False):
if debug:
print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
return x**p
"""
Explanation: Default argument and keyword arguments
In a definition of a function, we can give default values to the arguments the function takes:
End of explanation
"""
myfunc(5)
myfunc(5, debug=True)
"""
Explanation: If we don't provide a value for the debug argument when calling the function myfunc, it defaults to the value provided in the function definition:
End of explanation
"""
myfunc(p=3, debug=True, x=7)
"""
Explanation: If we explicitly list the names of the arguments in the function call, they do not need to come in the same order as in the function definition. These are called keyword arguments, and they are often very useful in functions that take a lot of optional arguments.
End of explanation
"""
f1 = lambda x: x**2
# is equivalent to
def f2(x):
return x**2
f1(2), f2(2)
"""
Explanation: Unnamed functions (lambda function)
In Python we can also create unnamed functions, using the lambda keyword:
End of explanation
"""
# map is a built-in python function
map(lambda x: x**2, range(-3,4))
# in python 3 we can use `list(...)` to convert the iterator to an explicit list
list(map(lambda x: x**2, range(-3,4)))
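# Another hedged example (not from the original lesson): lambdas are also handy
# as the key argument of sorted, here sorting words by their length.
print(sorted(["python", "is", "fun"], key=lambda w: len(w)))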
"""
Explanation: This technique is useful for example when we want to pass a simple function as an argument to another function, like this:
End of explanation
"""
class Point:
"""
Simple class for representing a point in a Cartesian coordinate system.
"""
def __init__(self, x, y):
"""
Create a new Point at x, y.
"""
self.x = x
self.y = y
def translate(self, dx, dy):
"""
Translate the point by dx and dy in the x and y direction.
"""
self.x += dx
self.y += dy
def __str__(self):
return("Point at [%f, %f]" % (self.x, self.y))
"""
Explanation: Classes
Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object.
In Python a class can contain attributes (variables) and methods (functions).
A class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class).
Each class method should have an argument self as its first argument. This object is a self-reference.
Some class method names have special meaning, for example:
__init__: The name of the method that is invoked when the object is first created.
__str__ : A method that is invoked when a simple string representation of the class is needed, as for example when printed.
There are many more, see http://docs.python.org/2/reference/datamodel.html#special-method-names
End of explanation
"""
p1 = Point(0, 0) # this will invoke the __init__ method in the Point class
print(p1) # this will invoke the __str__ method
"""
Explanation: To create a new instance of a class:
End of explanation
"""
p2 = Point(1, 1)
p1.translate(0.25, 1.5)
print(p1)
print(p2)
"""
Explanation: To invoke a class method in the class instance p:
End of explanation
"""
%%file mymodule.py
"""
Example of a python module. Contains a variable called my_variable,
a function called my_function, and a class called MyClass.
"""
my_variable = 0
def my_function():
"""
Example function
"""
return my_variable
class MyClass:
"""
Example class.
"""
def __init__(self):
self.variable = my_variable
def set_variable(self, new_value):
"""
Set self.variable to a new value
"""
self.variable = new_value
def get_variable(self):
return self.variable
"""
Explanation: Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables.
That is one of the nice things about object-oriented design: code such as functions and related variables are grouped in separate and independent entities.
Modules
One of the most important concepts in good programming is to reuse code and avoid repetitions.
The idea is to write functions and classes with a well-defined purpose and scope, and reuse these instead of repeating similar code in different part of a program (modular programming). The result is usually that readability and maintainability of a program is greatly improved. What this means in practice is that our programs have fewer bugs, are easier to extend and debug/troubleshoot.
Python supports modular programming at different levels. Functions and classes are examples of tools for low-level modular programming. Python modules are a higher-level modular programming construct, where we can collect related variables, functions and classes in a module. A python module is defined in a python file (with file-ending .py), and it can be made accessible to other Python modules and programs using the import statement.
Consider the following example: the file mymodule.py contains simple example implementations of a variable, function and a class:
End of explanation
"""
import mymodule
"""
Explanation: We can import the module mymodule into our Python program using import:
End of explanation
"""
help(mymodule)
mymodule.my_variable
mymodule.my_function()
my_class = mymodule.MyClass()
my_class.set_variable(10)
my_class.get_variable()
"""
Explanation: Use help(module) to get a summary of what the module provides:
End of explanation
"""
reload(mymodule) # works only in python 2
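# In Python 3 the built-in reload is gone; the equivalent call lives in the
# importlib module (a sketch, requires Python 3.4+):
import importlib
importlib.reload(mymodule)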
"""
Explanation: If we make changes to the code in mymodule.py, we need to reload it using reload:
End of explanation
"""
raise Exception("description of the error")
"""
Explanation: Exceptions
In Python errors are managed with a special language construct called "Exceptions". When errors occur, exceptions can be raised, which interrupts the normal program flow and falls back to the closest enclosing try-except statement in the code.
To generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.
End of explanation
"""
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except:
print("Caught an exception")
"""
Explanation: A typical use of exceptions is to abort functions when some error condition occurs, for example:
def my_function(arguments):
if not verify(arguments):
raise Exception("Invalid arguments")
# rest of the code goes here
To gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the try and except statements:
try:
# normal code goes here
except:
# code for error handling goes here
# this code is not executed unless the code
# above generated an error
For example:
End of explanation
"""
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except Exception as e:
print("Caught an exception:" + str(e))
"""
Explanation: To get information about the error, we can access the Exception class instance that describes the exception by using for example:
except Exception as e:
End of explanation
"""
%load_ext version_information
%version_information
"""
Explanation: Further reading
http://www.python.org - The official web page of the Python programming language.
http://www.python.org/dev/peps/pep-0008 - Style guide for Python programming. Highly recommended.
http://www.greenteapress.com/thinkpython/ - A free book on Python programming.
Python Essential Reference - A good reference book on Python programming.
Versions
End of explanation
"""
|
mspcvsp/cincinnati311Data
|
ComputeCincinnatiNeighborhoodCentroids.ipynb
|
gpl-3.0
|
import findspark
import numpy as np
import os
import re
import subprocess
import shapefile
findspark.init()
import pyspark
sc = pyspark.SparkContext()
sqlContext = pyspark.sql.SQLContext(sc)
"""
Explanation: Initialize software environment
Initialize Spark Environment for Juypter Notebook
End of explanation
"""
output_shapefile = "../CagisOpenDataQuarterly/neighborhood.shp"
if not os.path.exists(output_shapefile):
sys_command = 'ogr2ogr ' + output_shapefile + ' ' +\
'"../CagisOpenDataQuarterly/cc neighbndy.shp" -t_srs EPSG:4326'
process = subprocess.Popen(sys_command,
shell=True,
stdout=subprocess.PIPE)
process.wait()
print process.returncode
"""
Explanation: Reproject Cincinnati Area Geographic Information System (CAGIS) cc neighbndy.shp
Cagis Homepage
Reproject shapefile with ogr2ogr
Run system command with subprocess library
End of explanation
"""
def init_neighborhoods(readerobj):
""" Initializes a dictionary that stores a description of
City of Cincinnati neighborhoods
Args:
readerobj: shapefile module Reader class object handle
Returns:
neighborhood: Dictionary that stores a description of
City of Cincinnati neighborhoods"""
shapes = readerobj.shapes()
fieldnames = [re.sub('_', '', elem[0].lower())
for elem in readerobj.fields[1:]]
neighborhood = {}
for idx in range(0, readerobj.numRecords):
row_dict = dict(zip(fieldnames, readerobj.record(idx)))
row_dict['boundingbox'] = np.array(shapes[idx].bbox)
row_dict['centroid'] = [np.mean(row_dict['boundingbox'][0:4:2]),
np.mean(row_dict['boundingbox'][1:4:2])]
cur_neighborhood = row_dict.pop('neigh').lower()
cur_neighborhood = re.sub('[-\s+]','', cur_neighborhood)
neighborhood[cur_neighborhood] = row_dict
return neighborhood
readerobj = shapefile.Reader(output_shapefile)
neighborhood = init_neighborhoods(readerobj)
from pyspark.mllib.feature import Vectors
from pyspark.mllib.linalg import DenseVector
neighborhood_centroid = []
for key in neighborhood.keys():
neighborhood_centroid.append(Vectors.dense(neighborhood[key]['centroid']))
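# A hedged sketch (assumption: the SparkContext sc created earlier is still
# available): the dense centroid vectors can be parallelized into an RDD,
# e.g. as input for MLlib clustering.
centroid_rdd = sc.parallelize(neighborhood_centroid)
print centroid_rdd.count()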
"""
Explanation: Compute Cincinnati neighborhood centroids
Read shapefile
End of explanation
"""
|
TomTranter/OpenPNM
|
examples/simulations/Transient Fickian Diffusion.ipynb
|
mit
|
import numpy as np
import openpnm as op
np.random.seed(10)
%matplotlib inline
np.set_printoptions(precision=5)
"""
Explanation: Transient Fickian Diffusion
The package OpenPNM allows for the simulation of many transport phenomena in porous media such as Stokes flow, Fickian diffusion, advection-diffusion, transport of charged species, etc. Transient and steady-state simulations are both supported. An example of a transient Fickian diffusion simulation through a Cubic pore network is shown here.
First, OpenPNM is imported.
End of explanation
"""
ws = op.Workspace()
ws.settings["loglevel"] = 40
proj = ws.new_project()
"""
Explanation: Define new workspace and project
End of explanation
"""
net = op.network.Cubic(shape=[29, 13, 1], spacing=1e-5, project=proj)
"""
Explanation: Generate a pore network
An arbitrary Cubic 3D pore network is generated, consisting of a layer of $29\times13$ pores with a constant pore-to-pore center spacing of ${10}^{-5}\,{m}$.
End of explanation
"""
geo = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
"""
Explanation: Create a geometry
Here, a geometry, corresponding to the created network, is created. The geometry contains information about the size of pores and throats in the network such as length and diameter, etc. OpenPNM has many prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. In this example, a simple geometry known as StickAndBall that assigns random diameter values to pores and throats, with certain constraints, is used.
End of explanation
"""
phase = op.phases.Water(network=net)
"""
Explanation: Add a phase
Then, a phase (water in this example) is added to the simulation and assigned to the network. The phase contains the physical properties of the fluid considered in the simulation such as the viscosity, etc. Many predefined phases are available in OpenPNM.
End of explanation
"""
phys = op.physics.GenericPhysics(network=net, phase=phase, geometry=geo)
"""
Explanation: Add a physics
Next, a physics object is defined. The physics object stores information about the different physical models used in the simulation and is assigned to specific network, geometry and phase objects. This ensures that the different physical models will only have access to information about the network, geometry and phase objects to which they are assigned. In fact, models (such as Stokes flow or Fickian diffusion) require information about the network (such as the connectivity between pores), the geometry (such as the pores and throats diameters), and the phase (such as the diffusivity coefficient).
End of explanation
"""
phase['pore.diffusivity'] = 2e-09
"""
Explanation: The diffusivity coefficient of the considered chemical species in water is also defined.
End of explanation
"""
mod = op.models.physics.diffusive_conductance.ordinary_diffusion
phys.add_model(propname='throat.diffusive_conductance', model=mod, regen_mode='normal')
"""
Explanation: Defining a new model
The physical model, consisting of Fickian diffusion, is defined and attached to the physics object previously defined.
End of explanation
"""
fd = op.algorithms.TransientFickianDiffusion(network=net, phase=phase)
"""
Explanation: Define a transient Fickian diffusion algorithm
Here, an algorithm for the simulation of transient Fickian diffusion is defined. It is assigned to the network and phase of interest to be able to retrieve all the information needed to build systems of linear equations.
End of explanation
"""
fd.set_value_BC(pores=net.pores('front'), values=0.5)
fd.set_value_BC(pores=net.pores('back'), values=0.2)
"""
Explanation: Add boundary conditions
Next, Dirichlet boundary conditions are added over the front and back boundaries of the network.
End of explanation
"""
fd.set_IC(0.2)
"""
Explanation: Define initial conditions
Initial conditions (optional) can also be specified. If they are not defined, a zero concentration is assumed at the beginning of the transient simulation.
End of explanation
"""
fd.setup(t_scheme='cranknicolson', t_final=100, t_output=5, t_step=1, t_tolerance=1e-12)
"""
Explanation: Note that both set_value_BC and set_IC also accept as input, in addition to a single scalar value, an ndarray.
Setup the transient algorithm settings
The settings of the transient algorithm are updated here. This step is optional as default settings are predefined. It is, however, important to update these settings on each new simulation as the time-scale of different phenomena in different problems may strongly differ.
Here, the time discretization scheme is set to cranknicolson, which is second-order accurate in time. The two other options supported in OpenPNM are the implicit scheme (only first order accurate but faster than the cranknicolson) and the steady which simply corresponds to a steady-state simulation.
Other parameters are also set; the final time step t_final, the output time stepping t_output, the computational time step t_step, and the tolerance to be achieved before reaching steady-state t_tolerance.
End of explanation
"""
print(fd.settings)
"""
Explanation: Note that the output time stepping t_output may be a scalar, ND-array, or list. For a scalar, it is considered as an output interval. If t_output > t_final, no transient data is stored. If t_output is not a multiple of t_step, t_output will be approximated. When t_output is a list or ND-array, transient solutions corresponding to this list or array will be stored. Finally, initial, final and steady-state (if reached) solutions are always stored.
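For instance, a hedged variant of the setup call above that stores the solution only at a handful of chosen times could look like this (the particular times are just an illustration):
fd.setup(t_scheme='cranknicolson', t_final=100, t_output=[10, 25, 50, 100], t_step=1, t_tolerance=1e-12)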
Print the algorithm settings
One can print the algorithm's settings as shown here.
End of explanation
"""
fd.run()
"""
Explanation: Note that the quantity setting corresponds to the quantity being solved for.
Run the algorithm
The algorithm is run here.
End of explanation
"""
print(fd)
"""
Explanation: Post process and export the results
Once the simulation has been successfully performed, the solution at every time step is stored within the algorithm object. The algorithm's stored information is printed here.
End of explanation
"""
fd['pore.concentration@10']
"""
Explanation: Note that the solutions at every exported time step contain the @ character followed by the time value. Here the solution is exported after each $5s$ in addition to the final time step which is not a multiple of $5$ in this example.
To print the solution at $t=10s$
End of explanation
"""
phase.update(fd.results())
"""
Explanation: The solution is here stored in the phase before export.
End of explanation
"""
proj.export_data(phases=[phase], filename='./results/out', filetype='xdmf')
"""
Explanation: Export the results into an xdmf file to be able to play an animation of the time dependent concentration on Paraview.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
import matplotlib.pyplot as plt
c = fd['pore.concentration'].reshape((net._shape))
fig, ax = plt.subplots(figsize=(6, 6))
plt.imshow(c[:,:,0])
plt.title('Concentration (mol/m$^3$)')
plt.colorbar();
"""
Explanation: Visualization using Matplotlib
One can perform post-processing and visualization using the exported files in external software such as Paraview. Additionally, the Python library Matplotlib can be used, as shown here, to plot the concentration color map at steady state.
End of explanation
"""
|
dietmarw/EK5312_ElectricalMachines
|
Chapman/Ch8-Problem_8-22.ipynb
|
unlicense
|
%pylab notebook
%precision %.4g
"""
Explanation: Exercises Electric Machinery Fundamentals
Chapter 8
Problem 8-22
End of explanation
"""
n0 = 1800 # [r/min]
Ra = 0.18 # [Ohm]
Vf = 120 # [V]
Radj_min = 0 # [Ohm]
Radj_max = 40 # [Ohm]
Rf = 20 # [Ohm]
Nf = 1000
"""
Explanation: Description
The magnetization curve for a separately excited dc generator is shown in Figure P8-7. The generator is
rated at 6 kW, 120 V, 50 A, and 1800 r/min and is shown in Figure P8-8. Its field circuit is rated at 5A.
<img src="figs/FigC_P8-7.jpg" width="70%">
<hr>
Note
An electronic version of this magnetization curve can be found in file
p87_mag.dat , which can be used with Python programs. Column 1
contains field current in amps, and column 2 contains the internal generated
voltage $E_A$ in volts.
<hr>
<img src="figs/FigC_P8-8.jpg" width="70%">
The following data are known about the machine:
$$R_A = 0.18\,\Omega \qquad \quad V_F = 120\,V$$
$$R_\text{adj} = 0\text{ to }40\,\Omega \qquad R_F = 20\, \Omega$$
$$N_F = 1000 \text{ turns per pole}$$
End of explanation
"""
If_max = Vf / (Rf + Radj_min)
If_max
"""
Explanation: (a)
If this generator is operating at no load,
what is the range of voltage adjustments that can be achieved by changing $R_\text{adj}$?
(b)
If the field rheostat is allowed to vary from 0 to 30 $\Omega$ and the generator's speed is allowed to vary
from 1500 to 2000 r/min,
what are the maximum and minimum no-load voltages in the generator?
SOLUTION
(a)
If the generator is operating with no load at 1800 r/min, then the terminal voltage will equal the
internal generated voltage $E_A$ . The maximum possible field current occurs when $R_\text{adj} = 0\,\Omega$ . The current is:
$$I_F = \frac{V_F}{R_F + R_\text{adj}}$$
End of explanation
"""
If_min = Vf / (Rf + Radj_max)
If_min
"""
Explanation: Amperes. From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 135 V.
Since the actual speed is 1800 r/min, the maximum no-load voltage is 135 V.
The minimum possible field current occurs when $R_\text{adj} = 40\,\Omega$.
The current is:
End of explanation
"""
If_max = Vf / (Rf + Radj_min)
If_max
"""
Explanation: Amperes. From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 79.5 V.
Since the actual speed is 1800 r/min, the minimum no-load voltage is 79.5 V.
(b)
The maximum voltage will occur at the highest current and speed, and the minimum voltage will
occur at the lowest current and speed. The maximum possible field current occurs when $R_\text{adj} = 0\,\Omega$.
The current is
End of explanation
"""
n_max = 2000 # [r/min]
Ea0_max = 135 # [V]
Ea_max = Ea0_max * n_max / n0
print('''
Ea_max = {:.0f} V
=============='''.format(Ea_max))
"""
Explanation: From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 135 V. Since the actual speed is 2000
r/min, the maximum no-load voltage is:
$$\frac{E_A}{E_{A0}} = \frac{n}{n_0}$$
End of explanation
"""
Radj_max_b = 30.0 # [Ohm]
If_min = Vf / (Rf + Radj_max_b)
If_min
"""
Explanation: The minimum possible no-load voltage occurs at the minimum speed and minimum field current. The maximum adjustable resistance is $R_\text{adj} = 30\,\Omega$.
The current is
End of explanation
"""
n_min = 1500 # [r/min]
Ea0_min = 93.1 # [V]
Ea_min = Ea0_min * n_min / n0
print('''
Ea_min = {:.1f} V
==============='''.format(Ea_min))
"""
Explanation: Amperes. From the magnetization curve, the voltage $E_{Ao}$ at 1800 r/min is 93.1 V.
Since the actual speed is 1500 r/min, the minimum no-load voltage is
End of explanation
"""
|
jackovt/Presentation-Design-Patterns
|
examples/python-example/observe.ipynb
|
mit
|
class Observable:
""" Extend this class to be observable. """
def __init__(self):
self.observers = []
def register(self, observer):
if not observer in self.observers:
self.observers.append(observer)
def unregister(self, observer):
if observer in self.observers:
self.observers.remove(observer)
def unregister_all(self):
if self.observers:
del self.observers[:]
def update_observers(self, *args):
""" Walk through the list of observers and call their update method. """
for observer in self.observers:
# Any observer must have this update method, see observer interface below.
observer.update(*args)
"""
Explanation: Observer/Observable in Python
Here is a quick example of the Observer/Observable pattern in python. We use a simple model object to hold data that is to be rendered by a plot object. As the data in the model is changed, we want to notify the plot without the model knowing any details (i.e. not having a direct dependency) about the plot/graph.
Start with the Observable
Start with a base class, Observable. Anything that inherits from this class can be "observed". Essentially, this base class will hold a list of observers and manage registering and unregistering them. This builds on the Giant Flying Saucer example.
End of explanation
"""
class Model(Observable):
""" A class to hold a 2D matrix of data. """
def __init__(self,name):
self.name = name
self.x = []
self.y = []
super(Model,self).__init__()
def _notify(self):
""" Ensure the dimensions are of equal length before notifying observers. """
if len(self.x) == len(self.y):
self.update_observers(self.__dict__)
def set_x(self,x_vals):
self.x = x_vals
self._notify()
def set_y(self,y_vals):
self.y = y_vals
self._notify()
"""
Explanation: An observable 'Model'
We are creating a very simple class to hold x and y data. We added a convenience method to provide more control over when we actually update our observers.
End of explanation
"""
from abc import ABCMeta, abstractmethod
class Observer(object):
__metaclass__ = ABCMeta
@abstractmethod
def update(self,*args):
""" Can take an arbitrary list of arguments. """
pass
"""
Explanation: The Observer
The base class for Observer is very simple. In Java, it would be an interface. We are using Python's Abstract Base Class with the abstract method (pretty much equivalent to a Java Interface). To observe an observable, all you need to do is implement the update method!
End of explanation
"""
import matplotlib.pyplot as pyplot
class Plot(Observer):
# Just rotating through colors to help differentiate plots
colors = ['black','blue','green','red']
def __init__(self):
self.color = -1
def _color(self):
""" Just work through set of colors """
color_indx = len(self.colors)-1
self.color += 1 if self.color < color_indx else -color_indx
return self.colors[self.color]
def update(self,*args):
pyplot.plot(args[0]['x'],args[0]['y'],self._color())
pyplot.show()
"""
Explanation: For this example, we just have one observer (can certainly have more than one) that is a plot object.
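As a hedged aside (not part of the original example), any additional observer only needs to implement update as well, for instance one that simply logs the name of the model that changed:
class ConsoleLogger(Observer):
    def update(self, *args):
        print("model '%s' changed" % args[0]['name'])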
End of explanation
"""
# An Observable instance
m = Model('2D Graph')
# An Observer instance
p = Plot()
# Hook them together
m.register(p)
# Now set the data on observable object, just creating a list from 0 to 13
m.set_x(range(14))
# Because we ensure the x dimension and y dimension match, the observer isn't notified until y is set.
m.set_y(range(14))
"""
Explanation: So, now we'll use our classes by creating a model instance, registering an observer, and setting the data.
End of explanation
"""
y = [0,1]
for i in range(12):
y.append(y[i]+y[i+1])
print(y)
m.set_y(y)
# Change y data to be a quadratic
m.set_y([x**2 for x in range(14)])
# And for fun, let's go exponential. And our pattern is really paying off now, graphs are redrawn as soon
# as y is set because the observers are notified.
m.set_y([2**x for x in range(14)])
"""
Explanation: Okay, now let's assign y to the first 14 numbers in the Fibonacci sequence. Notice that, as soon as y is set, the plot is updated. There is no code in this fragment to explicitly re-draw the plot!
End of explanation
"""
|
semsturgut/Robotic_ARM
|
SCS_Documents/ikpy-master/tutorials/ikpy/Moving the Poppy Torso using Inverse Kinematics.ipynb
|
gpl-3.0
|
from poppy.creatures import PoppyTorso
poppy = PoppyTorso(simulator="vrep")
"""
Explanation: Moving the Poppy Torso using Inverse Kinematics
This notebook illustrates how you can use the kinematic chains defined by the PoppyTorso class to directly control the arms of the robot in the cartesian space.
Said in a simpler way, this means that we will see how you can:
* get the end effector position from the joint angles of the motors (forward kinematic)
* compute the joint angles needed to reach a specific cartesian point: i.e. a 3D position (inverse kinematic)
This is a particularly useful and efficient technique, for instance if you want to grab an object in a specific position.
The Torso robot defines two kinematic chains, one for each arm. They are in the same coordinate system. The origin of both chains is the first link of the robot: the base. Each chain is composed of 7 joints. For the left chain:
* abs_z
* bust_y
* bust_x
* l_shoulder_y
* l_shoulder_x
* l_arm_z
* l_elbow_y
The first 3 joints are passive: they are part of the chain (this allows both chains to share the same origin) but are not actively used and will not be moved.
The figure below shows a plot of both kinematic chains of the Torso (in the zero position):
All examples below are given using V-REP, as it is safer when playing with your robot: you do not risk breaking anything, and in the worst case all you need to do is reset the simulation :-) All the examples below can be directly switched to a real robot.
Simple example using a real robot
First load your robot with the usual code:
End of explanation
"""
print(poppy.kinematic_chains)
print(poppy.l_arm_chain)
print(poppy.r_arm_chain)
"""
Explanation: Then, you can directly access the chains:
End of explanation
"""
[m.name for m in poppy.l_arm_chain.motors]
"""
Explanation: And their respective motors:
End of explanation
"""
poppy.l_arm_chain.end_effector
"""
Explanation: Forward kinematics
You can directly retrieve the current cartesian position of the end effector of a chain. For instance, assuming your robot is still in the rest positions:
End of explanation
"""
poppy.l_arm_chain.joints_position
"""
Explanation: This means that the end of the left arm is 0.10 meters on the right of the base of the robot, 0.17 meters in front and 0.07 meters up.
The end effector position is computed from the joints position and the kinematic model of the robot. It is thus a theoretical position, which may differs from the real position due to the model imperfections.
The joints position of a chain can also be directly retrieved (the values are expressed in degrees):
End of explanation
"""
import matplotlib
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
zero = [0] * 7
ax = matplotlib.pyplot.figure().add_subplot(111, projection='3d')
ax.scatter([0], [0],[0])
poppy.l_arm_chain.plot(poppy.l_arm_chain.convert_to_ik_angles(poppy.l_arm_chain.joints_position), ax, target = (0.2, -0.2, 0.2))
poppy.r_arm_chain.plot(poppy.r_arm_chain.convert_to_ik_angles(poppy.r_arm_chain.joints_position), ax)
"""
Explanation: Plotting
Thanks to the IKPy library, you can also directly plot a kinematic chain configuration using matplotlib.
For instance, you can plot the current position of your robot:
End of explanation
"""
poppy.l_arm_chain.goto((0.0, -0.15, 0.35), 2., wait=True)
"""
Explanation: Note the use of the convert_to_ik_angles method which converts from pypot representation (using degrees, motor offset and orientation) to IKPy internal representation.
Inverse kinematics
You can also directly ask the robot, or more precisely one of its kinematic chains, to reach a specific cartesian position. The IKPy library will use an optimization technique to try to find the configuration of the joints that best matches your target position.
For instance, if you want to move the robot's left hand in front of its head, you can ask it to go to the position [0, -0.15, 0.35]:
End of explanation
"""
import numpy as np
r = .13
x0, y0, z0 = (0.2, -0.2, 0.2)
poppy.l_arm_chain.goto((x0, y0, z0), 1., wait=True)
for alpha in np.arange(0, 4*np.pi, .08):
x = r * np.cos(alpha) + x0
z = r * np.sin(alpha) + z0
poppy.l_arm_chain.goto((x, y0, z), 0.03, wait=True)
"""
Explanation: Drawing a circle
This is particularly useful when you want to directly control your robot in the cartesian space. For instance, it becomes rather easy to make the hand of the robot follow a circle (two times here!):
End of explanation
"""
from ipywidgets import interact, FloatSlider
c = poppy.l_arm_chain
x, y, z = c.end_effector
size = 0.3
def goto(x, y, z):
c.goto((x, y, z), .1)
interact(goto,
x=FloatSlider(min=x-size, max=x+size, value=x, step=0.01),
y=FloatSlider(min=y-size, max=y+size, value=y, step=0.01),
z=FloatSlider(min=z-size, max=z+size, value=z, step=0.01))
"""
Explanation: The optimizer used by IKPy can be tweaked depending on what you want to do. This is beyond the scope of this tutorial; please refer directly to the IKPy documentation for details.
Finally, thanks to widgets you can also easily design a user interface for directly controlling your robot's hand!
End of explanation
"""
|
kbase/data_api
|
examples/notebooks/data_api-display.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import qgrid
qgrid.nbinstall()
from biokbase import data_api
from biokbase.data_api import display
display.nbviewer_mode(True)
"""
Explanation: Example
of building a notebook-friendly object into the output of the data API
Author: Dan Gunter
Initialization
Imports
Set up matplotlib, the qgrid (nice table), and import biokbase
End of explanation
"""
import os
os.environ['KB_AUTH_TOKEN'] = open('/tmp/kb_auth_token.txt').read().strip()
"""
Explanation: Authorization
In the vanilla notebook, you need to manually set an auth. token. You'll need your own value for this, of course.
Get this from the running narrative, e.g. write a narrative code cell that has:
import os; print(os.environ['KB_AUTH_TOKEN'])
End of explanation
"""
b = data_api.browse(1019)
x = b[0].object # Assembly object
"""
Explanation: Find and load an object
Open the workspace (1019) and get a Rhodobacter assembly from it
End of explanation
"""
cid_strings = x.get_contig_ids() # 1 min
cids = display.Contigs(cid_strings)
"""
Explanation: Get the contigs for the assembly
This takes a while because the current implementation loads the whole assembly, not just the 300 or so strings with the contig values.
End of explanation
"""
from biokbase import data_api
from biokbase.data_api import display
list(b)
rg = b[0]
rgo = rg.object
type(rgo)
"""
Explanation: View the contigs
The Contigs object wraps the list of contigs as a Pandas DataFrame (with the qgrid output enabled), so as you can see the plot() function is immediately available. The list of strings in the raw contig IDs is parsed to a set of columns and values for the DataFrame.
The default display is the nice sortable, scrollable, etc. table from the qgrid package.
End of explanation
"""
|
emiliom/stuff
|
MMW_API_watershed_demo.ipynb
|
cc0-1.0
|
import json
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
def requests_retry_session(
retries=3,
backoff_factor=0.3,
status_forcelist=(500, 502, 504),
session=None,
):
session = session or requests.Session()
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
s = requests.Session()
APIToken = 'Token YOURTOKEN' # Replace YOURTOKEN with your actual token string/key
s.headers.update({
'Authorization': APIToken,
'Content-Type': 'application/json'
})
"""
Explanation: Model My Watershed (MMW) API Demo
Emilio Mayorga, University of Washington, Seattle. 2018-5-10. Demo put together using instructions provided by Azavea in October 2017 as a starting point.
Introduction
The Model My Watershed API allows you to delineate watersheds and analyze geo-data for watersheds and arbitrary areas. You can read more about the work at WikiWatershed or use the web app.
MMW users can discover their API keys through the user interface, and test the MMW geoprocessing API on either the live or staging apps. An Account page with the API key is available from either app (live or staging). To see it, go to the app, log in, and click on "Account" in the dropdown that appears when you click on your username in the top right. Your key is different between staging and production. For testing with the staging API and key, go to https://staging.app.wikiwatershed.org/api/docs/
The API can be tested from the command line using curl. This example uses the staging API to test the watershed endpoint:
bash
curl -H "Content-Type: application/json" -H "Authorization: Token YOUR_API_KEY" -X POST
-d '{ "location": [39.67185,-75.76743] }' https://staging.app.wikiwatershed.org/api/watershed/
MMW API: Extract drainage area (watershed) from a point
1. Set up
End of explanation
"""
post_url = 'https://staging.app.wikiwatershed.org/api/watershed/'
"""
Explanation: MMW staging (test) API Rapid Watershed Delineation (RWD) "watershed" endpoint:
End of explanation
"""
payload = {
'location': [40.746054, -111.847987], # [latitude, longitude]
'snappingOn': True,
'dataSource': 'nhd'}
json_dat = json.dumps(payload)
post_req = requests_retry_session(session=s).post(post_url, data=json_dat)
json_out = json.loads(post_req.content)
json_out
"""
Explanation: 2. Construct and issue the job request
Parameters passed to the RWD ("watershed") API request. This is a point in Salt Lake City
End of explanation
"""
get_url = 'https://staging.app.wikiwatershed.org/api/jobs/{job}/'.format
import time

result = ''
while not result:
    time.sleep(0.5)  # brief pause between polls so we don't hammer the API
    get_req = requests_retry_session(session=s).get(get_url(job=json_out['job']))
    result = json.loads(get_req.content)['result']
"""
Explanation: 3. Fetch the job result once it's confirmed as done
The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary.
End of explanation
"""
type(result), result.keys()
"""
Explanation: 4. Examine the results
Everything below is just exploration of the results.
Examine the content of the results (as JSON, GeoJSON, and Python dictionaries)
End of explanation
"""
input_pt_geojson = result['input_pt']
input_pt_geojson
"""
Explanation: The results (result) are made up of two dictionary items: input_pt and watershed. Each one of those is a GeoJSON-like object already converted to a Python dictionary.
input_pt:
End of explanation
"""
watershed_geojson = result['watershed']
watershed_geojson.keys(), watershed_geojson['geometry'].keys()
watershed_geojson['properties']
# watershed has just one feature -- a single polygon
print("Number of polygon features: {} \nFeature type: {} \nNumber of vertices in polygon: {}".format(
len(watershed_geojson['geometry']['coordinates']),
watershed_geojson['geometry']['type'],
len(watershed_geojson['geometry']['coordinates'][0])
))
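# Optional (a sketch, assuming the shapely package is installed -- it is not used
# elsewhere in this notebook): convert the watershed GeoJSON geometry into a
# Shapely shape to inspect it programmatically. The area is in squared degrees,
# so it is only indicative.
from shapely.geometry import shape

watershed_shape = shape(watershed_geojson['geometry'])
print(watershed_shape.geom_type, watershed_shape.area)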
"""
Explanation: watershed:
End of explanation
"""
%matplotlib inline
import folium
# Initialize Folium map
m = folium.Map(location=[input_pt_geojson['properties']['Lat'], input_pt_geojson['properties']['Lon']],
tiles='CartoDB positron', zoom_start=12)
# Add RWD watershed and drainage point onto map
folium.GeoJson(watershed_geojson).add_to(m);
folium.GeoJson(input_pt_geojson).add_to(m);
m
"""
Explanation: Render the geospatial results on an interactive map
End of explanation
"""
|
joannekoong/neuroscience_tutorials
|
basic/3. Imagined movement.ipynb
|
bsd-2-clause
|
%pylab inline
import numpy as np
import scipy.io
m = scipy.io.loadmat('data_set_IV/BCICIV_calib_ds1d.mat', struct_as_record=True)
# SciPy.io.loadmat does not deal well with Matlab structures, resulting in lots of
# extra dimensions in the arrays. This makes the code a bit more cluttered
sample_rate = m['nfo']['fs'][0][0][0][0]
EEG = m['cnt'].T
nchannels, nsamples = EEG.shape
channel_names = [s[0].encode('utf8') for s in m['nfo']['clab'][0][0][0]]
event_onsets = m['mrk'][0][0][0]
event_codes = m['mrk'][0][0][1]
labels = np.zeros((1, nsamples), int)
labels[0, event_onsets] = event_codes
cl_lab = [s[0].encode('utf8') for s in m['nfo']['classes'][0][0][0]]
cl1 = cl_lab[0]
cl2 = cl_lab[1]
nclasses = len(cl_lab)
nevents = len(event_onsets)
"""
Explanation: 3. Imagined movement
In this tutorial we will look at imagined movement. Our movement is controlled in the motor cortex where there is an increased level of mu activity (8–12 Hz) when we perform movements. This is accompanied by a reduction of this mu activity in specific regions that deal with the limb that is currently moving. This decrease is called Event Related Desynchronization (ERD). By measuring the amount of mu activity at different locations on the motor cortex, we can determine which limb the subject is moving. Through mirror neurons, this effect also occurs when the subject is not actually moving his limbs, but merely imagining it.
Credits
The CSP code was originally written by Boris Reuderink of the Donders
Institute for Brain, Cognition and Behavior. It is part of his Python EEG
toolbox: https://github.com/breuderink/eegtools
Inspiration for this tutorial also came from the excellent code example
given in the book chapter:
Arnaud Delorme, Christian Kothe, Andrey Vankov, Nima Bigdely-Shamlo,
Robert Oostenveld, Thorsten Zander, and Scott Makeig. MATLAB-Based Tools
for BCI Research, In (B+H)CI: The Human in Brain-Computer Interfaces and
the Brain in Human-Computer Interaction. Desney S. Tan and Anton Nijholt
(eds.), 2009, 241-259, http://dx.doi.org/10.1007/978-1-84996-272-8
Obtaining the data
The dataset for this tutorial is provided by the fourth BCI competition,
which you will have to download yourself. First, go to http://www.bbci.de/competition/iv/#download
and fill in your name and email address. An email will be sent to you
automatically containing a username and password for the download area.
Download Data Set 1, from Berlin, the 100Hz version in MATLAB format:
http://bbci.de/competition/download/competition_iv/BCICIV_1_mat.zip
and unzip it in a subdirectory called 'data_set_IV'. This subdirectory
should be inside the directory in which you've stored the tutorial files.
Description of the data
If you've followed the instructions above, the following code should load
the data:
End of explanation
"""
# Print some information
print 'Shape of EEG:', EEG.shape
print 'Sample rate:', sample_rate
print 'Number of channels:', nchannels
print 'Channel names:', channel_names
print 'Number of events:', len(event_onsets)
print 'Event codes:', np.unique(event_codes)
print 'Class labels:', cl_lab
print 'Number of classes:', nclasses
"""
Explanation: Now we have the data in the following python variables:
End of explanation
"""
# Dictionary to store the trials in, each class gets an entry
trials = {}
# The time window (in samples) to extract for each trial, here 0.5 -- 2.5 seconds
win = np.arange(int(0.5*sample_rate), int(2.5*sample_rate))
# Length of the time window
nsamples = len(win)
# Loop over the classes (right, foot)
for cl, code in zip(cl_lab, np.unique(event_codes)):
# Extract the onsets for the class
cl_onsets = event_onsets[event_codes == code]
# Allocate memory for the trials
trials[cl] = np.zeros((nchannels, nsamples, len(cl_onsets)))
# Extract each trial
for i, onset in enumerate(cl_onsets):
trials[cl][:,:,i] = EEG[:, win+onset]
# Some information about the dimensionality of the data (channels x time x trials)
print 'Shape of trials[cl1]:', trials[cl1].shape
print 'Shape of trials[cl2]:', trials[cl2].shape
"""
Explanation: This is a large recording: 118 electrodes where used, spread across the entire scalp. The subject was given a cue and then imagined either right hand movement or the movement of his feet. As can be seen from the Homunculus, foot movement is controlled at the center of the motor cortex (which makes it hard to distinguish left from right foot), while hand movement is controlled more lateral.
Plotting the data
The code below cuts trials for the two classes and should look familiar if you've completed the previous tutorials. Trials are cut in the interval [0.5–2.5 s] after the onset of the cue.
End of explanation
"""
from matplotlib import mlab
def psd(trials):
'''
Calculates for each trial the Power Spectral Density (PSD).
Parameters
----------
trials : 3d-array (channels x samples x trials)
The EEG signal
Returns
-------
trial_PSD : 3d-array (channels x PSD x trials)
the PSD for each trial.
freqs : list of floats
        The frequencies for which the PSD was computed (useful for plotting later)
'''
ntrials = trials.shape[2]
trials_PSD = np.zeros((nchannels, 101, ntrials))
# Iterate over trials and channels
for trial in range(ntrials):
for ch in range(nchannels):
# Calculate the PSD
(PSD, freqs) = mlab.psd(trials[ch,:,trial], NFFT=int(nsamples), Fs=sample_rate)
trials_PSD[ch, :, trial] = PSD.ravel()
return trials_PSD, freqs
# Apply the function
psd_r, freqs = psd(trials[cl1])
psd_f, freqs = psd(trials[cl2])
trials_PSD = {cl1: psd_r, cl2: psd_f}
"""
Explanation: Since the feature we're looking for (a decrease in $\mu$-activity) is a frequency feature, let's plot the PSD of the trials in a similar manner as with the SSVEP data. The code below defines a function that computes the PSD for each trial (we're going to need it again later on):
End of explanation
"""
import matplotlib.pyplot as plt
def plot_psd(trials_PSD, freqs, chan_ind, chan_lab=None, maxy=None):
'''
Plots PSD data calculated with psd().
Parameters
----------
trials : 3d-array
The PSD data, as returned by psd()
freqs : list of floats
The frequencies for which the PSD is defined, as returned by psd()
chan_ind : list of integers
The indices of the channels to plot
chan_lab : list of strings
(optional) List of names for each channel
maxy : float
(optional) Limit the y-axis to this value
'''
plt.figure(figsize=(12,5))
nchans = len(chan_ind)
# Maximum of 3 plots per row
    nrows = int(np.ceil(nchans / 3.0))
ncols = min(3, nchans)
# Enumerate over the channels
for i,ch in enumerate(chan_ind):
# Figure out which subplot to draw to
plt.subplot(nrows,ncols,i+1)
# Plot the PSD for each class
        for cl in trials_PSD.keys():
plt.plot(freqs, np.mean(trials_PSD[cl][ch,:,:], axis=1), label=cl)
# All plot decoration below...
plt.xlim(1,30)
if maxy != None:
plt.ylim(0,maxy)
plt.grid()
plt.xlabel('Frequency (Hz)')
if chan_lab == None:
plt.title('Channel %d' % (ch+1))
else:
plt.title(chan_lab[i])
plt.legend()
plt.tight_layout()
"""
Explanation: The function below plots the PSDs that are calculated with the above function. Since plotting it for 118 channels will clutter the display, it takes the indices of the desired channels as input, as well as some metadata to decorate the plot.
End of explanation
"""
plot_psd(
trials_PSD,
freqs,
[channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']],
chan_lab=['left', 'center', 'right'],
maxy=500
)
"""
Explanation: Let's put the plot_psd() function to use and plot three channels:
C3: Central, left
Cz: Central, central
C4: Central, right
End of explanation
"""
import scipy.signal
def bandpass(trials, lo, hi, sample_rate):
'''
Designs and applies a bandpass filter to the signal.
Parameters
----------
trials : 3d-array (channels x samples x trials)
        The EEG signal
lo : float
Lower frequency bound (in Hz)
hi : float
Upper frequency bound (in Hz)
sample_rate : float
Sample rate of the signal (in Hz)
Returns
-------
trials_filt : 3d-array (channels x samples x trials)
The bandpassed signal
'''
# The iirfilter() function takes the filter order: higher numbers mean a sharper frequency cutoff,
# but the resulting signal might be shifted in time, lower numbers mean a soft frequency cutoff,
    # but the resulting signal is less distorted in time. It also takes the lower and upper frequency bounds
    # to pass, divided by the Nyquist frequency, which is the sample rate divided by 2. Note that
    # iirfilter() returns the numerator (b) and denominator (a) coefficients, in that order:
    b, a = scipy.signal.iirfilter(6, [lo/(sample_rate/2.0), hi/(sample_rate/2.0)])
# Applying the filter to each trial
ntrials = trials.shape[2]
trials_filt = np.zeros((nchannels, nsamples, ntrials))
for i in range(ntrials):
        trials_filt[:,:,i] = scipy.signal.filtfilt(b, a, trials[:,:,i], axis=1)
return trials_filt
# Apply the function
trials_filt = {cl1: bandpass(trials[cl1], 8, 15, sample_rate),
cl2: bandpass(trials[cl2], 8, 15, sample_rate)}
"""
Explanation: A spike of mu activity can be seen on each channel for both classes. At the right hemisphere, the mu for the left hand movement is lower than for the right hand movement due to the ERD. At the left electrode, the mu for the right hand movement is reduced and at the central electrode the mu activity is about equal for both classes. This is in line with the theory that the left hand is controlled by the right hemisphere and the feet are controlled centrally.
Classifying the data
We will use a machine learning algorithm to construct a model that can distinguish between the right hand and foot movement of this subject. In order to do this we need to:
find a way to quantify the amount of mu activity present in a trial
make a model that describes expected values of mu activity for each class
finally test this model on some unseen data to see if it can predict the correct class label
We will follow a classic BCI design by Blankertz et al. [1] where they use the logarithm of the variance of the signal in a certain frequency band as a feature for the classifier.
[1] Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R., & Curio, G. (2007). The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2), 539–550. doi:10.1016/j.neuroimage.2007.01.051
The script below designs a band-pass filter using scipy.signal.iirfilter that will strip away frequencies outside the 8–15 Hz window. The filter is applied to all trials:
End of explanation
"""
psd_r, freqs = psd(trials_filt[cl1])
psd_f, freqs = psd(trials_filt[cl2])
trials_PSD = {cl1: psd_r, cl2: psd_f}
plot_psd(
trials_PSD,
freqs,
[channel_names.index(ch) for ch in ['C3', 'Cz', 'C4']],
chan_lab=['left', 'center', 'right'],
maxy=300
)
"""
Explanation: Plotting the PSD of the resulting trials_filt shows the suppression of frequencies outside the passband of the filter:
End of explanation
"""
# Calculate the log(var) of the trials
def logvar(trials):
'''
Calculate the log-var of each channel.
Parameters
----------
trials : 3d-array (channels x samples x trials)
The EEG signal.
Returns
-------
logvar - 2d-array (channels x trials)
For each channel the logvar of the signal
'''
return np.log(np.var(trials, axis=1))
# Apply the function
trials_logvar = {cl1: logvar(trials_filt[cl1]),
cl2: logvar(trials_filt[cl2])}
"""
Explanation: As a feature for the classifier, we will use the logarithm of the variance of each channel. The function below calculates this:
End of explanation
"""
def plot_logvar(trials):
'''
Plots the log-var of each channel/component.
arguments:
trials - Dictionary containing the trials (log-vars x trials) for 2 classes.
'''
plt.figure(figsize=(12,5))
x0 = np.arange(nchannels)
x1 = np.arange(nchannels) + 0.4
y0 = np.mean(trials[cl1], axis=1)
y1 = np.mean(trials[cl2], axis=1)
plt.bar(x0, y0, width=0.5, color='b')
plt.bar(x1, y1, width=0.4, color='r')
plt.xlim(-0.5, nchannels+0.5)
plt.gca().yaxis.grid(True)
plt.title('log-var of each channel/component')
plt.xlabel('channels/components')
plt.ylabel('log-var')
plt.legend(cl_lab)
# Plot the log-vars
plot_logvar(trials_logvar)
"""
Explanation: Below is a function to visualize the logvar of each channel as a bar chart:
End of explanation
"""
from numpy import linalg
def cov(trials):
''' Calculate the covariance for each trial and return their average '''
ntrials = trials.shape[2]
covs = [ trials[:,:,i].dot(trials[:,:,i].T) / nsamples for i in range(ntrials) ]
return np.mean(covs, axis=0)
def whitening(sigma):
''' Calculate a whitening matrix for covariance matrix sigma. '''
U, l, _ = linalg.svd(sigma)
return U.dot( np.diag(l ** -0.5) )
def csp(trials_r, trials_f):
'''
Calculate the CSP transformation matrix W.
arguments:
trials_r - Array (channels x samples x trials) containing right hand movement trials
trials_f - Array (channels x samples x trials) containing foot movement trials
returns:
Mixing matrix W
'''
cov_r = cov(trials_r)
cov_f = cov(trials_f)
P = whitening(cov_r + cov_f)
B, _, _ = linalg.svd( P.T.dot(cov_f).dot(P) )
W = P.dot(B)
return W
def apply_mix(W, trials):
''' Apply a mixing matrix to each trial (basically multiply W with the EEG signal matrix)'''
ntrials = trials.shape[2]
trials_csp = np.zeros((nchannels, nsamples, ntrials))
for i in range(ntrials):
trials_csp[:,:,i] = W.T.dot(trials[:,:,i])
return trials_csp
# Apply the functions
W = csp(trials_filt[cl1], trials_filt[cl2])
trials_csp = {cl1: apply_mix(W, trials_filt[cl1]),
cl2: apply_mix(W, trials_filt[cl2])}
"""
Explanation: We see that most channels show a small difference in the log-var of the signal between the two classes. The next step is to go from 118 channels to only a few channel mixtures. The CSP algorithm calculates mixtures of channels that are designed to maximize the difference in variation between two classes. These mixtures are called spatial filters.
End of explanation
"""
trials_logvar = {cl1: logvar(trials_csp[cl1]),
cl2: logvar(trials_csp[cl2])}
plot_logvar(trials_logvar)
"""
Explanation: To see the result of the CSP algorithm, we plot the log-var like we did before:
End of explanation
"""
psd_r, freqs = psd(trials_csp[cl1])
psd_f, freqs = psd(trials_csp[cl2])
trials_PSD = {cl1: psd_r, cl2: psd_f}
plot_psd(trials_PSD, freqs, [0,58,-1], chan_lab=['first component', 'middle component', 'last component'], maxy=0.75 )
"""
Explanation: Instead of 118 channels, we now have 118 mixtures of channels, called components. They are the result of 118 spatial filters applied to the data.
The first filters maximize the variation of the first class, while minimizing the variation of the second. The last filters maximize the variation of the second class, while minimizing the variation of the first.
This is also visible in a PSD plot. The code below plots the PSD for the first and last components as well as one in the middle:
End of explanation
"""
def plot_scatter(left, foot):
plt.figure()
plt.scatter(left[0,:], left[-1,:], color='b')
plt.scatter(foot[0,:], foot[-1,:], color='r')
    plt.xlabel('First component')
    plt.ylabel('Last component')
plt.legend(cl_lab)
plot_scatter(trials_logvar[cl1], trials_logvar[cl2])
"""
Explanation: In order to see how well we can differentiate between the two classes, a scatter plot is a useful tool. Here both classes are plotted on a 2-dimensional plane: the x-axis is the first CSP component, the y-axis is the last.
End of explanation
"""
# Percentage of trials to use for training (50-50 split here)
train_percentage = 0.5
# Calculate the number of trials for each class the above percentage boils down to
ntrain_r = int(trials_filt[cl1].shape[2] * train_percentage)
ntrain_f = int(trials_filt[cl2].shape[2] * train_percentage)
ntest_r = trials_filt[cl1].shape[2] - ntrain_r
ntest_f = trials_filt[cl2].shape[2] - ntrain_f
# Splitting the frequency filtered signal into a train and test set
train = {cl1: trials_filt[cl1][:,:,:ntrain_r],
cl2: trials_filt[cl2][:,:,:ntrain_f]}
test = {cl1: trials_filt[cl1][:,:,ntrain_r:],
cl2: trials_filt[cl2][:,:,ntrain_f:]}
# Train the CSP on the training set only
W = csp(train[cl1], train[cl2])
# Apply the CSP on both the training and test set
train[cl1] = apply_mix(W, train[cl1])
train[cl2] = apply_mix(W, train[cl2])
test[cl1] = apply_mix(W, test[cl1])
test[cl2] = apply_mix(W, test[cl2])
# Select only the first and last components for classification
comp = np.array([0,-1])
train[cl1] = train[cl1][comp,:,:]
train[cl2] = train[cl2][comp,:,:]
test[cl1] = test[cl1][comp,:,:]
test[cl2] = test[cl2][comp,:,:]
# Calculate the log-var
train[cl1] = logvar(train[cl1])
train[cl2] = logvar(train[cl2])
test[cl1] = logvar(test[cl1])
test[cl2] = logvar(test[cl2])
"""
Explanation: We will apply a linear classifier to this data. A linear classifier can be thought of as drawing a line in the above plot to separate the two classes. To determine the class for a new trial, we just check on which side of the line the trial would be if plotted as above.
The data is split into a train and a test set. The classifier will fit a model (in this case, a straight line) on the training set and use this model to make predictions about the test set (see on which side of the line each trial in the test set falls). Note that the CSP algorithm is part of the model, so for fairness sake it should be calculated using only the training data.
End of explanation
"""
def train_lda(class1, class2):
'''
Trains the LDA algorithm.
arguments:
class1 - An array (observations x features) for class 1
class2 - An array (observations x features) for class 2
returns:
The projection matrix W
The offset b
'''
nclasses = 2
nclass1 = class1.shape[0]
nclass2 = class2.shape[0]
# Class priors: in this case, we have an equal number of training
# examples for each class, so both priors are 0.5
prior1 = nclass1 / float(nclass1 + nclass2)
    prior2 = nclass2 / float(nclass1 + nclass2)
mean1 = np.mean(class1, axis=0)
mean2 = np.mean(class2, axis=0)
class1_centered = class1 - mean1
class2_centered = class2 - mean2
# Calculate the covariance between the features
cov1 = class1_centered.T.dot(class1_centered) / (nclass1 - nclasses)
cov2 = class2_centered.T.dot(class2_centered) / (nclass2 - nclasses)
W = (mean2 - mean1).dot(np.linalg.pinv(prior1*cov1 + prior2*cov2))
b = (prior1*mean1 + prior2*mean2).dot(W)
return (W,b)
def apply_lda(test, W, b):
'''
Applies a previously trained LDA to new data.
arguments:
test - An array (features x trials) containing the data
W - The project matrix W as calculated by train_lda()
b - The offsets b as calculated by train_lda()
returns:
A list containing a classlabel for each trial
'''
ntrials = test.shape[1]
prediction = []
for i in range(ntrials):
# The line below is a generalization for:
# result = W[0] * test[0,i] + W[1] * test[1,i] - b
result = W.dot(test[:,i]) - b
if result <= 0:
prediction.append(1)
else:
prediction.append(2)
return np.array(prediction)
"""
Explanation: For a classifier the Linear Discriminant Analysis (LDA) algorithm will be used. It fits a gaussian distribution to each class, characterized by the mean and covariance, and determines an optimal separating plane to divide the two. This plane is defined as $r = W_0 \cdot X_0 + W_1 \cdot X_1 + \ldots + W_n \cdot X_n - b$, where $r$ is the classifier output, $W$ are called the feature weights, $X$ are the features of the trial, $n$ is the dimensionality of the data and $b$ is called the offset.
In our case we have 2 dimensional data, so the separating plane will be a line: $r = W_0 \cdot X_0 + W_1 \cdot X_1 - b$. To determine a class label for an unseen trial, we can calculate whether the result is positive or negative.
End of explanation
"""
W,b = train_lda(train[cl1].T, train[cl2].T)
print 'W:', W
print 'b:', b
"""
Explanation: Training the LDA using the training data gives us $W$ and $b$:
End of explanation
"""
# Scatterplot like before
plot_scatter(train[cl1], train[cl2])
title('Training data')
# Calculate decision boundary (x,y)
x = np.arange(-5, 1, 0.1)
y = (b - W[0]*x) / W[1]
# Plot the decision boundary
plt.plot(x,y, linestyle='--', linewidth=2, color='k')
plt.xlim(-5, 1)
plt.ylim(-2.2, 1)
"""
Explanation: It can be informative to recreate the scatter plot and overlay the decision boundary as determined by the LDA classifier. The decision boundary is the line for which the classifier output is exactly 0. The scatterplot used $X_0$ as $x$-axis and $X_1$ as $y$-axis. To find the function $y = f(x)$ describing the decision boundary, we set $r$ to 0 and solve for $y$ in the equation of the separating plane:
<div style="width:600px">
$$\begin{align}
W_0 \cdot X_0 + W_1 \cdot X_1 - b &= r &&\text{the original equation} \\\
W_0 \cdot x + W_1 \cdot y - b &= 0 &&\text{filling in $X_0=x$, $X_1=y$ and $r=0$} \\\
W_0 \cdot x + W_1 \cdot y &= b &&\text{solving for $y$}\\\
W_1 \cdot y &= b - W_0 \cdot x \\\
\\\
y &= \frac{b - W_0 \cdot x}{W_1}
\end{align}$$
</div>
We first plot the decision boundary with the training data used to calculate it:
End of explanation
"""
plot_scatter(test[cl1], test[cl2])
title('Test data')
plt.plot(x,y, linestyle='--', linewidth=2, color='k')
plt.xlim(-5, 1)
plt.ylim(-2.2, 1)
"""
Explanation: The code below plots the boundary with the test data on which we will apply the classifier. You will see the classifier is going to make some mistakes.
End of explanation
"""
# Print confusion matrix
conf = np.array([
[(apply_lda(test[cl1], W, b) == 1).sum(), (apply_lda(test[cl2], W, b) == 1).sum()],
[(apply_lda(test[cl1], W, b) == 2).sum(), (apply_lda(test[cl2], W, b) == 2).sum()],
])
print 'Confusion matrix:'
print conf
print
print 'Accuracy: %.3f' % (np.sum(np.diag(conf)) / float(np.sum(conf)))
"""
Explanation: Now the LDA is constructed and fitted to the training data. We can now apply it to the test data. The results are presented as a confusion matrix:
<table>
<tr><td></td><td colspan='2' style="font-weight:bold">True labels →</td></tr>
<tr><td style="font-weight:bold">↓ Predicted labels</td><td>Right</td><td>Foot</td></tr>
<tr><td>Right</td><td></td><td></td></tr>
<tr><td>Foot</td><td></td><td></td></tr>
</table>
The numbers on the diagonal are the trials that were correctly classified; any trials that were incorrectly classified (either a false positive or a false negative) end up in the off-diagonal cells.
End of explanation
"""
|
google-research/google-research
|
aptamers_mlpd/figures/Figure_2_Machine_learning_guided_aptamer_discovery_(submission).ipynb
|
apache-2.0
|
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
"""
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Overview
This notebook summarizes the numbers of aptamers that appear to be enriched in positive pools for particular particule display experiments. These values are turned into venn diagrams and pie charts in Figure 2.
The inputs are csvs, where each row is an aptamer and columns indicate the sequencing counts within each particle display subexperiment.
End of explanation
"""
# Required coverage level for analysis. This is in units of number of aptamer
# particles (beads). This is used to minimize potential contamination.
# For example, a tolerated bead fraction of 0.2 means that if, based on read
# depth and number of beads, there are 100 reads expected per bead, then
# sequences with fewer than 20 reads would be excluded from analysis.
TOLERATED_BEAD_FRAC = 0.2
# Ratio cutoff between positive and negative pools to count as being real.
# The ratio is calculated normalized by read depth, so if the ratio is 0.5,
# then positive sequences are expected to have equal read depth (or more) in
# the positive pool as the negative pool. So, as a toy example, if the
# positive pool had 100 reads total and the negative pool had 200 reads total,
# then a sequence with 5 reads in the positive pool and 10 reads in the
# negative pool would have a ratio of 0.5.
POS_NEG_RATIO_CUTOFF = 0.5
# Minimum required reads (when 0 it uses only the above filters)
MIN_READ_THRESH = 0
"""
Explanation: Parameters used in Manuscript
End of explanation
"""
#@title Original PD Data Parameters
# Since these are small, I'm going to embed them in the colab.
apt_screened_list = [ 2.4*10**6, 2.4*10**6, 1.24*10**6]
apt_collected_list = [3.5 * 10**4, 8.5 * 10**4, 8 * 10**4]
seq_input = [10**5] * 3
conditions = ['round2_high_no_serum_positive',
'round2_medium_no_serum_positive',
'round2_low_no_serum_positive']
flags = ['round2_high_no_serum_flag', 'round2_medium_no_serum_flag',
'round2_low_no_serum_flag']
stringency = ['High', 'Medium', 'Low']
pd_param_df = pd.DataFrame.from_dict({'apt_screened': apt_screened_list,
'apt_collected': apt_collected_list,
'seq_input': seq_input,
'condition': conditions,
'condition_flag': flags,
'stringency': stringency})
pd_param_df
#@title MLPD Data Parameters
apt_screened_list = [ 3283890.016, 6628573.952, 5801469.696, 3508412.512]
apt_collected_list = [12204, 50353, 153845, 201255]
seq_input = [200000] * 4
conditions = ['round1_very_positive',
'round1_high_positive',
'round1_medium_positive',
'round1_low_positive']
flags = ['round1_very_flag', 'round1_high_flag', 'round1_medium_flag',
'round1_low_flag']
stringency = ['Very High', 'High', 'Medium', 'Low']
mlpd_param_df = pd.DataFrame.from_dict({'apt_screened': apt_screened_list,
'apt_collected': apt_collected_list,
'seq_input': seq_input,
'condition': conditions,
'condition_flag': flags,
'stringency': stringency})
mlpd_param_df
"""
Explanation: Load in data
Load in experimental conditions for Particle Display experiments
The mlpd_params_df contains the experimental information for MLPD.
Parameters are:
* apt_collected: The number of aptamer bead particles collected during the FACs experiment of particle display.
* apt_screened: The number of aptamer bead particles screened in order to get the apt_collected beads.
* seq_input: The estimated number of unique sequences in the input sequence library during bead construction.
End of explanation
"""
# PD and MLPD sequencing counts across experiments
# Upload pd_clustered_input_data_manuscript.csv and mlpd_input_data_manuscript.csv
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Load PD Data
with open('pd_clustered_input_data_manuscript.csv') as f:
pd_input_df = pd.read_csv(f)
# Load MLPD data
with open('mlpd_input_data_manuscript.csv') as f:
mlpd_input_df = pd.read_csv(f)
"""
Explanation: Load CSVs
End of explanation
"""
def generate_cutoffs_via_PD_stats(df, col, apt_screened, apt_collected, seq_input,
tolerated_bead_frac, min_read_thresh):
"""Use the experimental parameters to determine sequences passing thresholds.
Args:
df: Pandas dataframe with experiment results. Must have columns named
after the col function parameter, containing the read count, and a
column 'sequence'.
col: The string name of the column in the experiment dataframe with the
read count.
apt_screened: The integer number of aptamers screened, from the experiment
parameters.
apt_collected: The integer number of aptamers collected, from the experiment
parameters.
seq_input: The integer number of unique sequences in the sequence library
used to construct the aptamer particles.
tolerated_bead_frac: The float tolerated bead fraction threshold. In other
words, the sequencing depth required to keep a sequence, in units of
fractions of a bead based on the average expected read depth per bead.
min_read_threshold: The integer minimum number of reads that a sequence
must have in order not to be filtered.
Returns:
Pandas series of the sequences from the dataframe that pass filter.
"""
expected_bead_coverage = apt_screened / seq_input
tolerated_bead_coverage = expected_bead_coverage * tolerated_bead_frac
bead_full_min_sequence_coverage = (1. / apt_collected) * tolerated_bead_coverage
col_sum = df[col].sum()
  # Filter sequences on both the observed fraction of the positive pool and the raw read count.
seqs = df[((df[col]/col_sum) > bead_full_min_sequence_coverage) & # Pool frac.
(df[col] > min_read_thresh) # Raw count
].sequence
return seqs
def generate_pos_neg_normalized_ratio(df, col_prefix):
"""Adds fraction columns to the dataframe with the calculated pos/neg ratio.
Args:
df: Pandas dataframe, expected to have columns [col_prefix]_positive and
[col_prefix]_negative contain read counts for the positive and negative
selection conditions, respectively.
col_prefix: String prefix of the columns to use to calculate the ratio.
For example 'round1_very_positive'.
Returns:
The original dataframe with three new columns:
[col_prefix]_positive_frac contains the fraction of the total positive
pool that is this sequence.
[col_prefix]_negative_frac contains the fraction of the total negative
pool that is this sequence.
[col_prefix]_pos_neg_ratio: The read-depth normalized fraction of the
sequence that ended in the positive pool.
"""
col_pos = col_prefix + '_' + 'positive'
col_neg = col_prefix + '_' + 'negative'
df[col_pos + '_frac'] = df[col_pos] / df[col_pos].sum()
df[col_neg + '_frac'] = df[col_neg] / df[col_neg].sum()
df[col_prefix + '_pos_neg_ratio'] = df[col_pos + '_frac'] / (
df[col_pos + '_frac'] + df[col_neg + '_frac'])
return df
def build_seq_sets_from_df (input_param_df, input_df, tolerated_bead_frac,
pos_neg_ratio, min_read_thresh):
"""Sets flags for sequences based on whether they clear stringencies.
This function adds a column 'seq_set' to the input_param_df (one row per
stringency level of a particle display experiment) containing all the
sequences in the experiment that passed that stringency level in the
experiment.
Args:
input_param_df: Pandas dataframe with experimental parameters. Expected
to have one row per stringency level in the experiment and
columns 'apt_screened', 'apt_collected', 'seq_input', 'condition', and
'condition_flag'.
input_df: Pandas dataframe with the experimental results (counts per
sequence) for the experiment covered in the input_param_df. Expected
to have a [col_prefix]_pos_neg_ratio column for each row of the
input_param_df (i.e. each stringency level).
tolerated_bead_frac: Float representing the minimum sequence depth, in
units of expected beads, for a sequence to be used in analysis.
pos_neg_ratio: The threshold for the pos_neg_ratio column for a sequence
to be used in the analysis.
min_read_thresh: The integer minimum number of reads for a sequence to
be used in the analysis (not normalized, a straight count.)
Returns:
Nothing.
"""
for _, row in input_param_df.iterrows():
# Get parameters to calculate bead fraction.
apt_screened = row['apt_screened']
apt_collected = row['apt_collected']
seq_input = row['seq_input']
condition = row['condition']
flag = row['condition_flag']
# Get sequences above tolerated_bead_frac in positive pool.
tolerated_bead_frac_seqs = generate_cutoffs_via_PD_stats(
input_df, condition, apt_screened, apt_collected, seq_input,
tolerated_bead_frac, min_read_thresh)
# Intersect with seqs > normalized positive sequencing count ratio.
condition_pre = condition.split('_positive')[0]
ratio_col = '%s_pos_neg_ratio' % (condition_pre)
pos_frac_seqs = input_df[input_df[ratio_col] > pos_neg_ratio].sequence
seqs = set(tolerated_bead_frac_seqs) & set(pos_frac_seqs)
input_df[flag] = input_df.sequence.isin(set(seqs))
"""
Explanation: Helper functions
End of explanation
"""
#@title Add positive_frac / (positive_frac + negative_frac) col to df
for col_prefix in ['round1_very', 'round1_high', 'round1_medium', 'round1_low']:
mlpd_input_df = generate_pos_neg_normalized_ratio(mlpd_input_df, col_prefix)
for col_prefix in ['round2_high_no_serum', 'round2_medium_no_serum', 'round2_low_no_serum']:
pd_input_df = generate_pos_neg_normalized_ratio(pd_input_df, col_prefix)
#@title Measure consistency of particle display data when increasing stringency thresholds within each experimental set (i.e PD and MLPD)
build_seq_sets_from_df(pd_param_df, pd_input_df, TOLERATED_BEAD_FRAC,
POS_NEG_RATIO_CUTOFF, MIN_READ_THRESH)
build_seq_sets_from_df(mlpd_param_df, mlpd_input_df, TOLERATED_BEAD_FRAC,
POS_NEG_RATIO_CUTOFF, MIN_READ_THRESH)
"""
Explanation: Data Analysis
End of explanation
"""
#@title Figure 2B Raw Data
pd_input_df.groupby('round2_low_no_serum_flag round2_medium_no_serum_flag round2_high_no_serum_flag'.split()).count()[['sequence']]
#@title Figure 2C Raw Data
# To build venn (green), sum preceding True flags to get consistent sets
# 512 nM = 5426+3 = 5429
# 512 & 128 nM = 2360+15 = 2375
# 512 & 128 & 32nM (including 8 nM) = 276+84 = 360
# To build venn (grey) Inconsistent flags are summed (ignoring 8nM)
# 128 nM only = 185 + 1 = 186
# 128 nM & 32 nM = 12+1 = 13
# 32 nM only = 2
# 32 nM and 512 nM only = 22+1 = 23
#
# To build pie, look at all round1_very_flags = True
# Green = 84
# Grey = 15+1+3+1+1 = 21
mlpd_input_df.groupby('round1_low_flag round1_medium_flag round1_high_flag round1_very_flag'.split()).count()[['sequence']]
"""
Explanation: Generate Figure Data
Here, we generate the raw data used to build the Venn diagrams. The final figures were rendered in Figma.
End of explanation
"""
|
sthuggins/phys202-2015-work
|
assignments/assignment07/AlgorithmsEx01.ipynb
|
mit
|
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
"""
Explanation: Algorithms Exercise 1
Imports
End of explanation
"""
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
"""Split a string into a list of words, removing punctuation and stop words."""
    if isinstance(stop_words, str):
        stop_words = stop_words.split(" ")
    all_words = []
    for line in s.splitlines():
        for word in line.split(" "):
            # Strip punctuation, lowercase, and drop empty words and stop words
            word = "".join(c for c in word if c not in punctuation).lower()
            if word and (stop_words is None or word not in stop_words):
                all_words.append(word)
    return all_words
tokenize("There is no cow level \nWow, sally that was great.")
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
"""
Explanation: Word counting
Write a function tokenize that takes a string of English text returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurences of the words in the list.
If stop_words is a space delimeted string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
"""
def count_words(data):
"""Return a word count dictionary from the list of words in data."""
# YOUR CODE HERE
raise NotImplementedError()
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
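# One possible approach (a sketch for reference, under a different name so the
# exercise cell above is left as the template):
def count_words_sketch(data):
    """Return a word count dictionary from the list of words in data."""
    counts = {}
    for word in data:
        counts[word] = counts.get(word, 0) + 1
    return counts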
"""
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
"""
def sort_word_counts(wc):
"""Return a list of 2-tuples of (word, count), sorted by count descending."""
# YOUR CODE HERE
raise NotImplementedError()
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
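# One possible approach (a sketch for reference, again under a different name):
def sort_word_counts_sketch(wc):
    """Return (word, count) tuples sorted by count, highest counts first."""
    return sorted(wc.items(), key=lambda pair: pair[1], reverse=True)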
"""
Explanation: Write a function sort_word_counts that return a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the higest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
assert swc[0]==('i',43)
assert len(swc)==848
"""
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, the sort and save the result in a variable named swc.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this for grading the dotplot
"""
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
End of explanation
"""
|
feffenberger/StatisticalMethods
|
examples/Cepheids/PeriodMagnitudeRelation.ipynb
|
gpl-2.0
|
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 8.0)
"""
Explanation: A Period - Magnitude Relation in Cepheid Stars
Cepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).
A lot of monitoring data - repeated imaging and subsequent "photometry" of the star - can provide a measurement of the absolute magnitude (if we know the distance to its host galaxy) and the period of the oscillation.
Let's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).
Our goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.
End of explanation
"""
# First, we need to know what's in the data file.
!head -15 R11ceph.dat
class Cepheids(object):
def __init__(self,filename):
# Read in the data and store it in this master array:
self.data = np.loadtxt(filename)
self.hosts = self.data[:,1].astype('int').astype('str')
# We'll need the plotting setup to be the same each time we make a plot:
colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']
self.colors = dict(zip(self.list_hosts(), colornames))
self.xlimits = np.array([0.3,2.3])
self.ylimits = np.array([30.0,17.0])
return
def list_hosts(self):
# The list of (9) unique galaxy host names:
return np.unique(self.hosts)
def select(self,ID):
# Pull out one galaxy's data from the master array:
index = (self.hosts == str(ID))
self.mobs = self.data[index,2]
self.merr = self.data[index,3]
self.logP = np.log10(self.data[index,4])
return
def plot(self,X):
# Plot all the points in the dataset for host galaxy X.
ID = str(X)
self.select(ID)
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(self.logP, self.mobs, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)
plt.xlabel('$\\log_{10} P / {\\rm days}$',fontsize=20)
plt.ylabel('${\\rm magnitude (AB)}$',fontsize=20)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)
return
def overlay_straight_line_with(self,a=0.0,b=24.0):
# Overlay a straight line with gradient a and intercept b.
x = self.xlimits
y = a*x + b
plt.plot(x, y, 'k-', alpha=0.5, lw=2)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
return
def add_legend(self):
plt.legend(loc='upper left')
return
data = Cepheids('R11ceph.dat')
print(data.colors)
"""
Explanation: A Look at Each Host Galaxy's Cepheids
Let's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.
End of explanation
"""
data.plot(4258)
# for ID in data.list_hosts():
# data.plot(ID)
data.overlay_straight_line_with(a=-2.0,b=24.0)
data.add_legend()
"""
Explanation: OK, now we are all set up! Let's plot one of the datasets.
End of explanation
"""
def log_likelihood(logP,mobs,merr,a,b):
return 0.0 # m given a,b? mobs given m? Combining all data points?
def log_prior(a,b):
return 0.0 # Ranges? Functions?
def log_posterior(logP,mobs,merr,a,b):
return log_likelihood(logP,mobs,merr,a,b) + log_prior(a,b)
"""
Explanation: Q: Is the Cepheid Period-Luminosity relation likely to be well-modeled by a power law ?
Is it easy to find straight lines that "fit" all the data from each host? And do we get the same "fit" for each host?
Inferring the Period-Magnitude Relation
Let's try inferring the parameters $a$ and $b$ of the following linear relation:
$m = a\;\log_{10} P + b$
We have data consisting of observed magnitudes with quoted uncertainties, of the form
$m^{\rm obs} = 24.51 \pm 0.31$ at $\log_{10} P = \log_{10} (13.0/{\rm days})$
Let's draw the PGM together, on the whiteboard, imagining our way through what we would do to generate a mock dataset like the one we have.
Q: What is the PDF for $m$, ${\rm Pr}(m|a,b,H)$?
Q: What are reasonable assumptions about the sampling distribution for the $k^{\rm th}$ datapoint, ${\rm Pr}(m^{\rm obs}_k|m,H)$?
Q: What is the conditional PDF ${\rm Pr}(m_k|a,b,\log{P_k},H)$?
Q: What is the resulting joint likelihood, ${\rm Pr}(m^{\rm obs}|a,b,H)$?
Q: What could be reasonable assumptions for the prior ${\rm Pr}(a,b|H)$?
We should now be able to code up functions for the log likelihood, log prior and log posterior, such that we can evaluate them on a 2D parameter grid. Let's fill them in:
End of explanation
"""
# Select a Cepheid dataset:
data.select(4258)
# Set up parameter grids:
npix = 100
amin,amax = -4.0,-2.0
bmin,bmax = 25.0,27.0
agrid = np.linspace(amin,amax,npix)
bgrid = np.linspace(bmin,bmax,npix)
logprob = np.zeros([npix,npix])
# Loop over parameters, computing unnormlized log posterior PDF:
for i,a in enumerate(agrid):
for j,b in enumerate(bgrid):
logprob[j,i] = log_posterior(data.logP,data.mobs,data.merr,a,b)
# Normalize and exponentiate to get posterior density:
Z = np.max(logprob)
prob = np.exp(logprob - Z)
norm = np.sum(prob)
prob /= norm
"""
Explanation: Now, let's set up a suitable parameter grid and compute the posterior PDF!
End of explanation
"""
sorted = np.sort(prob.flatten())
C = sorted.cumsum()
# Find the pixel values that lie at the levels that contain
# 68% and 95% of the probability:
lvl68 = np.min(sorted[C > (1.0 - 0.68)])
lvl95 = np.min(sorted[C > (1.0 - 0.95)])
plt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[amin,amax,bmin,bmax])
plt.contour(prob,[lvl68,lvl95],colors='black',extent=[amin,amax,bmin,bmax])
plt.grid()
plt.xlabel('slope a')
plt.ylabel('intercept b / AB magnitudes')
"""
Explanation: Now, plot, with confidence contours:
End of explanation
"""
data.plot(4258)
data.overlay_straight_line_with(a=-3.0,b=26.3)
data.add_legend()
"""
Explanation: Are these inferred parameters sensible?
Let's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.
End of explanation
"""
prob_a_given_data = np.sum(prob,axis=0)
prob_b_given_data = np.sum(prob,axis=1)
print(prob_a_given_data.shape, np.sum(prob_a_given_data))
# Plot 1D distributions:
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].plot(agrid, prob_a_given_data)
ax[0].set_title('${\\rm Pr}(a|d)$')
ax[0].set_xlabel('slope $a$')
ax[0].set_ylabel('Posterior probability density')
right = ax[1].plot(bgrid, prob_b_given_data)
ax[1].set_title('${\\rm Pr}(b|d)$')
ax[1].set_xlabel('intercept $b$ / AB magnitudes')
ax[1].set_ylabel('Posterior probability density')
# Compress each PDF into a median and 68% credible interval, and report:
def compress_1D_pdf(x,pr,ci=68,dp=1):
# Interpret credible interval request:
low = (1.0 - ci/100.0)/2.0 # 0.16 for ci=68
high = 1.0 - low # 0.84 for ci=68
# Find cumulative distribution and compute percentiles:
cumulant = pr.cumsum()
pctlow = x[cumulant>low].min()
median = x[cumulant>0.50].min()
pcthigh = x[cumulant>high].min()
# Convert to error bars, and format a string:
errplus = np.abs(pcthigh - median)
errminus = np.abs(median - pctlow)
report = "$ "+str(round(median,dp))+"^{+"+str(round(errplus,dp))+"}_{-"+str(round(errminus,dp))+"} $"
return report
print("a = ",compress_1D_pdf(agrid,prob_a_given_data,ci=68,dp=2))
print("b = ",compress_1D_pdf(bgrid,prob_b_given_data,ci=68,dp=2))
"""
Explanation: Summarizing our Inferences
Let's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and 68% credible interval.
End of explanation
"""
|
zmechz/CarND-TrafficSign-P2
|
Traffic_Sign_Classifier.ipynb
|
mit
|
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "traffic-signs/train.p"
validation_file= "traffic-signs/valid.p"
testing_file = "traffic-signs/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_validation, y_validation = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
"""
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
"""
# Basic Summary and Data Set info
import numpy as np
import matplotlib.pyplot as plt
import csv
# TODO: Number of training / validation / testing examples
n_train = X_train.shape[0]
n_validation = X_validation.shape[0]
n_test = X_test.shape[0]
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
image_shape_v = X_validation[0].shape
image_shape_t = X_test[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).shape[0]
n_classes_v = np.unique(y_validation).shape[0]
n_classes_t = np.unique(y_test).shape[0]
class_list = []
with open('signnames.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
class_list.append(row['SignName'])
n_classes_csv = len(class_list)
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image Shape:")
print(" train dataset = ", image_shape)
print(" validation dataset = ", image_shape_v)
print(" test dataset = ", image_shape_t)
print("Number of classes:")
print(" distinct labels in train dataset = ", n_classes)
print(" distinct labels in validation dataset = ", n_classes_v)
print(" distinct labels in test dataset = ", n_classes_t)
print(" labels in csv = ", n_classes_csv)
"""
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
"""
print(" ")
print("Training samples distribution per class")
n_samples=[]
for i in range(0, n_classes):
n_samples.append(X_train[y_train == i].shape[0])
class_list = np.asarray(list(zip(class_list, n_samples)))
plt.figure(figsize=(10, 2))
plt.bar(range(0, n_classes), n_samples,color='blue',edgecolor='black')
plt.title("Training samples per class")
plt.xlabel("Id")
plt.ylabel("Number of samples")
plt.show()
print(" ")
print("Validation samples distribution per class")
n_samples=[]
for i in range(0, n_classes_v):
n_samples.append(X_validation[y_validation == i].shape[0])
plt.figure(figsize=(10, 2))
plt.bar(range(0, n_classes), n_samples,color='blue',edgecolor='black')
plt.title("Validation samples per class")
plt.xlabel("Id")
plt.ylabel("Number of samples")
plt.show()
print(" ")
print("Testing samples distribution per class")
n_samples=[]
for i in range(0, n_classes_t):
n_samples.append(X_test[y_test == i].shape[0])
plt.figure(figsize=(10, 2))
plt.bar(range(0, n_classes), n_samples,color='blue',edgecolor='black')
plt.title("Testing samples per class")
plt.xlabel("Id")
plt.ylabel("Number of samples")
plt.show()
"""
Explanation: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
End of explanation
"""
### German sign images are already 32x32
import cv2
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
# class_list already holds (SignName, sample count) pairs from the summary cell above;
# re-zipping it here would corrupt it, so it is reused directly.
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
print("Classifier ID = ", y_train[index], ", Description = ", class_list[y_train[index],0])
"""
Explanation: Select one training image:
End of explanation
"""
def perform_grayscale(image):
return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
def perform_hist_equalization(grayscale_image):
return cv2.equalizeHist(grayscale_image)
def perform_image_normalization(equalized_image):
return equalized_image/255.-.5
def pre_process_image(image):
image = perform_grayscale(image)
image = perform_hist_equalization(image)
image = perform_image_normalization(image)
return np.expand_dims(image,axis=3)
original_image = X_train[index].squeeze()
grayscale_image = perform_grayscale(original_image)
equalized_image = perform_hist_equalization(grayscale_image)
normalized_image = perform_image_normalization(equalized_image)
image_shape = np.shape(normalized_image)
print("Original image:")
plt.figure(figsize=(1,1))
plt.imshow(original_image)
print(y_train[index])
plt.show()
print("Grayscale image data shape =", image_shape)
print("Preprocess Image techiniques applied")
print("Converted to grayscale")
plt.figure(figsize=(1,1))
plt.imshow(grayscale_image, cmap='gray')
plt.show()
print("Converted to grayscale + histogram equalization:")
plt.figure(figsize=(1,1))
plt.imshow(equalized_image, cmap='gray')
plt.show()
print("Converted to grayscale + histogram equalization + normalization:")
plt.figure(figsize=(1,1))
plt.imshow(normalized_image, cmap='gray')
plt.show()
new_image = pre_process_image(image)
new_image_shape = np.shape(new_image)
print("New Image data shape =", new_image_shape)
"""
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data (a minimal augmentation sketch follows this list).
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
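One way the "fake data" point above could be realized (this notebook does not do it; the 10-degree range is an arbitrary assumption) is to perturb existing training images, for example with a small random rotation:
```
import cv2
import numpy as np

def random_rotate(image, max_angle=10.0):
    # rotate a 32x32 sign image by a small random angle about its centre
    angle = np.random.uniform(-max_angle, max_angle)
    M = cv2.getRotationMatrix2D((16, 16), angle, 1.0)
    return cv2.warpAffine(image, M, (32, 32))
```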
Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
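As a minimal sketch, that quick approximate normalization is a one-liner (image here stands for any uint8 image array):
```
normalized = (image.astype(np.float32) - 128) / 128
```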
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
End of explanation
"""
import cv2
img_resize = 32
N_classes = 43
image_shape = (img_resize,img_resize)
img_size_flat = img_resize*img_resize
image_S_train = np.array([pre_process_image(X_train[i]) for i in range(len(X_train))],
dtype = np.float32)
image_S_valid = np.array([pre_process_image(X_validation[i]) for i in range(len(X_validation))],
dtype = np.float32)
image_S_test = np.array([pre_process_image(X_test[i]) for i in range(len(X_test))],
dtype = np.float32)
### Shuffle the training data.
from sklearn.utils import shuffle
image_S_train, y_train = shuffle(image_S_train, y_train)
"""
Explanation: Changing training data
End of explanation
"""
import tensorflow as tf
EPOCHS = 80
BATCH_SIZE = 128
"""
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
"""
from tensorflow.contrib.layers import flatten
n_channels = 1
def dropout_layer(layer, keep_prob):
layer_drop = tf.nn.dropout(layer, keep_prob)
return layer_drop
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
keep_prob = 0.75
mu = 0
sigma = 0.1
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x1 (single grayscale channel). Output = 28x28x12.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, n_channels, 12), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(12))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x12. Output = 14x14x12.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#layer_conv1_drop = dropout_layer(conv1, 0.5)
# SOLUTION: Layer 2: Convolutional. Output = 10x10x32.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 12, 32), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(32))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x32. Output = 5x5x32.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2-b: Convolutional. Input = 5x5x32. Output = 3x3x64.
conv3_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 32, 64), mean = mu, stddev = sigma))
conv3_b = tf.Variable(tf.zeros(64))
conv3 = tf.nn.conv2d(conv2, conv3_W, strides=[1, 1, 1, 1], padding='VALID') + conv3_b
# TODO: Activation.
conv3 = tf.nn.relu(conv3)
    # SOLUTION: Flatten conv2. Input = 5x5x32. Output = 800. (conv3 is returned only for feature-map visualization.)
fc0 = flatten(conv2)
fc0 = tf.nn.dropout(fc0, keep_prob)
# SOLUTION: Layer 3: Fully Connected. Input = 800. Output = 256.
fc1_W = tf.Variable(tf.truncated_normal(shape=(800, 256), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(256))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob)
    # SOLUTION: Layer 4: Fully Connected. Input = 256. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(256, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits, conv1, conv2, conv3
"""
Explanation: Model Architecture
Using LeNet-5 based architecture
Implement the LeNet-5 neural network architecture.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. The raw German sign images are 32x32 RGB (C = 3), but after the grayscale preprocessing step C is 1 in this implementation.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 43 outputs.
Output
Return the logits from the final fully connected layer.
End of explanation
"""
x = tf.placeholder(tf.float32, (None, 32, 32, n_channels))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
keep_prob = tf.placeholder(tf.float32)
"""
Explanation: Features and Labels
Train LeNet to classify the German signs.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
"""
rate = 0.0005
logits, conv1, conv2, conv3 = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify German sign images.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0.0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(image_S_train)
print("Training...")
print()
val_accu_list = []
batch_acc_list = []
for i in range(EPOCHS):
# X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = image_S_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
training_accuracy = evaluate(image_S_train, y_train)
validation_accuracy = evaluate(image_S_valid, y_validation)
batch_accuracy = evaluate(batch_x, batch_y)
val_accu_list.append(validation_accuracy)
batch_acc_list.append(batch_accuracy)
print("EPOCH {} ...".format(i+1))
print("Training Accuracy = {:.3f}".format(training_accuracy))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './traffic_classifier_data')
print("Model saved")
"""
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
"""
plt.plot(batch_acc_list, label="Batch Accuracy (training)")
plt.plot(val_accu_list, label="Validation Accuracy")
plt.ylim(.4, 1.1)
plt.xlim(0, EPOCHS)
plt.legend(loc='lower right')
plt.show()
"""
Explanation: Plot data
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(image_S_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation
"""
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import cv2
import matplotlib.pyplot as plt
new_images_original = []
test_image_labels = list()
test_image_labels.append(27)
test_image_labels.append(25)
test_image_labels.append(14)
test_image_labels.append(33)
test_image_labels.append(13)
path = "new_images/"
files = sorted(os.listdir(path))
print("Original images:")
i = 0
for file in files:
print(path+file)
image = cv2.imread(path+file)
image = image[...,::-1] # Convert from BGR <=> RGB
resized_image = cv2.resize(image,(32,32))
new_images_original.append(resized_image)
label = test_image_labels[i]
desc = class_list[[label],0]
print("Label = ", label, ". Desc = ", desc)
i += 1
plt.figure(figsize=(1, 1))
plt.imshow(image)
plt.show()
print(test_image_labels)
test_images = []
print("Preprocessed images:")
for image in new_images_original:
preprocessed_image = pre_process_image(image)
test_images.append(preprocessed_image)
plt.figure(figsize=(1, 1))
plt.imshow(preprocessed_image[:,:,0], cmap='gray')
plt.show()
"""
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
"""
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, './traffic_classifier_data')
top5_prob = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=5, sorted=True), feed_dict = {x: test_images, keep_prob:1})
# predicted_logits = sess.run(logits, feed_dict={x:test_images, keep_prob:1})
# predicts = sess.run(tf.nn.top_k(top5_prob, k=5, sorted=True))
    # top5_prob.indices[:, 0] already holds the predicted class id for each image
# predictions_labels = np.argmax(predictions, axis=1)
i=0
for image in test_images:
plt.figure(figsize=(1, 1))
print("Index=", top5_prob.indices[i, 0])
plt.xlabel(class_list[top5_prob.indices[i, 0],0])
plt.imshow(image[:,:,0], cmap='gray')
plt.show()
i += 1
"""
Explanation: Predict the Sign Type for Each Image
End of explanation
"""
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(test_images, test_image_labels)
print("Test Accuracy = {:.2f}".format(test_accuracy))
"""
Explanation: Analyze Performance
End of explanation
"""
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
test_images = np.asarray(test_images)
print(test_images.shape)
plt.figure(figsize=(16, 21))
for i in range(5):
plt.subplot(12, 2, 2*i+1)
plt.imshow(test_images[i][:,:,0], cmap="gray")
plt.axis('off')
plt.title(i)
plt.subplot(12, 2, 2*i+2)
plt.axis([0, 1., 0, 6])
plt.barh(np.arange(1, 6, 1), (np.absolute(top5_prob.values[i, :]/sum(np.absolute(top5_prob.values[i, :])))))
labs=[class_list[j][0] for j in top5_prob.indices[i, :]]
plt.yticks(np.arange(1, 6, 1), labs)
plt.show()
"""
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
"""
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(8,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
with tf.Session() as sess:
saver.restore(sess, './traffic_classifier_data')
print("Convolution #1")
print(test_images[0].shape)
print(test_images.shape)
outputFeatureMap(test_images,conv1)
with tf.Session() as sess:
saver.restore(sess, './traffic_classifier_data')
print("Convolution #2")
outputFeatureMap(test_images,conv2)
with tf.Session() as sess:
saver.restore(sess, './traffic_classifier_data')
print("Convolution #3")
outputFeatureMap(test_images,conv3)
"""
Explanation: Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Step 4 (Optional): Visualize the Neural Network's State with Test Images
This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
End of explanation
"""
|
stevetjoa/stanford-mir
|
basic_feature_extraction.ipynb
|
mit
|
# imports used by this notebook's cells
from pathlib import Path
import numpy
import matplotlib.pyplot as plt
import librosa
import librosa.display
import sklearn.preprocessing

kick_signals = [
librosa.load(p)[0] for p in Path().glob('audio/drum_samples/train/kick_*.mp3')
]
snare_signals = [
librosa.load(p)[0] for p in Path().glob('audio/drum_samples/train/snare_*.mp3')
]
len(kick_signals)
len(snare_signals)
"""
Explanation: ← Back to Index
Basic Feature Extraction
Somehow, we must extract the characteristics of our audio signal that are most relevant to the problem we are trying to solve. For example, if we want to classify instruments by timbre, we will want features that distinguish sounds by their timbre and not their pitch. If we want to perform pitch detection, we want features that distinguish pitch and not timbre.
This process is known as feature extraction.
Let's begin with twenty audio files: ten kick drum samples, and ten snare drum samples. Each audio file contains one drum hit.
Read and store each signal:
End of explanation
"""
plt.figure(figsize=(15, 6))
for i, x in enumerate(kick_signals):
plt.subplot(2, 5, i+1)
librosa.display.waveplot(x[:10000])
plt.ylim(-1, 1)
"""
Explanation: Display the kick drum signals:
End of explanation
"""
plt.figure(figsize=(15, 6))
for i, x in enumerate(snare_signals):
plt.subplot(2, 5, i+1)
librosa.display.waveplot(x[:10000])
plt.ylim(-1, 1)
"""
Explanation: Display the snare drum signals:
End of explanation
"""
def extract_features(signal):
return [
librosa.feature.zero_crossing_rate(signal)[0, 0],
librosa.feature.spectral_centroid(signal)[0, 0],
]
"""
Explanation: Constructing a Feature Vector
A feature vector is simply a collection of features. Here is a simple function that constructs a two-dimensional feature vector from a signal:
End of explanation
"""
kick_features = numpy.array([extract_features(x) for x in kick_signals])
snare_features = numpy.array([extract_features(x) for x in snare_signals])
"""
Explanation: If we want to aggregate all of the feature vectors among signals in a collection, we can use a list comprehension as follows:
End of explanation
"""
plt.figure(figsize=(14, 5))
plt.hist(kick_features[:,0], color='b', range=(0, 0.2), alpha=0.5, bins=20)
plt.hist(snare_features[:,0], color='r', range=(0, 0.2), alpha=0.5, bins=20)
plt.legend(('kicks', 'snares'))
plt.xlabel('Zero Crossing Rate')
plt.ylabel('Count')
plt.figure(figsize=(14, 5))
plt.hist(kick_features[:,1], color='b', range=(0, 4000), bins=30, alpha=0.6)
plt.hist(snare_features[:,1], color='r', range=(0, 4000), bins=30, alpha=0.6)
plt.legend(('kicks', 'snares'))
plt.xlabel('Spectral Centroid (frequency bin)')
plt.ylabel('Count')
"""
Explanation: Visualize the differences in features by plotting separate histograms for each of the classes:
End of explanation
"""
feature_table = numpy.vstack((kick_features, snare_features))
print(feature_table.shape)
"""
Explanation: Feature Scaling
The features that we used in the previous example included zero crossing rate and spectral centroid. These two features are expressed using different units. This discrepancy can pose problems when performing classification later. Therefore, we will normalize each feature vector to a common range and store the normalization parameters for later use.
Many techniques exist for scaling your features. For now, we'll use sklearn.preprocessing.MinMaxScaler. MinMaxScaler returns an array of scaled values such that each feature dimension is in the range -1 to 1.
Let's concatenate all of our feature vectors into one feature table:
End of explanation
"""
scaler = sklearn.preprocessing.MinMaxScaler(feature_range=(-1, 1))
training_features = scaler.fit_transform(feature_table)
print(training_features.min(axis=0))
print(training_features.max(axis=0))
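# A minimal usage sketch (not part of the original notebook): the stored scaler can be
# reused later on features extracted from new signals via transform(); here a training
# kick signal stands in for a new, unseen sample.
example_features = numpy.array([extract_features(kick_signals[0])])
print(scaler.transform(example_features))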
"""
Explanation: Scale each feature dimension to be in the range -1 to 1:
End of explanation
"""
plt.scatter(training_features[:10,0], training_features[:10,1], c='b')
plt.scatter(training_features[10:,0], training_features[10:,1], c='r')
plt.xlabel('Zero Crossing Rate')
plt.ylabel('Spectral Centroid')
"""
Explanation: Plot the scaled features:
End of explanation
"""
|
NeuroDataDesign/fngs
|
docs/ebridge2/fngs_merge/week_0602/timeseries.ipynb
|
apache-2.0
|
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
fngs_ts = np.load('/home/eric/cmp/fngs/outputs/ts_roi/pp264-2mm/sub-0025864_ses-1_bold_pp264-2mm.npy')
cpac_ts = np.load('/home/eric/cmp/cpac/pipeline_HCPtest/sub-0025864_ses-1/roi_timeseries/_scan_rest_1_rest/_csf_threshold_0.96/_gm_threshold_0.7/_wm_threshold_0.96/_compcor_ncomponents_5_selector_pc10.linear1.wm0.global0.motion1.quadratic1.gm0.compcor1.csf0/_mask_pp264-2mm/roi_pp264-2mm.npz')['roi_data']
fig = plt.figure()
tr = 2
ax = fig.add_subplot(111)
# visualize FNGS timeseries
ax.plot(np.arange(0, fngs_ts.shape[1]*tr, tr), np.transpose(fngs_ts), alpha=0.6)
ax.set_ylabel('Intensity')
ax.set_xlabel('Time (s)')
ax.set_title('FNGS Timeseries')
fig.show()
fig = plt.figure()
ax = fig.add_subplot(111)
# visualize CPAC timeseries
ax.plot(np.arange(0, cpac_ts.shape[1]*tr, tr), cpac_ts.transpose(), alpha=0.7)
ax.set_ylabel('Normalized Intensity')
ax.set_xlabel('Time (s)')
ax.set_ylim([-200, 200])
ax.set_title('CPAC Timeseries')
fig.show()
"""
Explanation: Timeseries Comparison
In this notebook, we compare the impact on the fMRI timeseries that the respective nuisance-correction strategies used by FNGS and CPAC have on the resulting timeseries.
As CPAC does not produce intermediate derivatives, we unfortunately cannot make step-by-step comparisons, and instead must rely on end-timeseries to make our comparisons.
First, we will begin by visualizing the timeseries produced by each service:
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.imshow(np.abs(np.corrcoef(fngs_ts)))
ax.set_xlabel('ROI')
ax.set_ylabel('ROI')
ax.set_title('FNGS Correlation Matrix')
cbar = fig.colorbar(cax)
fig.show()
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.imshow(np.abs(np.corrcoef(cpac_ts)))
ax.set_xlabel('ROI')
ax.set_ylabel('ROI')
ax.set_title('CPAC Correlation Matrix')
cbar = fig.colorbar(cax)
fig.show()
"""
Explanation: Things to Note:
CPAC timeseries has been z-scored by default. Also of note, the CPAC timeseries clearly removes much of the global trending. The timeseries appears relatively "flat" in comparison due to the WM compcor.
Note that the CPAC timeseries has some clear low-frequency drift present (particularly, you can see there is a gradual upwards trend from 150 - 250 seconds, and then a slow downwards trend to 300 s, and then again an upwards trend, etc).
Also, note that the CPAC timeseries appear to be slightly decorrelated. This is because aCompCor can behave similarly to Global Signal Regression, in that it can remove some of the global fluctuations that may be present, which are thought to be due to physiological noise, ie heartbeat, breathing, etc.
Correlation Comparison
Next, we look at the correlation matrices produced. We note that the CPAC timeseries above has much of the global correlation between timeseries removed due to the WM compcor that is performed. We would expect to get significantly sparser connectomes as a result:
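As a rough, illustrative check of that expectation (the 0.3 threshold below is an arbitrary assumption, not part of either pipeline), one could compare the edge densities of the two correlation matrices after thresholding:
def edge_density(ts, thresh=0.3):
    r = np.abs(np.corrcoef(ts))
    np.fill_diagonal(r, 0)  # ignore self-correlations
    return (r > thresh).mean()
print('FNGS edge density: %.3f' % edge_density(fngs_ts))
print('CPAC edge density: %.3f' % edge_density(cpac_ts))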
End of explanation
"""
|
empet/Plotly-plots
|
Plotly-cube.ipynb
|
gpl-3.0
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
img=plt.imread('Data/Plotly-logo3.png')
plt.imshow(img)
print 'image shape', img.shape
"""
Explanation: Plotly Cube: a cube with Plotly logo mapped on its faces
Our aim is to plot a cube having on each face the Plotly logo.
For, we choose a png image representing the Plotly logo, read it via matplotlib, and crop it such that to get a numpy array of shape (L,L).
Each cube face will be defined as a Plotly Surface, colored via a discrete colorscale, according to the values in the third array of the image, representing the blue chanel.
Read the image:
End of explanation
"""
my_img=img[10:-10, :, :]
my_img.shape
plt.imshow(my_img)
"""
Explanation: Crop the image:
End of explanation
"""
pl_img = my_img[:, :, 2]  # select the blue channel
L, C=pl_img.shape
assert L==C
plotly_blue='rgb(68, 122, 219)'# the blue color in Plotly logo
import plotly.plotly as py
from plotly.graph_objs import *
"""
Explanation: Since our image contains only two colors (white and blue) we select from my_img the array corresponding to the blue chanel:
End of explanation
"""
pl_scl=[ [0.0, 'rgb(68, 122, 219)'], #plotly_blue
[0.5, 'rgb(68, 122, 219)'],
[0.5, 'rgb(255,255,255)' ], #white
[1.0, 'rgb(255,255,255)' ]]
"""
Explanation: Define a discrete colorscale from plotly_blue, and the white color:
End of explanation
"""
x=np.linspace(0, L-1, L)
y=np.linspace(0, L-1, L)
X, Y = np.meshgrid(x, y)
"""
Explanation: Prepare data to represent a cube face as a Plotly Surface:
End of explanation
"""
zm=np.zeros(X.shape)
zM=(L-1)*np.ones(X.shape)
"""
Explanation: Define the array "equations" of cube faces.
The upper face has the equation zM=L-1, the lower one, zm=0, and similarly for x=constant faces and y=constant faces:
End of explanation
"""
def make_cube_face(x,y,z, colorscale=pl_scl, is_scl_reversed=False,
surfacecolor=pl_img, text='Plotly cube'):
return Surface(x=x, y=y, z=z,
colorscale=colorscale,
reversescale=is_scl_reversed,
showscale=False,
surfacecolor=surfacecolor,
text=text,
hoverinfo='text'
)
"""
Explanation: The next function returns a Surface:
End of explanation
"""
trace_zm=make_cube_face(x=X, y=Y, z=zm, is_scl_reversed=True, surfacecolor=pl_img)
trace_zM=make_cube_face(x=X, y=Y, z=zM, is_scl_reversed=True, surfacecolor=np.flipud(pl_img))
trace_xm=make_cube_face(x=zm, y=Y, z=X, surfacecolor=np.flipud(pl_img))
trace_xM=make_cube_face(x=zM, y=Y, z=X, surfacecolor=pl_img)
trace_ym=make_cube_face(x=Y, y=zm, z=X, surfacecolor=pl_img)
trace_yM=make_cube_face(x=Y, y=zM, z=X, surfacecolor=np.fliplr(pl_img))
"""
Explanation: In order to define a cube face as a Plotly Surface, it is referenced to a positively oriented cartesian system of coordinates, (X,Y), associated to the induced planar coordinate system of that face (when looking at it from the outside) from the 3d system of coordinates of the cube.
The image represented by pl_img is then fitted to this system of coordinates, eventually by flipping its rows or columns.
The Surface instances, representing the cube faces, are defined as follows:
End of explanation
"""
noaxis=dict(
showbackground=False,
showgrid=False,
showline=False,
showticklabels=False,
ticks='',
title='',
zeroline=False)
min_val=-0.01
max_val=L-1+0.01
layout = Layout(
title="",
width=500,
height=500,
scene=Scene(xaxis=XAxis(noaxis, range=[min_val, max_val]),
yaxis=YAxis(noaxis, range=[min_val, max_val]),
zaxis=ZAxis(noaxis, range=[min_val, max_val]),
aspectratio=dict(x=1,
y=1,
z=1
),
camera=dict(eye=dict(x=-1.25, y=-1.25, z=1.25)),
),
paper_bgcolor='rgb(240,240,240)',
hovermode='closest',
margin=dict(t=50)
)
fig=Figure(data=Data([trace_zm, trace_zM, trace_xm, trace_xM, trace_ym, trace_yM]), layout=layout)
py.sign_in('empet', 'api_key')
py.iplot(fig, filename='Plotly-cube')
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Set the plot layout:
End of explanation
"""
|
bspalding/research_public
|
lectures/drafts/Graphic presentation of data.ipynb
|
apache-2.0
|
import numpy as np
import matplotlib.pyplot as plt
# Get returns data for S&P 500
start = '2014-01-01'
end = '2015-01-01'
spy = get_pricing('SPY', fields='price', start_date=start, end_date=end).pct_change()[1:]
# Plot a histogram using 20 bins
fig = plt.figure(figsize = (16, 7))
_, bins, _ = plt.hist(spy, 20)
labels = ['%.3f' % a for a in bins] # Reduce precision so labels are legible
plt.xticks(bins, labels)
plt.xlabel('Returns')
plt.ylabel('Number of Days')
plt.title('Frequency distribution of S&P 500 returns, 2014');
"""
Explanation: Histogram
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
A histogram displays a frequency distribution using bars. It lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval.
End of explanation
"""
# Example of a cumulative histogram
fig = plt.figure(figsize = (16, 7))
_, bins, _ = plt.hist(spy, 20, cumulative=True)
labels = ['%.3f' % a for a in bins]
plt.xticks(bins, labels)
plt.xlabel('Returns')
plt.ylabel('Number of Days')
plt.title('Cumulative distribution of S&P 500 returns, 2014');
"""
Explanation: The graph above shows, for example, that the daily returns on the S&P 500 were between 0.010 and 0.013 on 10 of the days in 2014. Note that we are completely discarding the dates corresponding to these returns.
An alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in.
End of explanation
"""
# Get returns data for some security
asset = get_pricing('MSFT', fields='price', start_date=start, end_date=end).pct_change()[1:]
# Plot the asset returns vs S&P 500 returns
plt.scatter(asset, spy)
plt.xlabel('MSFT')
plt.ylabel('SPY')
plt.title('Returns in 2014');
"""
Explanation: Scatter plot
A scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on.
End of explanation
"""
spy.plot()
plt.ylabel('Returns');
"""
Explanation: Line graph
A line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. This necessarily involves "connecting the dots" between the data points.
End of explanation
"""
|
kadamkaustubh/project-Goldilocks
|
Ch2_MorePyMC_PyMC2.ipynb
|
mit
|
import pymc as pm
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
"""
Explanation: Chapter 2
This chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.
A little more on PyMC
Parent and Child relationships
To assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables.
parent variables are variables that influence another variable.
child variable are variables that are affected by other variables, i.e. are the subject of parent variables.
A variable can be both a parent and child. For example, consider the PyMC code below.
End of explanation
"""
print("Children of `parameter`: ")
print(parameter.children)
print("\nParents of `data_generator`: ")
print(data_generator.parents)
print("\nChildren of `data_generator`: ")
print(data_generator.children)
"""
Explanation: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Likewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.
This nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.
End of explanation
"""
print("parameter.value =", parameter.value)
print("data_generator.value =", data_generator.value)
print("data_plus_one.value =", data_plus_one.value)
"""
Explanation: Of course a child can have more than one parent, and a parent can have many children.
PyMC Variables
All PyMC variables also expose a value attribute. This method produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:
End of explanation
"""
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value, "\n")
lambda_1.random(), lambda_2.random(), tau.random()
print("After calling random() on the variables...")
print("lambda_1.value = %.3f" % lambda_1.value)
print("lambda_2.value = %.3f" % lambda_2.value)
print("tau.value = %.3f" % tau.value)
"""
Explanation: PyMC is concerned with two types of programming variables: stochastic and deterministic.
stochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.
deterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is.
We will detail each below.
Initializing Stochastic variables
Initializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:
some_variable = pm.DiscreteUniform("discrete_uni_var", 0, 4)
where 0, 4 are the DiscreteUniform-specific lower and upper bound on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)
The name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.
For multivariable problems, rather than creating a Python array of stochastic variables, addressing the size keyword in the call to a Stochastic variable creates multivariate array of (independent) stochastic variables. The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays.
The size argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:
beta_1 = pm.Uniform("beta_1", 0, 1)
beta_2 = pm.Uniform("beta_2", 0, 1)
...
we can instead wrap them into a single variable:
betas = pm.Uniform("betas", 0, 1, size=N)
Calling random()
We can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.
End of explanation
"""
type(lambda_1 + lambda_2)
"""
Explanation: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Warning: Don't update stochastic variables' values in-place.
Straight from the PyMC docs, we quote [4]:
Stochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... The only way a stochastic variable's value should be updated is using statements of the following form:
A.value = new_value
The following are in-place updates and should never be used:
A.value += 3
A.value[2,1] = 5
A.value.attribute = new_attribute_value
Deterministic variables
Since most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:
@pm.deterministic
def some_deterministic_var(v1=v1,):
#jelly goes here.
For all purposes, we can treat the object some_deterministic_var as a variable and not a Python function.
Prepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:
End of explanation
"""
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
"""
Explanation: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
And in PyMC code:
End of explanation
"""
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
"""
Explanation: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Inside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:
@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
return stoch.value**2
will return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable.
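For example, the corrected version of the snippet above is simply:
@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
    return stoch**2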
Notice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values.
Including observations in the Model
At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of $\lambda_1$ look like?"
End of explanation
"""
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print("value: ", fixed_variable.value)
print("calling .random()")
fixed_variable.random()
print("value: ", fixed_variable.value)
"""
Explanation: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
PyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an Numpy array for speed). For example:
End of explanation
"""
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print(obs.value)
"""
Explanation: This is how we include data into our models: initializing a stochastic variable to have a fixed value.
To complete our text message example, we fix the PyMC variable observations to the observed dataset.
End of explanation
"""
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
"""
Explanation: Finally...
We wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)
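For example, when we get to fitting (Chapter 3), passing a plain list of the variables works just as well as passing the Model:
mcmc = pm.MCMC([obs, lambda_, lambda_1, lambda_2, tau])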
End of explanation
"""
tau = pm.rdiscrete_uniform(0, 80)
print(tau)
"""
Explanation: Modeling approaches
A good starting point in Bayesian modeling is to think about how your data might have been generated. Put yourself in an omniscient position, and try to imagine how you would recreate the dataset.
In the last chapter we investigated text message data. We begin by asking how our observations may have been generated:
We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.
Next, we think, "Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.
Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the latter behaviour. We don't know when the behaviour switches though, but call the switchpoint $\tau$.
What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.
Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$, ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here.
What is a good value for $\alpha$ then? We think that the $\lambda$s are between 10-30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similar, a too-high alpha misses our prior belief as well. A good idea for $\alpha$ as to reflect our belief is to set the value so that the mean of $\lambda$, given $\alpha$, is equal to our observed mean. This was shown in the last chapter.
We have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.
Below we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library )
<img src="http://i.imgur.com/7J30oCG.png" width = 700/>
PyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:
Probabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.
Same story; different ending.
Interestingly, we can create new datasets by retelling the story.
For example, if we reverse the above steps, we can simulate a possible realization of the dataset.
1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:
End of explanation
"""
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print(lambda_1, lambda_2)
"""
Explanation: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:
End of explanation
"""
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
"""
Explanation: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:
End of explanation
"""
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
"""
Explanation: 4. Plot the artificial dataset:
End of explanation
"""
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
figsize(12.5, 5)
plt.suptitle("More examples of artificial datasets", fontsize=14)
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
"""
Explanation: It is okay that our fictional dataset does not look like our observed dataset: the probability is incredibly small it indeed would. PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:
End of explanation
"""
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
"""
Explanation: Later we will see how we use this to make predictions and test the appropriateness of our models.
Example: Bayesian A/B testing
A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results.
Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards.
Often, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a "Z-score" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural.
A Simple Case
As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \lt p_A \lt 1$ probability that users who, upon shown site A, eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us.
Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like:
fraction of users who make purchases,
frequency of social attributes,
percent of internet users with cats etc.
are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.
The observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
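A two-line simulation makes the distinction concrete (the resulting count is random, so the numbers will vary):
rolls = np.random.randint(1, 7, 100)  # 100 rolls of a fair six-sided die
print("observed frequency of 1: %.3f, true frequency: %.3f" % ((rolls == 1).mean(), 1 / 6.))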
With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be.
To set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:
End of explanation
"""
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(occurrences.sum())
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % occurrences.mean())
print("Does this equal the true frequency? %s" % (occurrences.mean() == p_true))
"""
Explanation: Had we had stronger beliefs, we could have expressed them in the prior above.
For this example, consider $p_A = 0.05$ and $N = 1500$ users shown site A; we will simulate whether each user made a purchase or not. To simulate the $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.
The observed frequency is:
End of explanation
"""
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
"""
Explanation: We combine the observations into the PyMC observed variable, and run our inference algorithm:
End of explanation
"""
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 250, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend();
"""
Explanation: We plot the posterior distribution of the unknown $p_A$ below:
End of explanation
"""
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print("Obs from Site A: ", observations_A[:30].astype(int), "...")
print("Obs from Site B: ", observations_B[:30].astype(int), "...")
print(observations_A.mean())
print(observations_B.mean())
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
"""
Explanation: Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
A and B Together
A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (For this exercise we'll assume $p_B = 0.04$, so $\text{delta} = 0.01$, and $N_B = 750$, significantly less than $N_A$; we will simulate site B's data just as we did site A's.)
End of explanation
"""
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
"""
Explanation: Below we plot the posterior distributions for the three unknowns:
End of explanation
"""
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean())
print("Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean())
"""
Explanation: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability that this inference is incorrect is easily computable:
End of explanation
"""
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
"""
Explanation: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.
I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation.
An algorithm for human deceit
Social data has an additional layer of interest, as people are not always honest with responses, which adds a further complication to inference. For example, simply asking individuals "Have you ever cheated on a test?" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is at least as large as your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit "Yes" to cheating when in fact they hadn't cheated).
To present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.
The Binomial Distribution
The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. The mass distribution looks like:
$$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$
If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.
End of explanation
"""
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
"""
Explanation: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
Example: Cheating among students
We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ "Yes I did cheat" answers. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$.
This is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:
In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers "Yes, I did cheat" if the coin flip lands heads, and "No, I did not cheat", if the coin flip lands tails. This way, the interviewer does not know if a "Yes" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers.
I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of cheaters.
Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.
End of explanation
"""
true_answers = pm.Bernoulli("truths", p, size=N)
"""
Explanation: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
End of explanation
"""
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print(first_coin_flips.value)
"""
Explanation: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.
End of explanation
"""
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
"""
Explanation: Although not everyone flips a second time, we can still model the possible realization of second coin-flips:
End of explanation
"""
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
"""
Explanation: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable:
End of explanation
"""
observed_proportion.value
"""
Explanation: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated, or ii) the first toss is tails and the second is heads; they are 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
End of explanation
"""
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
"""
Explanation: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
The researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:
End of explanation
"""
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
"""
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
"""
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
"""
Explanation: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 and 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad, since the window in which the true value most likely lives is 0.3 wide. Have we even gained anything, or are we still too uncertain about the true frequency?
I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. Since we started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, we can be confident that there were cheaters.
This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
Alternative PyMC Model
Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes:
\begin{align}
P(\text{"Yes"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\
& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\
& = \frac{p}{2} + \frac{1}{4}
\end{align}
Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
End of explanation
"""
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
"""
Explanation: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.
This is where we include our observed 35 "Yes" responses. In the declaration of the pm.Binomial, we include value = 35 and observed = True.
End of explanation
"""
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
"""
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
"""
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
"""
Explanation: More PyMC Tricks
Protip: Lighter deterministic variables with Lambda class
Sometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? PyMC's built-in Lambda class can handle this with the elegance and simplicity required. For example,
beta = pm.Normal("coefficients", 0, 1, size=(N, 1))
x = np.random.randn(N, 1)
linear_combination = pm.Lambda("linear_combination",
                               lambda x=x, beta=beta: np.dot(x.T, beta))
Protip: Arrays of PyMC variables
There is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:
End of explanation
"""
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
"""
Explanation: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:
Example: Challenger Space Shuttle Disaster <span id="challenger"/>
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):
End of explanation
"""
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.title("Logistic functon plotted for several value of $\\beta$ parameter", fontsize=14)
plt.legend();
"""
Explanation: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
End of explanation
"""
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.title("Logistic functon with bias, plotted for several value of $\\alpha$ bias parameter", fontsize=14)
plt.legend(loc="lower left");
"""
Explanation: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$
Some plots are below, with differing $\alpha$.
End of explanation
"""
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
"""
Explanation: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Let's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.
Normal distributions
A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those familiar with the Normal distribution already have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive.
The probability density function of a $N( \mu, 1/\tau)$ random variable is:
$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$
We plot some different density functions below.
End of explanation
"""
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the `value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
"""
Explanation: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:
$$ E[ X | \mu, \tau] = \mu$$
and its variance is equal to the inverse of $\tau$:
$$Var( X | \mu, \tau ) = \frac{1}{\tau}$$
Below we continue our modeling of the Challenger spacecraft:
End of explanation
"""
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
"""
Explanation: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:
$$ \text{Defect Incident, $D_i$} \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1..N$$
where $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC.
End of explanation
"""
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
"""
Explanation: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
End of explanation
"""
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
"""
Explanation: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
End of explanation
"""
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
"""
Explanation: Above we also plotted two possible realizations of what the actual underlying system might be. Both are as likely as any other posterior draw. The blue line is what occurs when we average all of the roughly 10,000 retained posterior curves together.
An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.
End of explanation
"""
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
"""
Explanation: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.
What about the day of the Challenger disaster?
On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.
End of explanation
"""
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
"""
Explanation: Is our model appropriate?
The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.
We can turn the question around: how can we test whether our model is a bad fit? An idea is to compare the observed data (which, if we recall, is a fixed stochastic variable) with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model likely does not accurately represent the observed data.
Previously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new Stochastic variable that is exactly the same as the variable that stored our observations, minus the observations themselves. If you recall, the Stochastic variable that stored our observed data was:
observed = pm.Bernoulli( "bernoulli_obs", p, value=D, observed=True)
Hence we create:
simulated_data = pm.Bernoulli("simulation_data", p)
Let's simulate 10,000 of these datasets:
End of explanation
"""
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
"""
Explanation: Note that the above plots are all different (if you can think of a cleaner way to present this, please send a pull request!).
We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots [8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.
For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:
End of explanation
"""
ix = np.argsort(posterior_probability)
print("probb | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
"""
Explanation: Next we sort each column by the posterior probabilities:
End of explanation
"""
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
"""
Explanation: We can present the above data better in a figure: I've wrapped this up into a separation_plot function.
End of explanation
"""
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
"""
Explanation: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:
the perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur.
a completely random model, which predicts random probabilities regardless of temperature.
a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23.
End of explanation
"""
# type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
"""
Explanation: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
In the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course, the perfect model is only for demonstration, and we cannot draw any scientific inference from it.
Exercises
1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50?
2. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this?
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: References
[1] Dalal, Fowlkes and Hoadley (1989), JASA, 84, 945-957.
[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.
[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.
[4] Fonnesbeck, Christopher. "Building Models." PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.
[5] Cronin, Beau. "Why Probabilistic Programming Matters." 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.
[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000
[7] Gelman, Andrew. "Philosophy and the practice of Bayesian statistics." British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.
[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. "The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models." American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.
End of explanation
"""
|
eds-uga/csci1360-fa16
|
lectures/L15.ipynb
|
mit
|
# File "csv_file.txt" contains the following:
# 1,2,3,4
# 5,6,7,8
# 9,10,11,12
matrix = []
with open("csv_file.txt", "r") as f:
full_file = f.read()
# Split into lines.
lines = full_file.strip().split("\n")
for line in lines:
# Split on commas.
elements = line.strip().split(",")
matrix.append([])
# Convert to integers and store in the list.
for e in elements:
matrix[-1].append(int(e))
print(matrix)
"""
Explanation: Lecture 15: Other file formats
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
In the last lecture, we looked at some ways of interacting with the filesystem through Python and how to read data off files stored on the hard drive. We looked at raw text files; however, there are numerous structured formats that these files can take, and we'll explore some of those here. By the end of this lecture, you should be able to:
Identify some of the primary data storage formats
Explain how to use other tools for some of the more exotic data types
Part 1: Comma-separated value (CSV) files
We've discussed text formats: each line of text in a file can be treated as a string in a list of strings. What else might we encounter in our data science travels?
Easily the most common text file format is the CSV, or comma-separated values format. This is pretty much what it sounds like: if you have (semi-) structured data, you can delineate spaces between data using commas (or, to generalize, other characters like tabs).
As an example, we could represent a matrix very easily using the CSV format. The file storing a 3x3 matrix would look something like this:
<pre>
1,2,3
4,5,6
7,8,9
</pre>
Each row is on one line by itself, and the columns are separated by commas.
How can we read a CSV file? One way, potentially, is to just do it yourself:
End of explanation
"""
import csv
with open("eggs.csv", "w") as csv_file:
file_writer = csv.writer(csv_file)
row1 = ["Sunny-side up", "Over easy", "Scrambled"]
row2 = ["Spam", "Spam", "More spam"]
file_writer.writerow(row1)
file_writer.writerow(row2)
with open("eggs.csv", "r") as csv_file:
print(csv_file.read())
"""
Explanation: If, however, we'd prefer to use something a little less strip()-y and split()-y, Python also has a core csv module built-in:
End of explanation
"""
with open("eggs.csv", "r") as csv_file:
file_reader = csv.reader(csv_file)
for csv_row in file_reader:
print(csv_row)
"""
Explanation: Notice that you first create a file reference, just like before. The one added step, though, is passing that reference to the csv.writer() function.
Once you've created the file_writer object, you can call its writerow() function and pass in a list to the function, and it is automatically written to the file in CSV format!
The CSV readers let you do the opposite: read a line of text from a CSV file directly into a list.
End of explanation
"""
person = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
{"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""
"""
Explanation: You can use a for loop to iterate over the rows in the CSV file. In turn, each row is a list, where each element of the list was separated by a comma.
Part 2: JavaScript Object Notation (JSON) files
"JSON", short for "JavaScript Object Notation", has emerged as more or less the de facto standard format for interacting with online services. Like CSV, it's a text-based format, but is much more flexible than CSV.
Here's an example: an object in JSON format that represents a person.
End of explanation
"""
import json
"""
Explanation: It looks kind of like a Python dictionary, doesn't it? You have key-value pairs, and they can accommodate almost any data type. In fact, when JSON objects are converted into native Python data structures, they are represented using dictionaries.
For reading and writing JSON objects, we can use the built-in json Python module.
End of explanation
"""
python_dict = json.loads(person)
print(python_dict)
"""
Explanation: (Aside: with CSV files, it was fairly straightforward to eschew the built-in csv module and do it yourself. With JSON, it is much harder; in fact, there really isn't a case where it's advisable to roll your own over using the built-in json module)
There are two functions of interest: dumps() and loads(). loads() takes a JSON string and converts it to a native Python object, while dumps() does the opposite.
First, we'll take our JSON string and convert it into a Python dictionary:
End of explanation
"""
json_string = json.dumps(python_dict)
print(json_string)
"""
Explanation: And if you want to take a Python dictionary and convert it into a JSON string--perhaps you're about to save it to a file, or send it over the network to someone else--we can do that.
End of explanation
"""
import xml.etree.ElementTree as ET # See, even the import statement is stupid complicated.
tree = ET.parse('xml_file.txt')
root = tree.getroot()
print(root.tag) # The root node is "data", so that's what we should see here.
"""
Explanation: At first glance, these two print-outs may look the same, but if you look closely you'll see some differences. Plus, if you tried to index json_string["name"] you'd get some very strange errors. python_dict["name"], on the other hand, should nicely return "Wes".
Part 3: Extensible Markup Language (XML) files
AVOID AT ALL COSTS.
...but if you have to interact with XML data (e.g., you're manually parsing a web page!), Python has a built-in xml library.
XML is about as general as it gets when it comes to representing data using structured text; you can represent pretty much anything. HTML is an example of XML in practice.
```
<?xml version="1.0" standalone="yes"?>
<conversation>
<greeting>Hello, world!</greeting>
<response>Stop the planet, I want to get off!</response>
</conversation>
```
This is about the simplest excerpt of XML in existence. The basic idea is you have tags (delineated by < and > symbols) that identify where certain fields begin and end.
Each field has an opening tag, with the name of the field in angled brackets: <field>. The closing tag is exactly the same, except with a forward slash in front of the tag name to indicate closing: </field>
These tags can also have their own custom attributes that slightly tweak their behavior (e.g. the standalone="yes" attribute in the opening <?xml tag).
You've probably noticed there is a very strong hierarchy of terms in XML. This is not unlike JSON in many ways, and for this reason the following piece of advice is the same: don't try to roll your own XML parser. You'll pull out your hair.
The XML file we'll look at comes directly from the Python documentation for its XML parser:
```
<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
```
End of explanation
"""
for child in root:
print("Tag: \"{}\" :: Name: \"{}\"".format(child.tag, child.attrib["name"]))
"""
Explanation: With the root node, we have access to all the "child" data beneath it, such as the various country names:
End of explanation
"""
import pickle
# We'll use the `python_dict` object from before.
binary_object = pickle.dumps(python_dict)
print(binary_object)
"""
Explanation: Part 4: Binary files
What happens when we're not dealing with text? After all, images and videos are most certainly not encoded using text. Furthermore, if memory is an issue, converting text into binary formats can help save space.
There are two primary options for reading and writing binary files.
pickle, or "pickling", is native in Python and very flexible.
NumPy's binary format, which works very well for NumPy arrays but not much else.
Pickle has some similarities with JSON. In particular, it uses the same method names, dumps() and loads(), for converting between native Python objects and the raw data format. There are several differences, however.
Most notably, JSON is text-based whereas pickle is binary. You could open up a JSON file and read the text yourself. Not the case with pickled files.
While JSON is used widely outside of Python, pickle is specific to Python and its objects. Consequently, JSON only works on a subset of Python data structures; pickle, on the other hand, works on just about everything.
Here's an example of saving (or "serializing") a dictionary using pickle instead of JSON:
End of explanation
"""
import numpy as np
# Generate some data and save it.
some_data = np.random.randint(10, size = (3, 3))
print(some_data)
np.save("my_data.npy", some_data)
"""
Explanation: You can kinda see some English in there--mainly, the string constants. But everything else has been encoded in binary. It's much more space-efficient, but complete gibberish until you convert it back into a text format (e.g. JSON) or native Python object (e.g. dictionary).
If, on the other hand, you're using NumPy arrays, then you can use its own built-in binary format for saving and loading your arrays.
End of explanation
"""
my_data = np.load("my_data.npy")
print(my_data)
"""
Explanation: Now we can load it back:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cmcc/cmip6/models/cmcc-cm2-hr4/toplevel.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR4
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario simulations (cf. Table 12.1 of IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
tensorflow/probability
|
tensorflow_probability/examples/jupyter_notebooks/Factorial_Mixture.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import tensorflow as tf
import numpy as np
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import seaborn as sns
tfd = tfp.distributions
# Use try/except so we can easily re-execute the whole notebook.
try:
tf.enable_eager_execution()
except:
pass
"""
Explanation: Factorial Mixture
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Factorial_Mixture"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Factorial_Mixture.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Factorial_Mixture.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Factorial_Mixture.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this notebook we show how to use TensorFlow Probability (TFP) to sample from a factorial Mixture of Gaussians distribution defined as:
$$p(x_1, ..., x_n) = \prod_i p_i(x_i)$$ where: $$\begin{align} p_i &\equiv \frac{1}{K}\sum_{k=1}^K \pi_{ik}\,\text{Normal}\left(\text{loc}=\mu_{ik},\, \text{scale}=\sigma_{ik}\right)\1&=\sum_{k=1}^K\pi_{ik}, \forall i.\hphantom{MMMMMMMMMMM}\end{align}$$
Each variable $x_i$ is modeled as a mixture of Gaussians, and the joint distribution over all $n$ variables is a product of these densities.
Given a dataset $x^{(1)}, ..., x^{(T)}$, we model each datapoint $x^{(j)}$ as a factorial mixture of Gaussians:
$$p(x^{(j)}) = \prod_i p_i (x_i^{(j)})$$
Factorial mixtures are a simple way of creating distributions with a small number of parameters and a large number of modes.
End of explanation
"""
num_vars = 2 # Number of variables (`n` in formula).
var_dim = 1 # Dimensionality of each variable `x[i]`.
num_components = 3 # Number of components for each mixture (`K` in formula).
sigma = 5e-2 # Fixed standard deviation of each component.
# Choose some random (component) modes.
component_mean = tfd.Uniform().sample([num_vars, num_components, var_dim])
factorial_mog = tfd.Independent(
tfd.MixtureSameFamily(
# Assume uniform weight on each component.
mixture_distribution=tfd.Categorical(
logits=tf.zeros([num_vars, num_components])),
components_distribution=tfd.MultivariateNormalDiag(
loc=component_mean, scale_diag=[sigma])),
reinterpreted_batch_ndims=1)
"""
Explanation: Build the Factorial Mixture of Gaussians using TFP
End of explanation
"""
plt.figure(figsize=(6,5))
# Compute density.
nx = 250 # Number of bins per dimension.
x = np.linspace(-3 * sigma, 1 + 3 * sigma, nx).astype('float32')
vals = tf.reshape(tf.stack(np.meshgrid(x, x), axis=2), (-1, num_vars, var_dim))
probs = factorial_mog.prob(vals).numpy().reshape(nx, nx)
# Display as image.
from matplotlib.colors import ListedColormap
cmap = ListedColormap(sns.color_palette("Blues", 256))
p = plt.pcolor(x, x, probs, cmap=cmap)
ax = plt.axis('tight');
# Plot locations of means.
means_np = component_mean.numpy().squeeze()
for mu_x in means_np[0]:
for mu_y in means_np[1]:
plt.scatter(mu_x, mu_y, s=150, marker='*', c='r', edgecolor='none');
plt.axis(ax);
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title('Density of factorial mixture of Gaussians');
"""
Explanation: Notice our use of tfd.Independent. This "meta-distribution" applies a reduce_sum in the log_prob calculation over the rightmost reinterpreted_batch_ndims batch dimensions. In our case, this sums out the variables dimension leaving only the batch dimension when we compute log_prob. Note that this does not affect sampling.
Plot the Density
Compute the density on a grid of points, and show the locations of the modes with red stars. Each mode in the factorial mixture corresponds to a pair of modes from the underlying individual-variable mixture of Gaussians. We can see 9 modes in the plot below, but we only needed 6 parameters (3 to specify the locations of the modes in $x_1$, and 3 to specify the locations of the modes in $x_2$). In contrast, a mixture of Gaussians distribution in the 2d space $(x_1, x_2)$ would require 2 * 9 = 18 parameters to specify the 9 modes.
End of explanation
"""
samples = factorial_mog.sample(1000).numpy()
g = sns.jointplot(
x=samples[:, 0, 0],
y=samples[:, 1, 0],
kind="scatter",
marginal_kws=dict(bins=50))
g.set_axis_labels("$x_1$", "$x_2$");
"""
Explanation: Plot samples and marginal density estimates
End of explanation
"""
|
azhurb/deep-learning
|
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Project 1).ipynb
|
mit
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
|
amkatrutsa/MIPT-Opt
|
Fall2021/03-MatrixCalculus/jax_autodiff_tutorial.ipynb
|
mit
|
import jax
import jax.numpy as jnp
"""
Explanation: Automatic differentiation with JAX
Main features
Numpy wrapper
Auto-vectorization
Auto-parallelization (SPMD paradigm)
Auto-differentiation
XLA backend and JIT support
How to compute gradient of your objective?
Define it as a standard Python function
Call jax.grad and voila!
Do not forget to wrap these functions with jax.jit to speed them up
End of explanation
"""
from jax.config import config
config.update("jax_enable_x64", True)
@jax.jit
def f(x, A, b):
res = A @ x - b
return res @ res
gradf = jax.grad(f, argnums=0, has_aux=False)
"""
Explanation: By default, JAX uses single-precision (float32) numbers
You can enable double precision (float64) manually, as done above via the jax_enable_x64 config flag.
End of explanation
"""
n = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
b = jax.random.normal(jax.random.PRNGKey(0), (n, ))
print("Check correctness", jnp.linalg.norm(gradf(x, A, b) - 2 * A.T @ (A @ x - b)))
print("Compare speed")
print("Analytical gradient")
%timeit 2 * A.T @ (A @ x - b)
print("Grad function")
%timeit gradf(x, A, b).block_until_ready()
jit_gradf = jax.jit(gradf)
print("Jitted grad function")
%timeit jit_gradf(x, A, b).block_until_ready()
hess_func = jax.jit(jax.hessian(f))
print("Check correctness", jnp.linalg.norm(2 * A.T @ A - hess_func(x, A, b)))
print("Time for hessian")
%timeit hess_func(x, A, b).block_until_ready()
print("Emulate hessian and check correctness",
jnp.linalg.norm(jax.jit(hess_func)(x, A, b) - jax.jacfwd(jax.jacrev(f))(x, A, b)))
print("Time of emulating hessian")
hess_umul_func = jax.jit(jax.jacfwd(jax.jacrev(f)))
%timeit hess_umul_func(x, A, b).block_until_ready()
"""
Explanation: Random numbers in JAX
JAX emphasizes reproducibility of runs
The analogue of a random seed, an explicit PRNG key, is a required argument of every function that generates random values
More details and references on the design of the random submodule are given in the JAX documentation
End of explanation
"""
fmode_f = jax.jit(jax.jacfwd(f))
bmode_f = jax.jit(jax.jacrev(f))
print("Check correctness", jnp.linalg.norm(fmode_f(x, A, b) - bmode_f(x, A, b)))
print("Forward mode")
%timeit fmode_f(x, A, b).block_until_ready()
print("Backward mode")
%timeit bmode_f(x, A, b).block_until_ready()
"""
Explanation: Forward mode vs. backward mode: $m \ll n$
Here f maps $\mathbb{R}^n$ to a scalar ($m = 1 \ll n$), so reverse mode (jax.jacrev) builds the whole gradient in a single backward pass, while forward mode (jax.jacfwd) needs one pass per input dimension.
End of explanation
"""
def fvec(x, A, b):
y = A @ x + b
return jnp.exp(y - jnp.max(y)) / jnp.sum(jnp.exp(y - jnp.max(y)))
grad_fvec = jax.jit(jax.grad(fvec))
jac_fvec = jax.jacobian(fvec)
fmode_fvec = jax.jit(jax.jacfwd(fvec))
bmode_fvec = jax.jit(jax.jacrev(fvec))
n = 1000
m = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (m, n))
b = jax.random.normal(jax.random.PRNGKey(0), (m, ))
J = jac_fvec(x, A, b)
print(J.shape)
grad_fvec(x, A, b)
print("Check correctness", jnp.linalg.norm(fmode_fvec(x, A, b) - bmode_fvec(x, A, b)))
print("Check shape", fmode_fvec(x, A, b).shape, bmode_fvec(x, A, b).shape)
print("Time forward mode")
%timeit fmode_fvec(x, A, b).block_until_ready()
print("Time backward mode")
%timeit bmode_fvec(x, A, b).block_until_ready()
n = 10
m = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (m, n))
b = jax.random.normal(jax.random.PRNGKey(0), (m, ))
print("Check correctness", jnp.linalg.norm(fmode_fvec(x, A, b) - bmode_fvec(x, A, b)))
print("Check shape", fmode_fvec(x, A, b).shape, bmode_fvec(x, A, b).shape)
print("Time forward mode")
%timeit fmode_fvec(x, A, b).block_until_ready()
print("Time backward mode")
%timeit bmode_fvec(x, A, b).block_until_ready()
"""
Explanation: Forward mode vs. backward mode: $m \geq n$
For the vector-valued fvec the output dimension $m$ is at least as large as $n$, so forward mode (one JVP per input) becomes competitive with or faster than reverse mode (one VJP per output), which the timings above illustrate for $m = n$ and $m \gg n$.
End of explanation
"""
def hvp(f, x, z, *args):
def g(x):
return f(x, *args)
return jax.jvp(jax.grad(g), (x,), (z,))[1]
n = 3000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
b = jax.random.normal(jax.random.PRNGKey(0), (n, ))
z = jax.random.normal(jax.random.PRNGKey(0), (n, ))
print("Check correctness", jnp.linalg.norm(2 * A.T @ (A @ z) - hvp(f, x, z, A, b)))
print("Time for hvp by hands")
%timeit (2 * A.T @ (A @ z)).block_until_ready()
print("Time for hvp via jvp, NO jit")
%timeit hvp(f, x, z, A, b).block_until_ready()
print("Time for hvp via jvp, WITH jit")
%timeit jax.jit(hvp, static_argnums=0)(f, x, z, A, b).block_until_ready()
"""
Explanation: Hessian-by-vector product
The hvp above uses forward-over-reverse differentiation: $\nabla^2 f(x)\,z$ is the JVP of $\nabla f$ at $x$ in the direction $z$, so the full Hessian is never materialized.
End of explanation
"""
|
tarashor/vibrations
|
py/notebooks/draft/.ipynb_checkpoints/Corrugated geometries simplified-checkpoint.ipynb
|
mit
|
from sympy import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
x1, x2, x3 = symbols("x_1 x_2 x_3")
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3")
R, L, ga, gv = symbols("R L g_a g_v")
init_printing()
"""
Explanation: Corrugated Shells
Init symbols for sympy
End of explanation
"""
a1 = pi / 2 + (L / 2 - alpha1)/R
x = (R + alpha3 + ga * cos(gv * a1)) * cos(a1)
y = alpha2
z = (R + alpha3 + ga * cos(gv * a1)) * sin(a1)
r = x*N.i + y*N.j + z*N.k
"""
Explanation: Corrugated cylindrical coordinates
End of explanation
"""
R1=r.diff(alpha1)
R2=r.diff(alpha2)
R3=r.diff(alpha3)
trigsimp(R1)
R2
R3
"""
Explanation: Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$
End of explanation
"""
eps=trigsimp(R1.dot(R2.cross(R3)))
R_1=simplify(trigsimp(R2.cross(R3)/eps))
R_2=simplify(trigsimp(R3.cross(R1)/eps))
R_3=simplify(trigsimp(R1.cross(R2)/eps))
R_1
R_2
R_3
"""
Explanation: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
End of explanation
"""
dx1da1=R1.dot(N.i)
dx1da2=R2.dot(N.i)
dx1da3=R3.dot(N.i)
dx2da1=R1.dot(N.j)
dx2da2=R2.dot(N.j)
dx2da3=R3.dot(N.j)
dx3da1=R1.dot(N.k)
dx3da2=R2.dot(N.k)
dx3da3=R3.dot(N.k)
A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]])
simplify(A)
A_inv = A**-1
trigsimp(A_inv[0,0])
trigsimp(A.det())
"""
Explanation: Jacobi matrix:
$ A = \left(
\begin{array}{ccc}
\frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \\
\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \\
\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \\
\end{array}
\right)$
$ \left[
\begin{array}{ccc}
\vec{R}_1 & \vec{R}_2 & \vec{R}_3
\end{array}
\right] = \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] \cdot \left(
\begin{array}{ccc}
\frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \\
\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \\
\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \\
\end{array}
\right) = \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] \cdot A$
$ \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] = \left[
\begin{array}{ccc}
\vec{R}_1 & \vec{R}_2 & \vec{R}_3
\end{array}
\right] \cdot A^{-1}$
End of explanation
"""
g11=R1.dot(R1)
g12=R1.dot(R2)
g13=R1.dot(R3)
g21=R2.dot(R1)
g22=R2.dot(R2)
g23=R2.dot(R3)
g31=R3.dot(R1)
g32=R3.dot(R2)
g33=R3.dot(R3)
G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]])
G=trigsimp(G)
G
"""
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
"""
g_11=R_1.dot(R_1)
g_12=R_1.dot(R_2)
g_13=R_1.dot(R_3)
g_21=R_2.dot(R_1)
g_22=R_2.dot(R_2)
g_23=R_2.dot(R_3)
g_31=R_3.dot(R_1)
g_32=R_3.dot(R_2)
g_33=R_3.dot(R_3)
G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]])
G_con=trigsimp(G_con)
G_con
G_inv = G**-1
G_inv
"""
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
"""
dR1dalpha1 = trigsimp(R1.diff(alpha1))
dR1dalpha1
"""
Explanation: Derivatives of vectors
Derivative of base vectors
End of explanation
"""
dR1dalpha2 = trigsimp(R1.diff(alpha2))
dR1dalpha2
dR1dalpha3 = trigsimp(R1.diff(alpha3))
dR1dalpha3
"""
Explanation: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
End of explanation
"""
dR2dalpha1 = trigsimp(R2.diff(alpha1))
dR2dalpha1
dR2dalpha2 = trigsimp(R2.diff(alpha2))
dR2dalpha2
dR2dalpha3 = trigsimp(R2.diff(alpha3))
dR2dalpha3
dR3dalpha1 = trigsimp(R3.diff(alpha1))
dR3dalpha1
"""
Explanation: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
End of explanation
"""
dR3dalpha2 = trigsimp(R3.diff(alpha2))
dR3dalpha2
dR3dalpha3 = trigsimp(R3.diff(alpha3))
dR3dalpha3
"""
Explanation: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
End of explanation
"""
u1=Function('u^1')
u2=Function('u^2')
u3=Function('u^3')
q=Function('q') # q(alpha3) = 1+alpha3/R
K = Symbol('K') # K = 1/R
u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1)
u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3)
u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2)
u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2)
u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2)
u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3)
u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3)
# $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]])
grad_u
G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]])
grad_u_down=grad_u*G_s
expand(simplify(grad_u_down))
"""
Explanation: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $
Derivative of vectors
$ \vec{u} = u^1 \vec{R_1} + u^2\vec{R_2} + u^3\vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u^1\vec{R_1}) } { d\alpha_1} + \frac { d(u^2\vec{R_2}) } { d\alpha_1}+ \frac { d(u^3\vec{R_3}) } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_1} + \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} + \frac { du^2 } { d\alpha_1} \vec{R_2}+ \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1}$
Then
$ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du^1 } { d\alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + \left( \frac { du^3 } { d\alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \right) \vec{R_3}$
$ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u^1\vec{R_1}) } { d\alpha_2} + \frac { d(u^2\vec{R_2}) } { d\alpha_2}+ \frac { d(u^3\vec{R_3}) } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_3} = \frac { d(u^1\vec{R_1}) } { d\alpha_3} + \frac { d(u^2\vec{R_2}) } { d\alpha_3}+ \frac { d(u^3\vec{R_3}) } { d\alpha_3} =
\frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_3} + \frac { du^2 } { d\alpha_3} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_3} + \frac { du^3 } { d\alpha_3} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_3} = \left( \frac { du^1 } { d\alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3}$
Gradient of vector
$\nabla_1 u^1 = \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_1 u^2 = \frac { \partial u^2 } { \partial \alpha_1} $
$\nabla_1 u^3 = \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) $
$\nabla_2 u^1 = \frac { \partial u^1 } { \partial \alpha_2}$
$\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
$\nabla_2 u^3 = \frac { \partial u^3 } { \partial \alpha_2}$
$\nabla_3 u^1 = \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_3 u^2 = \frac { \partial u^2 } { \partial \alpha_3} $
$\nabla_3 u^3 = \frac { \partial u^3 } { \partial \alpha_3}$
$ \nabla \vec{u} = \left(
\begin{array}{ccc}
\nabla_1 u^1 & \nabla_1 u^2 & \nabla_1 u^3 \
\nabla_2 u^1 & \nabla_2 u^2 & \nabla_2 u^3 \
\nabla_3 u^1 & \nabla_3 u^2 & \nabla_3 u^3 \
\end{array}
\right)$
End of explanation
"""
B = zeros(9, 12)
B[0,1] = (1+alpha3/R)**2
B[0,8] = (1+alpha3/R)/R
B[1,2] = (1+alpha3/R)**2
B[2,0] = (1+alpha3/R)/R
B[2,3] = (1+alpha3/R)**2
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[6,0] = -(1+alpha3/R)/R
B[7,10] = S(1)
B[8,11] = S(1)
B
"""
Explanation: $
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
\left(
\begin{array}{c}
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_2} \
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
\frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
"""
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
Q=E*B
Q=simplify(Q)
Q
"""
Explanation: Deformations tensor
End of explanation
"""
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
Q=E*B*T
Q=simplify(Q)
Q
"""
Explanation: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
"""
from sympy import MutableDenseNDimArray
C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x[i,j,k,l] = el
C_x
"""
Explanation: Elasticity tensor(stiffness tensor)
General form
End of explanation
"""
C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3)
def getCIndecies(index):
if (index == 0):
return 0, 0
elif (index == 1):
return 1, 1
elif (index == 2):
return 2, 2
elif (index == 3):
return 0, 1
elif (index == 4):
return 0, 2
elif (index == 5):
return 1, 2
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x_symmetry[i,j,k,l] = el
C_x_symmetry[i,j,l,k] = el
C_x_symmetry[j,i,k,l] = el
C_x_symmetry[j,i,l,k] = el
C_x_symmetry[k,l,i,j] = el
C_x_symmetry[k,l,j,i] = el
C_x_symmetry[l,k,i,j] = el
C_x_symmetry[l,k,j,i] = el
C_x_symmetry
"""
Explanation: Include symmetry
End of explanation
"""
C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_isotropic_matrix = zeros(6)
mu = Symbol('mu')
la = Symbol('lambda')
for s in range(6):
for t in range(s, 6):
if (s < 3 and t < 3):
if(t != s):
C_isotropic_matrix[s,t] = la
C_isotropic_matrix[t,s] = la
else:
C_isotropic_matrix[s,t] = 2*mu+la
C_isotropic_matrix[t,s] = 2*mu+la
elif (s == t):
C_isotropic_matrix[s,t] = mu
C_isotropic_matrix[t,s] = mu
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_isotropic_matrix[s, t]
C_isotropic[i,j,k,l] = el
C_isotropic[i,j,l,k] = el
C_isotropic[j,i,k,l] = el
C_isotropic[j,i,l,k] = el
C_isotropic[k,l,i,j] = el
C_isotropic[k,l,j,i] = el
C_isotropic[l,k,i,j] = el
C_isotropic[l,k,j,i] = el
C_isotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_isotropic, A_inv, i, j, k, l)
C_isotropic_alpha[i,j,k,l] = c
C_isotropic_alpha[0,0,0,0]
C_isotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l]
C_isotropic_matrix_alpha
"""
Explanation: Isotropic material
End of explanation
"""
C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_orthotropic_matrix = zeros(6)
for s in range(6):
for t in range(s, 6):
elem_index = 'C^{{{}{}}}'.format(s+1, t+1)
el = Symbol(elem_index)
if ((s < 3 and t < 3) or t == s):
C_orthotropic_matrix[s,t] = el
C_orthotropic_matrix[t,s] = el
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_orthotropic_matrix[s, t]
C_orthotropic[i,j,k,l] = el
C_orthotropic[i,j,l,k] = el
C_orthotropic[j,i,k,l] = el
C_orthotropic[j,i,l,k] = el
C_orthotropic[k,l,i,j] = el
C_orthotropic[k,l,j,i] = el
C_orthotropic[l,k,i,j] = el
C_orthotropic[l,k,j,i] = el
C_orthotropic
"""
Explanation: Orthotropic material
End of explanation
"""
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_orthotropic, A_inv, i, j, k, l)
C_orthotropic_alpha[i,j,k,l] = c
C_orthotropic_alpha[0,0,0,0]
C_orthotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l]
C_orthotropic_matrix_alpha
"""
Explanation: Orthotropic material in shell coordinates
End of explanation
"""
P=eye(12,12)
P[0,0]=1/(1+alpha3/R)
P[1,1]=1/(1+alpha3/R)
P[2,2]=1/(1+alpha3/R)
P[3,0]=-1/(R*(1+alpha3/R)**2)
P[3,3]=1/(1+alpha3/R)
P
Def=simplify(E*B*P)
Def
rows, cols = Def.shape
D_p=zeros(rows, cols)
q = 1+alpha3/R
for i in range(rows):
ratio = 1
if (i==0):
ratio = q*q
elif (i==3 or i == 4):
ratio = q
for j in range(cols):
D_p[i,j] = Def[i,j] / ratio
D_p = simplify(D_p)
D_p
"""
Explanation: Physical coordinates
$u^1=\frac{u_{[1]}}{1+\frac{\alpha_3}{R}}$
$\frac{\partial u^1} {\partial \alpha_3}=\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} + u_{[1]} \frac{\partial} {\partial \alpha_3} \left( \frac{1}{1+\frac{\alpha_3}{R}} \right) =\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} - u_{[1]} \frac{1}{R \left( 1+\frac{\alpha_3}{R} \right)^2} $
End of explanation
"""
C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact)
C_isotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l]
C_isotropic_matrix_alpha_p
C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact)
C_orthotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l]
C_orthotropic_matrix_alpha_p
"""
Explanation: Stiffness tensor
End of explanation
"""
D_p_T = D_p*T
K = Symbol('K')
D_p_T = D_p_T.subs(R, 1/K)
simplify(D_p_T)
"""
Explanation: Tymoshenko
End of explanation
"""
theta, h1, h2=symbols('theta h_1 h_2')
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2
expand(simplify(square_geom))
"""
Explanation: Square of segment
$A=\frac {\theta}{2} \left( R + h_2 \right)^2-\frac {\theta}{2} \left( R + h_1 \right)^2$
End of explanation
"""
square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R))
expand(simplify(square_int))
"""
Explanation: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
End of explanation
"""
simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p)
"""
Explanation: Virtual work
Isotropic material physical coordinates
End of explanation
"""
W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+alpha3*K)**2)
W
h=Symbol('h')
E=Symbol('E')
v=Symbol('nu')
W_a3 = integrate(W, (alpha3, -h/2, h/2))
W_a3 = simplify(W_a3)
W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2))
A_M = zeros(3)
A_M[0,0] = E*h/(1-v**2)
A_M[1,1] = 5*E*h/(12*(1+v))
A_M[2,2] = E*h**3/(12*(1-v**2))
Q_M = zeros(3,6)
Q_M[0,1] = 1
Q_M[0,4] = K
Q_M[1,0] = -K
Q_M[1,2] = 1
Q_M[1,5] = 1
Q_M[2,3] = 1
W_M=Q_M.T*A_M*Q_M
W_M
"""
Explanation: Isotropic material physical coordinates - Tymoshenko
End of explanation
"""
|
nimish-jose/dlnd
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
gpl-3.0
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
cnt = Counter()
for word in text:
cnt[word] += 1
vocab_to_int = {}
int_to_vocab = {}
for i, (word, count) in enumerate(cnt.most_common()):
vocab_to_int[word] = i
int_to_vocab[i] = word
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'--': '||dash||',
'\n': '||return|'}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word (a short demo of the replacement follows below). Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
input = tf.placeholder(tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(tf.int32, shape=[None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
lstm_layers = 2
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# TODO: The script generation script doesn't have the capability to set keep_prob to 1.
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], minval=-1, maxval=1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
embed_dim = 300
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
batch = batch_size*seq_length
n_batches = (len(int_text) - 1)//batch
int_text = int_text[:n_batches*batch + 1]
batches = np.zeros((n_batches, 2, batch_size, seq_length), dtype=np.int32)
for i in range(0, n_batches):
for j in range(0, batch_size):
idx = (j*n_batches + i)*seq_length
batches[i][0][j] = int_text[idx:idx+seq_length]
batches[i][1][j] = int_text[idx+1:idx+seq_length+1]
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 500
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 30
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 16
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to how often, in batches, the neural network should print training progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: lstm_layers = 1
batch_size = 256
rnn_size = 512
train_loss = 0.726 (200 epochs)
lstm_layers = 2
batch_size = 256
rnn_size = 512
train_loss = 2.163 (200 epochs), 0.123 (500 epochs)
Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
input = loaded_graph.get_tensor_by_name('input:0')
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probabilities = loaded_graph.get_tensor_by_name('probs:0')
return input, initial_state, final_state, probabilities
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
vocab_size = len(int_to_vocab)
return int_to_vocab[np.random.choice(vocab_size, 1, p=probabilities)[0]]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/gapic/automl/showcase_automl_tabular_regression_online_bq.ipynb
|
apache-2.0
|
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex client library: AutoML tabular regression model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_regression_online_bq.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_regression_online_bq.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular regression models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the GSOD dataset from BigQuery public datasets. The version of the dataset you will use contains only the fields year, month and day, which are used to predict the value of mean daily temperature (mean_temp).
Objective
In this tutorial, you create an AutoML tabular regression model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of the google-cloud-storage library as well.
End of explanation
"""
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the Vertex client library and the google-cloud-storage library, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session and append it onto the name of the resources created in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
"""
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
"""
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
"""
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
"""
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
"""
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Container (Docker) image
For AutoML models, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of vCPUs [2, 4, 8, 16, 32, 64, 96]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular regression model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
"""
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
"""
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery query.
Location of BigQuery training data.
Now set the variable IMPORT_FILE to the location of the data table in BigQuery.
End of explanation
"""
!bq head -n 10 $IMPORT_FILE
"""
Explanation: Quick peek at your data
You will use a version of the GSOD historical weather dataset from NOAA that is stored in a public BigQuery table.
Start by doing a quick peek at the data: use bq head to view the first few rows of the table. A small optional sketch for counting the total number of rows follows below.
End of explanation
"""
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("gsod-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
"""
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates a Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
"""
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
"""
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
"""
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
"""
Explanation: Train the model
Now train an AutoML tabular regression model using your Vertex Dataset resource. To train the model, do the following steps:
Create a Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's now take a deeper look at the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
"""
TRANSFORMATIONS = [
{"auto": {"column_name": "year"}},
{"auto": {"column_name": "month"}},
{"auto": {"column_name": "day"}},
]
label_column = "mean_temp"
PIPE_NAME = "gsod_pipe-" + TIMESTAMP
MODEL_NAME = "gsod_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="regression"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
"""
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
"""
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
"""
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
"""
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
"""
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
"""
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
"""
Explanation: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
"""
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("rootMeanSquaredError", metrics["rootMeanSquaredError"])
print("meanAbsoluteError", metrics["meanAbsoluteError"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
"""
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation -- you probably have only one -- print all the key names for each metric in the evaluation, and for a small set (rootMeanSquaredError and meanAbsoluteError) print the result.
End of explanation
"""
ENDPOINT_NAME = "gsod_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
"""
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
"""
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
"""
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 1
"""
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
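For example, the two fields could be filled in like this (a minimal sketch; the node counts are illustrative, not values from this tutorial):
```python
# Illustrative replica settings only -- pick counts that match your workload.
SINGLE_INSTANCE = {"min_replica_count": 1, "max_replica_count": 1}
MANUAL_SCALING  = {"min_replica_count": 3, "max_replica_count": 3}  # fixed pool of 3 nodes
AUTO_SCALING    = {"min_replica_count": 1, "max_replica_count": 4}  # scale between 1 and 4 nodes
```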
End of explanation
"""
DEPLOYED_NAME = "gsod_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
"""
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only gets, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
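For instance, a hypothetical split between a newly deployed v2 and an existing v1 (the numeric model id below is made up) could look like:
```python
# "0" is the model being deployed in this request; the other key is the id of a
# model already deployed on the endpoint. The percentages must sum to 100.
traffic_split = {"0": 10, "1234567890": 90}
```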
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
"""
INSTANCE = {"year": "1932", "month": "11", "day": "6"}
"""
Explanation: Make an online prediction request
Now make an online prediction with your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
"""
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [data]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(INSTANCE, endpoint_id, None)
"""
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
data: The test item to predict, specified as a dictionary of feature name/value pairs.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, tabular models do not support additional parameters.
Request
The format of each instance, where values must be specified as strings, is:
{ 'feature_1': 'value_1', 'feature_2': 'value_2', ... }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.
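For example, packaging the single test item the same way the helper above does internally:
```python
# Mirrors what predict_item() does, using the same json_format / Value imports as above.
instance = {"year": "1932", "month": "11", "day": "6"}
instances = [json_format.ParseDict(instance, Value())]
```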
Response
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction -- in this case there is just one:
value: The predicted value.
End of explanation
"""
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
"""
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/numpy/02.03-Computation-on-arrays-ufuncs.ipynb
|
mit
|
import numpy as np
np.random.seed(0)
def compute_reciprocals(values):
output = np.empty(len(values))
for i in range(len(values)):
output[i] = 1.0 / values[i]
return output
values = np.random.randint(1, 10, size=5)
compute_reciprocals(values)
"""
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< The Basics of NumPy Arrays | Contents | Aggregations: Min, Max, and Everything In Between >
Computation on NumPy Arrays: Universal Functions
Up until now, we have been discussing some of the basic nuts and bolts of NumPy; in the next few sections, we will dive into the reasons that NumPy is so important in the Python data science world.
Namely, it provides an easy and flexible interface to optimized computation with arrays of data.
Computation on NumPy arrays can be very fast, or it can be very slow.
The key to making it fast is to use vectorized operations, generally implemented through NumPy's universal functions (ufuncs).
This section motivates the need for NumPy's ufuncs, which can be used to make repeated calculations on array elements much more efficient.
It then introduces many of the most common and useful arithmetic ufuncs available in the NumPy package.
The Slowness of Loops
Python's default implementation (known as CPython) does some operations very slowly.
This is in part due to the dynamic, interpreted nature of the language: the fact that types are flexible, so that sequences of operations cannot be compiled down to efficient machine code as in languages like C and Fortran.
Recently there have been various attempts to address this weakness: well-known examples are the PyPy project, a just-in-time compiled implementation of Python; the Cython project, which converts Python code to compilable C code; and the Numba project, which converts snippets of Python code to fast LLVM bytecode.
Each of these has its strengths and weaknesses, but it is safe to say that none of the three approaches has yet surpassed the reach and popularity of the standard CPython engine.
The relative sluggishness of Python generally manifests itself in situations where many small operations are being repeated – for instance looping over arrays to operate on each element.
For example, imagine we have an array of values and we'd like to compute the reciprocal of each.
A straightforward approach might look like this:
End of explanation
"""
big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)
"""
Explanation: This implementation probably feels fairly natural to someone from, say, a C or Java background.
But if we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
We'll benchmark this with IPython's %timeit magic (discussed in Profiling and Timing Code):
End of explanation
"""
print(compute_reciprocals(values))
print(1.0 / values)
"""
Explanation: It takes several seconds to compute these million operations and to store the result!
When even cell phones have processing speeds measured in Giga-FLOPS (i.e., billions of numerical operations per second), this seems almost absurdly slow.
It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop.
Each time the reciprocal is computed, Python first examines the object's type and does a dynamic lookup of the correct function to use for that type.
If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
Introducing UFuncs
For many types of operations, NumPy provides a convenient interface into just this kind of statically typed, compiled routine. This is known as a vectorized operation.
This can be accomplished by simply performing an operation on the array, which will then be applied to each element.
This vectorized approach is designed to push the loop into the compiled layer that underlies NumPy, leading to much faster execution.
Compare the results of the following two:
End of explanation
"""
%timeit (1.0 / big_array)
"""
Explanation: Looking at the execution time for our big array, we see that it completes orders of magnitude faster than the Python loop:
End of explanation
"""
np.arange(5) / np.arange(1, 6)
"""
Explanation: Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations on values in NumPy arrays.
Ufuncs are extremely flexible – before we saw an operation between a scalar and an array, but we can also operate between two arrays:
End of explanation
"""
x = np.arange(9).reshape((3, 3))
2 ** x
"""
Explanation: And ufunc operations are not limited to one-dimensional arrays – they can act on multi-dimensional arrays as well:
End of explanation
"""
x = np.arange(4)
print("x =", x)
print("x + 5 =", x + 5)
print("x - 5 =", x - 5)
print("x * 2 =", x * 2)
print("x / 2 =", x / 2)
print("x // 2 =", x // 2) # floor division
"""
Explanation: Computations using vectorization through ufuncs are nearly always more efficient than their counterpart implemented using Python loops, especially as the arrays grow in size.
Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression.
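For instance, a sum of squares written as a loop and as its vectorized equivalent:
```python
arr = np.random.random(1000)

# loop-based version: slow, one type-check and dispatch per element
total = 0.0
for v in arr:
    total += v ** 2

# vectorized equivalent: the loop happens in compiled code
total_vec = np.sum(arr ** 2)
```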
Exploring NumPy's UFuncs
Ufuncs exist in two flavors: unary ufuncs, which operate on a single input, and binary ufuncs, which operate on two inputs.
We'll see examples of both these types of functions here.
Array arithmetic
NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators.
The standard addition, subtraction, multiplication, and division can all be used:
End of explanation
"""
print("-x = ", -x)
print("x ** 2 = ", x ** 2)
print("x % 2 = ", x % 2)
"""
Explanation: There is also a unary ufunc for negation, a ** operator for exponentiation, and a % operator for modulus:
End of explanation
"""
-(0.5*x + 1) ** 2
"""
Explanation: In addition, these can be strung together however you wish, and the standard order of operations is respected:
End of explanation
"""
np.add(x, 2)
"""
Explanation: Each of these arithmetic operations is simply a convenient wrapper around a specific function built into NumPy; for example, the + operator is a wrapper for the add function:
End of explanation
"""
x = np.array([-2, -1, 0, 1, 2])
abs(x)
"""
Explanation: The following table lists the arithmetic operators implemented in NumPy:
| Operator | Equivalent ufunc | Description |
|---------------|---------------------|---------------------------------------|
|+ |np.add |Addition (e.g., 1 + 1 = 2) |
|- |np.subtract |Subtraction (e.g., 3 - 2 = 1) |
|- |np.negative |Unary negation (e.g., -2) |
|* |np.multiply |Multiplication (e.g., 2 * 3 = 6) |
|/ |np.divide |Division (e.g., 3 / 2 = 1.5) |
|// |np.floor_divide |Floor division (e.g., 3 // 2 = 1) |
|** |np.power |Exponentiation (e.g., 2 ** 3 = 8) |
|% |np.mod |Modulus/remainder (e.g., 9 % 4 = 1)|
Additionally there are Boolean/bitwise operators; we will explore these in Comparisons, Masks, and Boolean Logic.
Absolute value
Just as NumPy understands Python's built-in arithmetic operators, it also understands Python's built-in absolute value function:
End of explanation
"""
np.absolute(x)
np.abs(x)
"""
Explanation: The corresponding NumPy ufunc is np.absolute, which is also available under the alias np.abs:
End of explanation
"""
x = np.array([3 - 4j, 4 - 3j, 2 + 0j, 0 + 1j])
np.abs(x)
"""
Explanation: This ufunc can also handle complex data, in which the absolute value returns the magnitude:
End of explanation
"""
theta = np.linspace(0, np.pi, 3)
"""
Explanation: Trigonometric functions
NumPy provides a large number of useful ufuncs, and some of the most useful for the data scientist are the trigonometric functions.
We'll start by defining an array of angles:
End of explanation
"""
print("theta = ", theta)
print("sin(theta) = ", np.sin(theta))
print("cos(theta) = ", np.cos(theta))
print("tan(theta) = ", np.tan(theta))
"""
Explanation: Now we can compute some trigonometric functions on these values:
End of explanation
"""
x = [-1, 0, 1]
print("x = ", x)
print("arcsin(x) = ", np.arcsin(x))
print("arccos(x) = ", np.arccos(x))
print("arctan(x) = ", np.arctan(x))
"""
Explanation: The values are computed to within machine precision, which is why values that should be zero do not always hit exactly zero.
Inverse trigonometric functions are also available:
End of explanation
"""
x = [1, 2, 3]
print("x =", x)
print("e^x =", np.exp(x))
print("2^x =", np.exp2(x))
print("3^x =", np.power(3, x))
"""
Explanation: Exponents and logarithms
Another common type of operation available in NumPy ufuncs is the exponentials:
End of explanation
"""
x = [1, 2, 4, 10]
print("x =", x)
print("ln(x) =", np.log(x))
print("log2(x) =", np.log2(x))
print("log10(x) =", np.log10(x))
"""
Explanation: The inverse of the exponentials, the logarithms, are also available.
The basic np.log gives the natural logarithm; if you prefer to compute the base-2 logarithm or the base-10 logarithm, these are available as well:
End of explanation
"""
x = [0, 0.001, 0.01, 0.1]
print("exp(x) - 1 =", np.expm1(x))
print("log(1 + x) =", np.log1p(x))
"""
Explanation: There are also some specialized versions that are useful for maintaining precision with very small input:
End of explanation
"""
from scipy import special
# Gamma functions (generalized factorials) and related functions
x = [1, 5, 10]
print("gamma(x) =", special.gamma(x))
print("ln|gamma(x)| =", special.gammaln(x))
print("beta(x, 2) =", special.beta(x, 2))
# Error function (integral of Gaussian)
# its complement, and its inverse
x = np.array([0, 0.3, 0.7, 1.0])
print("erf(x) =", special.erf(x))
print("erfc(x) =", special.erfc(x))
print("erfinv(x) =", special.erfinv(x))
"""
Explanation: When x is very small, these functions give more precise values than if the raw np.log or np.exp were to be used.
Specialized ufuncs
NumPy has many more ufuncs available, including hyperbolic trig functions, bitwise arithmetic, comparison operators, conversions from radians to degrees, rounding and remainders, and much more.
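For instance, a few of those in action:
```python
print(np.sinh([0., 1.]))            # hyperbolic sine
print(np.deg2rad([0., 90., 180.]))  # degrees -> radians
print(np.rint([1.3, 1.5, 2.7]))     # round to the nearest integer
print(np.greater([1, 2, 3], 2))     # element-wise comparison
```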
A look through the NumPy documentation reveals a lot of interesting functionality.
Another excellent source for more specialized and obscure ufuncs is the submodule scipy.special.
If you want to compute some obscure mathematical function on your data, chances are it is implemented in scipy.special.
There are far too many functions to list them all, but the following snippet shows a couple that might come up in a statistics context:
End of explanation
"""
x = np.arange(5)
y = np.empty(5)
np.multiply(x, 10, out=y)
print(y)
"""
Explanation: There are many, many more ufuncs available in both NumPy and scipy.special.
Because the documentation of these packages is available online, a web search along the lines of "gamma function python" will generally find the relevant information.
Advanced Ufunc Features
Many NumPy users make use of ufuncs without ever learning their full set of features.
We'll outline a few specialized features of ufuncs here.
Specifying output
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored.
Rather than creating a temporary array, this can be used to write computation results directly to the memory location where you'd like them to be.
For all ufuncs, this can be done using the out argument of the function:
End of explanation
"""
y = np.zeros(10)
np.power(2, x, out=y[::2])
print(y)
"""
Explanation: This can even be used with array views. For example, we can write the results of a computation to every other element of a specified array:
End of explanation
"""
x = np.arange(1, 6)
np.add.reduce(x)
"""
Explanation: If we had instead written y[::2] = 2 ** x, this would have resulted in the creation of a temporary array to hold the results of 2 ** x, followed by a second operation copying those values into the y array.
This doesn't make much of a difference for such a small computation, but for very large arrays the memory savings from careful use of the out argument can be significant.
Aggregates
For binary ufuncs, there are some interesting aggregates that can be computed directly from the object.
For example, if we'd like to reduce an array with a particular operation, we can use the reduce method of any ufunc.
A reduce repeatedly applies a given operation to the elements of an array until only a single result remains.
For example, calling reduce on the add ufunc returns the sum of all elements in the array:
End of explanation
"""
np.multiply.reduce(x)
"""
Explanation: Similarly, calling reduce on the multiply ufunc results in the product of all array elements:
End of explanation
"""
np.add.accumulate(x)
np.multiply.accumulate(x)
"""
Explanation: If we'd like to store all the intermediate results of the computation, we can instead use accumulate:
End of explanation
"""
x = np.arange(1, 6)
np.multiply.outer(x, x)
"""
Explanation: Note that for these particular cases, there are dedicated NumPy functions to compute the results (np.sum, np.prod, np.cumsum, np.cumprod), which we'll explore in Aggregations: Min, Max, and Everything In Between.
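As a quick check, the dedicated functions agree with the ufunc methods used above:
```python
x = np.arange(1, 6)
assert np.sum(x) == np.add.reduce(x)            # 15
assert np.prod(x) == np.multiply.reduce(x)      # 120
assert np.all(np.cumsum(x) == np.add.accumulate(x))
assert np.all(np.cumprod(x) == np.multiply.accumulate(x))
```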
Outer products
Finally, any ufunc can compute the output of all pairs of two different inputs using the outer method.
This allows you, in one line, to do things like create a multiplication table:
End of explanation
"""
|
hich28/mytesttxx
|
tests/python/highlighting.ipynb
|
gpl-3.0
|
a = spot.translate('a U b U c')
"""
Explanation: This notebook shows you different ways in which states or transitions can be highlighted in Spot.
It should be noted that highlighting works using some special named properties: basically, two maps that are attached to the automaton and associate state or edge numbers to color numbers. These named properties are fragile: they will be lost if the automaton is transformed into a new automaton, and they can become meaningless if the automaton is modified in place (e.g., if the transitions or states are reordered).
Nonetheless, highlighting is OK to use right before displaying or printing the automaton. The dot and hoa printer both know how to represent highlighted states and transitions.
Manual highlighting
End of explanation
"""
a.show('.#')
"""
Explanation: The # option of print_dot() can be used to display the internal number of each transition
End of explanation
"""
a.highlight_edges([2, 4, 5], 1)
"""
Explanation: Using these numbers you can selectively highlight some transitions. The second argument is a color number (from a list of predefined colors).
End of explanation
"""
a.highlight_edge(6, 2).highlight_states((0, 1), 0)
"""
Explanation: Note that these highlight_ functions work for edges and states, and come with both singular (changing the color of a single state or edge) and plural versions.
They modify the automaton in place.
End of explanation
"""
print(a.to_str('HOA', '1'))
print()
print(a.to_str('HOA', '1.1'))
"""
Explanation: Saving to HOA 1.1
When saving to HOA format, the highlighting is only output if version 1.1 of the format is selected, because the headers spot.highlight.edges and spot.highlight.states contain dots, which are disallowed in version 1. Compare these two outputs:
End of explanation
"""
b = spot.translate('X (F(Ga <-> b) & GF!b)'); b
r = b.accepting_run(); r
r.highlight(5) # the parameter is a color number
"""
Explanation: Highlighting a run
One use of this highlighting is to highlight a run in an automaton.
The following few commands generate an automaton, then an accepting run on this automaton, and highlight that accepting run on the automaton. Note that a run knows the automaton from which it was generated, so calling highlight() will directly decorate that automaton.
End of explanation
"""
b
"""
Explanation: The call of highlight(5) on the accepting run r modified the original automaton b:
End of explanation
"""
left = spot.translate('a U b')
right = spot.translate('GFa')
display(left, right)
prod = spot.product(left, right); prod
run = prod.accepting_run(); run
run.highlight(5)
# Note that project() needs to know on which side of the product you project; it cannot
# guess it. The left side is assumed unless you pass True as the second argument.
run.project(left).highlight(5)
run.project(right, True).highlight(5)
display(prod, left, right)
"""
Explanation: Highlighting from a product
Pretty often, accepting runs are found in a product but we want to display them on one of the original automata. This can be done by projecting the runs on those automata before displaying them.
End of explanation
"""
left2 = spot.translate('!b & FG a')
right2 = spot.translate('XXXb')
prod2 = spot.otf_product(left2, right2) # Note "otf_product()"
run2 = prod2.accepting_run()
run2.project(left2).highlight(5)
run2.project(right2, True).highlight(5)
display(run2, prod2, left2, right2)
"""
Explanation: The projection also works for products generated on-the-fly, but the on-the-fly product itself cannot be highlighted (it does not store states or transitions).
End of explanation
"""
b = spot.translate('X (F(Ga <-> b) & GF!b)')
spot.highlight_nondet_states(b, 5)
spot.highlight_nondet_edges(b, 4)
b
"""
Explanation: Highlighting nondeterminism
Sometimes it is hard to locate non-deterministic states inside a large automaton. Here are two functions that can help with that.
End of explanation
"""
spot.randomize(b); b
"""
Explanation: Disappearing highlights
As explained at the top of this notebook, named properties (such as highlights) are fragile, and you should not rely on them being preserved across algorithms. In-place algorithms are probably the worst, because they might modify the automaton and ignore the attached named properties.
randomize() is one such in-place algorithm: it reorders the states or transitions of the automaton. By doing so it renumbers the states and edges, and that process would completely invalidate the highlight information. Fortunately randomize() knows about highlights: it will preserve highlighted states, but it will drop all highlighted edges.
End of explanation
"""
spot.highlight_nondet_edges(b, 4) # let's get those highlighted edges back
display(b, b.show('.<4'), b.show('.<2'))
"""
Explanation: Highlighting with partial output
For simplicity, rendering of partial automata is actually implemented by copying the original automaton and marking some states as "incomplete". This also allows the same display code to work with automata generated on-the-fly. However since there is a copy, propagating the highlighting information requires extra work. Let's make sure it has been done:
End of explanation
"""
aut = spot.translate('(b W Xa) & GF(c <-> Xb) | a', 'generic', 'det')
spot.highlight_languages(aut)
aut.show('.bas')
"""
Explanation: Highlighting languages
For deterministic automata, the function spot.highlight_languages() can be used to highlight states that recognize the same language. This can be a great help in reading automata. States with a colored border share their language, and states with a black border all have a language different from all other states.
End of explanation
"""
|
oscaribv/pyaneti
|
inpy/example_toyp1/toy_model1.ipynb
|
gpl-3.0
|
# Import modules
from __future__ import print_function, division, absolute_import
import numpy as np
#Import citlalatonac from pyaneti_extras; note that pyaneti has to be compiled on your machine
#and pyaneti has to be in your PYTHONPATH, e.g., you have to add to your bashrc file
#export PYTHONPATH=${PYTHONPATH}:/pathtopyaneti/pyaneti
#and replace pathtopyaneti with the location of pyaneti on your machine
from pyaneti_extras.citlalatonac import citlali
#citlalatonac is the class that creates the spectroscopic-like time-series
"""
Explanation: Creation of synthetic spectroscopic-like time-series
Oscar Barragán, May 2021
End of explanation
"""
#Do all the previous description with one Python command
star = citlali(tmin=0,tmax=50,kernel='QPK',kernel_parameters=[20,0.3,5],
amplitudes=[0.005,0.05,0.05,0.0,0.005,-0.05],time_series=['s2','s3'],seed=13)
#Let us see how the 3 time-series look in the 50 day window we created
star.plot()
"""
Explanation: Let us summon citlalatonac powers and create synthetic stellar data
We will use a quasi-periodic kernel (kernel='QPK')
$$
\gamma_{i,j}^{G,G} = \exp
\left(
- \frac{\sin^2\left[\pi \left(t_i - t_j \right)/P_{\rm GP}\right]}{2 \lambda_{\rm p}^2}
- \frac{\left(t_i - t_j\right)^2}{2\lambda_{\rm e}^2}
\right)
$$
with hyper-parameters $$\lambda_e = 20, \lambda_p = 0.3, P_{\rm GP} = 5 $$
given as kernel_parameters=[20,0.3,5]. In this case we will create 3 time-series between 0 (tmin) and 50 days (tmax) following
$$
S_1 = A_1 G(t) + B_1 \dot{G}(t), \\
S_2 = A_2 G(t) + B_2 \dot{G}(t), \\
S_3 = A_3 G(t) + B_3 \dot{G}(t),
$$
with amplitudes $A_1 = 0.005, B_1=0.05, A_2=0.05, B_2=0.0, A_3=0.005, B_3=-0.05$ (amplitudes=[0.005,0.05,0.05,0.0,0.005,-0.05]). In this case, citlalatonac knows that we want to create 3 time-series because we are passing 6 amplitudes. The last thing is to name the time-series; by default, the first one is always called rv, so we only need to name the last two, time_series=['s2','s3'].
We can also pass a seed for the random number generator.
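As an aside, the quasi-periodic kernel written above is easy to evaluate directly with numpy if you want to inspect the correlation it implies between two epochs. This is a purely illustrative sketch, independent of citlalatonac's internals:
```python
import numpy as np

def qp_kernel(ti, tj, lambda_e=20., lambda_p=0.3, P_gp=5.):
    # quasi-periodic kernel as defined in the equation above
    dt = ti - tj
    periodic = np.sin(np.pi * dt / P_gp) ** 2 / (2. * lambda_p ** 2)
    decay = dt ** 2 / (2. * lambda_e ** 2)
    return np.exp(-periodic - decay)

print(qp_kernel(0., 2.5))  # half a period apart -> strongly decorrelated
print(qp_kernel(0., 5.0))  # one full period apart -> periodic term vanishes
```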
End of explanation
"""
#Create the random observation times
t = np.random.uniform(0,50,50)
#Let us create the data at times t
star.create_data(t=t)
#Let us plot where our data points are
star.plot()
"""
Explanation: At this point we have a model of the three signals created following the same underlying GP $G(t)$. Now it is time to create data taken at the times $\mathbf{t}$ that we can specify. The times vector $\mathbf{t}$ can be optimised to follow schedule requirements of given targets at different observatories, but for this example, we will just create 50 random times between 0 and 50 days.
End of explanation
"""
#The input vector err has to have one white noise term per each time-series
star.add_white_noise(err=[0.001,0.005,0.010])
star.plot()
"""
Explanation: The previous plot shows the positions at which we have created our observations. We still need to add some white noise to make the situation more realistic. We do this by passing an error bar for each time-series that we have created. We will add an error bar of 0.001 for $S_1$ (RV), 0.005 for $S_2$, and 0.010 for $S_3$ as err=[0.001,0.005,0.010].
End of explanation
"""
fname = 'data_3mdgp.dat'
star.save_data(fname)
"""
Explanation: Save the file as requested by pyaneti
End of explanation
"""
|
ActivisionGameScience/blog
|
_notebooks/IPython Parallel Introduction.ipynb
|
apache-2.0
|
# You can also use the IPython magic shell command, but errors are harder to see and stopping the cluster can be janky.
!ipcluster start -n 4 --daemon
"""
Explanation: How to Deploy an IPython Cluster Using Mesos and Docker
John Dennison
April 19th, 2016
The members of the Analytics Services team here at Activision are heavy users of Mesos and Marathon to deploy and manage services on our clusters. We are also huge fans of Python and the Jupyter project.
The Jupyter project was recently reorganized from IPython, in a move referred to as "the split": One part that was originally part of IPython (IPython.parallel) was split off into a separate project ipyparallel. This powerful component of the IPython ecosystem is generally overlooked.
In this post I will give a quick introduction to the ipyparallel project and then introduce a new launcher we have open sourced to deploy IPython clusters into Mesos clusters. While we have published this notebook in HTML, please feel free to download the original to follow along.
Introduction to ipyparallel
The ipyparallel project is the new home of IPython.parallel module that was hosted within IPython core before 2015. The focus of the project is interactive cluster computing. This focus on interactive computing and first-class integration with the IPython project is a distinguishing feature. For a more complete dive into the internals of ipyparallel, please visit the docs. I aim to give the bare minimum to get you started.
At the most basic level an IPython cluster is a set of Python interpreters that can be accessed over TCP. Under the hood, it works similarly to how Jupyter/IPython work today. When you open a new notebook in the browser, a Python process (called a kernel) will be started to run the code you submit. ipyparallel does the same thing except instead of a single Python kernel, you can start many distributed kernels over many machines.
There are three main components to the stack.
- Client: A Python process which submits work. Usually this is an IPython session or a Jupyter notebook.
- Controller: The central coordinator which accepts work from the client and passes it to engines, collects results and sends back to the client.
- Engine: A Python interpreter that communicates with the controller to accept work and submit results. Roughly equivalent to an IPython kernel.
Starting your first cluster
The easiest way to get your hands dirty is to spin up a cluster locally. That is you will run a Client, Controller, and Engines all on your local machine. The hardest part of provisioning distributed clusters is making sure all the pieces can talk to each other (as usual the easiest solution to a distributed problem is to make it local).
Getting your environment started
Our team are users of conda to help manage our computational environments (Python and beyond). Here is a quick run through to get setup (our public conda recipes are here). A combination of pip and virtualenv will also work, but when you start installing packages from the scipy stack we find conda the easiest to use.
First find your version of Miniconda from here
If you're using Linux, these commands will work:
```bash
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh # follow prompts
conda update --all
# make a new python 3 env named py3
conda create -n py3 python=3 ipython ipyparallel ipython-notebook
source activate py3
```
While there are lower level commands to start and configure Controllers and Engines, the primary command you will use is ipcluster. This is a helpful utility to start all the components and configure your local client. By default, it uses the LocalControllerLauncher and the LocalEngineSetLauncher which is exactly what we want to start.
Open a terminal install ipyparallel and start a cluster.
bash
(py3)➜ ipcluster start --n=4
2016-04-11 22:24:15.514 [IPClusterStart] Starting ipcluster with [daemon=False]
2016-04-11 22:24:15.515 [IPClusterStart] Creating pid file: /home/vagrant/.ipython/profile_default/pid/ipcluster.pid
2016-04-11 22:24:15.515 [IPClusterStart] Starting Controller with LocalControllerLauncher
2016-04-11 22:24:16.519 [IPClusterStart] Starting 2 Engines with LocalEngineSetLauncher
2016-04-11 22:24:46.633 [IPClusterStart] Engines appear to have started successfully
End of explanation
"""
import ipyparallel as ipp
rc = ipp.Client()
rc.ids # list the ids of the engine the client can communicate with
"""
Explanation: If started correctly we should now have four engines running on our local machine. Now to actually interact with them. First we need to import the client.
End of explanation
"""
dv = rc[:]
dv
"""
Explanation: The client has two primary ways to farm out work to the engines. First is a direct view. This is used to apply the same work to all engines. To create a DirectView just slice the client.
The second way is a LoadBalancedView which we will cover later in the post.
End of explanation
"""
def get_engine_pid():
import os
return os.getpid()
dv.apply_sync(get_engine_pid)
"""
Explanation: With a direct view you can issue a function to execute within the context of that engine's Python process.
End of explanation
"""
%%px
import os
os.getpid()
"""
Explanation: This pattern is so common that ipyparallel provides an IPython magic function to execute a code cell on all engines: %%px
End of explanation
"""
%%px
foo = 'bar on pid {}'.format(os.getpid())
%%px
foo
"""
Explanation: It is key to notice that the engines are fully running stateful Python interpreters. If you set a variable within a %%px code block, it will remain there.
End of explanation
"""
dv['foo']
"""
Explanation: The DirectView object provides some syntactic sugar to help distribute data to each engine. First is dictionary-style retrieval and assignment. Let's retrieve the value of foo from each engine.
End of explanation
"""
dv['foo'] = 'bar'
dv['foo']
"""
Explanation: Now we can overwrite its value.
End of explanation
"""
# start with a list of ids to work on
user_ids = list(range(1000))
dv.scatter('user_id_chunk', user_ids)
"""
Explanation: There are many cases where you don't want the same data on each machine, but rather you want to chunk a list and distribute each chunk to an engine. The DirectView provides the .scatter and the .gather methods for this.
End of explanation
"""
dv.scatter('user_id_chunk', user_ids).get()
"""
Explanation: Notice that this method completed almost immediately and returned an AsyncResult. All the methods we have used up to now have been blocking and synchronous. The scatter method is async. To turn this scatter into a blocking call we can chain a .get() to the call.
End of explanation
"""
%%px
print("Len", len(user_id_chunk))
print("Max", max(user_id_chunk))
"""
Explanation: Now we have a variable on each engine that holds an equal amount of the original list.
End of explanation
"""
%%px --local
def the_most_interesting_transformation_ever(user_id):
"""
This function is really interesting
"""
return "ID:{}".format(user_id * 3)
the_most_interesting_transformation_ever(1)
%%px
transformed_user_ids = list(map(the_most_interesting_transformation_ever, user_id_chunk))
"""
Explanation: Let's apply a simple function to each list. First, declare a function within each engine. The --local flag also executes the code block in your local client. This is very useful to help debug your code.
End of explanation
"""
all_transformed_user_ids = dv.gather('transformed_user_ids').get()
print(len(all_transformed_user_ids))
print(all_transformed_user_ids[0:10])
"""
Explanation: Now we have 4 separate list of transformed ids. We want to stitch the disparate lists into one list on our local notebook. gather is used for that.
End of explanation
"""
%%px --local
import random
import time
def fake_external_io(url):
# Simulate variable complexity/latency
time.sleep(random.random())
return "HTML for URL: {}".format(url)
%time fake_external_io(1)
%time fake_external_io(1)
"""
Explanation: Obviously, this example is contrived. The serialization cost of shipping Python objects over the wire to each engine is more expensive than the calculation we performed. This tradeoff between serialization/transport vs computation cost is central to any decision to use distributed processing. However, there are many highly parallelizable problems where this project can be extremely useful. Some of the main use cases we use ipyparallel for are hyperparameter searches and bulk loading/writing from storage systems.
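For example, a bare-bones hyperparameter sweep over the DirectView could look like this, where evaluate is a stand-in for a real model-scoring function:
```python
def evaluate(alpha):
    # pretend this trains a model and returns a validation error
    return (alpha - 0.3) ** 2

alphas = [0.01, 0.1, 0.3, 1.0, 10.0]
errors = list(dv.map_sync(evaluate, alphas))  # one engine evaluates each setting
best_alpha = alphas[errors.index(min(errors))]
```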
LoadBalancedView
The previous example where you scatter a list, perform a calculation, and then gather a result works for lots of problems. One issue with this approach is that each engine does an identical amount of work. If the complexity of the process each engine is performing is variable, this naive scheduling approach can waste processing power and time. Take for example this function:
End of explanation
"""
lview = rc.load_balanced_view()
lview
@lview.parallel()
@ipp.require('time', 'random')
def p_fake_external_io(url):
# Simulate variable complexity/latency
time.sleep(random.random())
return "HTML for URL: {}".format(url)
"""
Explanation: If you had a list of urls to scrape and gave each worker an equal share, some workers would finish early and have to sit around doing nothing. A better approach is to assign work to each engine as it finishes. This way the work will be load balanced over the cluster and you will complete your process earlier. ipyparallel provides the LoadBalancedView for this exact use case. For this specific problem, threading or an async event loop would likely be a better approach to speeding up or scaling out, but suspend your disbelief for this exercise.
End of explanation
"""
urls = ['foo{}.com'.format(i) for i in range(100)]
# Naive single threaded
%time res = list(map(fake_external_io, urls))
dv.scatter('urls', urls).get()
# seed for some semblance of reproducibility
%px random.seed(99)
# Naive assignment
%time %px results = list(map(fake_external_io, urls))
# Load balanced version
%time res = p_fake_external_io.map(urls).get()
"""
Explanation: Here we used two ipyparallel decorators. First we used lview.parallel() to declare this a parallel function. Second, we declared that this function depends on the modules time and random. Now that we have a load balanced function we can compare timings with our naive approach.
End of explanation
"""
|
manifoldai/merf
|
notebooks/MERF Gain Experiment.ipynb
|
mit
|
# Globals
num_clusters_each_size = 20
train_sizes = [1, 3, 5, 7, 9]
known_sizes = [9, 27, 45, 63, 81]
new_sizes = [10, 30, 50, 70, 90]
n_estimators = 300
max_iterations = 100
train_cluster_sizes = MERFDataGenerator.create_cluster_sizes_array(train_sizes, num_clusters_each_size)
known_cluster_sizes = MERFDataGenerator.create_cluster_sizes_array(known_sizes, num_clusters_each_size)
new_cluster_sizes = MERFDataGenerator.create_cluster_sizes_array(new_sizes, num_clusters_each_size)
# Number of times to run each experiment
N_per_experiment = 10
# Defining the experiments to run
experiments = [{'id': 0, 'm': .8, 'sigma_b_sq': 0.9, 'sigma_e': 1},
{'id': 1, 'm': .7, 'sigma_b_sq': 2.7, 'sigma_e': 1},
{'id': 2, 'm': .6, 'sigma_b_sq': 4.5, 'sigma_e': 1},
{'id': 3, 'm': .3, 'sigma_b_sq': 0.2, 'sigma_e': 1},
{'id': 4, 'm': .3, 'sigma_b_sq': 0.5, 'sigma_e': 1},
{'id': 5, 'm': .2, 'sigma_b_sq': 0.8, 'sigma_e': 1}]
"""
Explanation: Experimental Setup
There are some global parameters for all the experiments. Each experiment is run N_per_experiment times. The experiment itself is parametrized by three parameters of the generative model. We collect the results of the experiments in a big list of dictionaries. This is then used to compute summary statistics after all the experiments are over.
End of explanation
"""
# Creating a dictionary to hold the results of the experiments
results = []
for experiment in experiments:
results.append({'id': experiment['id'], 'ptev': [], 'prev': [],
'mse_known_rf_fixed': [], 'mse_known_rf_ohe': [], 'mse_known_merf': [],
'mse_new_rf_fixed': [], 'mse_new_rf_ohe': [], 'mse_new_merf': []})
for experiment, result in zip(experiments, results):
for experiment_iteration in range(0, N_per_experiment):
print("Experiment iteration: {}".format(experiment_iteration))
# Generate data for experiment
dgm = MERFDataGenerator(m=experiment['m'], sigma_b=np.sqrt(experiment['sigma_b_sq']), sigma_e=experiment['sigma_e'])
train, test_known, test_new, train_cluster_ids, ptev, prev = dgm.generate_split_samples(train_cluster_sizes, known_cluster_sizes, new_cluster_sizes)
# Store off PTEV and PREV
result['ptev'].append(ptev)
result['prev'].append(prev)
# Training Data Extract
X_train = train[['X_0', 'X_1', 'X_2']]
Z_train = train[['Z']]
clusters_train = train['cluster']
y_train = train['y']
# Known Cluster Data Extract
X_known = test_known[['X_0', 'X_1', 'X_2']]
Z_known = test_known[['Z']]
clusters_known = test_known['cluster']
y_known = test_known['y']
# New Cluster Data Extract
X_new = test_new[['X_0', 'X_1', 'X_2']]
Z_new = test_new[['Z']]
clusters_new = test_new['cluster']
y_new = test_new['y']
# MERF
print("---------------------MERF----------------------")
mrf = MERF(n_estimators=n_estimators, max_iterations=max_iterations)
mrf.fit(X_train, Z_train, clusters_train, y_train)
y_hat_known_merf = mrf.predict(X_known, Z_known, clusters_known)
y_hat_new_merf = mrf.predict(X_new, Z_new, clusters_new)
mse_known_merf = np.mean((y_known - y_hat_known_merf) ** 2)
mse_new_merf = np.mean((y_new - y_hat_new_merf) ** 2)
result['mse_known_merf'].append(mse_known_merf)
result['mse_new_merf'].append(mse_new_merf)
# Random Forest Fixed Only
print("---------------------Random Forest Fixed Effect Only----------------------")
rf = RandomForestRegressor(n_estimators=n_estimators, n_jobs=-1)
rf.fit(X_train, y_train)
y_hat_known_rf_fixed = rf.predict(X_known)
y_hat_new_rf_fixed = rf.predict(X_new)
mse_known_rf_fixed = np.mean((y_known - y_hat_known_rf_fixed) ** 2)
mse_new_rf_fixed = np.mean((y_new - y_hat_new_rf_fixed) ** 2)
result['mse_known_rf_fixed'].append(mse_known_rf_fixed)
result['mse_new_rf_fixed'].append(mse_new_rf_fixed)
# Random Forest with OHE Cluster
print("---------------------Random Forest w OHE Cluster----------------------")
X_train_w_ohe = MERFDataGenerator.create_X_with_ohe_clusters(X_train, clusters_train, train_cluster_ids)
X_known_w_ohe = MERFDataGenerator.create_X_with_ohe_clusters(X_known, clusters_known, train_cluster_ids)
X_new_w_ohe = MERFDataGenerator.create_X_with_ohe_clusters(X_new, clusters_new, train_cluster_ids)
rf_ohe = RandomForestRegressor(n_estimators=n_estimators, n_jobs=-1)
rf_ohe.fit(X_train_w_ohe, y_train)
y_hat_known_rf_ohe = rf_ohe.predict(X_known_w_ohe)
y_hat_new_rf_ohe = rf_ohe.predict(X_new_w_ohe)
mse_known_rf_ohe = np.mean((y_known - y_hat_known_rf_ohe) ** 2)
mse_new_rf_ohe = np.mean((y_new - y_hat_new_rf_ohe) ** 2)
result['mse_known_rf_ohe'].append(mse_known_rf_ohe)
result['mse_new_rf_ohe'].append(mse_new_rf_ohe)
"""
Explanation: Run Experiments
End of explanation
"""
import pickle
# pickle.dump(results, open("results_merf100_n10.pkl", "wb" ))
results = pickle.load(open("results_merf100_n10.pkl", "rb"))
"""
Explanation: Save and Load Results
End of explanation
"""
def merf_gain(merf_mse, non_merf_mse):
return 100 * np.mean((np.array(non_merf_mse) - np.array(merf_mse)) / np.array(non_merf_mse))
summary_results = pd.DataFrame()
for experiment, result in zip(experiments, results):
summary_results.loc[result['id'], 'm'] = experiment['m']
summary_results.loc[result['id'], 'sigma_b2'] = experiment['sigma_b_sq']
summary_results.loc[result['id'], 'sigma_e2'] = experiment['sigma_e']
summary_results.loc[result['id'], 'PTEV'] = np.round(np.mean(np.array(result['ptev'])), 2)
summary_results.loc[result['id'], 'PREV'] = np.round(np.mean(np.array(result['prev'])), 2)
summary_results.loc[result['id'], 'Gain RF (Known)'] = np.round(merf_gain(result['mse_known_merf'], result['mse_known_rf_fixed']), 2)
summary_results.loc[result['id'], 'Gain RF (New)']= np.round(merf_gain(result['mse_new_merf'], result['mse_new_rf_fixed']), 2)
summary_results.loc[result['id'], 'Gain RFOHE (Known)'] = np.round(merf_gain(result['mse_known_merf'], result['mse_known_rf_ohe']), 2)
summary_results.loc[result['id'], 'Gain RFOHE (New)'] = np.round(merf_gain(result['mse_new_merf'], result['mse_new_rf_ohe']), 2)
summary_results
plt.figure(figsize=[16, 8])
plt.subplot(121)
plt.plot(summary_results.loc[0:2, 'PREV'],
summary_results.loc[0:2, 'Gain RF (Known)'], 'bs-', label='RF, PTEV=90')
plt.plot(summary_results.loc[3:5, 'PREV'],
summary_results.loc[3:5, 'Gain RF (Known)'], 'rs-', label='RF, PTEV=60')
plt.plot(summary_results.loc[0:2, 'PREV'],
summary_results.loc[0:2, 'Gain RFOHE (Known)'], 'b^--', label='RFOHE, PTEV=90')
plt.plot(summary_results.loc[3:5, 'PREV'],
summary_results.loc[3:5, 'Gain RFOHE (Known)'], 'r^--', label='RFOHE, PTEV=60')
plt.grid('on')
plt.xlabel('PREV')
plt.ylabel('MERF Gain over Compared Algorithm')
#plt.legend()
plt.title('Known Clusters')
plt.ylim([-5, 75])
plt.xlim([0, 65])
plt.subplot(122)
plt.plot(summary_results.loc[0:2, 'PREV'],
summary_results.loc[0:2, 'Gain RF (New)'], 'bs-', label='RF, PTEV=90')
plt.plot(summary_results.loc[3:5, 'PREV'],
summary_results.loc[3:5, 'Gain RF (New)'], 'rs-', label='RF, PTEV=60')
plt.plot(summary_results.loc[0:2, 'PREV'],
summary_results.loc[0:2, 'Gain RFOHE (New)'], 'b^--', label='RFOHE, PTEV=90')
plt.plot(summary_results.loc[3:5, 'PREV'],
summary_results.loc[3:5, 'Gain RFOHE (New)'], 'r^--', label='RFOHE, PTEV=60')
plt.grid('on')
plt.xlabel('PREV')
#plt.ylabel('MERF %-gain over Compared Algorithm')
plt.legend()
plt.title('New Clusters')
plt.ylim([-5, 75])
plt.xlim([0, 65])
"""
Explanation: Summarize Results
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_topo_compare_conditions.ipynb
|
bsd-3-clause
|
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne.viz import plot_evoked_topo
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Compare evoked responses for different conditions
In this example, an Epochs object for visual and
auditory responses is created. Both conditions
are then accessed by their respective names to
create a sensor layout plot of the related
evoked responses.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: MEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channels ['STI 014']
# bad channels in raw.info['bads'] will be automatically excluded
# Set up amplitude-peak rejection values for MEG channels
reject = dict(grad=4000e-13, mag=4e-12)
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Create epochs including different events
event_id = {'audio/left': 1, 'audio/right': 2,
'visual/left': 3, 'visual/right': 4}
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), reject=reject)
# Generate list of evoked objects from conditions names
evokeds = [epochs[name].average() for name in ('left', 'right')]
"""
Explanation: Set parameters
End of explanation
"""
colors = 'yellow', 'green'
title = 'MNE sample data - left vs right (A/V combined)'
plot_evoked_topo(evokeds, color=colors, title=title)
conditions = [e.comment for e in evokeds]
for cond, col, pos in zip(conditions, colors, (0.025, 0.07)):
plt.figtext(0.99, pos, cond, color=col, fontsize=12,
horizontalalignment='right')
plt.show()
"""
Explanation: Show topography for two different conditions
End of explanation
"""
|
keras-team/keras-io
|
examples/keras_recipes/ipynb/quasi_svm.ipynb
|
apache-2.0
|
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import RandomFourierFeatures
"""
Explanation: A Quasi-SVM in Keras
Author: fchollet<br>
Date created: 2020/04/17<br>
Last modified: 2020/04/17<br>
Description: Demonstration of how to train a Keras model that approximates an SVM.
Introduction
This example demonstrates how to train a Keras model that approximates a Support Vector
Machine (SVM).
The key idea is to stack a RandomFourierFeatures layer with a linear layer.
The RandomFourierFeatures layer can be used to "kernelize" linear models by applying
a non-linear transformation to the input
features and then training a linear model on top of the transformed features. Depending
on the loss function of the linear model, the composition of this layer and the linear
model results to models that are equivalent (up to approximation) to kernel SVMs (for
hinge loss), kernel logistic regression (for logistic loss), kernel linear regression
(for MSE loss), etc.
In our case, we approximate SVM using a hinge loss.
Setup
End of explanation
"""
model = keras.Sequential(
[
keras.Input(shape=(784,)),
RandomFourierFeatures(
output_dim=4096, scale=10.0, kernel_initializer="gaussian"
),
layers.Dense(units=10),
]
)
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.hinge,
metrics=[keras.metrics.CategoricalAccuracy(name="acc")],
)
"""
Explanation: Build the model
End of explanation
"""
# Load MNIST
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data by flattening & scaling it
x_train = x_train.reshape(-1, 784).astype("float32") / 255
x_test = x_test.reshape(-1, 784).astype("float32") / 255
# Categorical (one hot) encoding of the labels
y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)
"""
Explanation: Prepare the data
End of explanation
"""
model.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.2)
"""
Explanation: Train the model
End of explanation
"""
|
gully/adrasteia
|
notebooks/adrasteia_02-03_get_real_gaia_data.ipynb
|
mit
|
! wget 'http://cdn.gea.esac.esa.int/Gaia/gaia_source/csv/GaiaSource_000-000-000.csv.gz'
! ls
! gzip -d GaiaSource_000-000-000.csv.gz
"""
Explanation: Gaia
Real data!
gully
Sept 14, 2016
Outline:
Download the data
Estimate how much data it will be
Batch download more
1. Download the data
End of explanation
"""
! du -hs GaiaSource_*
"""
Explanation: 2. Estimate how much data it will be
End of explanation
"""
20*256+111
5231*98/1000.0
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%time g000 = pd.read_csv('GaiaSource_000-000-000.csv')
g000.columns
len(g000)
p_i = g000.parallax == g000.parallax
p000 = g000[p_i]
plt.plot(p000.ra, p000.dec, '.')
plt.plot(p000.parallax, p000.parallax_error, '.')
plt.xscale('log')
sns.distplot(p000.parallax)
sns.distplot(p000.parallax_error)
bins = np.arange(0, 160, 10)
sns.distplot(p000.astrometric_n_obs_ac, bins=bins, kde=False)
sns.distplot(p000.astrometric_n_bad_obs_ac, bins=bins, kde=False)
sns.distplot(p000.astrometric_n_good_obs_ac, bins=bins, kde=False)
#bins = np.arange(0, 160, 10)
#sns.distplot(p000.astrometric_n_obs_al, bins=bins, kde=False)
#sns.distplot(p000.astrometric_n_bad_obs_al, bins=bins, kde=False)
#sns.distplot(p000.astrometric_n_good_obs_al, bins=bins, kde=False)
sns.distplot(p000.phot_g_mean_mag)
bins = np.arange(0,40,1)
sns.distplot(p000.matched_observations, bins=bins,kde=False)
p000.count()
p000.iloc[0]
"""
Explanation: Wow, a 100 MB csv file... There are 20 groups of 256 files plus 111 extra files.
End of explanation
"""
|
anhaidgroup/py_entitymatching
|
notebooks/guides/step_wise_em_guides/Reading CSV Files from Disk.ipynb
|
bsd-3-clause
|
import py_entitymatching as em
import pandas as pd
import os, sys
"""
Explanation: Introduction
This IPython notebook illustrates how to read a CSV file from disk as a table and set its metadata.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the path of the input table
path_A = datasets_dir + os.sep + 'person_table_A.csv'
# Display the contents of the file in path_A
!cat $path_A | head -3
"""
Explanation: Different Ways to Read a CSV File and Set Metadata
First, we need to get the path of the CSV file on disk. For the convenience of the user, we have included some sample files in the package. The path of a sample CSV file can be obtained like this:
End of explanation
"""
A = em.read_csv_metadata(path_A)
A.head()
# Display the 'type' of A
type(A)
"""
Explanation: Once we get the CSV file path, we can use it to read the contents and set the metadata.
Different Ways to Read a CSV File and Set Metadata
There are three different ways to read a CSV file and set metadata:
Read a CSV file first, and then set the metadata
Read a CSV file and set the metadata together
Read a CSV file and set the metadata from a file in disk
Read the CSV file First and Then Set the Metadata
First, read the CSV files as follows:
End of explanation
"""
em.set_key(A, 'ID')
# Get the metadata that were set for table A
em.get_key(A)
"""
Explanation: Then set the metadata for the table. We see that ID is the key attribute for the table (since it contains unique values and none are missing). We can set this metadata as follows:
End of explanation
"""
A = em.read_csv_metadata(path_A, key='ID')
# Display the 'type' of A
type(A)
# Get the metadata that were set for the table A
em.get_key(A)
"""
Explanation: Now the CSV file is read into memory and the metadata (i.e., the key) is set for the table.
Read a CSV File and Set Metadata Together
In the above, we saw that we first read in the CSV file and then set the metadata. These two steps can be combined into a single step like this:
End of explanation
"""
# Specify the metadata for table A (stored in person_table_A.csv).
# Get the file name (with full path) where the metadata file must be stored
metadata_fname = 'person_table_A.metadata'
metadata_file = datasets_dir + os.sep + metadata_fname
# Specify the metadata for table A . Here we specify that 'ID' is the key attribute for the table.
# Note that this step requires write permission to the datasets directory.
with open(metadata_file, 'w') as the_file:
the_file.write('#key=ID')
"""
Explanation: Read a CSV File and Set Metadata from a File in Disk
The user can specify the metadata in a file.
This file MUST be in the same directory as the CSV file and the file name
should be same, except the extension is set to '.metadata'.
End of explanation
"""
# If you do not have write permissions to the datasets directory, first copy the CSV file to the local directory and
# then create a metadata file like this (you need to uncomment the following lines and then execute):
# import shutil
# shutil.copy2(path_A, './person_table_A.csv')
# metadata_local_file = 'person_table_A.metadata'
# with open(metadata_local_file, 'w') as the_file:
#     the_file.write('#key=ID')
# Read the CSV file for table A
A = em.read_csv_metadata(path_A)
# Get the key for table A
em.get_key(A)
# Remove the metadata file
os.remove(metadata_file) if os.path.exists(metadata_file) else None
os.remove('person_table_A.csv') if os.path.exists('person_table_A.csv') else None
os.remove(metadata_fname) if os.path.exists(metadata_fname) else None
"""
Explanation: Note: In the above, we used Python file I/O (open/write) to write the metadata contents, which works the same on Unix and Windows. If you prefer the shell, you can use echo on Unix, or echo|set /p on Windows, to achieve the same effect.
End of explanation
"""
|
Azure/azure-sdk-for-python
|
sdk/digitaltwins/azure-digitaltwins-core/samples/notebooks/04_Lots_on_Queries.ipynb
|
mit
|
from azure.identity import AzureCliCredential
from azure.digitaltwins.core import DigitalTwinsClient
# using yaml instead of
import yaml
import uuid
# using altair instead of matplotlib for vizuals
import numpy as np
import pandas as pd
# you will get this from the ADT resource at portal.azure.com
your_digital_twin_url = "home-test-twin.api.wcus.digitaltwins.azure.net"
azure_cli = AzureCliCredential()
service_client = DigitalTwinsClient(
your_digital_twin_url, azure_cli)
service_client
"""
Explanation: Complex queries
An example use of the Digital Twin
In this notebook we are going to dive deep into queries:
* Examining the customer experience through the lens of different aspects of the experience.
In our previous steps we made:
* Patrons, that have a customer satisfaction, a relationship with tickets, and locations.
* Tickets that are owned by customers
* Lines that lead to areas
* Areas where people are located.
We will be doing a bunch of different queries on this theme.
This is the SDK repo on Github
Here is the doc on the query language
End of explanation
"""
def query_ADT(query):
query_result = service_client.query_twins(query)
values = [i for i in query_result]
return values
def query_to_df(query):
query_result = query_ADT(query)
values = pd.DataFrame(query_result)
return values
query_expression = "SELECT * FROM digitaltwins"
query_to_df(query_expression)
"""
Explanation: I'm going to set up a generic function that runs queries and gets the data. This will keep me from doing it over and over.
Note that with really large models this might perform poorly; I'm only doing it this way because this example is very small.
End of explanation
"""
query_expression = "SELECT * FROM digitaltwins where IS_OF_MODEL('dtmi:mymodels:patron;1')"
customers = query_to_df(query_expression)
customers.satisfaction.describe()
query_expression ="""
SELECT T, CT
FROM DIGITALTWINS T
JOIN CT RELATED T.locatedIn
WHERE CT.$dtId = 'line-2'
"""
customers_in_area_2 = query_to_df(query_expression)
customers_in_area_2
"""
Explanation: Note that the larger query will give you back all of the values, so you can pop it into a dataframe and filter on the $metadata to get the values you want
IS_OF_MODEL
The process for most analysis is to query the items that are relevant into a dataframe and do your analysis on them.
End of explanation
"""
customers_in_area_2.loc[0]
customers_in_area_2.loc[0,'CT']
"""
Explanation: OK, let's unpack that:
SELECT T, CT - gives short names to the different classes; in this case T refers to all twins and CT refers to a related item. The related item comes out in the second column.
RELATED T.locatedIn - gets all of the elements that have a locatedIn relationship and stores them in CT.
WHERE CT.$dtId = 'line-2' - limits the query to items that have that relationship with line-2. This is the filter part.
Note that it seems all joined queries require a specific twin by name.
End of explanation
"""
customers_in_area_2.loc[0,'T']
l2_cust = pd.DataFrame(customers_in_area_2['T'].tolist())
l2_cust
l2_cust.satisfaction.describe()
customers.satisfaction.describe()
"""
Explanation: So let's look at the customers in line-2
End of explanation
"""
query = """
SELECT COUNT()
FROM DIGITALTWINS T
JOIN CT RELATED T.locatedIn
WHERE CT.$dtId IN ['line-0','line-1','line-2', 'line-3']
"""
customers_in_lines = query_to_df(query)
customers_in_lines
query = """
SELECT COUNT()
FROM DIGITALTWINS T
JOIN CT RELATED T.locatedIn
WHERE CT.$dtId IN ['line-2']
"""
customers_in_lines = query_to_df(query)
customers_in_lines
"""
Explanation: Customers in line 2 have higher satisfaction than customers in general.
How many people in each line
End of explanation
"""
query = """
SELECT line, customer
FROM DIGITALTWINS customer
JOIN line RELATED customer.locatedIn
WHERE line.$dtId IN ['line-0','line-1','line-2', 'line-3']
AND IS_OF_MODEL(customer, 'dtmi:mymodels:patron;1')
"""
customers_in_lines = query_to_df(query)
customers_in_lines
"""
Explanation: The rough part is that you can only get one count back, not a count per line, like you could with proper SQL. You also have to hard-code all of your $dtId values, as they require literals. Lame.
Here is the way around that.
SELECT line, customer <- select the columns that you want the query to return
AND IS_OF_MODEL(customer, 'dtmi:mymodels:patron;1') <- specify that customer refers to twins of the patron model.
You still have to hard-code the names of the lines, or rooms, or whatever, but it returns all of the customers in all of the lines.
End of explanation
"""
c_in_line = pd.concat(
[pd.DataFrame(customers_in_lines['line'].tolist()),
pd.DataFrame(customers_in_lines['customer'].tolist())],
axis=1
)
cols = c_in_line.columns.tolist()
cols[0] = 'line'
cols[4] = 'customer'
c_in_line.columns = cols
c_in_line
"""
Explanation: Easy enough to munge it into a dataframe:
End of explanation
"""
c_in_line.groupby('line').count()['customer']
"""
Explanation: How many people are in each line:
End of explanation
"""
c_in_line.groupby('line').mean()['satisfaction']
"""
Explanation: Which group of people has the highest satisfaction?
End of explanation
"""
|
bakerjd99/jacks
|
notebooks/Extracting SQL code from SSIS dtsx packages with Python lxml.ipynb
|
unlicense
|
# imports
import os
from lxml import etree
# set sql output directory
sql_out = r"C:\temp\dtsxsql"
if not os.path.isdir(sql_out):
os.makedirs(sql_out)
# set dtsx package file
ssis_dtsx = r'C:\temp\dtsx\ParseXML.dtsx'
if not os.path.isfile(ssis_dtsx):
print("no package file")
# read and parse ssis package
tree = etree.parse(ssis_dtsx)
root = tree.getroot()
root.tag
# collect unique lxml transformed element tags
ele_tags = set()
for ele in root.xpath(".//*"):
ele_tags.add(ele.tag)
print(ele_tags)
print(len(ele_tags))
"""
Explanation: Extracting SQL code from SSIS dtsx packages with Python lxml
Code for the blog post Extracting SQL code from SSIS dtsx packages with Python lxml
From Analyze the Data not the Drivel
End of explanation
"""
pfx = '{www.microsoft.com/'
exe_tag = pfx + 'SqlServer/Dts}Executable'
obj_tag = pfx + 'SqlServer/Dts}ObjectName'
dat_tag = pfx + 'SqlServer/Dts}ObjectData'
tsk_tag = pfx + 'sqlserver/dts/tasks/sqltask}SqlTaskData'
src_tag = pfx + \
'sqlserver/dts/tasks/sqltask}SqlStatementSource'
print(exe_tag)
print(obj_tag)
print(tsk_tag)
print(src_tag)
# extract sql source statements and write to *.sql files
total_bytes = 0
package_name = root.attrib[obj_tag].replace(" ","")
for cnt, ele in enumerate(root.xpath(".//*")):
if ele.tag == exe_tag:
attr = ele.attrib
for child0 in ele:
if child0.tag == dat_tag:
for child1 in child0:
sql_comment = attr[obj_tag].strip()
if child1.tag == tsk_tag:
dtsx_sql = child1.attrib[src_tag]
dtsx_sql = "-- " + \
sql_comment + "\n" + dtsx_sql
sql_file = sql_out + "\\" \
+ package_name + str(cnt) + ".sql"
total_bytes += len(dtsx_sql)
print((len(dtsx_sql),
sql_comment, sql_file))
with open(sql_file, "w") as file:
file.write(dtsx_sql)
print(('total bytes',total_bytes))
"""
Explanation: Code reformatted to better display on blog
End of explanation
"""
# scan package tree and extract sql source code
total_bytes = 0
package_name = root.attrib['{www.microsoft.com/SqlServer/Dts}ObjectName'].replace(" ","")
for cnt, ele in enumerate(root.xpath(".//*")):
if ele.tag == "{www.microsoft.com/SqlServer/Dts}Executable":
attr = ele.attrib
for child0 in ele:
if child0.tag == "{www.microsoft.com/SqlServer/Dts}ObjectData":
for child1 in child0:
sql_comment = attr["{www.microsoft.com/SqlServer/Dts}ObjectName"].strip()
if child1.tag == "{www.microsoft.com/sqlserver/dts/tasks/sqltask}SqlTaskData":
dtsx_sql = child1.attrib["{www.microsoft.com/sqlserver/dts/tasks/sqltask}SqlStatementSource"]
dtsx_sql = "-- " + sql_comment + "\n" + dtsx_sql
sql_file = sql_out + "\\" + package_name + str(cnt) + ".sql"
total_bytes += len(dtsx_sql)
print((len(dtsx_sql), sql_comment, sql_file))
with open(sql_file, "w") as file:
file.write(dtsx_sql)
print(('total sql bytes',total_bytes))
"""
Explanation: Original unformatted code
End of explanation
"""
|
folivetti/PIPYTHON
|
ListaEX_04.ipynb
|
mit
|
# Contador de palavras
import codecs
from collections import defaultdict
def ContaPalavras(texto):
for palavra, valor in ContaPalavras('exemplo.txt').items():
print (palavra, valor)
"""
Explanation: Exercise 01: Create a function ContaPalavras that takes the name of a text file as input and returns the frequency of each word contained in it.
End of explanation
"""
# converter data no formato 01-MAI-2000 em 01-05-2000
def ConverteData(data):
print (ConverteData('01-MAI-2000'))
"""
Explanation: Exercise 02: Create a function ConverteData() that receives a string in the format DAY-MONTH-YEAR and returns a string in the format DAY-MONTH_NUMBER-YEAR. Example:
'01-MAI-2000' => '01-05-2000'.
You can split the string into a list of strings as follows:
data = '01-MAI-2000'
lista = data.split('-')
print(lista) # ['01','MAI','2000']
And you can join it back together using join:
lista = ['01','05', '2000']
data = '-'.join(lista)
print(data) # '01-05-2000'
End of explanation
"""
# crie um dicionário em que a chave é um número de 2 a 12
# e o valor é uma lista de combinações de dois dados que resulta na chave
Dados =...
for chave, valor in Dados.items():
print (chave, valor)
"""
Explanation: Exercise 03: Create a dictionary named Dados whose keys are the numbers from 2 to 12 and whose values are lists containing all combinations of two dice values that add up to that key.
End of explanation
"""
# crie um pequeno dicionário de inglês para português e use para traduzir frases simples
import codecs
def Traduz(texto):
print (Traduz('exemplo.txt'))
"""
Explanation: Exercise 04: Create a dictionary where the keys are Portuguese words and the values are their English translations. Use all the words from the text of exercise 01.
Create a function Traduz() that receives the name of the text file as a parameter and returns a string with the translation.
End of explanation
"""
# cifra de César
import string
def ConstroiDic(n):
def Codifica(frase, n):
l = Codifica('Vou tirar dez na proxima prova', 5)
print (l)
print (Codifica(l,-5))
"""
Explanation: Exercise 05: The Caesar cipher is a simple way to encrypt a text. The procedure is simple:
given a number $n$,
build a substitution map in which each letter is replaced by the n-th letter after it in the alphabet. E.g.:
n = 1
A -> B
B -> C
...
n = 2
A -> C
B -> D
...
Encoding is done by replacing each letter of the sentence with its counterpart in the map.
To decode a sentence, simply build a map using $-n$ instead of $n$.
Create a function ConstroiDic() that receives a value n as input and builds a substitution map. Use the constant string.ascii_letters to obtain all the letters of the alphabet.
Note that the map is cyclic, i.e., for n=1 the letter Z must be replaced by the letter A. This can be done using the '%' operator.
Create a function Codifica() that receives as parameters a string containing a sentence and a value for n; this function must build the dictionary and return the encoded sentence.
To decode the text, simply call the function Codifica() passing -n as the parameter.
End of explanation
"""
# tabela periodica
"""
Explanation: Exercise 06: Write a function that reads the periodic table from a file (you will build this file) and stores it in a dictionary.
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('BZzNBNoae-Y', 640,480)
# velha a fiar
"""
Explanation: Exercise 07: Watch the video below and create a list with the characters from the song's lyrics.
Then, using two for loops, iterate over this list and write out the lyrics of the song.
End of explanation
"""
# dec - romano - dec
def DecRoman(x):
def RomanDec(r):
r = DecRoman(1345)
x = RomanDec(r)
print (r,x)
"""
Explanation: Exercise 08: Write a function that converts a decimal number to Roman numerals. For this, build a dictionary in which the keys are the decimal values and the values are the Roman equivalents.
The algorithm works as follows:
For each decimal value in the dictionary, from largest to smallest
While that value can still be subtracted from x
subtract the value from x and append the Roman equivalent to a string
Exercise 09: Write a function that converts a Roman numeral to decimal. For this, build a dictionary that is the inverse of the one from the previous exercise. The algorithm is:
For i from 0 up to the length of the Roman numeral string
build the string formed by letter i and letter i+1, if i is less than the string length - 1
build the string formed by letters i-1 and i, if i is greater than 0
if the first string is in the dictionary, add its value to x
otherwise, if the second string is NOT in the dictionary, add the value of letter i to x
End of explanation
"""
|
ernestyalumni/MLgrabbag
|
kaggle/kaggle.ipynb
|
mit
|
print( os.listdir( os.getcwd() ))
timeseries_pd = pd.read_hdf( 'train.h5')
timeseries_pd.describe()
timeseries_pd.head()
timeseries_pd.columns
print( len(timeseries_pd.columns) )
for col in timeseries_pd.columns: print(col)
timeseries_pd["timestamp"]; # Name: timestamp, dtype: int16
timeseries_pd[["id","timestamp"]]
timeseries_pd["timestamp"]
"""
Explanation: Sigma, Financial Time Series
End of explanation
"""
timeseries_pd.count()
timeseries_pd.size
"""
Explanation: Total number of data points in time series
End of explanation
"""
timeseries_pd.isnull().describe()
"""
Explanation: Dealing with Missing Values, NaN
cf. https://gallery.cortanaintelligence.com/Experiment/Methods-for-handling-missing-values-1
Replace missing values with the mean. For this age data, we assume that missing values are distributed similarly to the values that are present. The formal name for this assumption is Missing Completely at Random (MCAR). In this case, substituting values that represent the existing distribution, such as the mean, is a reasonable approach.
Replace missing values with the median. This is another justifiable way to handle missing-at-random data, although note that it gives a different answer. For categorical data, it's also common to use the mode, the most commonly occurring value.
cf. http://pandas.pydata.org/pandas-docs/stable/missing_data.html
The sections that became very useful were
Cleaning / filling missing data
Filling with a PandasObject
End of explanation
"""
timeseries_pd_meanclean = timeseries_pd.where( pd.notnull(timeseries_pd),timeseries_pd.mean(),axis='columns')
timeseries_pd_meanclean.describe()
timeseries_pd_meanclean.notnull().describe()
timeseries_pd_meanclean.head()
"""
Explanation: clean with mean - fill in missing values with the mean on each column
End of explanation
"""
timeseries_id=timeseries_pd_meanclean.sort_values(by=["id", "timestamp"])
timeseries_id.describe()
timeseries_id.head()
print( timeseries_id['id'].unique() )
print( len( timeseries_id['id'].unique() ))
timeseries_id.groupby('id').count()
"""
Explanation: "Each (financial) instrument has an id. "
End of explanation
"""
uids = timeseries_id['id'].unique()
print(uids)
timeseries_id.loc[ timeseries_id['id'] == 0 ] # this selects rows based on values in id column
# cf. http://stackoverflow.com/questions/17071871/select-rows-from-a-dataframe-based-on-values-in-a-column-in-pandas
train_data = []
for uid in uids:
train_data.append( timeseries_id.loc[ timeseries_id['id'] == uid ] )
print(train_data[500].describe() )
train_data[90].describe()
len(train_data)
"""
Explanation: Let uid $\in \mathbb{Z}^+$ represent a unique id for each financial instrument, and in this case, it's implemented with this command in pandas. Each 1 of the uids will be a training example.
End of explanation
"""
train_data_3d = np.dstack(train_data)
"""
Explanation: cf. https://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html
http://stackoverflow.com/questions/4341359/convert-a-list-of-2d-numpy-arrays-to-one-3d-numpy-array
We can also make this into a 3-dimensional numpy array:
End of explanation
"""
# clean with mean
timeseries_pd_meanclean = timeseries_pd.where( pd.notnull(timeseries_pd),timeseries_pd.mean(),axis='columns')
# order by id
timeseries_id=timeseries_pd_meanclean.sort_values(by=["id", "timestamp"])
uids = timeseries_id['id'].unique()
train_data = []
for uid in uids:
train_data.append( timeseries_id.loc[ timeseries_id['id'] == uid ] )
"""
Explanation: In Summary
End of explanation
"""
# data input to train on
obs_trainon = timeseries_pd[timeseries_pd["timestamp"]<907]
obs_trainon.describe();
obs_trainon_meanclean = obs_trainon.where(pd.notnull(obs_trainon),obs_trainon.mean(),axis='columns')
obs_trainon_meanclean.sort_values(by=['id','timestamp']);
def clean_tseries(timeseries_pd):
# clean the data
# I chose to fill in missing values, NaN values, with the mean, due to the distribution of the data
tseries_meanclean = timeseries_pd.where( pd.notnull(timeseries_pd),timeseries_pd.mean(),axis='columns')
# order by id. We want the first index to be id=1,2,..m, m representing number of training examples
tseries_id=tseries_meanclean.sort_values(by=['id','timestamp'])
uids = tseries_id['id'].unique()
train_data = []
for uid in uids:
train_data.append( tseries_id.loc[ tseries_id['id'] == uid])
return train_data
res_clean_obs_trainon = clean_tseries(obs_trainon)
print(len(res_clean_obs_trainon)); res_clean_obs_trainon[0].describe()
res_clean_obs_trainon[0].values.shape
res_clean_obs_trainon[0].drop(['y'],axis=1).describe()
print( type(res_clean_obs_trainon[0]['y']) )
print( type(res_clean_obs_trainon[0][['y']]))
print( res_clean_obs_trainon[0][['y']].values.shape)
"""
Explanation: Practice; simulating the kaggle gym, with its "features" (the input data you want to test on)
Simulating what kaggle uses to train on
End of explanation
"""
def clean_tseries(timeseries_pd):
# clean the data
# I chose to fill in missing values, NaN values, with the mean, due to the distribution of the data
tseries_meanclean = timeseries_pd.where( pd.notnull(timeseries_pd),timeseries_pd.mean(),axis='columns')
# order by id. We want the first index to be id=1,2,..m, m representing number of training examples
tseries_id=tseries_meanclean.sort_values(by=['id','timestamp'])
uids = tseries_id['id'].unique()
train_data = []
for uid in uids:
train_data.append( tseries_id.loc[ tseries_id['id'] == uid].values )
return train_data
res_clean_obs_trainon = clean_tseries(obs_trainon)
print(len(res_clean_obs_trainon)); res_clean_obs_trainon[0].shape
"""
Explanation: So we actually want the numpy array for calculations.
End of explanation
"""
def clean_tseries(timeseries_pd):
# clean the data
# I chose to fill in missing values, NaN values, with the mean, due to the distribution of the data
tseries_meanclean = timeseries_pd.where( pd.notnull(timeseries_pd),timeseries_pd.mean(),axis='columns')
# order by id. We want the first index to be id=1,2,..m, m representing number of training examples
tseries_id=tseries_meanclean.sort_values(by=['id','timestamp'])
uids = tseries_id['id'].unique()
train_data = []
for uid in uids:
train_data.append( tseries_id.loc[ tseries_id['id'] == uid] )
train_data_split = []
for row in train_data:
train_data_split.append( ( row.drop(['y'],axis=1).values, row[['y']].values ) )
return train_data_split
res_clean_obs_trainon = clean_tseries(obs_trainon)
print( len(res_clean_obs_trainon))
print( type( res_clean_obs_trainon[0] ) ); print(len(res_clean_obs_trainon[0]));
print( res_clean_obs_trainon[0][0].shape);print(res_clean_obs_trainon[0][1].shape)
"""
Explanation: So we actually need to split up the input data $X$ from the output data $y$
End of explanation
"""
np.array( [3]).shape
"""
Explanation: Simulating observations, features
End of explanation
"""
def clean_test(timeseries_pd):
# clean the data
# I chose to fill in missing values, NaN values, with the mean, due to the distribution of the data
tseries_meanclean = timeseries_pd.where( pd.notnull(timeseries_pd),timeseries_pd.mean(),axis='columns')
# order by id. We want the first index to be id=1,2,..m, m representing number of training examples
tseries_id=tseries_meanclean.sort_values(by=['id','timestamp'])
uids = tseries_id['id'].unique()
train_data = []
for uid in uids:
train_data.append( tseries_id.loc[ tseries_id['id'] == uid].values )
return train_data
#this corresponds to what kaggle calls FEATURES, features
obs_predicton = timeseries_pd[timeseries_pd["timestamp"]==908]
obs_predictonX = obs_predicton.drop('y',axis=1)
obs_predicton_cleaned = clean_test( obs_predictonX)
print(type(obs_predicton_cleaned));print(type(obs_predicton_cleaned[0]));print(obs_predicton_cleaned[0].shape)
obs_predicton_cleaned[5][0][0]
def id_only(cleaned_X_data):
m = len(cleaned_X_data)
result = []
for idx in range(m):
id = cleaned_X_data[idx][0][0]
result.append(int(id))
return result
res_id_only = id_only(obs_predicton_cleaned)
res_id_only[:10]
print(len(res_id_only))
def just_id(cleaned_X_data, predicted_y):
    # assert len(cleaned_X_data) == len(predicted_y)
    pass
print( np.array( [[5],[3],[2]]).shape)
np.array( [[5]]).flatten()[0]
obs_predictedy = obs_predicton['y']
# simulate what I'm going to get
list( obs_predictedy.values.reshape(968,1,1) )[0]
def y_only(predicted_y):
result = []
for row in predicted_y:
y = row.flatten()[0]
result.append(y)
return result
res_y_only = y_only( list( obs_predictedy.values.reshape(968,1,1) ) )
print(len(res_y_only)); res_y_only[:10]
pd_predictedon = pd.DataFrame.from_dict( dict(id=res_id_only,y=res_y_only))
pd_predictedon.describe()
pd_predictedon.isnull().describe()
"""
Explanation: So we also need something to clean the test data X that we want to predict on.
End of explanation
"""
t_all_cleaned = clean_tseries(timeseries_pd)
len(t_all_cleaned)
print( type(t_all_cleaned[0]));print( len(t_all_cleaned[0]));
print( t_all_cleaned[0][0].shape); print(t_all_cleaned[0][1].shape)
for i in range(13):
print( t_all_cleaned[i][0].shape, t_all_cleaned[i][1].shape )
import theano
theano.version.full_version
"""
Explanation: Making it work with the different $T$
End of explanation
"""
|
tensorflow/examples
|
templates/notebook.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import numpy as np
"""
Explanation: Title
Notebook originally contributed by: {link to you}
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/template/notebook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/template/notebook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
{Fix these links}
Overview
{Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.}
Setup
End of explanation
"""
#Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
# Compile the model for training
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
"""
Explanation: {Put all your imports and installs up into a setup section.}
Notes
For general instructions on how to write docs for Tensorflow see Writing TensorFlow Documentation.
The tips below are specific to notebooks for tensorflow.
General
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Always include the three __future__ imports.
Save the notebook with the Table of Contents open.
Write python3 compatible code.
Keep cells small (~max 20 lines).
Working in GitHub
Be consistent about how you save your notebooks, otherwise the JSON-diffs will be a mess.
This notebook has the "Omit code cell output when saving this notebook" option set. GitHub refuses to diff notebooks with large diffs (inline images).
reviewnb.com may help. You can access it using this bookmarklet:
javascript:(function(){ window.open(window.location.toString().replace(/github\.com/, 'app.reviewnb.com').replace(/files$/,"")); })()
To open a GitHub notebook in Colab use the Open in Colab extension (or make a bookmarklet).
The easiest way to edit a notebook in GitHub is to open it with Colab from the branch you want to edit. Then use File --> Save a copy in GitHub, which will save it back to the branch you opened it from.
For PRs it's helpful to post a direct Colab link to the PR head: https://colab.research.google.com/github/{user}/{repo}/blob/{branch}/{path}.ipynb
Code Style
Notebooks are for people. Write code optimized for clarity.
Demonstrate small parts before combining them into something more complex. Like below:
End of explanation
"""
|
yw-fang/readingnotes
|
machine-learning/McKinney-pythonbook2013/chapter03-note.ipynb
|
apache-2.0
|
a = 5
a
import numpy as np
from numpy.random import randn
data = {i: randn() for i in range(7)}
print(data)
data1 = {j: j**2 for j in range(5)}
print(data1)
"""
Explanation: 阅读笔记
作者:方跃文
Email: fyuewen@gmail.com
时间:始于2017年9月12日
第三章笔记始于2017年9月28日23:38,结束于 2017年10月17日
第三章 IPtyhon: 一种交互式计算和开发环境
IPython鼓励一种“执行探索——execute explore”精神,这就区别于传统的“编辑——编译——执行 edit——complie——run”
IPython 基础
End of explanation
"""
an_apple = 27
an_example = 42
an_ #按下tab键就会看到之前定义的变量会被显示出来,方便我们做出选择。
"""
Explanation: Tab 键自动完成
在python shell中,输入表达式时候,只要按下Tab键,当前命名空间中任何已输入的字符串相匹配的变量(对象、函数等)就会被找出来:
End of explanation
"""
import IPython
print(IPython.sys_info())
a = [1,2,3]
a.append(0)
a
import datetime
dt = datetime.time(22,2,2)
dd = datetime.date(2017,2,2)
print("%s %s" % (dt,dd))
"""
Explanation: 此外,我们还可以在任何对象之后输入一个句点来方便地补全方法和属性的输入:
End of explanation
"""
./ #按下Tab键, 如果你当前目录下有文件或者目录,会给出提示。
"""
Explanation: Tab键自动完成成功不只可以搜索命名空间和自动完成对象或模块属性。当我们输入任何看上去像文件路径的东西时(即便是在一个Python字符串中),按下Tab键即可找出电脑文件系统中与之匹配的东西。
End of explanation
"""
b=[1,2,3]
b?
"""
Explanation: 内省
在变量的前面或者后面加上一个问号就可以将有关该对象的一些通用信息显示出来,这个就是内省,即object introspection.
End of explanation
"""
def add_numbers(a,b):
#引号部分则为docstring
"""
Add two numbers together
Returns
-------
the_sum: type of arguments
"""
return a+b
add_numbers(1,2)
add_numbers?
#加一个问号执行会显示上述我已经编写好的docstring,这样在忘记函数作用的时候还是很不错的功能。
add_numbers??
#加两个问号则会显示该函数的源代码
"""
Explanation: 上面执行完,jupyter会跳出一个小窗口并且显示如下:
Type: list
String form: [1, 2, 3]
Length: 3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items
如果对象是一个函数或者实例方法,则它的docstring(如果有的话)也会显示出来。例如:
End of explanation
"""
import numpy as np
np.*load*?
"""
Explanation: ?还有一个用法,即搜索IPython的命名空间,类似于标准UNIX或者Windows命令行中的那种用法。一些字符再配以通配符即可显示出所有与该通配符表达式相匹配的名称。例如我们可以列出NumPy顶级命名空间中含有“load"的所有函数。
End of explanation
"""
def f(x,y,z):
return (x+y)/z
a=5
b=6
c=8
result = f(a,b,c)
print(result)
#执行
%run ./chapter03/simple01.py
"""
Explanation: 上述执行后,jupyter notebooK会给出:
np.loader
np.load
np.loads
np.loadtxt
np.pkgload
%run命令
在IPython会话中,所有文件都可以通过%run命令当作Python程序来运行。假设当前目录下的chapter03文件夹中有个simple01.py的脚本,其中内容为
End of explanation
"""
result
"""
Explanation: 上述脚本simple01.py是在一个空的命名空间中运行的,没有任何import,也没有定义任何其他的变量,所以其行为跟在命令行运行是一样的。此后,该脚本中所定义的变量(包括脚本中的import、函数、全局变量)就可以在当前jupyter notebook中进行访问(除非有其他错误或则异常)
End of explanation
"""
%run -i ./chapter03/simple02.py #-i即interactive
"""
Explanation: 如果Python脚本中需要用到命令行参数(通过 sys.argv访问),可以将参数放到文件路径的后面,就像在命令行执行那样。
如果希望脚本执行的时候会访问当前jupyter notebook中的变量,应该用…%run -i script.py,例如
我在chapter03文件夹中写下
x = 32
add = x + result
print('add is %d' % (add))
End of explanation
"""
#下面我把我在ipython中执行的代码
$ ipython
Python 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:x = 1
:if x == 1:
: print("x is 1.")
:
:--
x is 1.
"""
Explanation: 中断执行的代码
任何代码在执行时候(无论是通过%run执行的脚本还是长时间运行的命令),只要按下按下“Ctrl+C”,就会引发一个keyboardInterrupt。出一些特殊的情况之外,绝代分python程序都将因此立即停止执行。
例如:
当python代码已经调用了某个已编译的扩展模块时,按下Ctrl+C将无法立即停止执行。
在这种情况下,要么需要等待python解释器重新获得控制权,要么只能通过
操作系统的任务管理器强制执行终止python进程。
执行剪贴板中的代码
在IPython shell(注意,我这里强调一下,并不是在jupyter notebook中,而是ipython shell,虽然我有时候
把他们两个说的好像等效一样,但是两者还是不同的)中执行代码的最简单方式就是粘贴剪贴板中的代码。虽然这种做法很粗糙,
但是在实际工作中就很有用。比如,在开发一个复杂或耗时的程序时候,我们可能需要一段
一段地执行脚本,以便查看各个阶段所加载的数据以及产生的结果。又比如说,在网上找到了
一个何用的代码,但是又不想专门为其新建一个.py文件。
多数情况下,我们可以通过“Ctrl-Shift-V”将粘贴版中的代码片段粘贴出来(windows中)。
%paste 和 %cpaste 这两个魔术函数可以粘贴剪贴板中的一切文本。在ipython shell中这两个函数
可以帮助粘贴。后者%cpaste相比于%paste只是多了粘贴代码的特殊提示符,可以一行一行粘贴。
End of explanation
"""
%run ./pydata-book/ch03/ipython_bug.py
"""
Explanation: IPython 跟编辑器和IDE之间的交互
某些文本编辑器(EMACS, VIM)带有一些能将代码块直接发送到ipython shell的第三方扩展。某些IDE中也
预装有ipython。
对于我自己而言,我喜欢用git bash,然后在里面折腾vim. 当然我有时候也用IDE
键盘快捷键
IPython提供了许多用于提示符导航(Emacs文本编辑器或者UNIX bash shell的用户对此会很熟悉)和查阅历史shell命令的快捷键。因为我不喜欢在ipython shell中写code,所以我就跳过了。如果有人读到我的笔记发现这里没有什么记录的话请自行查找原书。
异常和跟踪
如果%run某段脚本或执行某条语句发生了一场,IPython默认会输出整个调用栈跟踪traceback,其中还会附上调用栈各点附近的几行代码作为上下文参考。
End of explanation
"""
import numpy as np
from numpy.random import randn
a = randn(3,3,3)
a
%timeit np.dot(a,a)
"""
Explanation: 拥有额外的上下文代码参考是它相对于标准python解释器的一大优势。上下文代码参考的数量可以通过%mode魔术命令进行控制,既可以少(与标准python解释器相同)也可以多(带有函数参数值以及其他信息)。本章后面还会讲到如果在出现异常之后进入跟踪栈进行交互式的事后调试post-mortem debugging.
魔术命令
IPython有一些特殊命令(魔术命令Magic Command),它们有的为常见任务提供便利,有的则使你能够轻松控制IPtython系统的行为。魔术命令是以百分号%为前缀的命令。例如,我们可以通过 %timeit 这个魔术命令检测任意Python语句(如矩阵乘法)的执行时间(稍后对此进行详细讲解):
End of explanation
"""
%reset?
"""
Explanation: 魔术命令可以看作运行于IPython系统中的命令行程序。它们大都还有一些“命令行”,使用“?”即可查看其选项
End of explanation
"""
a = 1
a
'a' in _ip.user_ns # 不知道为什么这里没有执行通过?
%reset -f
'a' in __ip.user__ns
"""
Explanation: 上面执行后,会跳出它的docstring
End of explanation
"""
#在terminal 输入
ipython --pylab
#回显中会出现部分关于matplotlib的字段
#IPython 6.2.0 -- An enhanced Interactive Python. Type '?' for help.
#Using matplotlib backend: Qt5Agg
"""
Explanation: 常用的python魔术命令
| 命令 | 功能|
| ------------- | ------------- |
| %quickref | 显示IPython的快速参考 |
| %magic | 显示所有魔术命令的详细文档 |
| %debug | 从最新的异常跟踪的底部进入交互式调试器|
| %hist | #打印命令的输入(可选输出)历史 |
| %pdb | 在异常发生后自动进入调试器 |
| %paste | 执行粘贴版中的python代码 |
| %reset | 删除interactive命名空间中的全部变量、名称 |
| %page OBJECT | 通过分页器打印输出 OBJECT |
| %run script.py | 在IPython中执行脚本 |
| %run statement | 通过cProfile执行statement,并打印分析器的输出结果|
| %time statement | 报告statement的执行时间|
| %timeit statement | 多次执行statement以计算系统平均执行时间。对那些执行时间非常小的代码很有用|
| %who、%who_ls、%whos | 显示interactive命名空间中定义的变量,信息级别、冗余度可变 |
| %xdel variable | 删除variable,并尝试清楚其在IPython中的对象上的一切引用 |
基于Qt的富GUI控制台
IPython团队开发了一个基于Qt框架(其摩的是为终端应用程序提供诸如内嵌图片、多行编辑、语法高亮之类的富文本编辑功能)的GUI控制平台。如果你已经安装了PyQt或者Pyside,使用下面命令来启动的话即可为其添加绘图功能。
ipython qtconsole --pylab=inline
Qt控制台可以通过标签页的形式启动多个IPython进程,这就使得我们可以在多个任务之间轻松地切换。它也开业跟IPython HTML Notebote (即我现在用的jupyter noteboo)共享同一个进程,稍后我们对此进行演示说明。
matplotlib 集成与pylab模式
导致Ipython广泛应用于科学计算领域的部分原因是它跟matplotlib这样的库以及GUI工具集默契配合。
通常我们通过在启动Ipython时候添加--pylab标记来集成matplotlib
End of explanation
"""
#原书给了一个在ipython命令行的例子
#但是,我这里用jupyter notebook来进行演示
# 我这里的代码跟原书可能不是很相同,
#我参考的是matplotlib image tutorial
%matplotlib inline
import matplotlib.image as mpimg
import numpy as np
import matplotlib.pyplot as plt
img=mpimg.imread('pydata-book/ch03/stinkbug.png')
plt.imshow(img)
#Here, we use Pillow library to resize the figure
from PIL import Image
import matplotlib.pyplot as plt
img = Image.open('pydata-book/ch03/stinkbug.png')
img1 = img
img.thumbnail((64,64), Image.ANTIALIAS) ## resizes image in-place
img1.thumbnail((256,256), Image.ANTIALIAS)
imgplot = plt.imshow(img)
img1plot = plt.imshow(img1)
%matplotlib inline
import matplotlib.pylab as plab
from numpy.random import randn
plab.plot(randn(1000).cumsum())
"""
Explanation: 上述的操作会导致几个结果:
IPython 会启动默认GUI后台集成,这样matplotib绘图窗口创建就不会出现问题;
Numpy和matplotlib的大部分功能会被引入到最顶层的interactive命名空间以产生一个交互式的计算环境(类似matlab等)。也可以通过%gui对此进行手工设置(详情请执行%gui?)
End of explanation
"""
#在ipython terminal执行
%run chapter03/simple01.py
"""
Explanation: 使用命令历史
IPython 维护着一个位于硬盘上的小型数据库。其中含有你执行过的每条命令的文本。这样做有几个目的:
只需很少的按键次数即可搜索、自动完成并执行之前已经执行过的命令
在会话间持久化历史命令
将输入/输出历史纪录到日志中去
搜索并重用命令历史
IPython倡导迭代、交互的开发模式:我们常常发现自己总是重复一些命令,假设我们已经执行了
End of explanation
"""
a=3
a
b=4
b
__
c=5
c
_
"""
Explanation: 如果我们想在修改了simple01.py(当然也可以不改)后再次执行上面的操作,只需要输入 %run 命令的前几个字符并按下“ctrl+P”键或者向上箭头就会在命令历史的第一个发现它. (可能是因为我用的是git bash on windows,我自己并未测试成功书中的这个操作;但是在Linux中,我测试是有效的)。此外,ctrl-R可以实现部分增量搜索,跟Unix shell中的readline所提供的功能一样,并且ctrl-R将会循环搜索命令历史中每一条与输入相符的行。
例如,第一次ctrl-R后,我输入了c,ipython返回给我的是:
In [6]: c=a+b
I-search backward: c
再按依次ctrl-R,则变成了历史中含c这个关键字的另一个命令
In [6]: c = c + 1
I-search backward: c
输入和输出变量
IPython shell和jupyter notebook中,最近的两个输出分别保存在 _ 和 __ 两个变量中
End of explanation
"""
foo = 'bar'
foo
_i9
_9
"""
Explanation: 输入的文本被保存在名为 _iX 的变量中,其中X是输入行的行号。每个输入变量都有一个对应的输出变量 _X。例如:
End of explanation
"""
%hist
"""
Explanation: 由于输入变量是字符串,因此可用python的exec关键字重新执行: exec _i9
有几个魔术命令可用于输入、输出历史。%hist用于打印全部或部分历史,可以选择是否带行号
End of explanation
"""
%reset
a #由于上面已经清理了命名空间,所以python并不知道a是多少。
"""
Explanation: %reset 用于清空 interactive 命名空间,并可选择是否清空输入和输出缓存。%xdel 用于从IPython系统中移除特定对象的一切引用。
End of explanation
"""
%logstart
"""
Explanation: 注意:在处理大数据集时,需注意IPython的输入和输出历史,它会导致所有对象引用都无法被垃圾收集器处理(即释放内存),即使用del关键字将变量从interactive命名空间中删除也不行。对于这种情况,谨慎地使用%xdel和%reset将有助于避免出现内存方面的问题。
记录输入和输出
IPython能够记录整个控制台会话,包括输入和输出。执行 %logstart 即可开始记录日志
End of explanation
"""
%logstart?
"""
Explanation: IPython的日志功能开在任何时刻开气,以便记录整个会话。%logstart的具体选项可以参考帮助文档。此外还可以看看几个与之配套的命令:%logoff, %logon, %logstate, 以及 %logstop
End of explanation
"""
In [4]: my_current_dir = !pwd
In [5]: my_current_dir
Out[5]: ['/home/ywfang']
返回的python对象my_current_dir实际上是一个含有控制台输出结果的自定义列表类型。
"""
Explanation: 与操作系统交互
IPython 的另一重要特点就是它跟操作系统的shell结合地非常紧密。即我们可以直接在IPython中实现标准的Windows或unix命令行活动。例如,执行shell命令、更改目录、将命令的执行结果保存在Python对象中等。此外,它还提供了shell命令别名以及目录书签等功能。
下表总结了用于调用shell命令的魔术命令及其语法。本笔记后面还会介绍这些功能。
| 命令 | 说明|
| ------------- | ------------- |
| !cmd | 在系统shell中执行cmd |
| output = !cmd args | 执行cmd,将stdout存放在output中|
| %alias alias_name cmd | 为系统shell命令定义别名|
| %bookmark | 使用IPtyhon的目录书签功能|
| %cd directory | 将系统工作目录更改为directory|
| %pwd | 返回当前工作目录 |
| %pushed directory | 将当前目录入栈,并转向目标目录 (这个不懂??)|
| %popd | 弹出栈顶目录,并转向该目录 |
| %dirs | 返回一个含有当前目录栈的列表 |
| %dhist | 打印目录访问历史 |
| %env | 以dict形式返回系统环境变量 |
shell 命令和别名
在 IPython 中,以感叹号开头的命令行表示其后的所有内容需要在系统shell中执行。In other words, 我们可以删除文件(如rm或者del)、修改目录或执行任意其他处理过程。甚至我们还可启动一些将控制权从IPython手中夺走的进程(比如另外再启动一个Python解释器):
yang@comet-1.edu ~ 19:17:51 >ipython
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.1.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]:
In [1]: !python
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
此外,还可将shell命令的控制台输出存放到变量中,只需要将 !开头的表达式赋值给变量即可。例如在Linux中
End of explanation
"""
#在ipython shell中
In [1]: foo = 'note*'
In [2]: !ls $foo
notebook.log
"""
Explanation: 在使用!时,IPython 还允许使用当前环境中定义的python值。只需在变量名前面加上美元符号($)即可:
End of explanation
"""
%bookmark db D:\PhDinECNU #windows中的写法;如果是Linux,应该为/d/PhDinECNU/
%bookmark dt D:\temp
%cd db
"""
Explanation: 魔术命令 %alias 可以为shell命令自定义简称。例:
In [3]: %alias ll ls -l
In [4]: ll
total 426
drwxr-xr-x 1 YWFANG 197609 0 9月 21 22:47 appendix-A
-rw-r--r-- 1 YWFANG 197609 47204 10月 8 10:48 appendix-A-note.ipynb
可以一次执行多条命令,只需要将她们写在一行并以分号隔开(在Windows中,这个可能不可行,但是Linux可以通过)
In [3]: %alias test_fang (ls -l; cd ml; ls -l; cd ..)
In [4]: test_fang
total 211
drwxr-xr-x 2 ywfang yun112 2 Aug 22 18:45 Desktop
-rw-r--r-- 1 ywfang yun112 11148 Jul 2 22:02 bashrc-fang-20170703
drwxr-xr-x 9 ywfang yun112 9 Jul 7 01:45 glibc-2.14
drwxr-xr-x 3 ywfang yun112 3 Jul 2 23:10 intel
-rwxr-xr-x 1 ywfang yun112 645 Sep 19 04:51 jupter_notebook
drwxr-xr-x 3 ywfang yun112 5 Jul 7 18:51 materials
drwxr-xr-x 20 ywfang yun112 21 Aug 22 18:02 miniconda3
drwxr-xr-x 3 ywfang yun112 3 Sep 4 18:39 ml
-rw-r--r-- 1 ywfang yun112 826 Sep 30 08:35 notebook.log
drwxr-xr-x 3 ywfang yun112 4 Aug 22 18:21 pwwork
drwxr-xr-x 6 ywfang yun112 14 Aug 22 19:04 software
drwxr-xr-x 5 ywfang yun112 6 Sep 4 18:56 tensorflow
drwxr-xr-x 5 ywfang yun112 6 Sep 4 18:53 tf1.2-py3.6
drwxr-xr-x 5 ywfang yun112 6 Sep 4 18:54 tf12-py36
drwxr-xr-x 6 ywfang yun112 518 Jun 20 01:33 tool
total 1
drwxr-xr-x 3 ywfang yun112 3 Sep 4 18:39 tensorflow
注意,IPython会在会话结束时立即"忘记"我们前面所定义的一切别名。如果要进行永久性的别名设置,需要使用配置系统。之后会进行介绍。
目录书签系统
IPython 有一个简单的目录书签系统,它使我们能保存常用目录的别名以便方便地快速跳转。比如,作为一个狂热的dropbox用户,为了能够快速地转到dropbox目录,可以定义一个书签:
End of explanation
"""
%bookmark -l
"""
Explanation: 定义好之后就可以在ipython shell(或jupyter notebook)中使用魔术命令%cd db来使用这些标签
如果书签的名字与当前工作目录中某个名字冲突时,可通过 -b 标记(起作用是覆写)使用书签目录。%bookmark的 -l 选项的作用是列出所有书签。
End of explanation
"""
%reset
%cd D:\PhDinECNU\readingnotes\readingnotes\machine-learning\McKinney-pythonbook2013
%run pydata-book/ch03/ipython_bug.py
%debug
"""
Explanation: 软件开发工具
IPython 不仅是交互式环境和数据分析环境,同时也非常适合做开发环境。在数据分析应用程序中,最重要的是要拥有正确的代码。IPython继承了Python内置的 pdb 调试器。 此外,IPython 提供了一些简单易用的代码运行时间以及性能分析的工具。
交互式调试器
IPython的调试器增加了 pdb ,如 Tab 键自动完成、语法高亮、为异常跟踪的每条信息添加上下文参考等。调试代码的最佳时机之一就是错误刚发生的时候。 %debug 命令(在发生异常之后立即输入)将会条用那个“时候”调试器,并直接跳转到发生异常的那个 栈帧 (stack frame)
End of explanation
"""
执行%pdb命令可以让IPython在出现异常之后直接调用调试器,很多人都认为这一功能很实用。
"""
Explanation: 在这个 pdb 调试器中,我们可以执行任意Python 代码并查看各个栈帧中的一切对象和数据,这就相当于解释器还留了条后路给我们。默认是从最低级开始的,即错误发生的地方,在上面ipdb>后面输入u (up) 或者 d (down) 即可在栈跟踪的各级别之间进行切换。
End of explanation
"""
%run -d ./pydata-book/ch03/ipython_bug.py
"""
Explanation: 此外调试器还能为代码开发提供帮助,尤其当我们想设置断点或者对函数/脚本进行单步调试时。实现这个目的的方式如下所述。
用带有 -d 选项的 %run 命令,这将会在执行脚本文件中的代码之前先打开调试器。必须立即输入 s(或step)才能进入脚本:
End of explanation
"""
%run -d ./pydata-book/ch03/ipython_bug.py
"""
Explanation: 在此之后,上述文件执行的方式就全凭我们自己说了算了。比如说,在上面那个异常中,我们可以在调用 works_fine 方法的地方设置一个断点,然后输入 c (或者 continue) 使脚本一直运行下去直到该断点时为止。
End of explanation
"""
%run ./pydata-book/ch03/ipython_bug.py
%debug
"""
Explanation: 如果想精通这个调试器,必须经过大量的实践。
虽然大部分 IDE 都会自带调试器,但是 IPython 中调试程序的方法往往会带来更高的生产率。
下面是常用的 IPython 调试器命令
|命令 | 功能 |
|------| ------|
| h(elp) | 显示命令列表 |
| help command | 显示 command 的文档 |
| c(ontinue) | 恢复程序的执行 |
| q(uit) | 推出调试器,不再执行任何代码 |
| b(reak) number | 在当前文件的第 number 行设置一个断点 |
| b path/to/file.py:number | 在制定文件的第 numbe 行设置一个断点 |
| s(tep) | 单步进入函数调用 |
| n(ext) | 执行当前行,并前进到当前级别的下一行 |
| u(p)/d(own) | 在函数调用栈中向上或者向下移动 |
| a(rgs) | 显示当前函数的参数 |
| debug statement | 在新的(递归)调试其中调用语句 statement |
| l(ist) statement | 显示当前行,以及当前栈级别上的上下文参考代码 |
| w(here) | 打印当前位置的完整栈跟踪 (包括上下文参考代码) |
调试器的其他使用场景
第一,使用 set_trace 这个特别的函数(以 pdb.set_trace 命名),这差不多可算作一种 “穷人的断点”(意思是这种断点方式很随便,是硬编码的)。下面这两个方法可能会在我们的日常工作中排上用场(我们也可像作者一样直接将其加入IPython配置中):
第一个函数 set_trace 很简单。我们可以将其放在代码中任何希望停下来查看一番的地方,尤其是那些发生异常的地方:
End of explanation
"""
import time
iterations = 100
start = time.time()
for i in range(iterations):
    pass  # do something here
elapsed_per = (time.time() - start) / iterations
"""
Explanation: 测试代码执行的时间: %time 和 %timeit
对于大规模数据分析,我们有时候需要对时间有个规划和预测。特别是对于其中最耗时的函数。IPython中可以轻松应对这种情况。
使用内置的 time 模块,以及 time.clock 和 time.time 函数 手工测试代码执行时间是令人烦闷的事情,因为我们必须编写许多一样的公式化代码:
End of explanation
"""
# a huge string array
strings = ['foo', 'foobar', 'baz', 'qux', 'python', 'God']*100000
method1 = [x for x in strings if x.startswith('foo')]
method2 = [x for x in strings if x[:3]=='foo']
#These two methos look almost same, but their performances are different.
# See below, I use %time to calculate the excutable time.
%time method1 = [x for x in strings if x.startswith('foo')]
%time method2 = [x for x in strings if x[:3]=='foo']
"""
Explanation: 由于这是一个非常常用的功能,所以IPython提供了两个魔术工具 %time 和 %timeit 来自动完成该过程。%time 一次执行一条语句,然后报告总的执行时间。假设我们有一大堆字符串,希望对几个“能选出具有特殊前缀的字符串”的函数进行比较。下面是一个拥有60万字字符串的数组,以及两个不同的“能够选出其中以foo开头的字符串”的方法:
End of explanation
"""
%timeit method = [ x for x in strings if x.startswith('foo')]
%timeit method = [x for x in strings if x[:0]=='foo']
"""
Explanation: Wall time是我们感兴趣的数字。所以,看上去第一个方法耗费了接近2倍的时间,但是这并非一个非常精确的结果。如果我们队相同语句多次执行%time的话,就会发现其结果是变化的。为了得到更加精确的结果,我们需要使用魔术函数 %timeit。对于任意语句,它会自动多次执行以产生一个非常精确的平均执行时间
End of explanation
"""
x = 'foobar'
y = 'foo'
%timeit x.startswith(y)
%timeit x[:3]==y
"""
Explanation: 这个很平淡无奇的离子告诉我们这样一个道理:我们有必要了解Python标准库、Numpy、Pandas 以及 本书所用其他库的性能特点。在大型数据分析中,这些不起眼的毫秒数会不断累积产生蝴蝶效应。
对于那些执行时间非常短(甚至是微妙 1e-6 s;或者 纳秒 1e-9 s)的分析语句和函数而言,%timeit 是非常有用的。虽然对于单次执行而言,这些时间小到几乎可以忽略不计。但是我们只要举一个例子,就会发现我们很有必要“分秒必争”:
同样执行100万次一个20微妙的函数,所化时间要比一个5微妙的多出15秒。
在上面我运行的那个例子中,我们可以直接对两个字符串运算进行比较,以了解其性能特点:
End of explanation
"""
import numpy as np
from numpy.linalg import eigvals
def run_experiment(niter = 100):
K = 100
results = []
for _ in range(niter):
mat = np.random.randn(K,K)
max_eigenvalue = np.abs(eigvals(mat)).max()
results.append(max_eigenvalue)
return results
some_results = run_experiment()
print('Largest one we saw: %s' %(np.max(some_results)))
"""
Explanation: 基本性能分析: %run 和 %run -p
代码的性能分析跟代码执行时间密切关联,只是它关注的是耗费时间的位置。Python中,cProfile模块主要用来分析代码性能,它并非转为python设计。cProfile在执行一个程序代码或代码块时,会记录各函数所耗费的时间。
cProfile一般是在命令行上使用的,它将执行整个程序然后输出各个函数的执行时间。
下面,我们就给出了一个简单的例子:在一个循环中执行一些线性代数运算(即计算一个100 * 100 的矩阵的最大本征值绝对值)
End of explanation
"""
!python -m cProfile chapter03/simple03.py
"""
Explanation: 我们将上述脚本内容写入 simple03.py (目录为当前目录下的chapter03目录中),并且执行
End of explanation
"""
!python -m cProfile -s cumulative chapter03/simple03.py
"""
Explanation: 即使这里不明白脚本里面具体做的事情,那也没有关系,反正先这么照着书里先做着,感受下cProfile的作用。
我们可以看到,输出结果是按照函数名排序的(ordered by standard name)。这样就比较难看出哪些地方是最花时间的,因此通常用 -s 标记,换一种排序的规则:
End of explanation
"""
%prun -l 7 -s cumulative run_experiment()
"""
Explanation: 我们看到此时的排序规则为 Ordered by: cumulative time,这样我们只需要看 cumtime 列即可发现各函数所耗费的总计时间。 注意如果一个函数A调用了函数B,计时器并不会停止而重新计时。cProfile记录的是各函数调用的起始和结束时间,并依次计算总时间。
除了命令行用法外,cProfile 还可以通过编程的方式分析任意代码块的性能。IPython为此提供了一个方便的借口,即 %prun 命令和带 -p 选项的 %run。 %prun的格式跟 cProfile 的差不多,但它分析的是 Python 语句 而不是整个 .py 文件:
End of explanation
"""
# A list of dotted module names of IPython extensions to load.
c.TerminalIPythonApp.extensions = [
'line_profiler',
]
#这个代码可以确认 line_profiler 是否被正常的安装和load
import line_profiler
line_profiler
"""
Explanation: 在ipython terminal中,执行 %run -p -s cumulative chapter03/simple03.py也能达到上述效果,但是却无法退出IPython。
逐行分析函数性能
有时候,%prun (或者其他基于cProfile的性能分析手段)所得到的信息要么不足以说明函数的执行时间,要么难以理解(按函数名聚合?)。对于这种情形,我们可以使用一个叫做line_profiler的小型库。气质有一个心的魔术函数 %lprun, 它可以对一个或者多个函数进行逐行性能分析。我们有修改 IPython 配置 以启用这个扩展.
For IPython 0.11+, you can install it by editing the IPython configuration file ~/.ipython/profile_default/ipython_config.py to add the 'line_profiler' item to the extensions list:
End of explanation
"""
from numpy.random import randn
def add_and_sum(x,y):
added = x + y
summed = added.sum(axis=1)
return summed
def call_function():
x = randn(1000,1000)
y = randn(1000,1000)
return add_and_sum(x,y)
"""
Explanation: line_profiler 可以通过编程方式使用,但是其更强大的一面在于与 Ipython 的交互使用。
假设我们有一个名为 prof_mod 的模块,其代码内容为(我们把prof_mode.py 保存在 chapter03目录下)
End of explanation
"""
%run chapter03/prof_mode.py
x = randn(3000, 3000)
y = randn(3000,3000)
%prun add_and_sum(x,y) #因为我们这里只是测试 add_and_sum 这个函数,所以必须给它实参,所以上面我们给出了 x和y
"""
Explanation: 如果我们想了解 add_and_sum 函数的性能,%prun 会给出如下所示的结果
End of explanation
"""
%lprun -f func1 -f func2 statement_to_profile
"""
Explanation: 执行的结果为:
当我们启用 line_profiler 这个扩展后,就会出现新的魔术命令 %lprun。 用法上唯一的区别就是: 必须为 %lprun 指明想要测试哪个或哪些函数。%lprun 的通用语法为:
End of explanation
"""
%lprun -f add_and_sum add_and_sum(x,y)
"""
Explanation: 在本例子中,我们想要测试 add_and_sum,于是执行
End of explanation
"""
%load_ext line_profiler
"""
Explanation: 网上找了下别人也遇到了和我一样的错误,stackoverflow上面有解决方案:
End of explanation
"""
%lprun -f add_and_sum add_and_sum(x,y)
"""
Explanation: 然后我们再执行一次
End of explanation
"""
%lprun -f add_and_sum -f call_function call_function()
综上,当我们需要测试一个程序中的某些函数时,我们需要使用这两行代码:
%load_ext line_profiler
%lprun -f func1 -f func2 statement_to_profile
"""
Explanation: 这个结果就容易理解了许多。这里我们测试的只是 add_and_sum 这个函数。上面那个模块中还有一个call_function 函数,我们可以结合 add_and_sum 一起测试,于是最终我们的命令成为了这个样子:
End of explanation
"""
import numpy as np
import pandas as pd
print('hello world!')
"""
Explanation: 通常我们会用 %prun (cProfile) 做宏观性能分析,而用 %lprun 来做 微观的性能分析。
注意,在使用 %lprun 时,之所以必须显示指明待测试函数的函数名,是因为“跟踪”每一行代码的时间代价是巨大的。对不感兴趣的函数进行跟踪会对分析结果产生很显著的影响。
IPython HTML Notebook
IPthon HTML Notebook,即现在的 jupyter notebook。这个其实在我整个笔记中都已经在使用了。notebook项目最初由 Brian Graner 领导的 Ipython 团队从 2011 年开始开发。目前已被广泛使用于开发和数据分析。
首先来看个导入图标的例子,其实这个笔记的开头,我也已经展示过部分这样的功能
End of explanation
"""
import numpy as np
import pandas as pd
tips = pd.read_csv('./pydata-book/ch08/tips.csv')
tips.head()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
img = plt.imread('pydata-book/ch03/stinkbug.png')
plt.figure(figsize=(4,4))
plt.imshow(img)
"""
Explanation: 此处补充一个书上 导入图片的一个例子:
End of explanation
"""
jupyter notebook
"""
Explanation: jupyter notebook是一种基于 JSON 的文档格式 .ipynb, 这种格式是的我们可以轻松分享代码,分析结果,特别是展示图标。目前在各种 Python 研讨会上,一种流行的演示手段就是使用 IPython Notebook,然后再讲 .ipynb 文件发布到网上供所有人参考。
Jupyter Notebook 是一个运行于命令行上的轻量级服务器进程。执行下面代码即可启动
End of explanation
"""
import some_lib
x = 4
y = [1,34,5,6]
result = some_lib.get_answer(x,y)
"""
Explanation: 如果想要图标以inline方式展示,可以在打开notebook后加入 %matplotlib --inline 或者 %pylab --inline
利用IPython 提高代码开发效率的几点提示
使用 IPython,可以让代码的结果更容易交互和亦欲查看。特别是当执行出现错误的时候,IPython 的交互性可以带来极大的便利
重新加载模块依赖项
在 Python 中,当我们输入 import some_lib 时候,some_lib 中的代码就会被执行,且其中所有的变量、函数和引入项都会保存在一个新建立的 some_lib 模块命名空间中。下次再输入 import some_lib 时,就会得到这个模块命名空间的一个引用。而这对于 IPython 的交互式代码开发模式就会有一个问题。
比如,用 %run 执行的某段脚本中包含了某个刚刚做了修改的模块。假设我们有一个 sample_script.py 文件,其中有如下代码:
End of explanation
"""
import some_lib
from importlib import reload  # in Python 3, reload lives in importlib
reload(some_lib)
x = 4
y = [1,34,5,6]
result = some_lib.get_answer(x,y)
"""
Explanation: 如果在执行了 %run sample_script.py 后又对 some_lib.py 进行了修改,下次再执行 %run sample_script.py 时候,将仍然会使用老版本的some_lib。其原因在于python是一种“一次加载”系统。不像 matplab等,它会自动应用代码修改。
那么怎么解决这个问题呢?
第一个办法是使用内置的reload函数,即将 sample_script.py 修改成
End of explanation
"""
import this
"""
Explanation: 这样,就可以保证每次执行 sample_script.py 时候都能使用最新的 some_lib 了。不过这个办法有个问题,当依赖变得更强时,就需要在很多地插入 reload.
第二个办法可以弥补上述第一个办法的弊端。IPython 提供了一个特殊的 dreload 函数 (非魔术函数) 来解决模块的“深度”重加载。如果执行 import some_lib 之后在输入 derealod(some_lib),则它会尝试重新加载 some_lib 及其所有的依赖项。遗憾的是,这个办法也不是“屡试不爽”的,但倘若失效的,重新启动 IPython 就可以解决所有加载问题。
代码设计提示
作者说这个问题不好讲,但是他在日常生活中的确发现了一些高层次的原则。
保留有意义的对象和数据
扁平结构要比嵌套结构好:嵌套结构犹如洋葱,想要调试需要剥掉好多层。(这种思想源自于Zen of Python by Tim Peters. 在jupyter notebook中输入 import this可以看到这首诗)
无惧大文件。这样可以减少模块的反复加载,编辑脚本时候也可以减少跳转。维护也更加方便。维护更大的模块会更实用且更加符合python的特点。
End of explanation
"""
class Message:
def __init__(self, msg):
self.msg = msg
"""
Explanation: 高级python功能
让你的类对IPython更加友好
IPython 力求为各种对象呈现一个友好的字符串表示。对于许多对象(如字典、列表、组等),内置的pprint 模块就能给出漂亮的格式。但是对于我们自己所定义的那些类,必须自己格式化进行输出。假设我们以后下面这个简单的类:
End of explanation
"""
x = Message('I have secret')
x
"""
Explanation: 如果像下面这样写,我们会发现这个类的默认输出很不好看:
End of explanation
"""
class Message:
def __init__(self,msg):
self.msg = msg
def __repr__(self):
return('Message: %s' % self.msg)
x = Message('I have a secret')
x
"""
Explanation: 由于IPython会获取__repr__方法返回的字符串(具体方法是 output = repr(obj)),并将其显示到控制台上。因此,我们可以为上面那个类添加一个简单的 repr 方法以得到一个更有意义的输出形式:
End of explanation
"""
ipython profile create secret_project
#这会创建一个新的配置文件,目录在 :~/.ipython/profile_secret_project/ipython_config.py
"""
Explanation: 个性化和配置
IPython shell 在外观和行为方面的大部分内容都是可以进行配置的。下面是能够通过配置做的部分事情:
修改颜色方案
修改输入输出提示符
去掉 out 提示符跟下一个 In 提示符之间的空行
执行任意 Python 语句。这些语句可以用于引入所有常用的东西,还可以做一些你希望每次启动 IPython 都发生的事情。
启用 IPython 扩展,如 line_profiler 中的魔术命令 %lprun
定义我们自己的魔术命令或者系统别名
所有这些设置都在一个叫做 ipython_config.py 的文件中,可以在 ~/.config/ipython 目录中找到。Linux和windows系统目录略有点小区别。对于我自己来说,我在git bash on windows 上的目录是:~/.ipython/profile_default/ipython_config.py
一个实用的功能是,利用 ipython_config.py,我们可以拥有多个个性化设置。假设我们想专门为某个特定程序或者项目量身定做 IPython 配置。输入下面这样的命令即可新建一个新的个性化配置文件:
End of explanation
"""
ipython --profile=secret_project
"""
Explanation: 然后编辑新建的这个 profile_secret_project 中的配置文件,再用如下方式启动它:
End of explanation
"""
|
Danghor/Formal-Languages
|
Python/Top-Down-Parser.ipynb
|
gpl-2.0
|
import re
"""
Explanation: A Recursive Parser for Arithmetic Expressions
In this notebook we implement a simple recursive descent parser for arithmetic expressions.
This parser will implement the following grammar:
$$
\begin{eqnarray*}
\mathrm{expr} & \rightarrow & \mathrm{product}\;\;\mathrm{exprRest} \\[0.2cm]
\mathrm{exprRest} & \rightarrow & \texttt{'+'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid & \texttt{'-'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid & \varepsilon \\[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{factor}\;\;\mathrm{productRest} \\[0.2cm]
\mathrm{productRest} & \rightarrow & \texttt{'*'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid & \texttt{'/'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid & \varepsilon \\[0.2cm]
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
                & \mid & \texttt{NUMBER}
\end{eqnarray*}
$$
Implementing a Scanner
We implement a scanner with the help of the module re.
End of explanation
"""
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ number ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('1 + (2 + @ 34 - 2**0)/7')
"""
Explanation: The function tokenize receives a string s as argument and returns a list of tokens.
The string s is supposed to represent an arithmetical expression.
Note:
1. We need to set the flag re.VERBOSE in our call of the function findall
below because otherwise we are not able to format the regular expression lexSpec the way
we have done it.
2. The regular expression lexSpec contains 5 parenthesized groups. Therefore,
findall returns a list of 5-tuples where the 5 components correspond to the 5
groups of the regular expression.
End of explanation
"""
def parse(s):
TL = tokenize(s)
result, Rest = parseExpr(TL)
assert Rest == [], f'Parse Error: could not parse {TL}'
return result
"""
Explanation: Implementing the Recursive Descent Parser
The function parse takes a string s as input and parses this string according to the recursive grammar
shown above. The function returns the floating point number that results from evaluating the expression given in s.
End of explanation
"""
def parseExpr(TL):
product, Rest = parseProduct(TL)
return parseExprRest(product, Rest)
"""
Explanation: The function parseExpr implements the following grammar rule:
$$ \mathrm{expr} \rightarrow \;\mathrm{product}\;\;\mathrm{exprRest} $$
It takes a token list TL as its input and returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed during the parse process.
End of explanation
"""
def parseExprRest(Sum, TL):
if TL == []:
return Sum, []
elif TL[0] == '+':
product, Rest = parseProduct(TL[1:])
return parseExprRest(Sum + product, Rest)
elif TL[0] == '-':
product, Rest = parseProduct(TL[1:])
return parseExprRest(Sum - product, Rest)
else:
return Sum, TL
"""
Explanation: The function parseExprRest implements the following grammar rules:
$$
\begin{eqnarray*}
\mathrm{exprRest} & \rightarrow & \texttt{'+'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid & \texttt{'-'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid & \varepsilon
\end{eqnarray*}
$$
It takes two arguments:
- sum is the value that has already been parsed,
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed during the parse process.
End of explanation
"""
def parseProduct(TL):
factor, Rest = parseFactor(TL)
return parseProductRest(factor, Rest)
"""
Explanation: The function parseProduct implements the following grammar rule:
$$ \mathrm{product} \rightarrow \;\mathrm{factor}\;\;\mathrm{productRest} $$
It takes one argument:
- TL is the list of tokens that need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse a product.
End of explanation
"""
def parseProductRest(product, TL):
if TL == []:
return product, []
elif TL[0] == '*':
factor, Rest = parseFactor(TL[1:])
return parseProductRest(product * factor, Rest)
elif TL[0] == '/':
factor, Rest = parseFactor(TL[1:])
return parseProductRest(product / factor, Rest)
else:
return product, TL
"""
Explanation: The function parseProductRest implements the following grammar rules:
$$
\begin{eqnarray*}
\mathrm{productRest} & \rightarrow & \texttt{'*'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid & \texttt{'/'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid & \varepsilon
\end{eqnarray*}
$$
It takes two arguments:
- product is the value that has already been parsed,
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse the rest of a product.
End of explanation
"""
def parseFactor(TL):
if TL[0] == '(':
expr, Rest = parseExpr(TL[1:])
assert Rest[0] == ')', 'Parse Error: expected ")"'
return expr, Rest[1:]
else:
return float(TL[0]), TL[1:]
"""
Explanation: The function parseFactor implements the following grammar rules:
$$
\begin{eqnarray*}
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
                & \mid & \texttt{NUMBER}
\end{eqnarray*}
$$
It takes one argument:
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse a factor.
End of explanation
"""
def test(s):
r1 = parse(s)
r2 = eval(s)
assert r1 == r2
return r1
test('11+22*(33-44)/(5-10*5/(4-3))')
test('0*11+22*(33-44)/(5-10*5/(4-3))')
"""
Explanation: Testing
End of explanation
"""
|
net-titech/CREST-Deep-M
|
notebooks/weight-clustering.ipynb
|
mit
|
import numpy as np
import os
import sys
weights_path = '/'.join(os.getcwd().split('/')[:-1]) + '/local-trained/alexnet/weights/'
print(weights_path)
os.listdir(weights_path)
keys = ['conv1', 'conv2', 'conv3', 'conv4', 'conv5', 'fc6', 'fc7', 'fc8']
weights = {}
for k in keys:
weights[k] = np.load(weights_path + k + '.npy')
"""
Explanation: Pseudo Weight Pruning and Clustering
2017-05-10
Model
In this post, we use a trained AlexNet model (trained on the ImageNet dataset). AlexNet has 8 parameterized layers: 5 convolutional and 3 fully connected:
conv1: 96 11x11-kernels - 3 channels
conv2: 256 5x5-kernels - 48 channels
conv3: 384 3x3-kernels - 256 channels
conv4: 384 3x3-kernels - 192 channels
conv5: 256 3x3-kernels - 192 channels
fc6: 4096x9216 matrix
fc7: 4096x4096 matrix
fc8: 1000x4096 matrix
Each of these layers is saved as a numpy 2D array.
End of explanation
"""
for k in keys:
print("Layer " + k + ": " + str(weights[k].shape))
"""
Explanation: The shape of each layer:
End of explanation
"""
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
"""
Explanation: Preprocessing
We analyze the statistical properties of each layer.
End of explanation
"""
plt.style.use('ggplot')
i = 0
def histogram(ax, x, num_bins=1000):
"""Plot a histogram onto ax"""
global i
i = (i + 1) % 7
clr = list(plt.rcParams['axes.prop_cycle'])[i]['color']
return ax.hist(x, num_bins, normed=1, color=clr, alpha=0.8)
# Create figure and 8 axes (4-by-2)
fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(12.8,19.2))
# Flatten each layer
conv1_f = weights['conv1'].flatten()
conv2_f = weights['conv2'].flatten()
conv3_f = weights['conv3'].flatten()
conv4_f = weights['conv4'].flatten()
conv5_f = weights['conv5'].flatten()
fc6_f = weights['fc6'].flatten()
fc7_f = weights['fc7'].flatten()
fc8_f = weights['fc8'].flatten()
# Plot histogram
histogram(ax[0,0], conv1_f)
ax[0,0].set_title("conv1")
histogram(ax[0,1], conv2_f)
ax[0,1].set_title("conv2")
histogram(ax[1,0], conv3_f)
ax[1,0].set_title("conv3")
histogram(ax[1,1], conv4_f)
ax[1,1].set_title("conv4")
histogram(ax[2,0], conv5_f)
ax[2,0].set_title("conv5")
histogram(ax[2,1], fc6_f)
ax[2,1].set_title("fc6")
histogram(ax[3,0], fc7_f)
ax[3,0].set_title("fc7")
histogram(ax[3,1], fc8_f)
ax[3,1].set_title("fc8")
fig.tight_layout()
plt.show()
plt.close()
"""
Explanation: We use the ggplot (R-style) theme for all plots. There are 7 colors in the color cycle, and we simply use a global variable i to cycle through them. The function histogram here plots a histogram onto the axis ax.
End of explanation
"""
def violin(ax, x, pos):
"""Plot a histogram onto ax"""
ax.violinplot(x, showmeans=True, showextrema=True, showmedians=True, positions=[pos])
# Create a single figure
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12.8,6.4))
# Plot violin
violin(ax, conv1_f, pos=0)
violin(ax, conv2_f, pos=1)
violin(ax, conv3_f, pos=2)
violin(ax, conv4_f, pos=3)
violin(ax, conv5_f, pos=4)
violin(ax, fc6_f, pos=5)
violin(ax, fc7_f, pos=6)
violin(ax, fc8_f, pos=7)
# Labels
ax.set_xticks(np.arange(0, len(keys)))
ax.set_xticklabels(keys)
fig.tight_layout()
plt.show()
fig.savefig('violin_alexnet.pdf')
plt.close()
"""
Explanation: The plots show that all weights in AlexNet appear to have zero mean and follow a normal (or gamma) distribution. Next, we use violin plots to show the statistical properties of these weights in more detail.
End of explanation
"""
def prun(o_weights, thres=None, percentile=0.8):
    """Set small-magnitude weights to zero in place.
    If `thres` is not provided, it is inferred from `percentile`,
    the fraction of (largest-magnitude) weights to keep."""
    w_weights = o_weights.reshape(1, -1)
    if thres is None:
        # magnitude below which (1 - percentile) of the weights fall
        thres = np.percentile(np.abs(w_weights[0]), (1.0 - percentile) * 100.0)
    for i, val in enumerate(w_weights[0]):
        if abs(val) <= thres:
            w_weights[0][i] = 0.0
"""
Explanation: Pruning
This section provides a simple pseudo-pruning function. We call it pseudo pruning because there is no re-training involved, so the accuracy of the network would be greatly decreased compared to a pruning-plus-retraining scheme. The prun function here is merely used to create a fake sparse matrix for testing the compression packing.
End of explanation
"""
print("Before pruning:")
for layer_name in keys:
print(layer_name + " total size: " + str(weights[layer_name].size))
print(layer_name + " non-zero count: " + str(np.count_nonzero(weights[layer_name])))
print("Density: " + str(float(np.count_nonzero(weights[layer_name]))/weights[layer_name].size))
print("Cloning layers...")
clone_w = {}
for layer_name in keys:
clone_w[layer_name] = weights[layer_name].copy()
keep_per = 0.3
print("Prunning... Keeping " + str(keep_per*100) + "%")
for layer_name in keys:
prun(clone_w[layer_name], percentile=keep_per)
print(layer_name + " total size: " + str(clone_w[layer_name].size))
print(layer_name + " non-zero count: " + str(np.count_nonzero(clone_w[layer_name])))
print("Density: " + str(float(np.count_nonzero(clone_w[layer_name])*1.0)/clone_w[layer_name].size))
"""
Explanation: Test the effect of pruning:
End of explanation
"""
from sklearn.cluster import KMeans
"""
Explanation: Clustering
The next step after pruning is to cluster the weight values using k-means: k=256 for the convolutional layers and k=16 for the fully connected layers. First, we run the clustering on the un-pruned weight matrices.
End of explanation
"""
def quantize_kmeans(weight, ncluster=256, rs=0):
    """Cluster the weight values with k-means; return the number of bits per
    code, the codebook (cluster centers), and the encoded weight matrix."""
    org_shape = weight.shape
    km = KMeans(n_clusters=ncluster, random_state=rs).fit(weight.reshape(-1, 1))
    num_bits = int(np.ceil(np.log2(ncluster)))
    codebook = km.cluster_centers_
    # predict expects a 2D array; encode all weight values in one call
    encoded = km.predict(weight.reshape(-1, 1)).astype(np.int32)
    return num_bits, codebook, encoded.reshape(org_shape)
"""
Explanation: The function quantize_kmeans performs k-means clustering on the weight values. The cluster_centers_ attribute returned by k-means serves as the codebook, and the built-in KMeans.predict method acts as the encoder that maps each weight to its cluster index.
End of explanation
"""
print("Clustering conv1 ...")
conv1_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv1'].reshape(-1,1))
print("Clustering conv2 ...")
conv2_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv2'].reshape(-1,1))
print("Clustering conv3 ...")
conv3_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv3'].reshape(-1,1))
print("Clustering conv4 ...")
conv4_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv4'].reshape(-1,1))
print("Clustering conv5 ...")
conv5_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv5'].reshape(-1,1))
print("Clustering fc6 ...")
fc6_k = KMeans(n_clusters=16, random_state=0).fit(weights['fc6'].reshape(-1,1))
print("Clustering fc7 ...")
fc7_k = KMeans(n_clusters=16, random_state=0).fit(weights['fc7'].reshape(-1,1))
print("Clustering fc8 ...")
fc8_k = KMeans(n_clusters=16, random_state=0).fit(weights['fc8'].reshape(-1,1))
"""
Explanation: We manually get each clustering for each layer. Convolutional layers have 256 centers each; fully connected layers have 16 centers each.
End of explanation
"""
def histogram_kmeans(ax, flat, kmeans, norm=20):
histogram(ax, flat)
tmp = np.ones_like(kmeans.cluster_centers_)
idx = ((np.cumsum(tmp)) - 1) / norm
ax.scatter(sorted(kmeans.cluster_centers_), idx, s=16, alpha=0.6)
plt.close()
fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(12.8,19.2))
histogram_kmeans(ax[0,0], conv1_f, conv1_k)
ax[0,0].set_title("conv1")
histogram_kmeans(ax[0,1], conv2_f, conv2_k)
ax[0,1].set_title("conv2")
histogram_kmeans(ax[1,0], conv3_f, conv3_k)
ax[1,0].set_title("conv3")
histogram_kmeans(ax[1,1], conv4_f, conv4_k)
ax[1,1].set_title("conv4")
histogram_kmeans(ax[2,0], conv5_f, conv5_k)
ax[2,0].set_title("conv5")
histogram_kmeans(ax[2,1], fc6_f, fc6_k, norm=0.5)
ax[2,1].set_title("fc6")
histogram_kmeans(ax[3,0], fc7_f, fc7_k, norm=0.5)
ax[3,0].set_title("fc7")
histogram_kmeans(ax[3,1], fc8_f, fc8_k, norm=0.5)
ax[3,1].set_title("fc8")
fig.tight_layout()
plt.show()
plt.close()
def encode_kmeans(kmeans, weights):
w = weights.reshape(-1,1)
codebook = kmeans.cluster_centers_
encoded = kmeans.predict(w)
return codebook, encoded.reshape(weights.shape)
cb_conv1, conv1_e = encode_kmeans(conv1_k, weights['conv1'])
cb_conv2, conv2_e = encode_kmeans(conv2_k, weights['conv2'])
cb_conv3, conv3_e = encode_kmeans(conv3_k, weights['conv3'])
cb_conv4, conv4_e = encode_kmeans(conv4_k, weights['conv4'])
cb_conv5, conv5_e = encode_kmeans(conv5_k, weights['conv5'])
cb_fc6, fc6_e = encode_kmeans(fc6_k, weights['fc6'])
cb_fc7, fc7_e = encode_kmeans(fc7_k, weights['fc7'])
cb_fc8, fc8_e = encode_kmeans(fc8_k, weights['fc8'])
"""
Explanation: We plot the cluster centers on top of the histogram of weight values. For each cluster center, the x-axis gives its value and the y-axis its index. Plotting them this way makes it easier to see where the cluster centers concentrate.
End of explanation
"""
from scipy import misc
def save_image(encoded_w, name, ext='.png'):
encoded_w = encoded_w.reshape(encoded_w.shape[0], -1)
misc.imsave('./' + name + ext, encoded_w)
save_image(conv1_e, 'conv1')
save_image(conv2_e, 'conv2')
save_image(conv3_e, 'conv3')
save_image(conv4_e, 'conv4')
save_image(conv5_e, 'conv5')
save_image(fc6_e, 'fc6')
save_image(fc7_e, 'fc7')
save_image(fc8_e, 'fc8')
"""
Explanation: The encoded data can be stored using the codebooks (cb_*) and the encoded matrices (*_e). Since each encoded matrix fits in 8 bits per entry, saving the data as PNG images is a convenient option.
End of explanation
"""
def check_image(img_name, encoded, is_fc=False):
data = misc.imread(img_name)
if is_fc:
data = data / 17 # Quick hack for 4-bit data
print(np.all(data == encoded.reshape(encoded.shape[0], -1)))
check_image('conv1.png', conv1_e)
check_image('conv2.png', conv2_e)
check_image('conv3.png', conv3_e)
check_image('conv4.png', conv4_e)
check_image('conv5.png', conv5_e)
check_image('fc6.png', fc6_e, is_fc=True)
check_image('fc7.png', fc7_e, is_fc=True)
check_image('fc8.png', fc8_e, is_fc=True)
"""
Explanation: We check if there is any problem (data loss) while loading the images:
End of explanation
"""
def encode_index(nz_index, bits=4):
"""Encode nonzero indices using 4-bit"""
max_val = 2**bits
if bits == 4 or bits == 8:
data_type = np.uint8
elif bits == 16:
data_type = np.uint16
else:
print("Unimplemented index encoding with " + str(bits) + " bits.")
sys.exit(1)
code = np.zeros_like(nz_index, dtype=np.uint32)
adv = 0
# Encode with relative to array index
for i, val in enumerate(nz_index):
cur_i = i + adv
code[i] = val - cur_i
if (val - cur_i != 0):
adv += val - cur_i
# Check if there is overflow
if (code.max() >= max_val):
print("Overflow index codebook. Unimplemented handling.")
sys.exit(1)
    # Special case of 4-bit encoding: pack two 4-bit codes per byte
    if (bits == 4):
        if code.size % 2 == 1:
            code = np.append(code, 0)          # pad to an even length
        code_4bit = code[0::2] * (2**bits) + code[1::2]
        return np.asarray(code_4bit, dtype=data_type)
    return np.asarray(code, dtype=data_type)
def decode_index(encoded_ind, org_size=None, bits=4):
"""Decode nonzero indices"""
if org_size is None:
print("Original size must be specified.")
sys.exit(1)
decode = np.zeros(org_size, dtype=np.uint32)
    if (bits == 4):
        # unpack the two 4-bit halves; truncate the low half if org_size is odd
        high = encoded_ind // (2**bits)
        low = encoded_ind % (2**bits)
        decode[0::2] = high[:len(decode[0::2])]
        decode[1::2] = low[:len(decode[1::2])]
decode = np.cumsum(decode+1) - 1
return np.asarray(decode, dtype=np.uint32)
# For real data these would be the nonzero indices; here we use pseudo weights
conv1_ind = np.arange(weights['conv1'].size)
conv2_ind = np.arange(weights['conv2'].size)
conv3_ind = np.arange(weights['conv3'].size)
conv4_ind = np.arange(weights['conv4'].size)
conv5_ind = np.arange(weights['conv5'].size)
fc6_ind = np.arange(weights['fc6'].size)
fc7_ind = np.arange(weights['fc7'].size)
fc8_ind = np.arange(weights['fc8'].size)
# Encode the indices
conv1_ie = encode_index(conv1_ind)
conv2_ie = encode_index(conv2_ind)
conv3_ie = encode_index(conv3_ind)
conv4_ie = encode_index(conv4_ind)
conv5_ie = encode_index(conv5_ind)
fc6_ie = encode_index(fc6_ind)
fc7_ie = encode_index(fc7_ind)
fc8_ie = encode_index(fc8_ind)
"""
Explanation: Saving the data in a compressed format
We now encode the data using a list of non-zero indices together with the encoded values.
End of explanation
"""
|
ioshchepkov/SHTOOLS
|
examples/notebooks/tutorial_4.ipynb
|
bsd-3-clause
|
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
from pyshtools.shclasses import SHCoeffs, SHGrid, SHWindow
lmax = 100
coeffs = SHCoeffs.from_zeros(lmax)
coeffs.set_coeffs(values=[1], ls=[5], ms=[2])
"""
Explanation: Spherical Harmonic Normalizations and Parseval's theorem
The variance of a single spherical harmonic
We will here demonstrate the relationship between a function expressed in spherical harmonics and its variance. To make things simple, we will consider only a single harmonic, and note that the results are easily extended to more complicated functions given that the spherical harmonics are orthogonal.
We start by initializing a new coefficient class to zero and setting a single coefficient to 1.
End of explanation
"""
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
"""
Explanation: Given that we will perform some numerical integrations with this function below, we expand it onto a grid appropriate for integration by Gauss-Legendre quadrature:
End of explanation
"""
N = ((grid.data**2) * grid.weights[np.newaxis,:].T).sum() * (2. * np.pi / grid.nlon)
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
"""
Explanation: Next, we would like to calculate the variance of this single spherical harmonic. Since each spherical harmonic has a zero mean, the variance is equal to the integral of the function squared (i.e., its norm N) divided by the surface area of the sphere (4 pi):
$$N_{lm} = \int_\Omega Y^2_{lm}(\mathbf{\theta, \phi})~d\Omega$$
$$Var\left(Y_{lm}(\mathbf{\theta, \phi})\right) = \frac{N_{lm}}{4 \pi}$$
When the spherical harmonics are 4-pi normalized, N is equal to 4 pi for all values of l and m. Thus, by definition, the variance of each harmonic is 1 for 4-pi normalized harmonics.
We can verify the mathematical value of N by doing the integration manually. For this, we will perform a Gauss-Legendre quadrature, making use of the latitudinal weighting function that is stored in the SHGrid class instance.
End of explanation
"""
from pyshtools.utils import DHaj
grid_dh = coeffs.expand('DH')
weights = DHaj(grid_dh.nlat)
N = ((grid_dh.data**2) * weights[np.newaxis,:].T).sum() * 2. * np.sqrt(2.) * np.pi / grid_dh.nlon
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
"""
Explanation: Alternatively, we could have done the integration with a 'DH' grid instead:
End of explanation
"""
power = coeffs.spectrum()
print('Total power is ', power.sum())
"""
Explanation: Parseval's theorem
We have seen in the previous section that a single 4-pi normalized spherical harmonic has unit variance. In spectral analysis, the word power is often used to mean the value of the function squared divided by the area it spans, and if the function has zero mean, power is equivalent to variance. Since the spherical harmonics are orthogonal functions on the sphere, there exists a simple relationship between the power of the function and its spherical harmonic coefficients:
$$\frac{1}{4 \pi} \int_{\Omega} f^2(\mathbf{\theta, \phi})~d\Omega = \sum_{lm} C_{lm}^2 \frac{N_{lm}}{4 \pi}$$
This is Parseval's theorem for data on the sphere. For 4-pi normalized harmonics, the last fraction on the right hand side is unity, and the total variance (power) of the function is the sum of the coefficients squared. Knowning this, we can confirm the result of the previous section by showing that the total power of the l=5, m=2 harmonic is unity:
End of explanation
"""
lmax = 200
a = 30
ls = np.arange(lmax+1, dtype=float)
power = 1. / (1. + (ls / a) ** 2) ** 1
coeffs = SHCoeffs.from_random(power)
power_random = coeffs.spectrum()
total_power = power_random.sum()
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
"""
Explanation: If the coefficients of all spherical harmonics are independent, the distribution of the function's values on the sphere will become Gaussian, as predicted by the central limit theorem. If the individual coefficients were Gaussian in the first place, the distribution would naturally be Gaussian as well. We illustrate this below.
First, we create a random realization of normally distributed coefficients whose power spectrum follows a power law:
End of explanation
"""
weights = (grid.weights[np.newaxis,:].T).repeat(grid.nlon, axis=1) * (2. * np.pi / grid.nlon)
bins = np.linspace(-50, 50, 30)
center = 0.5 * (bins[:-1] + bins[1:])
dbin = center[1] - center[0]
hist, bins = np.histogram(grid.data, bins=bins, weights=weights, density=True)
"""
Explanation: Next, we calculate a histogram of the data using the Gauss-Legendre quadrature points and weights:
End of explanation
"""
normal_distribution = np.exp( - center ** 2 / (2 * total_power))
normal_distribution /= dbin * normal_distribution.sum()
fig, ax = plt.subplots(1, 1)
ax.plot(center, hist, '-x', c='blue', label='computed distribution')
ax.plot(center, normal_distribution, c='red', label='predicted distribution')
ax.legend(loc=3);
"""
Explanation: Finally, we compute the expected distribution and plot the two:
End of explanation
"""
|
MingChen0919/learning-apache-spark
|
notebooks/04-miscellaneous/.ipynb_checkpoints/user-defined-sql-function (udf)-checkpoint.ipynb
|
mit
|
from pyspark.sql.types import *
from pyspark.sql.functions import udf
mtcars = spark.read.csv('../../data/mtcars.csv', inferSchema=True, header=True)
mtcars = mtcars.withColumnRenamed('_c0', 'model')
mtcars.show(5)
"""
Explanation: The udf() function and SQL types
The pyspark.sql.functions.udf() function is very important: it turns a user-defined Python function into a SQL function that can act on columns of a DataFrame, which makes data transformation much more flexible.
Using udf() can be tricky. The key is understanding how to define the returnType parameter.
End of explanation
"""
def disp_by_hp(disp, hp):
return(disp/hp)
disp_by_hp_udf = udf(disp_by_hp, returnType=FloatType())
all_original_cols = [eval('mtcars.' + x) for x in mtcars.columns]
all_original_cols
disp_by_hp_col = disp_by_hp_udf(mtcars.disp, mtcars.hp)
disp_by_hp_col
all_new_cols = all_original_cols + [disp_by_hp_col]
all_new_cols
mtcars.select(all_new_cols).show()
"""
Explanation: The structure of the schema passed to returnType has to match the data structure of the value returned by the user-defined function.
Case 1: divide disp by hp and put the result in a new column
The user-defined function returns a float value.
End of explanation
"""
# define function
def merge_two_columns(col1, col2):
return([float(col1), float(col2)])
# convert user defined function into an udf function (sql function)
array_merge_two_columns_udf = udf(merge_two_columns, returnType=ArrayType(FloatType()))
array_col = array_merge_two_columns_udf(mtcars.disp, mtcars.hp)
array_col
all_new_cols = all_original_cols + [array_col]
all_new_cols
mtcars.select(all_new_cols).show(5, truncate=False)
"""
Explanation: Case 2: create an array column that contains the disp and hp values
End of explanation
"""
# define function
def merge_two_columns(col1, col2):
return([float(col1), float(col2)])
array_type = ArrayType(FloatType())
array_merge_two_columns_udf = udf(merge_two_columns, returnType=array_type)
"""
Explanation: ArrayType vs. StructType
Both ArrayType and StructType can be used to build returnType for a list. The difference is:
ArrayType requires that all elements in the list have the same elementType, while StructType allows fields with different elementTypes.
StructType represents a Row object.
Define an ArrayType with elementType being FloatType.
End of explanation
"""
# define function
def merge_two_columns(col1, col2):
return([str(col1), float(col2)])
struct_type = StructType([
StructField('f1', StringType()),
StructField('f2', FloatType())
])
struct_merge_two_columns_udf = udf(merge_two_columns, returnType=struct_type)
"""
Explanation: Define a StructType with one elementType being StringType and the other being FloatType.
End of explanation
"""
array_col = array_merge_two_columns_udf(mtcars.hp, mtcars.disp)
array_col
"""
Explanation: array column expression: both values are float type values
End of explanation
"""
struct_col = struct_merge_two_columns_udf(mtcars.model, mtcars.disp)
struct_col
"""
Explanation: struct column expression: first value is a string and the second value is a float type value.
End of explanation
"""
mtcars.select(array_col, struct_col).show(truncate=False)
"""
Explanation: Results
End of explanation
"""
|
joshspeagle/frankenz
|
demos/5 - Population Inference with Redshifts.ipynb
|
mit
|
from __future__ import print_function, division
import sys
import pickle
import numpy as np
import scipy
import matplotlib
from matplotlib import pyplot as plt
from six.moves import range
# import frankenz code
import frankenz
# plot in-line within the notebook
%matplotlib inline
np.random.seed(7001826)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'axes.titlepad': '15.0'})
rcParams.update({'font.size': 30})
"""
Explanation: Population Inference
This notebook outlines how to derive population redshift distributions from a given collection of redshift PDFs using some of the tools available in frankenz.
Setup
End of explanation
"""
downsample = 10 # downsampling the population
survey = pickle.load(open('../data/mock_sdss_cww_bpz.pkl', 'rb')) # load data
types = survey.data['types'][::downsample]
templates = survey.data['templates'][::downsample]
redshifts = survey.data['redshifts'][::downsample]
mags = survey.data['refmags'][::downsample]
Nobs = len(types)
print('Number of observed redshifts:', Nobs)
"""
Explanation: Population Redshift Density Estimation
For every observed galaxy $g \in \mathbf{g}$ out of $N_\mathbf{g}$ galaxies, let's assume we have an associated redshift estimate $z_g$ with PDF $P(z_g | z)$. We now want to construct an estimate for the population redshift distribution $P(z|\mathbf{g})$. We can write out the posterior using Bayes Theorem:
$$
P(\rho|\lbrace p_g \rbrace) \propto P(\lbrace p_g \rbrace | \rho) P(\rho)
$$
where we have suppressed some notation for compactness to write $\rho \equiv P(z|\mathbf{g})$ and $p_g \equiv P(z|g)$. Assuming independence, we can factor the first term to be
$$
P(\lbrace p_g \rbrace | \rho) = \prod_{g} P(p_g|\rho) = \prod_g \int P(z_g|z) P(z|\mathbf{g}) dz
$$
In other words, the posterior probability for the population redshift distribution $P(z|\mathbf{g})$ is based on how much it overlaps the most with each of the individual redshift PDFs $P(z_g | z)$ (with some prior).
Note that this result means that the population distribution is not what you get by stacking the individual redshift PDFs:
$$ P(z|\mathbf{g}) \neq \frac{1}{N_\mathbf{g}}\sum_g P(z_g | z) $$
since there is no guarantee $\frac{1}{N_\mathbf{g}}\sum_g P(z_g | z)$ will maximize the posterior probability.
Data
For our proof-of-concept tests, we will use the mock SDSS data we previously generated.
End of explanation
"""
dzbin = 0.1
zbins = np.arange(-1, 7.+1e-5, dzbin) # redshift bins
zbins_mid = 0.5 * (zbins[1:] + zbins[:-1]) # bin midpoints
Nbins = len(zbins) - 1
# plotting histogrammed representation
plt.figure(figsize=(14, 6))
plt.hist(redshifts, bins=zbins, histtype='stepfilled', lw=5,
color='blue', alpha=0.7, density=True, edgecolor='black')
plt.xlabel('Redshift')
plt.xlim([zbins[0], zbins[-1]])
plt.yticks([])
plt.ylabel(r'$P(z|\mathbf{g})$')
plt.tight_layout()
"""
Explanation: Redshift Basis
As before, we can make this functional result a bit more concrete by representing our PDFs using some associated discrete basis $\lbrace \dots, K(z|z_h), \dots \rbrace$ indexed by $h \in \mathbf{h}$ via
$$
P(g|h) \rightarrow P(z_g|z_h) = \int P(z_g | z) K(z | z_h) dz
$$
where the notation $K(z|z_h)$ again is meant to suggest a kernel density and differentiate it from $P(z_g|z)$.
Top-hat Kernels (Histogram)
One common choice of basis is a series of redshift bins (i.e. a histogram), which can be modeled using a top-hat kernel consisting of a product of Heavyside functions
$$
K(z|z_h) = \frac{\mathcal{H}(z - z_h^{-})\mathcal{H}(z_h^{+} - z)}{z_h^{+} - z_h^{-}}
$$
where $z_h^{\pm}$ are the bin edges. In the errorless case where $P(z_g|z) = \delta(z_g - z)$ (where $\delta(\cdot)$ is the Dirac delta function) this just evaluates to $\frac{1}{z_h^{+} - z_h^{-}}$ if $z_g \in \left[z_h^{-}, z_h^{+}\right)$ and $0$ otherwise. In the general case, this just means evaluating the integral above from $z_h^{-}$ to $z_h^{+}$.
Applying Bayes Theorem gives us:
$$ \ln P(\lbrace h_g \rbrace | \boldsymbol{\rho}) = \sum_{g} \ln P(h_g|\boldsymbol{\rho}) = \sum_g \ln\left( \sum_h P(g|h) P(h|\mathbf{g}) \right) $$
where we now used $\boldsymbol{\rho} = \lbrace \dots, P(h|\mathbf{g}), \dots \rbrace$ to indicate a discrete collection of probabilities over $\mathbf{h}$.
In the case where our PDFs are errorless, $P(g|h) = 1$ in exactly one bin and $0$ in all other bins. This gives
$$ \ln P(\boldsymbol{\rho}|\lbrace h_g \rbrace) = \sum_h N_h \ln N_h^\prime - N_\mathbf{g} \ln N_{\mathbf{g}} $$
where $P(h|\mathbf{g})=N_h^\prime/N_{\mathbf{g}}$ is the counts/amplitude of the $h$th bin. This is maximized when $N_h^\prime = N_h$, i.e. when the posterior amplitude in each bin equals the number of observed counts in that bin.
This formalism might seem like overkill when the conclusion is that the "right" thing to do for noiseless data is "count up the number of galaxies in each redshift bin and normalize them". While this might seem intuitive, it is important to remember that this result is special and does not hold in general.
End of explanation
"""
# KDE
dzgrid = 0.01
zgrid = np.arange(-1., 7.+1e-5, dzgrid)
Ngrid, smooth = len(zgrid), 0.05
pdf = frankenz.pdf.gauss_kde(redshifts, np.ones(Nobs) * smooth, zgrid)
pdf /= np.trapz(pdf, zgrid)
# plotting
plt.figure(figsize=(14, 6))
plt.hist(redshifts, bins=zbins, histtype='stepfilled', lw=5,
color='blue', alpha=0.7, density=True, edgecolor='black')
plt.plot(zgrid, pdf, lw=5, color='red', alpha=0.8)
plt.hist(zgrid + 1e-5, bins=zbins, weights=pdf, histtype='step', lw=5,
color='red', alpha=0.5, density=True)
plt.xlabel('Redshift')
plt.xlim([zbins[0], zbins[-1]])
plt.yticks([])
plt.ylabel('$P(z|\mathbf{g})$')
plt.tight_layout()
"""
Explanation: Gaussian Kernels (Smooth KDE)
Alternatively, we can construct a smooth density estimate using kernel density estimation, for instance by assigning a Gaussian kernel to each point. The estimate for the population redshift distribution is then (by construction)
$$ P(z|\mathbf{g}) = \frac{1}{N_\mathbf{g}} \sum_{g} \mathcal{N}(z|\mu=z_g, \sigma^2=\sigma^2) $$
In other words, we just stack kernels centered around each point.
We cannot emphasize enough that this is a heuristic, not the truth. In general, stacking PDFs/kernels does not maximize the posterior probability.
End of explanation
"""
# generate Gaussian PDFs over grid
sigma = np.random.uniform(0.05, 0.2, size=Nobs) # width
mu = np.random.normal(redshifts, sigma) # noisy observation
zpdf = np.array([frankenz.pdf.gaussian(mu[i], sigma[i], zgrid)
for i in range(Nobs)]) # redshift pdfs
zpdf /= np.trapz(zpdf, zgrid)[:,None] # normalizing
# generate PDFs over bins
zpdf_bins = np.array([frankenz.pdf.gaussian_bin(mu[i], sigma[i], zbins)
for i in range(Nobs)]) # redshift pdfs
zpdf_bins /= zpdf_bins.sum(axis=1)[:,None] * dzbin # normalizing
"""
Explanation: We can see the disagreement between the two methods by examining the histogrammed version of the KDE $P(z|\mathbf{g})$, which disagrees with the histogram computed directly from the data.
Noisy Data
Let's say that we're actually dealing with a noisy redshift estimate $\hat{z}_g$. This leaves us with:
$$
P(g|h) = \frac{\int P(\hat{z}_g | z) K(z | z_h) dz}{\sum_{g} \int P(\hat{z}_g | z) K(z | z_h) dz}
$$
Let's assume our errors are Gaussian such that $P(\hat{z}_g|z) = \mathcal{N}(\hat{z}_g|z, \hat{\sigma}_g)$, where the error $\hat{\sigma}_g$ is also estimated but will be taken to be fixed (and correct).
End of explanation
"""
# plot some PDFs
plt.figure(figsize=(20, 12))
Nfigs = (3, 3)
Nplot = np.prod(Nfigs)
colors = plt.get_cmap('viridis')(np.linspace(0., 0.7, Nplot))
idxs = np.random.choice(Nobs, size=Nplot)
idxs = idxs[np.argsort(redshifts[idxs])]
for i, (j, c) in enumerate(zip(idxs, colors)):
plt.subplot(Nfigs[0], Nfigs[1], i + 1)
plt.plot(zgrid, zpdf[j], color=c, lw=4)
plt.hist(zbins_mid, zbins, weights=zpdf_bins[j],
color=c, lw=4, alpha=0.5, histtype='step')
plt.vlines(redshifts[j], 0., max(zpdf[j] * 1.2), color='red',
lw=3)
plt.xlim([-0.5, 6])
plt.ylim([0.03, None])
plt.xlabel('Redshift')
plt.yticks([])
plt.ylabel('PDF')
plt.tight_layout()
"""
Explanation: To avoid edge effects, for our simulated data we will allow for negative redshifts so that producing a "correct" PDF involves simply shifting the mean of the Gaussian.
End of explanation
"""
from scipy.stats import norm as normal
from scipy.stats import kstest
# compute CDF draws
cdf_vals = normal.cdf(redshifts, loc=mu, scale=sigma)
plt.figure(figsize=(8, 4))
plt.hist(cdf_vals, bins=np.linspace(0., 1., 20))
plt.xlabel('CDF Value')
plt.ylabel('Counts')
# compute KS test statistic
ks_result = kstest(rvs=cdf_vals, cdf='uniform')
print('K-S Test p-value = {:0.3f}'.format(ks_result[1]))
"""
Explanation: We can verify that these are PDFs in a statistical sense by computing the empirical CDF and comparing our results to that expected from a uniform distribution, showcasing that indeed our likelihoods provide proper coverage (as constructed).
End of explanation
"""
# plotting
plt.figure(figsize=(14, 6))
plt.plot(zgrid, pdf, lw=5, color='black', alpha=0.8,
label='Truth (KDE)')
plt.plot(zgrid, zpdf.sum(axis=0) / Nobs, lw=5, color='blue',
alpha=0.6, label='Stacked PDFs')
plt.hist(redshifts, bins=zbins, histtype='step', lw=5,
color='black', alpha=0.7, density=True)
plt.hist(zbins_mid, bins=zbins, weights=zpdf_bins.sum(axis=0) / Nobs,
histtype='step', lw=5,
color='blue', alpha=0.5, density=True)
plt.xlabel('Redshift')
plt.xlim([zgrid[0], zgrid[-1]])
plt.yticks([])
plt.ylabel('$N(z|\mathbf{g})$')
plt.ylim([0., None])
plt.legend(fontsize=28, loc='best')
plt.tight_layout()
"""
Explanation: Stacking PDFs
Let's now see what happens when we "stack" our PDFs.
End of explanation
"""
# number of samples
Nsamples = 500
"""
Explanation: Now that we're using noisy observations (and PDFs) instead of the true (noiseless) observations, we can see stacking the PDFs is not a fully accurate reconstruction of the true population redshift number density. This should now intuitively make sense -- the noise broadens the overall distribution, so estimating the population redshift distribution $P(z|\mathbf{g})$ requires deconvolving the noisy observations.
"Quick Fixes"
We've done the wrong thing, but maybe it's not too terrible. In particular, maybe we can derive some estimate for the error that encompasses the true distribution, so that if we use our stacked PDFs in some future analysis it won't affect us as much.
One direct way to do this is to try and draw samples from the distribution:
$$ \mathbf{n}_\mathbf{g}^{(1)}, \dots, \mathbf{n}_{\mathbf{g}}^{(k)} \sim P(\mathbf{n}_\mathbf{g}|\mathbf{g}) $$
where $\mathbf{n}_\mathbf{g} = \boldsymbol{\rho} \times N_\mathbf{g}$ is just the normalized effective counts.
End of explanation
"""
# draw Poisson samples
pdf1 = zpdf_bins.sum(axis=0) # stack PDFs
pdf1 /= pdf1.sum() # normalize
pdf1 *= Nobs # transform to counts
pdf1_samples = np.array([np.random.poisson(pdf1)
for i in range(Nsamples)]) # draw samples
def zplot_bin(samples, label='type', color='blue', downsample=5):
"""Plot our binned draws."""
[plt.hist(zbins_mid + 1e-5, zbins,
weights=samples[i], lw=3,
histtype='step', color=color, alpha=0.05)
for i in np.arange(Nsamples)[::downsample]]
plt.hist(zgrid, weights=pdf*1e-5, lw=3, histtype='step',
color=color, alpha=0.6, label=label)
h = plt.hist(redshifts, zbins,
histtype='step', lw=6, color='black', alpha=0.7)
plt.xlabel('Redshift')
plt.xlim([-0.5, 4])
plt.yticks([])
plt.ylim([0, max(h[0]) * 1.2])
plt.ylabel('$N(z|\mathbf{g})$')
plt.legend(fontsize=26, loc='best')
plt.tight_layout()
# plotting
plt.figure(figsize=(14, 6))
zplot_bin(pdf1_samples, label='Poisson', color='blue')
"""
Explanation: Poisson Approximation
A common approximation is that there are some number of galaxies $N_h$ within the $h$-th redshift bin, and we observe some random realization of this underlying count. The number of objects in each bin (assuming they're independent from each other) then follows a Poisson distribution where
$$ n_h \sim \textrm{Pois}(\lambda_h) $$
This treats $n_h(\mathbf{g})$ as a Poisson random variable (and $\mathbf{n}_{\mathbf{g}}$ as a Poisson random vector) that we want to simulate. The maximum-likelihood solution occurs when $\lambda_h = n_h$.
End of explanation
"""
# draw multinomial samples
pdf2_samples = np.random.multinomial(Nobs, pdf1 / pdf1.sum(),
size=Nsamples) # samples
# plotting
plt.figure(figsize=(20, 6))
plt.subplot(1, 2, 1)
zplot_bin(pdf1_samples, label='Poisson', color='blue')
plt.subplot(1, 2, 2)
zplot_bin(pdf2_samples, label='Multinomial', color='red')
"""
Explanation: The Poisson approximation implies that the number of objects we observe at a given redshift is simply a counting process with mean $\boldsymbol{\mu} = \boldsymbol{\lambda}_{\mathbf{g}} = \mathbf{n}_{\mathbf{g}}$. This isn't quite right, since this assumes the number of objects observed at different $z_h$'s are independent, when we know that there must be some covariance due to each object's redshift PDF. More importantly, however, this approximation implies that the total number of objects we observe doesn't remain constant!
Not looking good...
Multinomial Approximation
One improvement on the Poisson model above is to try and conserve the total number of observed counts $N_\mathbf{g}$. They are then drawn from the Multinomial distribution:
$$ \mathbf{n}_{\mathbf{h}}' \sim \textrm{Mult}\left(N_\mathbf{g}, \mathbf{p}_{\mathbf{g}}\right) $$
The maximum-likelihood result is
$$ \mathbf{p}_{\mathbf{g},\textrm{ML}} = \mathbf{n}_\mathbf{g}/N_\mathbf{g} $$
End of explanation
"""
# draw posterior samples
pdf3_samples = np.zeros_like(pdf1_samples)
zpdf_norm = zpdf_bins / zpdf_bins.sum(axis=1)[:, None]
for j in range(Nsamples):
if j % 50 == 0:
sys.stderr.write(' {0}'.format(j))
for i in range(Nobs):
# stack categorial draw
pdf3_samples[j] += np.random.multinomial(1, zpdf_norm[i])
# plotting
plt.figure(figsize=(30, 6))
plt.subplot(1, 3, 1)
zplot_bin(pdf1_samples, label='Poisson', color='blue')
plt.subplot(1, 3, 2)
zplot_bin(pdf2_samples, label='Multinomial', color='red')
plt.subplot(1, 3, 3)
zplot_bin(pdf3_samples, label='PDF Draws', color='darkviolet')
"""
Explanation: The multinomial approximation implies that the redshift PDF of a random observed galaxy $P(z|g)$ is proportional to the population PDF $P(z|\mathbf{g})$. In other words, a given galaxy is just a random draw from the population. Since the multinomial has the benefit of keeping the overall number of galaxies $N_\mathbf{g}$ constant, it induces negative correlations among the individual categories (redshifts). However, this approximation still ignores measurement errors (i.e. individual galaxy PDFs), which can induce additional correlations among redshifts.
Things don't look much better than for the Poisson...
Individual Redshift Posterior Samples
Any particular galaxy $g$ with PDF $\mathbf{p}_g$ over our redshift bins is actually located at a particular redshift $z_g$, with the corresponding redshift PDF modeling our uncertainty over its true redshift. We can imagine
simulating the true redshifts $z_g'$ from each PDF $P(z|g)$ over the whole population and then applying the same methods we used when dealing with noiseless observations to get realizations of the $P(z|\mathbf{g})$.
More formally, over our bins the distribution of $z_g$ ($\mathbf{n}_g$) can be modeled as a Categorial (Multinomial) random variable such that
$$
z_g' \sim \textrm{Cat}\left(\mathbf{p}=\mathbf{p}_g\right)
\quad \Leftrightarrow \quad
\mathbf{n}_g' \sim \textrm{Mult}\left(n=1, \mathbf{p}=\mathbf{p}_g\right)
$$
where $\mathbf{n}_g'$ is $1$ at the bin containing $z_g$ and zero elsewhere.
The redshift number density is then
$$
\mathbf{n}_\mathbf{g}' = \sum_{g} \mathbf{n}_g'
$$
This represents a convolution of a series of Multinomial-distributed random variables with different PDFs. While this doesn't have a simple closed-form solution, it is straightforward to draw samples $\mathbf{n}_{\mathbf{g}}^{(i)}$ from this distribution by drawing redshifts from each galaxy's PDF and then stacking the results. This procedure intuitively seems to make sense: we simulate our uncertainties on $\mathbf{n}_\mathbf{g}$ by simulating our uncertainties on the individual redshifts from $\mathbf{p}_g$.
End of explanation
"""
def cov_draws(samples, bin1=(14, 16), bin2=(16, 18), color='blue',
label='label', xlim=None, ylim=None):
"""Plot our draws within two bins."""
# Bin results.
n, _ = np.histogram(redshifts, bins=zbins)
pdf_bin1 = n[bin1[0]:bin1[1]].sum() / n.sum() * Nobs / 1e3
pdf_bin2 = n[bin2[0]:bin2[1]].sum() / n.sum() * Nobs / 1e3
samples_bin1 = samples[:, bin1[0]:bin1[1]].sum(axis=1) / 1e3
samples_bin2 = samples[:, bin2[0]:bin2[1]].sum(axis=1) / 1e3
# Plot results.
plt.vlines(pdf_bin1, 0, 100, lw=2, colors='black', linestyles='--')
plt.hlines(pdf_bin2, 0, 100, lw=2, colors='black', linestyles='--')
plt.plot(pdf_bin1, pdf_bin2, 's', color='black', markersize=20)
plt.plot(samples_bin1, samples_bin2, 'o', color=color,
label=label, markersize=8, alpha=0.4)
if xlim is None:
plt.xlim([min(pdf_bin1, min(samples_bin1)) - 0.1,
max(pdf_bin1, max(samples_bin1)) + 0.1])
else:
plt.xlim(xlim)
if ylim is None:
plt.ylim([min(pdf_bin2, min(samples_bin2)) - 0.1,
max(pdf_bin2, max(samples_bin2)) + 0.1])
else:
plt.ylim(ylim)
plt.xlabel(r'$N({:6.1f}\leq z < {:6.1f}) \quad [10^3]$'.format(zbins[bin1[0]],
zbins[bin1[1]]))
plt.ylabel(r'$N({:6.1f}\leq z < {:6.1f}) \quad [10^3]$'.format(zbins[bin2[0]],
zbins[bin2[1]]))
plt.legend(fontsize=28, loc=2)
plt.tight_layout()
# plotting binned covariance
plt.figure(figsize=(30, 9))
plt.subplot(1, 3, 1)
cov_draws(pdf1_samples,
xlim=(2.8, 3.6), ylim=(2.8, 3.3),
color='blue', label='Poisson')
plt.subplot(1, 3, 2)
cov_draws(pdf2_samples,
xlim=(2.8, 3.6), ylim=(2.8, 3.3),
color='red', label='Multinomial')
plt.subplot(1, 3, 3)
cov_draws(pdf3_samples,
xlim=(2.8, 3.6), ylim=(2.8, 3.3),
color='darkviolet', label='PDF Draws')
"""
Explanation: Okay, so that didn't seem to help very much...
Covariance Structure
In addition to the true distribution, it's also informative to check out the covariances among our $P(z|\mathbf{g})$ draws.
End of explanation
"""
# grab representative set of previous redshifts
Nref = 1000
redshifts_ref = survey.data['redshifts'][-Nref:]
# plotting histogrammed representation
plt.figure(figsize=(14, 6))
plt.hist(redshifts, bins=zbins, histtype='stepfilled', lw=5,
color='blue', alpha=0.7, density=True, edgecolor='black',
label='Underlying')
plt.hist(redshifts_ref, bins=zbins, histtype='step', lw=5,
color='red', alpha=0.7, density=True, label='Reference')
plt.xlabel('Redshift')
plt.xlim([zbins[0], zbins[-1]])
plt.yticks([])
plt.ylabel(r'$P(z|\mathbf{g})$')
plt.legend(fontsize='small')
plt.tight_layout()
alpha = np.ones(Nbins)
counts_ref, _ = np.histogram(redshifts_ref, bins=zbins)
# define our prior
def logprior(x, alpha=None, counts_ref=None):
if alpha is None:
alpha = np.ones_like(x)
if counts_ref is None:
counts_ref = np.zeros_like(x)
if np.any(x < 0.):
return -np.inf
return scipy.stats.dirichlet.logpdf(x, alpha + counts_ref)
"""
Explanation: The lack of covariances under the Poisson model makes sense given that each bin is treated independently. The slight anti-correlation under the Multinomial model is also what we'd expect given the conservation of total counts (i.e. removing an object from one bin requires putting it in another bin). The PDF draws appear to capture the correct covariance structure overall, but severely underestimate the uncertainties and remain biased.
Population Modeling
As derived above, the "right" thing to do is to sample from the posterior distribution:
$$
\ln P(\boldsymbol{\rho}|\lbrace \mathbf{p}_g \rbrace) = \ln P(\lbrace \mathbf{p}_g \rbrace | \boldsymbol{\rho}) + \ln P(\boldsymbol{\rho}) - \ln P(\lbrace \mathbf{p}_g \rbrace) = \sum_g \ln\left( \mathbf{p}_g \cdot \boldsymbol{\rho} \right) + \ln P(\boldsymbol{\rho}) - \ln P(\lbrace \mathbf{p}_g \rbrace)
$$
where $P(\lbrace \mathbf{p}_g \rbrace)$ is a constant that can be ignored and $\cdot$ is the dot product.
Hyper-Prior
We will take $P(\boldsymbol{\rho})$ to be a Dirichlet distribution:
$$ \boldsymbol{\rho} \sim {\rm Dir}\left(\mathbf{m} + \boldsymbol{\alpha}\right) $$
where $\boldsymbol{\alpha} = \mathbf{1}$ is a set of concentration parameters (with 1 corresponding to a uniform prior) and $\mathbf{m}$ is a set of counts we have previously observed. We will come back to this particular choice of prior later when we deal with hierarchical models.
End of explanation
"""
from frankenz import samplers
# initialize sampler
sampler = samplers.population_sampler(zpdf_norm)
# run MH-in-Gibbs MCMC
Nburn = 250
sampler.run_mcmc(Nsamples + Nburn, logprior_nz=logprior, prior_args=[alpha, counts_ref])
# grab samples
pdf4_samples, pdf4_lnps = sampler.results
pdf4_samples = pdf4_samples[-500:] * Nobs # truncate and rescale
# plotting
plt.figure(figsize=(30, 12))
plt.subplot(2, 3, 1)
zplot_bin(pdf1_samples, label='Poisson', color='blue')
plt.subplot(2, 3, 2)
zplot_bin(pdf2_samples, label='Multinomial', color='red')
plt.subplot(2, 3, 3)
zplot_bin(pdf3_samples, label='PDF Draws', color='darkviolet')
plt.subplot(2, 3, 4)
zplot_bin(pdf4_samples, label='Population', color='darkgoldenrod')
# plotting binned covariance
plt.figure(figsize=(30, 18))
plt.subplot(2, 3, 1)
cov_draws(pdf1_samples,
xlim=(2.8, 3.8), ylim=(2.8, 4.0),
color='blue', label='Poisson')
plt.subplot(2, 3, 2)
cov_draws(pdf2_samples,
xlim=(2.8, 3.8), ylim=(2.8, 4.0),
color='red', label='Multinomial')
plt.subplot(2, 3, 3)
cov_draws(pdf3_samples,
xlim=(2.8, 3.8), ylim=(2.8, 4.0),
color='darkviolet', label='PDF Draws')
plt.subplot(2, 3, 4)
cov_draws(pdf4_samples,
xlim=(2.8, 3.8), ylim=(2.8, 4.0),
color='darkgoldenrod', label='Posterior')
"""
Explanation: Sampling
We now turn to the challenge of generating samples from our distribution. While there are several ways to theoretically do this, we will focus on Markov Chain Monte Carlo methods. Due to the constraint that $\boldsymbol{\rho}$ must sum to 1, we are sampling from this distribution on the $(N_h - 1)$-dimensional simplex since the amplitude of the final bin is always determined by the remaining bins. This creates an additional challenge, since changing one bin will always lead to changes in the other bins.
While we could attempt to sample this distribution directly using Metropolis-Hastings (MH) updates, given the number of parameters involved in specifying our population distribution $\boldsymbol{\rho}$ it is likely better to use Gibbs sampling to iterate over conditionals. To satisfy the summation constraint, we opt to use an approach where we update bins $(i, j)$ pairwise so that $i^\prime + j^\prime = (i + \Delta i) + (j + \Delta j) = i + j \Rightarrow \Delta j = -\Delta i = z$, where $z$ is now our step-size over the bins. We generate proposals for each random pair of bins using MH proposals where the scale is determined adaptively by estimating the gradient for $\partial/\partial z$ at each iteration to aim for optimal acceptance fractions.
The likelihood and Gibbs sampler are implemented natively in frankenz. If we want to specify a particular prior, it can also be passed to the sampler as shown below.
End of explanation
"""
|
ewulczyn/talk_page_abuse
|
misc/kaggle/src/n-grams.ipynb
|
apache-2.0
|
import pandas as pd                 # imports assumed by this cell
import sklearn.cross_validation

data_filename = '../data/train.csv'
data_df = pd.read_csv(data_filename)
corpus = data_df['Comment']
labels = data_df['Insult']
train_corpus, test_corpus, train_labels, test_labels = \
    sklearn.cross_validation.train_test_split(corpus, labels, test_size=0.33)
"""
Explanation: Load and Split Kaggle Data
End of explanation
"""
# Additional imports assumed by this cell; cv, auc, and get_scores are
# presumably helper functions defined elsewhere in this repository.
from sklearn.pipeline import Pipeline
import sklearn.feature_extraction.text
import sklearn.linear_model

pipeline = Pipeline([
    ('vect', sklearn.feature_extraction.text.CountVectorizer()),
    ('tfidf', sklearn.feature_extraction.text.TfidfTransformer(sublinear_tf=True, norm='l2')),
    ('clf', sklearn.linear_model.LogisticRegression()),
])
param_grid = {
#'vect__max_df': (0.5, 0.75, 1.0),
#'vect__max_features': (None, 5000, 10000, 50000),
'vect__ngram_range': ((1, 1), (2, 2), (1,4)), # unigrams or bigrams
#'vect_lowercase': (True, False),
'vect__analyzer' : ('char',), #('word', 'char')
#'tfidf__use_idf': (True, False),
#'tfidf__norm': ('l1', 'l2'),
#'clf__penalty': ('l2', 'elasticnet'),
#'clf__n_iter': (10, 50, 80),
'clf__C': [0.1, 1, 5, 50, 100, 1000, 5000],
}
model = cv (train_corpus, train_labels.values, 5, pipeline, param_grid, 'roc_auc', False, n_jobs=8)
# Hold out set Perf
auc(test_labels.values,get_scores(model, test_corpus))
"""
Explanation: Build baseline text classification model in Sklearn
End of explanation
"""
from sklearn.externals import joblib  # joblib import assumed (older scikit-learn layout)
joblib.dump(model, '../models/kaggle_ngram.pkl')
"""
Explanation: This is about as good as the best Kagglers report they did.
End of explanation
"""
d_wiki = pd.read_csv('../../wikipedia/data/100k_user_talk_comments.tsv', sep = '\t').dropna()[:10000]
d_wiki['prob'] = model.predict_proba(d_wiki['diff'])[:,1]
d_wiki.sort('prob', ascending=False, inplace = True)
_ = plt.hist(d_wiki['prob'].values)
plt.xlabel('Insult Prob')
plt.title('Wikipedia Score Distribution')
_ = plt.hist(model.predict_proba(train_corpus)[:, 1])
plt.xlabel('Insult Prob')
plt.title('Kaggle Score Distribution')
"""
Explanation: Score Random Wikipedia User Talk Comments
Let's take a random sample of user talk comments, apply the insult model trained on the Kaggle data, and see what we find.
End of explanation
"""
"%0.2f%% of random wiki comments are predicted to be insults" % ((d_wiki['prob'] > 0.5).mean() * 100)
"""
Explanation: The distribution over insult probabilities in the two datasets is radically different; insults in the Wikipedia dataset are much rarer.
End of explanation
"""
for i in range(5):
print(d_wiki.iloc[i]['prob'], d_wiki.iloc[i]['diff'], '\n')
for i in range(50, 55):
print(d_wiki.iloc[i]['prob'], d_wiki.iloc[i]['diff'], '\n')
for i in range(100, 105):
print(d_wiki.iloc[i]['prob'], d_wiki.iloc[i]['diff'], '\n')
"""
Explanation: Check High Scoring Comments
End of explanation
"""
d_wiki_blocked = pd.read_csv('../../wikipedia/data/blocked_users_user_talk_page_comments.tsv', sep = '\t').dropna()[:10000]
d_wiki_blocked['prob'] = model.predict_proba(d_wiki_blocked['diff'])[:,1]
d_wiki_blocked.sort('prob', ascending=False, inplace = True)
"%0.2f%% of random wiki comments are predicted to be insults" % ((d_wiki_blocked['prob'] > 0.5).mean() * 100)
"""
Explanation: Score Blocked Users' User Talk Comments
End of explanation
"""
for i in range(5):
print(d_wiki_blocked.iloc[i]['prob'], d_wiki_blocked.iloc[i]['diff'], '\n')
for i in range(50, 55):
    print(d_wiki_blocked.iloc[i]['prob'], d_wiki_blocked.iloc[i]['diff'], '\n')
for i in range(100, 105):
    print(d_wiki_blocked.iloc[i]['prob'], d_wiki_blocked.iloc[i]['diff'], '\n')
"""
Explanation: Check High Scoring Comments
End of explanation
"""
isinstance(y_train, np.ndarray)
y_train = np.array([y_train, 1- y_train]).T
y_test = np.array([y_test, 1- y_test]).T
# Parameters
learning_rate = 0.001
training_epochs = 60
batch_size = 200
display_step = 5
# Network Parameters
n_hidden_1 = 100 # 1st layer num features
n_hidden_2 = 100 # 2nd layer num features
n_hidden_3 = 100 # 2nd layer num features
n_input = X_train.shape[1]
n_classes = 2
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def LG(_X, _weights, _biases):
return tf.matmul(_X, _weights['out']) + _biases['out']
# Store layers weight & bias
weights = {
'out': tf.Variable(tf.random_normal([n_input, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = LG(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
sess = tf.Session()
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
m = 0
batches = batch_iter(X_train.toarray(), y_train, batch_size)
# Loop over all batches
for batch_xs, batch_ys in batches:
batch_m = len(batch_ys)
m += batch_m
# Fit training using batch data
sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys}) * batch_m
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost/m))
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Accuracy:", accuracy.eval({x: X_train.toarray(), y: y_train}, session=sess))
print ("Accuracy:", accuracy.eval({x: X_test.toarray(), y: y_test}, session=sess))
print ("Optimization Finished!")
# Test model
"""
Explanation: Scratch: do not keep reading :)
TensorFlow MLP (note that the model actually trained above is plain logistic regression; the hidden-layer sizes are defined but unused)
End of explanation
"""
|
lodrantl/github_analysis
|
github_analysis/analysis.ipynb
|
apache-2.0
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
pd.options.display.max_rows = 20
"""
Explanation: GitHub Analysis
In this project we analyze the most popular open repositories on the popular site GitHub. The data were collected from https://api.github.com; whatever was not available through that REST API was scraped from https://github.com.
Collected data
owner
name
programming language
number of stars
number of commits
number of branches
number of forks
number of releases
number of watchers
number of contributors
license
creation date
date of the last commit
We will analyze which programming languages are the most popular, how "big" these repositories are, ... and how these attributes relate to one another.
End of explanation
"""
repos = pd.read_csv('../data/repositories.csv', parse_dates=[11,12,13])
repos
"""
Explanation: Let's load the collected data and look at an example.
End of explanation
"""
repos.describe()
"""
Explanation: Pandas gives us a quick overview of basic summary statistics for each column.
End of explanation
"""
repos.groupby(repos.created_at.dt.year).size().plot(kind='bar')
"""
Explanation: Now let's look at the year in which these repositories were created.
End of explanation
"""
top_languages = repos.groupby("language").filter(lambda x: len(x) > 10).groupby("language")
"""
Explanation: We can see that most projects were created in 2013 and 2014; before that GitHub was not yet as popular, and newer projects have not had time to become famous yet.
We will often work with repositories grouped by language; to make the analysis easier, let's keep only the languages with more than 10 repositories.
End of explanation
"""
by_lang_pie = top_languages.size().sort_values().plot(kind='pie',figsize=(7, 7), fontsize=13)
by_lang_pie.set_ylabel("") #Removes the none
"""
Explanation: Now let's see which programming languages appear most often. We only show those with more than 10 repositories, since otherwise the chart is not readable. JavaScript dominates, followed by Java, Objective-C, and Python.
End of explanation
"""
df = repos.copy()
z = np.polyfit(x=df.loc[:, 'stargazers_count'], y=df['forks_count'], deg=1)
p = np.poly1d(z)
df['trendline'] = p(df.loc[:, 'stargazers_count'])
ax = df.plot.scatter(x='stargazers_count', y='forks_count', color='green')
df.set_index('stargazers_count', inplace=True)
df.trendline.sort_index(ascending=False).plot(ax=ax, color='red')
plt.gca().invert_xaxis()
ax.set_ylim(0, 6000)
ax.set_xlim(0, 30000)
"""
Explanation: Let's also draw a chart showing how the number of forks depends on the number of stars. We can try to model this relationship as a linear function, but we quickly see large deviations. Still, the increasing trendline suggests that repositories with more stars also tend to have more forks.
End of explanation
"""
top_languages[["commit_count"]].mean().sort_values("commit_count").plot(kind='bar')
"""
Explanation: Let's also look at how the number of commits relates to the programming language used. Repositories in VimL, Shell, and Objective-C changed the most, while those in C++, Ruby, and CoffeeScript changed the least.
End of explanation
"""
repos.groupby("license").size().sort_values(ascending=False)
"""
Explanation: A listing of the licenses used in these repositories, sorted by frequency. None means no license is published, Other means the project uses its own custom license.
End of explanation
"""
pushed_pie = repos.groupby(repos.pushed_at >= "2016-10-03").size().plot(kind='pie',figsize=(7, 7), autopct='%.2f%%')
pushed_pie.set_ylabel("") #Removes the none
"""
Explanation: Finally, let's find out how many of these repositories are freshly updated (have changes within the last month). True means the repository has recent changes, False means it does not.
End of explanation
"""
|
vangj/py-bbn
|
jupyter/libpgm.ipynb
|
apache-2.0
|
json_data = {
"V": ["Letter", "Grade", "Intelligence", "SAT", "Difficulty"],
"E": [["Difficulty", "Grade"],
["Intelligence", "Grade"],
["Intelligence", "SAT"],
["Grade", "Letter"]],
"Vdata": {
"Letter": {
"ord": 4,
"numoutcomes": 2,
"vals": ["weak", "strong"],
"parents": ["Grade"],
"children": None,
"cprob": {
"['A']": [.1, .9],
"['B']": [.4, .6],
"['C']": [.99, .01]
}
},
"SAT": {
"ord": 3,
"numoutcomes": 2,
"vals": ["lowscore", "highscore"],
"parents": ["Intelligence"],
"children": None,
"cprob": {
"['low']": [.95, .05],
"['high']": [.2, .8]
}
},
"Grade": {
"ord": 2,
"numoutcomes": 3,
"vals": ["A", "B", "C"],
"parents": ["Difficulty", "Intelligence"],
"children": ["Letter"],
"cprob": {
"['easy', 'low']": [.3, .4, .3],
"['easy', 'high']": [.9, .08, .02],
"['hard', 'low']": [.05, .25, .7],
"['hard', 'high']": [.5, .3, .2]
}
},
"Intelligence": {
"ord": 1,
"numoutcomes": 2,
"vals": ["low", "high"],
"parents": None,
"children": ["SAT", "Grade"],
"cprob": [.7, .3]
},
"Difficulty": {
"ord": 0,
"numoutcomes": 2,
"vals": ["easy", "hard"],
"parents": None,
"children": ["Grade"],
"cprob": [.6, .4]
}
}
}
"""
Explanation: Intro
This notebook demonstrates how to convert a libpgm discrete Bayesian Belief Network (BBN) into a py-bbn BBN. The JSON data specified here is the example taken from libpgm.
End of explanation
"""
from pybbn.graph.dag import Bbn
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.pptc.inferencecontroller import InferenceController
from pybbn.graph.factory import Factory
bbn = Factory.from_libpgm_discrete_dictionary(json_data)
"""
Explanation: Conversion
~~Here we kludge how to build a libpgm discrete network. Note that libpgm requires node data and a skeleton. Since I could not find a factory method to read in from a dictionary or JSON string (they only show how to read the JSON data from a file), I took a look at the code and manually constructed a discrete BBN in libpgm. If the libpgm API changes, then this all might break too.~~
Note that we do NOT support dependency on libpgm anymore since it is not Python 3.x compatible. You will have to get either the JSON string value or the dictionary representation to work with py-bbn. The culprit with libpgm is this line here. After you have the JSON or dictionary specifying a libpgm BBN, you can use the Factory to create a py-bbn BBN and perform exact inference as follows.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import warnings
with warnings.catch_warnings():
warnings.simplefilter('ignore')
nx_graph, labels = bbn.to_nx_graph()
pos = nx.nx_agraph.graphviz_layout(nx_graph, prog='neato')
plt.figure(figsize=(10, 8))
plt.subplot(121)
nx.draw(
nx_graph,
pos=pos,
with_labels=True,
labels=labels,
arrowsize=15,
edge_color='k',
width=2.0,
    style='dashed',
font_size=13,
font_weight='normal',
node_size=100)
plt.title('libpgm BBN DAG')
"""
Explanation: You may also visualize the directed acyclic graph (DAG) of the BBN through networkx.
End of explanation
"""
join_tree = InferenceController.apply(bbn)
"""
Explanation: Inference
Now, create a join tree.
End of explanation
"""
import pandas as pd
def potential_to_df(p):
data = []
for pe in p.entries:
try:
v = pe.entries.values()[0]
except:
v = list(pe.entries.values())[0]
p = pe.value
t = (v, p)
data.append(t)
return pd.DataFrame(data, columns=['val', 'p'])
def potentials_to_dfs(join_tree):
data = []
for node in join_tree.get_bbn_nodes():
name = node.variable.name
df = potential_to_df(join_tree.get_bbn_potential(node))
t = (name, df)
data.append(t)
return data
marginal_dfs = potentials_to_dfs(join_tree)
# insert an observation evidence for when SAT=highscore
ev = EvidenceBuilder() \
.with_node(join_tree.get_bbn_node_by_name('SAT')) \
.with_evidence('highscore', 1.0) \
.build()
join_tree.unobserve_all()
join_tree.set_observation(ev)
sat_high_dfs = potentials_to_dfs(join_tree)
# insert an observation evidence for when SAT=lowscore
ev = EvidenceBuilder() \
.with_node(join_tree.get_bbn_node_by_name('SAT')) \
.with_evidence('lowscore', 1.0) \
.build()
join_tree.unobserve_all()
join_tree.set_observation(ev)
sat_low_dfs = potentials_to_dfs(join_tree)
# merge all dataframes so we can visualize then side-by-side
all_dfs = []
for i in range(len(marginal_dfs)):
all_dfs.append(marginal_dfs[i])
all_dfs.append(sat_high_dfs[i])
all_dfs.append(sat_low_dfs[i])
"""
Explanation: Here, we visualize the marginal probabilities without observations and with different observations.
End of explanation
"""
import numpy as np
fig, axes = plt.subplots(len(marginal_dfs), 3, figsize=(15, 20), sharey=True)
for i, ax in enumerate(np.ravel(axes)):
all_dfs[i][1].plot.bar(x='val', y='p', legend=False, ax=ax)
ax.set_title(all_dfs[i][0])
ax.set_ylim([0.0, 1.0])
ax.set_xlabel('')
plt.tight_layout()
"""
Explanation: Plot the marginal probabilities.
The first column shows the marginal probabilities without any observations.
The second column shows the marginal probabilities with SAT=highscore.
The third column shows the marginal probabilities with SAT=lowscore.
End of explanation
"""
|
TwistedHardware/mltutorial
|
notebooks/IPython-Tutorial/4 - Numpy Basics.ipynb
|
gpl-2.0
|
import numpy as np
"""
Explanation: Tutorial Brief
numpy is a powerful set of tools for performing mathematical operations on lists of numbers. It works faster than normal Python list operations and can manipulate high-dimensional arrays too.
Finding Help:
http://wiki.scipy.org/Tentative_NumPy_Tutorial
http://docs.scipy.org/doc/numpy/reference/
SciPy (pronounced “Sigh Pie”) is a Python-based ecosystem of open-source software for mathematics, science, and engineering.
http://www.scipy.org/
So NumPy is part of a bigger ecosystem of libraries that builds on the optimized performance of the NumPy NDArray.
It contains these core packages:
<table>
<tr>
<td style="background:Lavender;"><img src="http://www.scipy.org/_static/images/numpylogo_med.png" style="width:50px;height:50px;" /></td>
<td style="background:Lavender;"><h4>NumPy</h4> Base N-dimensional array package </td>
<td><img src="http://www.scipy.org/_static/images/scipy_med.png" style="width:50px;height:50px;" /></td>
<td><h4>SciPy</h4> Fundamental library for scientific computing </td>
<td><img src="http://www.scipy.org/_static/images/matplotlib_med.png" style="width:50px;height:50px;" /></td>
<td><h4>Matplotlib</h4> Comprehensive 2D Plotting </td>
</tr>
<tr>
<td><img src="http://www.scipy.org/_static/images/ipython.png" style="width:50px;height:50px;" /></td>
<td><h4>IPython</h4> Enhanced Interactive Console </td>
<td><img src="http://www.scipy.org/_static/images/sympy_logo.png" style="width:50px;height:50px;" /></td>
<td><h4>SymPy</h4> Symbolic mathematics </td>
<td><img src="http://www.scipy.org/_static/images/pandas_badge2.jpg" style="width:50px;height:50px;" /></td>
<td><h4>Pandas</h4> Data structures & analysis </td>
</tr>
</table>
Importing the library
Import the numpy library as np.
This makes the code shorter to write and is almost a standard in scientific work.
End of explanation
"""
np.arange(10)
np.arange(1,10)
np.arange(1,10, 0.5)
np.arange(1,10, 3)
np.arange(1,10, 2, dtype=np.float64)
"""
Explanation: Working with ndarray
We will generate an ndarray with the np.arange method.
np.arange([start,] stop[, step,], dtype=None)
End of explanation
"""
ds = np.arange(1,10,2)
ds.ndim
ds.shape
ds.size
ds.dtype
ds.itemsize
x=ds.data
list(x)
ds
# Memory Usage
ds.size * ds.itemsize
"""
Explanation: Examining ndarray
End of explanation
"""
%%capture timeit_results
# Regular Python
%timeit python_list_1 = range(1,1000)
python_list_1 = range(1,1000)
python_list_2 = range(1,1000)
#Numpy
%timeit numpy_list_1 = np.arange(1,1000)
numpy_list_1 = np.arange(1,1000)
numpy_list_2 = np.arange(1,1000)
print timeit_results
# Function to calculate time in seconds
def return_time(timeit_result):
temp_time = float(timeit_result.split(" ")[5])
temp_unit = timeit_result.split(" ")[6]
if temp_unit == "ms":
temp_time = temp_time * 1e-3
elif temp_unit == "us":
temp_time = temp_time * 1e-6
elif temp_unit == "ns":
temp_time = temp_time * 1e-9
return temp_time
python_time = return_time(timeit_results.stdout.split("\n")[0])
numpy_time = return_time(timeit_results.stdout.split("\n")[1])
print "Python/NumPy: %.1f" % (python_time/numpy_time)
"""
Explanation: Why use numpy?
We will compare the time it takes to create two lists and do some basic operations on them.
Generate a list
End of explanation
"""
%%capture timeit_python
%%timeit
# Regular Python
[(x + y) for x, y in zip(python_list_1, python_list_2)]
[(x - y) for x, y in zip(python_list_1, python_list_2)]
[(x * y) for x, y in zip(python_list_1, python_list_2)]
[(x / y) for x, y in zip(python_list_1, python_list_2)];
print timeit_python
%%capture timeit_numpy
%%timeit
#Numpy
numpy_list_1 + numpy_list_2
numpy_list_1 - numpy_list_2
numpy_list_1 * numpy_list_2
numpy_list_1 / numpy_list_2;
print timeit_numpy
python_time = return_time(timeit_python.stdout)
numpy_time = return_time(timeit_numpy.stdout)
print "Python/NumPy: %.1f" % (python_time/numpy_time)
"""
Explanation: Basic Operation
End of explanation
"""
np.array([1,2,3,4,5])
"""
Explanation: Most Common Functions
List Creation
array(object, dtype=None, copy=True, order=None, subok=False, ndmin=0)
```
Parameters
object : array_like
An array, any object exposing the array interface, an
object whose array method returns an array, or any
(nested) sequence.
dtype : data-type, optional
The desired data-type for the array. If not given, then
the type will be determined as the minimum type required
to hold the objects in the sequence. This argument can only
be used to 'upcast' the array. For downcasting, use the
.astype(t) method.
copy : bool, optional
If true (default), then the object is copied. Otherwise, a copy
will only be made if array returns a copy, if obj is a
nested sequence, or if a copy is needed to satisfy any of the other
requirements (dtype, order, etc.).
order : {'C', 'F', 'A'}, optional
Specify the order of the array. If order is 'C' (default), then the
array will be in C-contiguous order (last-index varies the
fastest). If order is 'F', then the returned array
will be in Fortran-contiguous order (first-index varies the
fastest). If order is 'A', then the returned array may
be in any order (either C-, Fortran-contiguous, or even
discontiguous).
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise
the returned array will be forced to be a base-class array (default).
ndmin : int, optional
Specifies the minimum number of dimensions that the resulting
array should have. Ones will be pre-pended to the shape as
needed to meet this requirement.
```
End of explanation
"""
np.array([[1,2],[3,4],[5,6]])
"""
Explanation: Multi-Dimensional Array
End of explanation
"""
np.zeros((3,4))
np.zeros((3,4), dtype=np.int64)
np.ones((3,4))
"""
Explanation: zeros(shape, dtype=float, order='C') and ones(shape, dtype=float, order='C')
```
Parameters
shape : int or sequence of ints
Shape of the new array, e.g., (2, 3) or 2.
dtype : data-type, optional
The desired data-type for the array, e.g., numpy.int8. Default is
numpy.float64.
order : {'C', 'F'}, optional
Whether to store multidimensional data in C- or Fortran-contiguous
(row- or column-wise) order in memory.
```
End of explanation
"""
np.linspace(1,5)
np.linspace(0,2,num=4)
np.linspace(0,2,num=4,endpoint=False)
"""
Explanation: np.linspace(start, stop, num=50, endpoint=True, retstep=False)
```
Parameters
start : scalar
The starting value of the sequence.
stop : scalar
The end value of the sequence, unless endpoint is set to False.
In that case, the sequence consists of all but the last of num + 1
evenly spaced samples, so that stop is excluded. Note that the step
size changes when endpoint is False.
num : int, optional
Number of samples to generate. Default is 50.
endpoint : bool, optional
If True, stop is the last sample. Otherwise, it is not included.
Default is True.
retstep : bool, optional
If True, return (samples, step), where step is the spacing
between samples.
```
End of explanation
"""
np.random.random((2,3))
np.random.random_sample((2,3))
"""
Explanation: random_sample(size=None)
```
Parameters
size : int or tuple of ints, optional
Defines the shape of the returned array of random floats. If None
(the default), returns a single float.
```
End of explanation
"""
data_set = np.random.random((2,3))
data_set
"""
Explanation: Statistical Analysis
End of explanation
"""
np.max(data_set)
np.max(data_set, axis=0)
np.max(data_set, axis=1)
"""
Explanation: np.max(a, axis=None, out=None, keepdims=False)
```
Parameters
a : array_like
Input data.
axis : int, optional
Axis along which to operate. By default, flattened input is used.
out : ndarray, optional
Alternative output array in which to place the result. Must
be of the same shape and buffer length as the expected output.
See doc.ufuncs (Section "Output arguments") for more details.
keepdims : bool, optional
If this is set to True, the axes which are reduced are left
in the result as dimensions with size one. With this option,
the result will broadcast correctly against the original arr.
```
End of explanation
"""
np.min(data_set)
"""
Explanation: np.min(a, axis=None, out=None, keepdims=False)
End of explanation
"""
np.mean(data_set)
"""
Explanation: np.mean(a, axis=None, dtype=None, out=None, keepdims=False)
End of explanation
"""
np.median(data_set)
"""
Explanation: np.median(a, axis=None, out=None, overwrite_input=False)
End of explanation
"""
np.std(data_set)
"""
Explanation: np.std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False)
End of explanation
"""
np.sum(data_set)
"""
Explanation: np.sum(a, axis=None, dtype=None, out=None, keepdims=False)
End of explanation
"""
np.reshape(data_set, (3,2))
np.reshape(data_set, (6,1))
np.reshape(data_set, (6))
"""
Explanation: Reshaping
np.reshape(a, newshape, order='C')
End of explanation
"""
np.ravel(data_set)
"""
Explanation: np.ravel(a, order='C')
End of explanation
"""
data_set = np.random.random((5,10))
data_set
data_set[1]
data_set[1][0]
data_set[1,0]
"""
Explanation: Slicing
End of explanation
"""
data_set[2:4]
data_set[2:4,0]
data_set[2:4,0:2]
data_set[:,0]
"""
Explanation: Slicing a range
End of explanation
"""
data_set[2:4:1]
data_set[::]
data_set[::2]
data_set[2:4]
data_set[2:4,::2]
"""
Explanation: Stepping
End of explanation
"""
import numpy as np
# Matrix A
A = np.array([[1,2],[3,4]])
# Matrix B
B = np.array([[3,4],[5,6]])
"""
Explanation: Matrix Operations
End of explanation
"""
A+B
"""
Explanation: Addition
End of explanation
"""
A-B
"""
Explanation: Subtraction
End of explanation
"""
A*B
"""
Explanation: Multiplication (Element by Element)
End of explanation
"""
A.dot(B)
"""
Explanation: Multiplication (Matrix Multiplication)
End of explanation
"""
A/B
"""
Explanation: Division
End of explanation
"""
np.square(A)
"""
Explanation: Square
End of explanation
"""
np.power(A,3) #cube of matrix
"""
Explanation: Power
End of explanation
"""
A.transpose()
"""
Explanation: Transpose
End of explanation
"""
np.linalg.inv(A)
"""
Explanation: Inverse
End of explanation
"""
|
jph00/part2
|
seq2seq-translation.ipynb
|
apache-2.0
|
import unicodedata, string, re, random, time, math, torch, torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
import keras, numpy as np
from keras.preprocessing import sequence
"""
Explanation: Requirements
End of explanation
"""
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
"""
Explanation: Loading data files
The data for this project is a set of many thousands of English to French translation pairs.
This question on Open Data Stack Exchange pointed me to the open translation site http://tatoeba.org/ which has downloads available at http://tatoeba.org/eng/downloads - and better yet, someone did the extra work of splitting language pairs into individual text files here: http://www.manythings.org/anki/
The English to French pairs are too big to include in the repo, so download to data/eng-fra.txt before continuing. The file is a tab separated list of translation pairs:
I am cold. Je suis froid.
We'll need a unique index per word to use as the inputs and targets of the networks later. To keep track of all this we will use a helper class called Lang which has word → index (word2index) and index → word (index2word) dictionaries, as well as a count of each word (word2count) to use to later replace rare words.
End of explanation
"""
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
"""
Explanation: The files are all in Unicode; to simplify, we will turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.
End of explanation
"""
def readLangs(lang1, lang2, reverse=False):
print("Reading lines...")
# Read the file and split into lines
lines = open('data/%s-%s.txt' % (lang1, lang2)).read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
"""
Explanation: To read the data file we will split the file into lines, and then split lines into pairs. The files are all English → Other Language, so if we want to translate from Other Language → English I added the reverse flag to reverse the pairs.
End of explanation
"""
MAX_LENGTH = 10
eng_prefixes = (
"i am ", "i m ",
"he is", "he s ",
"she is", "she s",
"you are", "you re ",
"we are", "we re ",
"they are", "they re "
)
def filterPair(p):
return len(p[0].split(' ')) < MAX_LENGTH and \
len(p[1].split(' ')) < MAX_LENGTH and \
p[1].startswith(eng_prefixes)
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
"""
Explanation: Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (that includes ending punctuation) and we're filtering to sentences that translate to the form "I am" or "He is" etc. (accounting for apostrophes replaced earlier).
End of explanation
"""
def prepareData(lang1, lang2, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepareData('eng', 'fra', True)
print(random.choice(pairs))
def indexesFromSentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]+[EOS_token]
def variableFromSentence(lang, sentence):
indexes = indexesFromSentence(lang, sentence)
return Variable(torch.LongTensor(indexes).unsqueeze(0))
def variablesFromPair(pair):
input_variable = variableFromSentence(input_lang, pair[0])
target_variable = variableFromSentence(output_lang, pair[1])
return (input_variable, target_variable)
def index_and_pad(lang, dat):
return sequence.pad_sequences([indexesFromSentence(lang, s)
for s in dat], padding='post').astype(np.int64)
fra, eng = list(zip(*pairs))
fra = index_and_pad(input_lang, fra)
eng = index_and_pad(output_lang, eng)
def get_batch(x, y, batch_size=16):
idxs = np.random.permutation(len(x))[:batch_size]
return x[idxs], y[idxs]
"""
Explanation: The full process for preparing the data is:
Read text file and split into lines, split lines into pairs
Normalize text, filter by length and content
Make word lists from sentences in pairs
End of explanation
"""
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, num_layers=n_layers)
def forward(self, input, hidden):
output, hidden = self.gru(self.embedding(input), hidden)
return output, hidden
# TODO: other inits
def initHidden(self, batch_size):
return Variable(torch.zeros(1, batch_size, self.hidden_size))
"""
Explanation: The Encoder
The encoder of a seq2seq network is an RNN that outputs some value for every word from the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses the hidden state for the next input word.
End of explanation
"""
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1):
super(DecoderRNN, self).__init__()
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, num_layers=n_layers)
# TODO use transpose of embedding
self.out = nn.Linear(hidden_size, output_size)
self.sm = nn.LogSoftmax()
def forward(self, input, hidden):
emb = self.embedding(input).unsqueeze(1)
# NB: Removed relu
res, hidden = self.gru(emb, hidden)
output = self.sm(self.out(res[:,0]))
return output, hidden
"""
Explanation: Simple Decoder
In the simplest seq2seq decoder we use only last output of the encoder. This last output is sometimes called the context vector as it encodes context from the entire sequence. This context vector is used as the initial hidden state of the decoder.
At every step of decoding, the decoder is given an input token and hidden state. The initial input token is the start-of-string <SOS> token, and the first hidden state is the context vector (the encoder's last hidden state).
End of explanation
"""
class AttnDecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout_p = dropout_p
self.max_length = max_length
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn = nn.Linear(self.hidden_size * 2, self.max_length)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hidden, encoder_output, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1)))
attn_applied = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
output = self.attn_combine(output).unsqueeze(0)
for i in range(self.n_layers):
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = F.log_softmax(self.out(output[0]))
return output, hidden, attn_weights
def initHidden(self):
return Variable(torch.zeros(1, 1, self.hidden_size))
"""
Explanation: Attention Decoder
If only the context vector is passed between the encoder and decoder, that single vector carries the burden of encoding the entire sentence.
Attention allows the decoder network to "focus" on a different part of the encoder's outputs for every step of the decoder's own outputs. First we calculate a set of attention weights. These will be multiplied by the encoder output vectors to create a weighted combination. The result (called attn_applied in the code) should contain information about that specific part of the input sequence, and thus help the decoder choose the right output words.
Calculating the attention weights is done with another feed-forward layer attn, using the decoder's input and hidden state as inputs. Because there are sentences of all sizes in the training data, to actually create and train this layer we have to choose a maximum sentence length (input length, for encoder outputs) that it can apply to. Sentences of the maximum length will use all the attention weights, while shorter sentences will only use the first few.
End of explanation
"""
def train(input_variable, target_variable, encoder, decoder,
encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
batch_size, input_length = input_variable.size()
target_length = target_variable.size()[1]
encoder_hidden = encoder.initHidden(batch_size).cuda()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
loss = 0
encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)
decoder_input = Variable(torch.LongTensor([SOS_token]*batch_size)).cuda()
decoder_hidden = encoder_hidden
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
#, encoder_output, encoder_outputs)
targ = target_variable[:, di]
# print(decoder_output.size(), targ.size(), target_variable.size())
loss += criterion(decoder_output, targ)
decoder_input = targ
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0] / target_length
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
def trainEpochs(encoder, decoder, n_epochs, print_every=1000, plot_every=100,
learning_rate=0.01):
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
encoder_optimizer = optim.RMSprop(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.RMSprop(decoder.parameters(), lr=learning_rate)
criterion = nn.NLLLoss().cuda()
for epoch in range(1, n_epochs + 1):
training_batch = get_batch(fra, eng)
input_variable = Variable(torch.LongTensor(training_batch[0])).cuda()
target_variable = Variable(torch.LongTensor(training_batch[1])).cuda()
loss = train(input_variable, target_variable, encoder, decoder, encoder_optimizer,
decoder_optimizer, criterion)
print_loss_total += loss
plot_loss_total += loss
if epoch % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print('%s (%d %d%%) %.4f' % (timeSince(start, epoch / n_epochs), epoch,
epoch / n_epochs * 100, print_loss_avg))
if epoch % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
showPlot(plot_losses)
"""
Explanation: Note: There are other forms of attention that work around the length limitation by using a relative position approach. Read about "local attention" in Effective Approaches to Attention-based Neural Machine Translation.
Training
To train we run the input sentence through the encoder, and keep track of every output and the latest hidden state. Then the decoder is given the <SOS> token as its first input, and the last hidden state of the decoder as its first hidden state.
"Teacher forcing" is the concept of using the real target outputs as each next input, instead of using the decoder's guess as the next input. Using teacher forcing causes it to converge faster but when the trained network is exploited, it may exhibit instability.
End of explanation
"""
# TODO: Make this change during training
teacher_forcing_ratio = 0.5
def train(input_variable, target_variable, encoder, decoder, encoder_optimizer,
decoder_optimizer, criterion, max_length=MAX_LENGTH):
encoder_hidden = encoder.initHidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_variable.size()[0]
target_length = target_variable.size()[0]
encoder_outputs = Variable(torch.zeros(max_length, encoder.hidden_size))
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_variable[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0][0]
decoder_input = Variable(torch.LongTensor([[SOS_token]]))
decoder_hidden = encoder_hidden
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
if use_teacher_forcing:
# Teacher forcing: Feed the target as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
loss += criterion(decoder_output[0], target_variable[di])
decoder_input = target_variable[di] # Teacher forcing
else:
# Without teacher forcing: use its own predictions as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_output, encoder_outputs)
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
decoder_input = Variable(torch.LongTensor([[ni]]))
loss += criterion(decoder_output[0], target_variable[di])
if ni == EOS_token:
break
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0] / target_length
"""
Explanation: Attention
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
%matplotlib inline
def showPlot(points):
plt.figure()
fig, ax = plt.subplots()
loc = ticker.MultipleLocator(base=0.2) # this locator puts ticks at regular intervals
ax.yaxis.set_major_locator(loc)
plt.plot(points)
"""
Explanation: Plotting results
Plotting is done with matplotlib, using the array of loss values plot_losses saved while training.
End of explanation
"""
def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):
input_variable = variableFromSentence(input_lang, sentence).cuda()
input_length = input_variable.size()[0]
encoder_hidden = encoder.initHidden(1).cuda()
encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)
decoder_input = Variable(torch.LongTensor([SOS_token])).cuda()
decoder_hidden = encoder_hidden
decoded_words = []
# decoder_attentions = torch.zeros(max_length, max_length)
for di in range(max_length):
# decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
#, encoder_output, encoder_outputs)
# decoder_attentions[di] = decoder_attention.data
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[ni])
decoder_input = Variable(torch.LongTensor([ni])).cuda()
return decoded_words,0#, decoder_attentions[:di+1]
def evaluateRandomly(encoder, decoder, n=10):
for i in range(n):
pair = random.choice(pairs)
print('>', pair[0])
print('=', pair[1])
output_words, attentions = evaluate(encoder, decoder, pair[0])
output_sentence = ' '.join(output_words)
print('<', output_sentence)
print('')
"""
Explanation: Evaluation
Evaluation is mostly the same as training, but there are no targets so we simply feed the decoder's predictions back to itself for each step. Every time it predicts a word we add it to the output string, and if it predicts the EOS token we stop there. We also store the decoder's attention outputs for display later.
End of explanation
"""
#TODO:
# - Test set
# - random teacher forcing
# - attention
# - multi layers
# - bidirectional encoding
hidden_size = 256
encoder1 = EncoderRNN(input_lang.n_words, hidden_size).cuda()
attn_decoder1 = DecoderRNN(hidden_size, output_lang.n_words).cuda()
trainEpochs(encoder1, attn_decoder1, 15000, print_every=500, learning_rate=0.005)
evaluateRandomly(encoder1, attn_decoder1)
"""
Explanation: Training and Evaluating
Note: If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. Comment out the lines where the encoder and decoder are initialized and run trainEpochs again.
End of explanation
"""
output_words, attentions = evaluate(encoder1, attn_decoder1, "je suis trop froid .")
plt.matshow(attentions.numpy())
"""
Explanation: Visualizing Attention
A useful property of the attention mechanism is its highly interpretable outputs. Because it is used to weight specific encoder outputs of the input sequence, we can imagine looking where the network is focused most at each time step.
You could simply run plt.matshow(attentions) to see attention output displayed as a matrix, with the columns being input steps and rows being output steps:
End of explanation
"""
def showAttention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') + ['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def evaluateAndShowAttention(input_sentence):
output_words, attentions = evaluate(encoder1, attn_decoder1, input_sentence)
print('input =', input_sentence)
print('output =', ' '.join(output_words))
showAttention(input_sentence, output_words, attentions)
evaluateAndShowAttention("elle a cinq ans de moins que moi .")
evaluateAndShowAttention("elle est trop petit .")
evaluateAndShowAttention("je ne crains pas de mourir .")
evaluateAndShowAttention("c est un jeune directeur plein de talent .")
"""
Explanation: For a better viewing experience we will do the extra work of adding axes and labels:
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
development/tutorials/emcee_continue_from.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger('error')
"""
Explanation: Advanced: Continuing Emcee from a Previous Run
IMPORTANT: this tutorial assumes basic knowledge (and uses a file resulting from) the emcee tutorial.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
b = phoebe.load('emcee_advanced_tutorials.bundle')
"""
Explanation: We'll then start with the bundle from the end of the emcee tutorial. If you're running this notebook locally, you will need to run that first to create the emcee_advanced_tutorials.bundle file that we will use here.
End of explanation
"""
print(b.solvers, b.solutions)
print(b.filter(solver='emcee_solver', context='solver'))
print(b.get_parameter(qualifier='continue_from', solver='emcee_solver'))
"""
Explanation: continue_from parameter
Once we have an existing solution(s) in the bundle that used emcee, the continue_from parameter (in the emcee solver) will have those available as valid options.
End of explanation
"""
b.set_value(qualifier='continue_from', value='emcee_sol')
"""
Explanation: By setting this to the existing solution, we will no longer have options for nwalkers, init_from, or init_from_combine. Instead, the new run will use the same number of walkers as the previous run (to change the number of walkers, resample emcee from a previous run instead) and will continue with the parameters exactly where they left-off in the latest iteration.
End of explanation
"""
print(b.filter(solver='emcee_solver', context='solver'))
"""
Explanation: Note that this also exposes a new continue_from_iter parameter which defaults to -1 (the last iteration from the continued run). If you want to continue a run from anywhere other than the last iteration, you can override this value (using either a positive or negative index).
End of explanation
"""
b.set_value(qualifier='niters', solver='emcee_solver', context='solver', value=50)
"""
Explanation: Note that niters now defines the number of additional iterations.
End of explanation
"""
b.run_solver('emcee_solver', solution='emcee_sol_contd')
"""
Explanation: run_solver
End of explanation
"""
print(b.filter(qualifier='niters', context='solution'))
_ = b.plot(solution='emcee_sol', style='lnprobability',
burnin=0, thin=1, lnprob_cutoff=3600, show=True)
_ = b.plot(solution='emcee_sol_contd', style='lnprobability',
burnin=0, thin=1, lnprob_cutoff=3600, show=True)
"""
Explanation: To overwrite the existing solution, we could have passed solution='emcee_sol', overwrite=True.
Solution
Now if we look at the original and new solution, we can see that the chains have been extended by niters iterations.
End of explanation
"""
|
vanheck/blog-notes
|
QuantTrading/time-series-analyze_1-pandas.ipynb
|
mit
|
import datetime
MY_VERSION = 1,0
print('Verze notebooku:', '.'.join(map(str, MY_VERSION)))
print('Poslední aktualizace:', datetime.datetime.now())
"""
Explanation: Time Series Analysis 1 - data manipulation in Pandas
A description of the basic functions for data analysis in Pandas.
Version and notebook info
End of explanation
"""
import sys
import datetime
import pandas as pd
import pandas_datareader as pdr
import pandas_datareader.data as pdr_web
import quandl as ql
# Load Quandl API key
import json
with open('quandl_key.json','r') as f:
quandl_api_key = json.load(f)
ql.ApiConfig.api_key = quandl_api_key['API-key']
print('Verze pythonu:')
print(sys.version)
print('---')
print('Pandas:', pd.__version__)
print('pandas-datareader:', pdr.__version__)
print('Quandl version:', ql.version.VERSION)
"""
Explanation: Information about the Python modules used
End of explanation
"""
start_date = datetime.datetime(2015, 1, 1)
end_date = datetime.datetime.now()
ES = ql.get("CHRIS/CME_ES1", start_date=start_date, end_date=end_date)
ES.head()
SPY = pdr_web.DataReader("NYSEARCA:SPY", 'google', start=start_date, end=end_date)
SPY.head()
"""
Explanation: List of sources:
Pandas - data manipulation and analysis
pandas-datareader
List of all web data sources in pandas-datareader
Python For Finance: Algorithmic Trading
Quandl
ETF markets - Finančník
Series and DataFrame
The pandas library uses its own types, Series and DataFrame, to store and process data.
A Series is a 1D labeled data structure holding a single data type. A DataFrame is a 2D labeled data structure whose columns can hold different types; each column of a DataFrame is a Series. More information is in the DataFrame and Series documentation.
Data for the analysis
End of explanation
"""
n = 10
#ES.head()
ES.head(n)
"""
Explanation: Basic work with the data
Display the first n rows of the DataFrame.
End of explanation
"""
n = 10
#ES.tail()
ES.tail(n)
"""
Explanation: Display the last n rows of the DataFrame.
End of explanation
"""
ES.describe()
"""
Explanation: Display several summary statistics for each column in the DataFrame.
End of explanation
"""
ES.to_csv('data/es.csv')
"""
Explanation: Saving the DataFrame to a .csv file
End of explanation
"""
#data = pd.read_csv('data/es.csv')
data = pd.read_csv('data/es.csv', header=0, index_col='Date', parse_dates=True)
data.head(3)
"""
Explanation: Loading data from a .csv file
End of explanation
"""
data.index
data.columns
"""
Explanation: Information about the index and columns of the DataFrame
End of explanation
"""
# select the last 10 records of the Last column; the result is a Series
vyber = data['Last'][-10:]
vyber
"""
Explanation: ## Selecting specific data from a DataFrame
### Indexing
Basic data selection from a DataFrame can be done with indexing.
End of explanation
"""
data.loc['2016-11-01']
vyber = data.loc['2017']
print(vyber.head(5))
print(vyber.tail(5))
"""
Explanation: Selection by label (label-based) and by position (positional)
For label-based selection pandas uses the loc function. For example, we pass 2017 or 2016-11-01 as the argument:
End of explanation
"""
# show row 20
print(data.iloc[20])
# show rows 0,1,2,3,4 and columns 0,1,2,3
data.iloc[[0,1,2,3,4], [0,1,2,3]]
"""
Explanation: For position-based selection pandas uses the iloc function. For example, we pass 20 or 43 as the argument:
End of explanation
"""
# a sample of 20 rows
sample = data.sample(20)
sample
"""
Explanation: More in the detailed documentation Indexing and Selecting Data.
Adjusting the time-series data sample
Random data sample
A random sample of the data can be obtained with the sample function. See the DataFrame.sample documentation.
End of explanation
"""
prumer = data.resample('M').mean()
prumer.head()
mesicni = data.asfreq("M", method="bfill")
mesicni.head()
"""
Explanation: Getting a monthly sample from daily data
The resample function allows flexible frequency conversion of the data like the asfreq function, but can do more. See the resample documentation and the asfreq documentation.
End of explanation
"""
data['ATR_1'] = data.High - data.Low
data.head()
"""
Explanation: Computing the volatility of EOD data
DataFrame columns support arithmetic directly. To get the volatility of each daily record, simply subtract the Low column from the High column and store the result in the ATR column.
End of explanation
"""
del data['ATR_1']
data.head()
"""
Explanation: Deleting a column
Columns can be deleted with the del keyword.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.24/_downloads/3d564af6b3f1e758cf01cd38abefd45f/50_epochs_to_data_frame.ipynb
|
bsd-3-clause
|
import os
import seaborn as sns
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
"""
Explanation: Exporting Epochs to Pandas DataFrames
This tutorial shows how to export the data in :class:~mne.Epochs objects to a
:class:Pandas DataFrame <pandas.DataFrame>, and applies a typical Pandas
:doc:split-apply-combine <pandas:user_guide/groupby> workflow to examine the
latencies of the response maxima across epochs and conditions.
We'll use the sample-dataset dataset, but load a version of the raw file
that has already been filtered and downsampled, and has an average reference
applied to its EEG channels. As usual we'll start by importing the modules we
need and loading the data:
End of explanation
"""
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(sample_data_events_file)
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 µV
eog=200e-6) # 200 µV
tmin, tmax = (-0.2, 0.5) # epoch from 200 ms before event to 500 ms after it
baseline = (None, 0) # baseline period from start of epoch to time=0
epochs = mne.Epochs(raw, events, event_dict, tmin, tmax, proj=True,
baseline=baseline, reject=reject_criteria, preload=True)
del raw
"""
Explanation: Next we'll load a list of events from file, map them to condition names with
an event dictionary, set some signal rejection thresholds (cf.
tut-reject-epochs-section), and segment the continuous data into
epochs:
End of explanation
"""
df = epochs.to_data_frame()
df.iloc[:5, :10]
"""
Explanation: Converting an Epochs object to a DataFrame
Once we have our :class:~mne.Epochs object, converting it to a
:class:~pandas.DataFrame is simple: just call :meth:epochs.to_data_frame()
<mne.Epochs.to_data_frame>. Each channel's data will be a column of the new
:class:~pandas.DataFrame, alongside three additional columns of event name,
epoch number, and sample time. Here we'll just show the first few rows and
columns:
End of explanation
"""
df = epochs.to_data_frame(time_format=None,
scalings=dict(eeg=1, mag=1, grad=1))
df.iloc[:5, :10]
"""
Explanation: Scaling time and channel values
By default, time values are converted from seconds to milliseconds and
then rounded to the nearest integer; if you don't want this, you can pass
time_format=None to keep time as a :class:float value in seconds, or
convert it to a :class:~pandas.Timedelta value via
time_format='timedelta'.
Note also that, by default, channel measurement values are scaled so that EEG
data are converted to µV, magnetometer data are converted to fT, and
gradiometer data are converted to fT/cm. These scalings can be customized
through the scalings parameter, or suppressed by passing
scalings=dict(eeg=1, mag=1, grad=1).
End of explanation
"""
df = epochs.to_data_frame(index=['condition', 'epoch'],
time_format='timedelta')
df.iloc[:5, :10]
"""
Explanation: Notice that the time values are no longer integers, and the channel values
have changed by several orders of magnitude compared to the earlier
DataFrame.
Setting the index
It is also possible to move one or more of the indicator columns (event name,
epoch number, and sample time) into the index <pandas:indexing>, by
passing a string or list of strings as the index parameter. We'll also
demonstrate here the effect of time_format='timedelta', yielding
:class:~pandas.Timedelta values in the "time" column.
End of explanation
"""
long_df = epochs.to_data_frame(time_format=None, index='condition',
long_format=True)
long_df.head()
"""
Explanation: Wide- versus long-format DataFrames
Another parameter, long_format, determines whether each channel's data
is in a separate column of the :class:~pandas.DataFrame
(long_format=False), or whether the measured values are pivoted into a
single 'value' column with an extra indicator column for the channel name
(long_format=True). Passing long_format=True will also create an
extra column ch_type indicating the channel type.
End of explanation
"""
channels = ['MEG 1332', 'MEG 1342']
data = long_df.loc['auditory/left'].query('channel in @channels')
# convert channel column (CategoryDtype → string; for a nicer-looking legend)
data['channel'] = data['channel'].astype(str)
sns.lineplot(x='time', y='value', hue='channel', data=data)
"""
Explanation: Generating the :class:~pandas.DataFrame in long format can be helpful when
using other Python modules for subsequent analysis or plotting. For example,
here we'll take data from the "auditory/left" condition, pick a couple MEG
channels, and use :func:seaborn.lineplot to automatically plot the mean and
confidence band for each channel, with confidence computed across the epochs
in the chosen condition:
End of explanation
"""
df = epochs.to_data_frame(time_format=None)
peak_latency = (df.filter(regex=r'condition|epoch|MEG 1332|MEG 2123')
.groupby(['condition', 'epoch'])
.aggregate(lambda x: df['time'].iloc[x.idxmax()])
.reset_index()
.melt(id_vars=['condition', 'epoch'],
var_name='channel',
value_name='latency of peak')
)
ax = sns.violinplot(x='channel', y='latency of peak', hue='condition',
data=peak_latency, palette='deep', saturation=1)
"""
Explanation: We can also now use all the power of Pandas for grouping and transforming our
data. Here, we find the latency of peak activation of 2 gradiometers (one
near auditory cortex and one near visual cortex), and plot the distribution
of the timing of the peak in each channel as a :func:~seaborn.violinplot:
End of explanation
"""
|
ueapy/ueapy.github.io
|
content/notebooks/2018-02-05-oop-vs-procedural.ipynb
|
mit
|
import numpy as np
import itertools
import warnings
warnings.simplefilter(action='ignore')
"""
Explanation: Instructions for the example in the code can be found here: https://adventofcode.com/2015/day/21
And other approaches to this problem (including other languages) can be found on Reddit: https://www.reddit.com/r/adventofcode/comments/3xspyl/day_21_solutions/
First, we import packages
End of explanation
"""
startbosshp = 104
bossdamage = 8
bossarmor = 1
startplayerhp = 100
playerdamage = 0
playerarmor = 0
"""
Explanation: Procedural code version
End of explanation
"""
from collections import namedtuple
Item = namedtuple('item', ['name', 'cost', 'damage', 'armor'])
weaponsnt = [
Item('Dagger', 8, 4, 0),
Item('Shortsword', 10, 5, 0),
Item('Warhammer', 25, 6, 0),
Item('Longsword', 40, 7, 0),
Item('Greataxe', 74, 8, 0),
]
armornt = [
Item('Leather', 13, 0, 1),
Item('Chainmail', 31, 0, 2),
Item('Splintmail', 53, 0, 3),
Item('Bandedmail', 75, 0, 4),
Item('Platemail', 102, 0, 5),
Item('Naked', 0, 0, 0),
]
ringsnt = [
Item('Damage +1', 25, 1, 0),
Item('Damage +2', 50, 2, 0),
Item('Damage +3', 100, 3, 0),
Item('Defense +1', 20, 0, 1),
Item('Defense +2', 40, 0, 2),
Item('Defense +3', 80, 0, 3),
]
"""
Explanation: Named tuples: tuples, but BETTER!
End of explanation
"""
wn = 5 #weapons
an = 6 #armor
rn = 22 #rings
comb = wn * an * rn #total number of possibilities
#setup arrays
player_spent = np.full((wn, an, rn), np.nan)
player_damage = np.full((wn, an, rn), np.nan) #damage
player_armor = np.full((wn, an, rn), np.nan) #cost
#setup arrays
#weapons = np.full((wn, 3), 0)
armor = np.full((an, 3), 0)
rings0 = np.full((6, 3), 0)
rings = np.full((rn, 3), 0)
weapons = np.array([[8, 4, 0], [10, 5, 0], [25, 6, 0], [40, 7, 0], [74, 8, 0]])
armor[:,:] = [[13, 0, 1], [31, 0, 2], [53, 0, 3], [75, 0, 4], [102, 0, 5], [0, 0, 0]]
rings0[:,:] = [[25, 1, 0], [50, 2, 0], [100, 3, 0], [20, 0, 1], [40, 0, 2], [80, 0, 3]]
"""
Explanation: Set up arrays
End of explanation
"""
ring_combs = (list(itertools.combinations(range(6), 2)))
print(ring_combs)
"""
Explanation: Use the itertools package to get the 15 combinations of 2 rings
End of explanation
"""
for i in range(0, len(ring_combs)):
rings[i, 0] = int(rings0[ring_combs[i][0]][0] + rings0[ring_combs[i][1]][0]) #spent
rings[i, 1] = int(rings0[ring_combs[i][0]][1] + rings0[ring_combs[i][1]][1]) #damage
rings[i, 2] = int(rings0[ring_combs[i][0]][2] + rings0[ring_combs[i][1]][2]) #armor
rings[15:-1, :] = rings0[:,:]
print(rings)
"""
Explanation: The [Cost, Damage, Armor] of the 22 ring combinations
End of explanation
"""
for w in range(0, wn):
for a in range(0, an):
for r in range(0, rn):
player_spent[w, a, r] = weapons[w, 0] + armor[a, 0] + rings[r, 0]
player_damage[w, a, r] = weapons[w, 1] + armor[a, 1] + rings[r, 1]
player_armor[w, a, r] = weapons[w, 2] + armor[a, 2] + rings[r, 2]
"""
Explanation: Fill 3x 660-cell 3D arrays with the cost, damage and armor for each kit-combination
End of explanation
"""
playerspent = player_spent[0,0,0]
print('playerspent=',playerspent)
playerdamage = player_damage[0,0,0]
print('playerdamage=',playerdamage)
playerarmor = player_armor[0,0,0]
print('playerarmor=',playerarmor)
"""
Explanation: E.g. [0,0,0] = rings: dam+1, dam+2, leather, dagger
End of explanation
"""
bestspend = 999
wi = 0
ai = 0
ri = 0
worstspend = 0
wi2 = 0
ai2 = 0
ri2 = 0
win_no = 0
lose_no = 0
for w in range(0, wn): #length=5
for a in range(0, an): #length=6
for r in range(0, rn): #length=22
#get 1 of 660
playerspent = player_spent[w, a, r]
playerdamage = player_damage[w, a, r]
playerarmor = player_armor[w, a, r]
bosshp = startbosshp
playerhp = startplayerhp
playeractdam = playerdamage - bossarmor
if (playeractdam < 1):
playeractdam = 1
# playactdam = max(playeractdam, 1)
bossactdam = bossdamage - playerarmor
if (bossactdam < 1):
bossactdam = 1
while (bosshp > 0) and (playerhp > 0):
#bosshp = bosshp - playeractdam
bosshp -= playeractdam
#playerhp = playerhp - bossactdam
playerhp -= bossactdam
if playerhp > bosshp: #if I win
#win_no = win_no += 1
win_no += 1
if playerspent < bestspend:
bestspend = playerspent
wi = w
ai = a
ri = r
if playerhp < bosshp: #if I lose
#lose_no = lose_no + 1
lose_no += 1
if playerspent > worstspend:
worstspend = playerspent
wi2 = w
ai2 = a
ri2 = r
print('lowest cost while still winning =',bestspend)
print(weaponsnt[wi])
print(armornt[ai])
print('ringscombi',rings[ri])
print(wi)
print(ai)
print(ri)
print('-')
print('highest cost while still losing =',worstspend)
print(weaponsnt[wi2])
print(armornt[ai2])
print('ringscombi',rings[ri2])
print(wi2)
print(ai2)
print(ri2)
print('-')
print('win_no=',win_no)
print('lose_no=',lose_no)
"""
Explanation: 660 boss vs player battles
End of explanation
"""
ww = 0
aa = 5
rr = 5
playerspend = player_spent[ww, aa, rr]
playerdamage = player_damage[ww, aa, rr]
playerarmor = player_armor[ww, aa, rr]
bosshp = startbosshp
playerhp = startplayerhp
playeractdam = playerdamage - bossarmor
if (playeractdam < 1):
playeractdam = 1
bossactdam = bossdamage - playerarmor
if (bossactdam < 1):
bossactdam = 1
while (bosshp > 0) & (playerhp > 0):
bosshp = bosshp - playeractdam
playerhp = playerhp - bossactdam
print('bosshp=',bosshp)
print('playerhp=',playerhp)
print('-')
"""
Explanation: Test a specific combination of weapons, armor and rings
End of explanation
"""
# class Boss:
# def __init__(self, hp, damage, armor):
# self.hp = hp
# self.damage = damage
# self.armor = armor
# def calc_actdamage(self, playerarmor):
# self.actdamage = self.damage - playerarmor
# if (self.actdamage < 1):
# self.actdamage = 1
# class Player:
# def __init__(self, spent, hp, damage, armor):
# self.spent = spent
# self.hp = hp
# self.damage = damage
# self.armor = armor
# def calc_actdamage(self, bossarmor):
# self.actdamage = self.damage - bossarmor
# if (self.actdamage < 1):
# self.actdamage = 1
"""
Explanation: OOP code version
Make the boss and player class individually or..
End of explanation
"""
class RpgChara:
def __init__(self, hp, damage, armor):
self.hp = hp
self.damage = damage
self.armor = armor
def calc_actdamage(self, enemyarmor):
self.actdamage = self.damage - enemyarmor
if (self.actdamage < 1):
self.actdamage = 1
class Boss(RpgChara):
pass
class Player(RpgChara):
def __init__(self, spent, hp, damage, armor):
super().__init__(hp, damage, armor)
self.spent = spent
"""
Explanation: ...Use inheritance
End of explanation
"""
bestspend = 999 #too high in order to come down
wi = 0
ai = 0
ri = 0
worstspend = 0 #too low in order to go up
wi2 = 0
ai2 = 0
ri2 = 0
win_no = 0
lose_no = 0
for w in range(0, wn): #length5
for a in range(0, an): #length6
for r in range(0, rn): #length22
#get 1 of 660 instances per loop
boss_i = Boss(startbosshp, bossdamage, bossarmor)
player_i = Player(player_spent[w, a, r], startplayerhp, player_damage[w, a, r], \
player_armor[w, a, r])
#but what is their actual damage
boss_i.calc_actdamage(player_i.armor)
player_i.calc_actdamage(boss_i.armor)
while (boss_i.hp > 0) & (player_i.hp > 0):
boss_i.hp -= player_i.actdamage
player_i.hp -= boss_i.actdamage
if player_i.hp > boss_i.hp: #if I win
win_no += 1
if player_i.spent < bestspend:
bestspend = player_i.spent
wi = w
ai = a
ri = r
if player_i.hp < boss_i.hp: #if I lose
lose_no += 1
if player_i.spent > worstspend:
worstspend = player_i.spent
wi2 = w
ai2 = a
ri2 = r
print('lowest spend while still winning =',bestspend)
print(weaponsnt[wi])
print(armornt[ai])
print('ringscombi',rings[ri])
print(ri)
print('-')
print('highest spend while still losing =',worstspend)
print(weaponsnt[wi2])
print(armornt[ai2])
print('ringscombi',rings[ri2])
print(ri2)
print('-')
print('win_no=',win_no)
print('lose_no=',lose_no)
HTML(html)
"""
Explanation: another 660 fights, OOP style
End of explanation
"""
|
ucsdlib/python-novice-inflammation
|
6-errors.ipynb
|
cc0-1.0
|
cd code
import errors_01
errors_01.favorite_ice_cream()
"""
Explanation: Errors and Exceptions
every programmer deals with errors, and they can be very frustrating
understanding what the different error types are and when you are likely to encounter them helps a lot
Errors in Python have a specific form, called a traceback; let's look at one (a small standalone sketch follows as well)
End of explanation
"""
def some_function()
msg = "hello, world"
print(msg)
return msg
"""
Explanation: Two levels in this traceback:
the first shows code from the cell above, with an arrow pointing to line 2 (the call to favorite_ice_cream()).
the second shows some code from the other function (favorite_ice_cream, located in the file errors_01.py) on line 7.
This is where the actual error happened.
LONG tracebacks
sometimes tracebacks are very long (20 levels deep)
this can make it seem like something terrible has happened, but just means your program called many functions before the error
Most times you can just go to the bottom of the trace
What error did we get above?
Last line tells us the type - IndexError and a message "list index out of range"
if you encounter an error and don't know what it means, read the traceback closely
that way, if you fix the error and then encounter a new one, you can tell that the error changed
sometimes just knowing where the error occurred is enough to fix it
Don't recognize the error?
* look at the official documentation on errors on docs.python.org (google it)
* it might not be there - it might be a custom error
Syntax Errors
when you forget a colon at the end of a line, add one space too many when indenting under an if statement, or forget a parenthesis -- you'll get a syntax error
this means python couldn't figure out how to read your program
this is similar to forgetting punctuation in English:
this text is difficult to read there is no punctuation there is also no capitalization why is this hard because you have to figure out where each sentence ends you also have to figure out where each sentence begins to some extent it might be ambiguous if there should be a sentence break or not
if python can't figure out how to read a program, it will give up and inform you
End of explanation
"""
def some_function():
msg = "hello, world"
print(msg)
return msg
"""
Explanation: Python tells us that there's a syntax error on line 1
it even puts a little arrow where the issue is
we missed the colon at the end of the function definition
let's fix it
End of explanation
"""
print(a)
"""
Explanation: ah, we actually had two errors in that code
now we see an 'IndentationError' and the arrow pointing to the 'return' statement
this tells us our indentation is off on that line
SyntaxError and IndentationError indicate a problem with syntax in your program
IndentationError is more specific - it always means there is a problem with your indentation in the code
Note on TABS and Spaces
* both are whitespace
* hard to tell apart visually
* some editors don't show the difference
Variable Name Errors
very common errors
occurs when you try and use a variable that doesn't exist
End of explanation
"""
print(hello)
"""
Explanation: the traceback usually tells us the variable name doesn't exist
Why does this occur?
maybe you forgot to quote a string
End of explanation
"""
for number in range(10):
count = count + number
print("The count is:" + str(count))
"""
Explanation: or forgot to create the variable before using it
End of explanation
"""
Count = 0
for number in range(10):
count = count + number
print("The count is:" + str(count))
"""
Explanation: lastly you might have made a typo
let's say we fixed the above by adding the line Count = 0
you still get an error
remember variables are case sensitive, so the variable count is different from Count (a corrected version is shown below)
End of explanation
"""
letters = ['a', 'b', 'c']
print("Letter #1 is " + letters[0])
print("Letter #2 is " + letters[1])
print("Letter #3 is " + letters[2])
print("Letter #4 is " + letters[3])
"""
Explanation: Index errors
errors dealing with containers
trying to access an item in a list that doesn't exist
End of explanation
"""
file_handle = open('myfile.txt', 'r')
"""
Explanation: IndexError - meaning we tried to access a list index that did not exist
File Errors
* errors dealing with reading and writing files
End of explanation
"""
file_handle = open('myfile', 'w')
file_handle.read()
"""
Explanation: One reason for receiving this error is that you specified an incorrect path to the file
E.g. I'm in the folder 'python-novice-inflammation' and the file I tried to access lives at python-novice-inflammation/writing/myfile.txt, but I tried to open just 'myfile.txt'
the correct path (from that folder) is writing/myfile.txt
A related issue is mixing up the read and write flags
Python will not give an error if you open a file for writing, even when that file does not exist
if you meant to open a file for reading, but accidentally opened it for writing, and then try to read from it, Python will give an UnsupportedOperation error
End of explanation
"""
|
gautam1858/tensorflow
|
tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
"""
Explanation: Post-training integer quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is valuable for low-power devices such as microcontrollers. This data format is also required by integer-only accelerators such as the Edge TPU.
In this tutorial, you'll train an MNIST model from scratch, convert it into a Tensorflow Lite file, and quantize it using post-training quantization. Finally, you'll check the accuracy of the converted model and compare it to the original float model.
You actually have several options as to how much you want to quantize a model. In this tutorial, you'll perform "full integer quantization," which converts all weights and activation outputs into 8-bit integer data—whereas other strategies may leave some amount of data in floating-point.
To learn more about the various quantization strategies, read about TensorFlow Lite model optimization.
Setup
In order to quantize both the input and output tensors, we need to use APIs added in TensorFlow r2.3:
End of explanation
"""
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
"""
Explanation: Generate a TensorFlow Model
We'll build a simple model to classify numbers from the MNIST dataset.
This training won't take long because you're training the model for just 5 epochs, which trains it to about 98% accuracy.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
"""
Explanation: Convert to a TensorFlow Lite model
Now you can convert the trained model to TensorFlow Lite format using the TFLiteConverter API, and apply varying degrees of quantization.
Beware that some versions of quantization leave some of the data in float format. So the following sections show each option with increasing amounts of quantization, until we get a model that's entirely int8 or uint8 data. (Notice we duplicate some code in each section so you can see all the quantization steps for each option.)
First, here's a converted model with no quantization:
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
"""
Explanation: It's now a TensorFlow Lite model, but it's still using 32-bit float values for all parameter data.
Convert using dynamic range quantization
Now let's enable the default optimizations flag to quantize all fixed parameters (such as weights):
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
"""
Explanation: The model is now a bit smaller with quantized weights, but other variable data is still in float format.
Convert using float fallback quantization
To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a RepresentativeDataset. This is a generator function that provides a set of input data that's large enough to represent typical values. It allows the converter to estimate a dynamic range for all the variable data. (The dataset does not need to be unique compared to the training or evaluation dataset.)
To support multiple inputs, each representative data point is a list and elements in the list are fed to the model according to their indices.
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
However, to maintain compatibility with applications that traditionally use float model input and output tensors, the TensorFlow Lite Converter leaves the model input and output tensors in float:
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
"""
Explanation: That's usually good for compatibility, but it won't be compatible with devices that perform only integer-based operations, such as the Edge TPU.
Additionally, the above process may leave an operation in float format if TensorFlow Lite doesn't include a quantized implementation for that operation. This strategy allows conversion to complete so you have a smaller and more efficient model, but again, it won't be compatible with integer-only hardware. (All ops in this MNIST model have a quantized implementation.)
So to ensure an end-to-end integer-only model, you need a couple more parameters...
Convert using integer-only quantization
To quantize the input and output tensors, and make the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters:
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: The internal quantization remains the same as above, but you can see the input and output tensors are now integer format:
End of explanation
"""
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
"""
Explanation: Now you have an integer quantized model that uses integer data for the model's input and output tensors, so it's compatible with integer-only hardware such as the Edge TPU.
Save the models as files
You'll need a .tflite file to deploy your model on other devices. So let's save the converted models to files and then load them when we run inferences below.
End of explanation
"""
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
"""
Explanation: Run the TensorFlow Lite models
Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:
End of explanation
"""
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
"""
Explanation: Test the models on one image
Now we'll compare the performance of the float model and quantized model:
+ tflite_model_file is the original TensorFlow Lite model with floating-point data.
+ tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions:
End of explanation
"""
test_model(tflite_model_file, test_image_index, model_type="Float")
"""
Explanation: Now test the float model:
End of explanation
"""
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
"""
Explanation: And test the quantized model:
End of explanation
"""
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
"""
Explanation: Evaluate the models on all images
Now let's run both models using all the test images we loaded at the beginning of this tutorial:
End of explanation
"""
evaluate_model(tflite_model_file, model_type="Float")
"""
Explanation: Evaluate the float model:
End of explanation
"""
evaluate_model(tflite_model_quant_file, model_type="Quantized")
"""
Explanation: Evaluate the quantized model:
End of explanation
"""
|
ChadFulton/statsmodels
|
examples/notebooks/tsa_dates.ipynb
|
bsd-3-clause
|
from __future__ import print_function
import statsmodels.api as sm
import numpy as np
import pandas as pd
"""
Explanation: Dates in timeseries models
End of explanation
"""
data = sm.datasets.sunspots.load()
"""
Explanation: Getting started
End of explanation
"""
from datetime import datetime
dates = sm.tsa.datetools.dates_from_range('1700', length=len(data.endog))
"""
Explanation: Right now an annual date series must be datetimes at the end of the year.
End of explanation
"""
endog = pd.Series(data.endog, index=dates)
"""
Explanation: Using Pandas
Make a pandas TimeSeries or DataFrame
End of explanation
"""
ar_model = sm.tsa.AR(endog, freq='A')
pandas_ar_res = ar_model.fit(maxlag=9, method='mle', disp=-1)
"""
Explanation: Instantiate the model
End of explanation
"""
pred = pandas_ar_res.predict(start='2005', end='2015')
print(pred)
"""
Explanation: Out-of-sample prediction
End of explanation
"""
ar_model = sm.tsa.AR(data.endog, dates=dates, freq='A')
ar_res = ar_model.fit(maxlag=9, method='mle', disp=-1)
pred = ar_res.predict(start='2005', end='2015')
print(pred)
"""
Explanation: Using explicit dates
End of explanation
"""
print(ar_res.data.predict_dates)
"""
Explanation: This just returns a regular array, but since the model has date information attached, you can get the prediction dates in a roundabout way.
End of explanation
"""