repo_name (stringlengths 6-77) | path (stringlengths 8-215) | license (stringclasses, 15 values) | cells (sequence) | types (sequence)
---|---|---|---|---|
antoniomezzacapo/qiskit-tutorial | community/games/game_engines/Making_your_own_hello_quantum.ipynb | apache-2.0 | [
"Hello Quantum for Jupyter notebook\nHello Quantum is a project based on the idea of visualizing two qubit states and gates, and making them accessible to a non-specialist audience.\nIn the hello_quantum.py file you'll find some tools with which the 'Hello Quantum' visualizations and puzzles can be implemented in Jupyter notebooks. These were used to create the puzzles in the Hello_Qiskit notebook, but you can also create your own custom ones. These could then be used as part of presentations given about Qiskit, or self-study materials prepared for people learning Qiskit.\nTo use it, import hello_quantum and use matplotlib magic.",
"%matplotlib notebook\nimport hello_quantum",
"The import here was very simple, because this notebook is in the same folder as the hello_quantum.py file. If this is not the case, you'll have to change the path. See the Hello_Qiskit notebook for an example of this.\nOnce the import has been done, you can set up and display the visualization.",
"grid = hello_quantum.pauli_grid()\ngrid.update_grid()",
"This has attributes and methods which create and run quantum circuits with Qiskit.",
"for gate in [['x','1'],['h','0'],['z','0'],['h','1'],['z','1']]:\n command = 'grid.qc.'+gate[0]+'(grid.qr['+gate[1]+'])'\n eval(command)\n grid.update_grid()",
"There is also an alternative visualization, which can be used to better represent non-Clifford gates.",
"grid = hello_quantum.pauli_grid(mode='line')\ngrid.update_grid()",
"The run_game function, can also be used to implement custom 'Hello Quantum' games within a notebook. This is called with\nhello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)\nwhere the arguments set up the puzzle by specifying the following information.\ninitialize\n* List of gates applied to the initial 00 state to get the starting state of the puzzle.\n* Supported single qubit gates (applied to qubit '0' or '1') are 'x', 'y', 'z', 'h', 'ry(pi/4)'\n* Supported two qubit gates are 'cz' and 'cx'. For these, specify only the target qubit.\n* Example: initialize = [['x', '0'],['cx', '1']]\nsuccess_condition\n* Values for pauli observables that must be obtained for the puzzle to declare success.\n* Example: success_condition = {'IZ': 1.0}\nallowed_gates\n* For each qubit, specify which operations are allowed in this puzzle.\n* For operations that don't need a qubit to be specified ('cz' and 'unbloch'), assign the operation to 'both' instead of qubit '0' or '1'.\n* Gates are expressed as a dict with an int as value.\n * If this is non-zero, it specifies the exact number of times the gate must be used for the puzzle to be successfully solved.\n * If it is zero, the player can use the gate any number of times.\n* Example: allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 1}}\nvi\n* Some visualization information as a three element list. These specify:\n * Which qubits are hidden (empty list if both shown).\n * Whether both circles shown for each qubit? (use True for qubit puzzles and False for bit puzzles).\n * Whether the correlation circles (the four in the middle) are shown.\n* Example: vi = [[], True, True]\nqubit_names\n* The two qubits are always called '0' and '1' internally. But for the player, we can display different names.\n* Example: qubit_names = {'0':'qubit 0', '1':'qubit 1'}\nThe puzzle defined by the examples given here can be run in the following cell. See also the many examples in the Hello_Qiskit notebook.",
"initialize = [['x', '0'],['cx', '1']]\nsuccess_condition = {'IZ': 1.0}\nallowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 1}}\nvi = [[], True, True]\nqubit_names = {'0':'qubit 0', '1':'qubit 1'}\npuzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb | apache-2.0 | [
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nSeries\nThe first main data type we will learn about for pandas is the Series data type. Let's import Pandas and explore the Series object.\nA Series is very similar to a NumPy array (in fact it is built on top of the NumPy array object). What differentiates the NumPy array from a Series, is that a Series can have axis labels, meaning it can be indexed by a label, instead of just a number location. It also doesn't need to hold numeric data, it can hold any arbitrary Python Object.\nLet's explore this concept through some examples:",
"import numpy as np\nimport pandas as pd",
"Creating a Series\nYou can convert a list,numpy array, or dictionary to a Series:",
"labels = ['a', 'b', 'c']\nmy_list = [10, 20, 30]\narr = np.array([10, 20, 30])\nd = {'a': 10,'b': 20,'c': 30}",
"Using Lists",
"pd.Series(data = my_list)\n\npd.Series(data = my_list,\n index = labels)\n\npd.Series(my_list, labels)",
"NumPy Arrays",
"pd.Series(arr)\n\npd.Series(arr, labels)",
"Dictionary",
"pd.Series(d)",
"Data in a Series\nA pandas Series can hold a variety of object types:",
"pd.Series(data = labels)\n\n# Even functions (although unlikely that you will use this)\npd.Series([sum, print, len])",
"Using an Index\nThe key to using a Series is understanding its index. Pandas makes use of these index names or numbers by allowing for fast look ups of information (works like a hash table or dictionary).\nLet's see some examples of how to grab information from a Series. Let us create two sereis, ser1 and ser2:",
"ser1 = pd.Series([1, 2, 3, 4], \n index = ['USA', 'Germany', 'USSR', 'Japan']) \n\nser1\n\nser2 = pd.Series([1, 2, 5, 4], \n index = ['USA', 'Germany', 'Italy', 'Japan']) \n\nser2\n\nser1['USA']",
"Operations are then also done based off of index:",
"ser1 + ser2",
"Let's stop here for now and move on to DataFrames, which will expand on the concept of Series!\nGreat Job!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ramseylab/networkscompbio | class21_reveal_python3.ipynb | apache-2.0 | [
"Class 21: joint entropy and the REVEAL algorithm\nWe'll use the bladder cancer gene expression data to test out the REVEAL algorithm. First, we'll load the data and filter to include only genes for which the median log2 expression level is > 12 (as we did in class session 20). That should give us 164 genes to work with.\nImport the Python modules that we will need for this exercise",
"import pandas\nimport numpy\nimport itertools",
"Load the data file shared/bladder_cancer_genes_tcga.txt into a pandas.DataFrame, convert it to a numpy.ndarray matrix, and print the matrix dimensions",
"gene_matrix_for_network_df = pandas.read_csv(\"shared/bladder_cancer_genes_tcga.txt\", sep=\"\\t\")\ngene_matrix_for_network = gene_matrix_for_network_df.as_matrix()\nprint(gene_matrix_for_network.shape)",
"Filter the matrix to include only rows for which the column-wise median is > 14; matrix should now be 13 x 414.",
"genes_keep = numpy.where(numpy.median(gene_matrix_for_network, axis=1) > 14)\nmatrix_filt = gene_matrix_for_network[genes_keep, ][0]\nmatrix_filt.shape\nN = matrix_filt.shape[0]\nM = matrix_filt.shape[1]",
"Binarize the gene expression matrix using the mean value as a breakpoint, turning it into a NxM matrix of booleans (True/False). Call it gene_matrix_binarized.",
"gene_matrix_binarized = numpy.tile(numpy.mean(matrix_filt, axis=1),(M,1)).transpose() < matrix_filt\nprint(gene_matrix_binarized.shape)",
"Test your matrix by printing the first four columns of the first four rows:",
"gene_matrix_binarized[0:4,0:4]",
"The core part of the REVEAL algorithm is a function that can compute the joint entropy of a collection of binary (TRUE/FALSE) vectors X1, X2, ..., Xn (where length(X1) = length(Xi) = M).\nWrite a function entropy_multiple_vecs that takes as its input a nxM matrix (where n is the number of variables, i.e., genes, and M is the number of samples in which gene expression was measured). The function should use the log2 definition of the Shannon entropy. It should return the joint entropy H(X1, X2, ..., Xn) as a scalar numeric value. I have created a skeleton version of this function for you, in which you can fill in the code. I have also created some test code that you can use to test your function, below.",
"def entropy_multiple_vecs(binary_vecs):\n ## use shape to get the numbers of rows and columns as [n,M]\n [n, M] = binary_vecs.shape\n \n # make a \"M x n\" dataframe from the transpose of the matrix binary_vecs\n binary_df = pandas.DataFrame(binary_vecs.transpose())\n \n # use the groupby method to obtain a data frame of counts of unique occurrences of the 2^n possible logical states\n binary_df_counts = binary_df.groupby(binary_df.columns.values.tolist()).size().values\n \n # divide the matrix of counts by M, to get a probability matrix\n probvec = binary_df_counts/M\n \n # compute the shannon entropy using the formula\n hvec = -probvec*numpy.log2(probvec)\n return numpy.sum(hvec)",
"This test case should produce the value 3.938:",
"print(entropy_multiple_vecs(gene_matrix_binarized[0:4,]))",
"Example implementation of the REVEAL algorithm:\nWe'll go through stage 3",
"ratio_thresh = 0.1\ngenes_to_fit = list(range(0,N))\nstage = 0\nregulators = [None]*N\nentropies_for_stages = [None]*N\nmax_stage = 4\n\nentropies_for_stages[0] = numpy.zeros(N)\n\nfor i in range(0,N):\n single_row_matrix = gene_matrix_binarized[i,:,None].transpose()\n entropies_for_stages[0][i] = entropy_multiple_vecs(single_row_matrix)\n \ngenes_to_fit = set(range(0,N))\n\nfor stage in range(1,max_stage + 1):\n for gene in genes_to_fit.copy():\n # we are trying to find regulators for gene \"gene\"\n poss_regs = set(range(0,N)) - set([gene])\n poss_regs_combs = [list(x) for x in itertools.combinations(poss_regs, stage)]\n HGX = numpy.array([ entropy_multiple_vecs(gene_matrix_binarized[[gene] + poss_regs_comb,:]) for poss_regs_comb in poss_regs_combs ])\n HX = numpy.array([ entropy_multiple_vecs(gene_matrix_binarized[poss_regs_comb,:]) for poss_regs_comb in poss_regs_combs ])\n HG = entropies_for_stages[0][gene]\n min_value = numpy.min(HGX - HX)\n if HG - min_value >= ratio_thresh * HG:\n regulators[gene]=poss_regs_combs[numpy.argmin(HGX - HX)]\n genes_to_fit.remove(gene)\nregulators"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
elenduuche/deep-learning | seq2seq/sequence_to_sequence_implementation.ipynb | mit | [
"Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.",
"import helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)",
"Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.",
"source_sentences[:50].split('\\n')",
"target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains a sorted characters of the line.",
"target_sentences[:50].split('\\n')",
"Preprocess\nTo do anything useful with it, we'll need to turn the characters into a list of integers:",
"def extract_character_vocab(data):\n special_words = ['<pad>', '<unk>', '<s>', '<\\s>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\\n')]\n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])\nprint()\nprint(\"<s> index is {}\".format(target_letter_to_int['<s>']))",
"The last step in the preprocessing stage is to determine the the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.",
"def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):\n new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \\\n for sentence in source_ids]\n new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \\\n for sentence in target_ids]\n\n return new_source_ids, new_target_ids\n\n\n# Use the longest sequence as sequence length\nsequence_length = max(\n [len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])\n\n# Pad all sequences up to sequence length\nsource_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int, \n target_letter_ids, target_letter_to_int, sequence_length)\n\nprint(\"Sequence Length\")\nprint(sequence_length)\nprint(\"\\n\")\nprint(\"Input sequence example\")\nprint(source_ids[:3])\nprint(\"\\n\")\nprint(\"Target sequence example\")\nprint(target_ids[:3])",
"This is the final shape we need them to be in. We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow",
"from distutils.version import LooseVersion\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))",
"Hyperparameters",
"# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 13\ndecoding_embedding_size = 13\n# Learning Rate\nlearning_rate = 0.001",
"Input",
"input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])\ntargets = tf.placeholder(tf.int32, [batch_size, sequence_length])\nlr = tf.placeholder(tf.float32)",
"Sequence to Sequence\nThe decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\nThen, we'll need to hookup a fully connected layer to the output of decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.\nLet's first look at the inference/prediction decoder. It is the one we'll use when we deploy our chatbot to the wild (even though it comes second in the actual code).\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.\nNotice that the inference decoder feeds the output of each time step as an input to the next.\nAs for the training decoder, we can think of it as looking like this:\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\nEncoding\n\nEmbed the input data using tf.contrib.layers.embed_sequence\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.",
"#print(source_letter_to_int)\nsource_vocab_size = len(source_letter_to_int)\nprint(\"Length of letter to int is {}\".format(source_vocab_size))\nprint(\"encoding embedding size is {}\".format(encoding_embedding_size))\n\n# Encoder embedding\nenc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n# Encoder\nenc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)",
"Process Decoding Input",
"import numpy as np\n\n# Process the input we'll feed to the decoder\nending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])\ndec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)\n\n#Demonstration/Example\ndemonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))\n\nsess = tf.InteractiveSession()\nprint(\"Targets\")\nprint(demonstration_outputs[:2])\nprint(\"\\n\")\nprint(\"Processed Decoding Input\")\nprint(sess.run(dec_input, {targets: demonstration_outputs})[:2])\nprint(\"targets shape is {} and ending shape is {}\".format(targets.shape, ending.shape))\nprint(\"demonstration_outputs shape is {}\".format(demonstration_outputs.shape))",
"Decoding\n\nEmbed the decoding input\nBuild the decoding RNNs\nBuild the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.",
"target_vocab_size = len(target_letter_to_int)\n#print(target_vocab_size, \" : \", decoding_embedding_size)\n# Decoder Embedding\ndec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\ndec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n#print(dec_input, target_vocab_size, decoding_embedding_size)\n\n# Decoder RNNs\ndec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n\nwith tf.variable_scope(\"decoding\") as decoding_scope:\n # Output Layer\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)",
"Decoder During Training\n\nBuild the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.\nApply the output layer to the output of the training decoder",
"with tf.variable_scope(\"decoding\") as decoding_scope:\n # Training Decoder\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n \n # Apply output function\n train_logits = output_fn(train_pred)",
"Decoder During Inference\n\nReuse the weights the biases from the training decoder using tf.variable_scope(\"decoding\", reuse=True)\nBuild the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.\nThe output function is applied to the output in this step",
"with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n # Inference Decoder\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\\s>'], \n sequence_length - 1, target_vocab_size)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)\n print(inference_logits.shape)",
"Optimization\nOur loss function is tf.contrib.seq2seq.sequence_loss provided by the tensor flow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.",
"# Loss function\ncost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([batch_size, sequence_length]))\n\n# Optimizer\noptimizer = tf.train.AdamOptimizer(lr)\n\n# Gradient Clipping\ngradients = optimizer.compute_gradients(cost)\ncapped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\ntrain_op = optimizer.apply_gradients(capped_gradients)",
"Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.",
"import numpy as np\n\ntrain_source = source_ids[batch_size:]\ntrain_target = target_ids[batch_size:]\n\nvalid_source = source_ids[:batch_size]\nvalid_target = target_ids[:batch_size]\n\nsess.run(tf.global_variables_initializer())\n\nfor epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch, targets: target_batch, lr: learning_rate})\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source})\n\n train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))\n valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))",
"Prediction",
"input_sentence = 'hello'\n\n\ninput_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]\ninput_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))\nbatch_shell = np.zeros((batch_size, sequence_length))\nbatch_shell[0] = input_sentence\nchatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in input_sentence]))\nprint(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))\nprint(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
swails/mdtraj | examples/principal-components.ipynb | lgpl-2.1 | [
"scikit-learn is a machine learning library for python, with a very easy to use API and great documentation.",
"%matplotlib inline\nfrom __future__ import print_function\nimport mdtraj as md\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA",
"Lets load up our trajectory. This is the trajectory that we generated in\nthe \"Running a simulation in OpenMM and analyzing the results with mdtraj\"\nexample.",
"traj = md.load('ala2.h5')\ntraj",
"Create a two component PCA model, and project our data down into this\nreduced dimensional space. Using just the cartesian coordinates as\ninput to PCA, it's important to start with some kind of alignment.",
"pca1 = PCA(n_components=2)\ntraj.superpose(traj, 0)\n\nreduced_cartesian = pca1.fit_transform(traj.xyz.reshape(traj.n_frames, traj.n_atoms * 3))\nprint(reduced_cartesian.shape)",
"Now we can plot the data on this projection.",
"plt.figure()\nplt.scatter(reduced_cartesian[:, 0], reduced_cartesian[:,1], marker='x', c=traj.time)\nplt.xlabel('PC1')\nplt.ylabel('PC2')\nplt.title('Cartesian coordinate PCA: alanine dipeptide')\ncbar = plt.colorbar()\ncbar.set_label('Time [ps]')",
"Lets try cross-checking our result by using a different feature space that isn't sensitive to alignment, and instead to \"featurize\" our trajectory by computing the pairwise distance between every atom in each frame, and using that as our high dimensional input space for PCA.",
"pca2 = PCA(n_components=2)\n\nfrom itertools import combinations\n# this python function gives you all unique pairs of elements from a list\n\natom_pairs = list(combinations(range(traj.n_atoms), 2))\npairwise_distances = md.geometry.compute_distances(traj, atom_pairs)\nprint(pairwise_distances.shape)\nreduced_distances = pca2.fit_transform(pairwise_distances)\n\nplt.figure()\nplt.scatter(reduced_distances[:, 0], reduced_distances[:,1], marker='x', c=traj.time)\nplt.xlabel('PC1')\nplt.ylabel('PC2')\nplt.title('Pairwise distance PCA: alanine dipeptide')\ncbar = plt.colorbar()\ncbar.set_label('Time [ps]')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ajaybhat/DLND | Project 1/Project-1.ipynb | apache-2.0 | [
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
" %matplotlib inline\n %config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport math",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n \n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T\n \n ### Forward pass ###\n hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) \n hidden_outputs = self.activation_function(hidden_inputs)\n \n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs\n \n ### Backward pass ###\n output_errors = targets-final_outputs\n output_grad = output_errors\n \n hidden_errors = np.dot(self.weights_hidden_to_output.T,output_errors)\n hidden_grad = self.activation_function_derivative(hidden_inputs)\n \n self.weights_hidden_to_output += self.lr*np.dot(output_grad,hidden_outputs.T)\n self.weights_input_to_hidden += self.lr*np.dot(hidden_errors* hidden_grad,inputs.T)\n \n def activation_function(self,x):\n return 1/(1 + np.exp(-x))\n \n def activation_function_derivative(self,x):\n return self.activation_function(x)*(1-self.activation_function(x))\n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n hidden_inputs = np.dot(self.weights_input_to_hidden,inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n \n final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs)\n final_outputs = final_inputs\n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\nepochs = 2000\nlearning_rate = 0.008\nhidden_nodes = 10\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n if e%(epochs/10) == 0:\n sys.stdout.write(\"\\nProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=0.5)",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], 'r',label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, 'g', label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nThe model predicts the data fairly well for the limited amount it is provided. It fails massively for the period December 23-28, because real-world scenarios indicate most people would not get bikes at that time, and would be staying home. However, the network predicts the same behavior as the other days of the month, and so it fails. This could be avoided by feeding it similar data of the type seen in the sample (as part of the training data).\nAnother way is to experiment with the activation function itself: Sigmoid can be replaced by tanh or Leaky RelU functions. http://cs231n.github.io/neural-networks-1/\nUnit tests\nRun these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n \n def runTest(self):\n pass\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner(verbosity=1).run(suite)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
EducationalTestingService/rsmtool | rsmtool/notebooks/summary/header.ipynb | apache-2.0 | [
"# Setting options for the plots\n%matplotlib inline\n%config InlineBackend.figure_formats={'retina', 'svg'}\n%config InlineBackend.rc={'savefig.dpi': 150}",
"Summary Report",
"import itertools\nimport json\nimport os\nimport re\nimport pickle\nimport platform\nimport time\n\nfrom collections import defaultdict as dd\nfrom functools import partial\nfrom os.path import abspath, dirname, exists, join\nfrom string import Template\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport scipy.stats as stats\nfrom matplotlib import pyplot as plt\n\nfrom IPython import sys_info\nfrom IPython.display import display, HTML, Image, Javascript, Markdown, SVG\n\nfrom rsmtool.utils.files import (get_output_directory_extension,\n parse_json_with_comments)\nfrom rsmtool.utils.notebook import (float_format_func,\n int_or_float_format_func,\n bold_highlighter,\n color_highlighter,\n show_thumbnail)\n\nfrom rsmtool.reader import DataReader\nfrom rsmtool.writer import DataWriter\nfrom rsmtool.version import VERSION as rsmtool_version\n\n# turn off interactive plotting\nplt.ioff()\n\nrsm_report_dir = os.environ.get('RSM_REPORT_DIR', None)\nif rsm_report_dir is None:\n rsm_report_dir = os.getcwd()\n\nrsm_environ_config = join(rsm_report_dir, '.environ.json')\nif not exists(rsm_environ_config):\n raise FileNotFoundError('The file {} cannot be located. '\n 'Please make sure that either (1) '\n 'you have set the correct directory with the `RSM_REPORT_DIR` '\n 'environment variable, or (2) that your `.environ.json` '\n 'file is in the same directory as your notebook.'.format(rsm_environ_config))\n \nenviron_config = parse_json_with_comments(rsm_environ_config)",
"<style type=\"text/css\">\n div.prompt.output_prompt { \n color: white; \n }\n\n span.highlight_color {\n color: red;\n }\n\n span.highlight_bold {\n font-weight: bold; \n }\n\n @media print {\n @page {\n size: landscape;\n margin: 0cm 0cm 0cm 0cm;\n }\n\n * {\n margin: 0px;\n padding: 0px;\n }\n\n #toc {\n display: none;\n }\n\n span.highlight_color, span.highlight_bold {\n font-weight: bolder;\n text-decoration: underline;\n }\n\n div.prompt.output_prompt {\n display: none;\n }\n\n h3#Python-packages, div#packages {\n display: none;\n }\n</style>",
"# NOTE: you will need to set the following manually\n# if you are using this notebook interactively.\nsummary_id = environ_config.get('SUMMARY_ID')\ndescription = environ_config.get('DESCRIPTION')\njsons = environ_config.get('JSONS')\noutput_dir = environ_config.get('OUTPUT_DIR')\nuse_thumbnails = environ_config.get('USE_THUMBNAILS')\nfile_format_summarize = environ_config.get('FILE_FORMAT')\n\n# groups for subgroup analysis.\ngroups_desc = environ_config.get('GROUPS_FOR_DESCRIPTIVES') \ngroups_eval = environ_config.get('GROUPS_FOR_EVALUATIONS') \n\n# javascript path\njavascript_path = environ_config.get(\"JAVASCRIPT_PATH\")\n\n# initialize id generator for thumbnails\nid_generator = itertools.count(1)\n\nwith open(join(javascript_path, \"sort.js\"), \"r\", encoding=\"utf-8\") as sortf:\n display(Javascript(data=sortf.read()))\n\n# load the information about all models\nmodel_list = []\nfor (json_file, experiment_name) in jsons:\n model_config = json.load(open(json_file))\n model_id = model_config['experiment_id']\n model_name = experiment_name if experiment_name else model_id\n model_csvdir = dirname(json_file)\n model_file_format = get_output_directory_extension(model_csvdir, model_id)\n model_list.append((model_id, model_name, model_config, model_csvdir, model_file_format))\n\n\nMarkdown(\"This report presents the analysis for **{}**: {} \\n \".format(summary_id, description))\n\n\nHTML(time.strftime('%c'))\n\n# get a matched list of model ids and descriptions\nmodels_and_desc = zip([model_name for (model_id, model_name, config, csvdir, model_file_format) in model_list],\n [config['description'] for (model_id, model_name, config, csvdir, file_format) in model_list])\nmodel_desc_list = '\\n\\n'.join(['**{}**: {}'.format(m, d) for (m, d) in models_and_desc])\n\nMarkdown(\"The report compares the following models: \\n\\n {}\".format(model_desc_list))\n\nif use_thumbnails:\n display(Markdown(\"\"\"***Note: Images in this report have been converted to \"\"\"\n \"\"\"clickable thumbnails***\"\"\"))\n\n%%html\n<div id=\"toc\"></div>"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/pcmdi/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-3\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | 0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb | bsd-3-clause | [
"%matplotlib inline",
"The role of dipole orientations in distributed source localization\nWhen performing source localization in a distributed manner\n(MNE/dSPM/sLORETA/eLORETA),\nthe source space is defined as a grid of dipoles that spans a large portion of\nthe cortex. These dipoles have both a position and an orientation. In this\ntutorial, we will look at the various options available to restrict the\norientation of the dipoles and the impact on the resulting source estimate.\nSee inverse_orientation_constrains\nLoading data\nLoad everything we need to perform source localization on the sample dataset.",
"import mne\nimport numpy as np\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\ndata_path = sample.data_path()\nevokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')\nleft_auditory = evokeds[0].apply_baseline()\nfwd = mne.read_forward_solution(\n data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')\nmne.convert_forward_solution(fwd, surf_ori=True, copy=False)\nnoise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')\nsubject = 'sample'\nsubjects_dir = data_path + '/subjects'\ntrans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'",
"The source space\nLet's start by examining the source space as constructed by the\n:func:mne.setup_source_space function. Dipoles are placed along fixed\nintervals on the cortex, determined by the spacing parameter. The source\nspace does not define the orientation for these dipoles.",
"lh = fwd['src'][0] # Visualize the left hemisphere\nverts = lh['rr'] # The vertices of the source space\ntris = lh['tris'] # Groups of three vertices that form triangles\ndip_pos = lh['rr'][lh['vertno']] # The position of the dipoles\ndip_ori = lh['nn'][lh['vertno']]\ndip_len = len(dip_pos)\ndip_times = [0]\nwhite = (1.0, 1.0, 1.0) # RGB values for a white color\n\nactual_amp = np.ones(dip_len) # misc amp to create Dipole instance\nactual_gof = np.ones(dip_len) # misc GOF to create Dipole instance\ndipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)\ntrans = mne.read_trans(trans_fname)\n\nfig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)\ncoord_frame = 'mri'\n\n# Plot the cortex\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans, surfaces='white',\n coord_frame=coord_frame, fig=fig)\n\n# Mark the position of the dipoles with small red dots\nfig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans,\n mode='sphere', subject=subject,\n subjects_dir=subjects_dir,\n coord_frame=coord_frame,\n scale=7e-4, fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)",
"Fixed dipole orientations\nWhile the source space defines the position of the dipoles, the inverse\noperator defines the possible orientations of them. One of the options is to\nassign a fixed orientation. Since the neural currents from which MEG and EEG\nsignals originate flows mostly perpendicular to the cortex [1]_, restricting\nthe orientation of the dipoles accordingly places a useful restriction on the\nsource estimate.\nBy specifying fixed=True when calling\n:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are\nfixed to be orthogonal to the surface of the cortex, pointing outwards. Let's\nvisualize this:",
"fig = mne.viz.create_3d_figure(size=(600, 400))\n\n# Plot the cortex\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans,\n surfaces='white', coord_frame='head', fig=fig)\n\n# Show the dipoles as arrows pointing along the surface normal\nfig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans,\n mode='arrow', subject=subject,\n subjects_dir=subjects_dir,\n coord_frame='head',\n scale=7e-4, fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)",
"Restricting the dipole orientations in this manner leads to the following\nsource estimate for the sample data:",
"# Compute the source estimate for the 'left - auditory' condition in the sample\n# dataset.\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)\nstc = apply_inverse(left_auditory, inv, pick_ori=None)\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.get_peak(hemi='lh')\nbrain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,\n initial_time=time_max, time_unit='s', size=(600, 400))",
"The direction of the estimated current is now restricted to two directions:\ninward and outward. In the plot, blue areas indicate current flowing inwards\nand red areas indicate current flowing outwards. Given the curvature of the\ncortex, groups of dipoles tend to point in the same direction: the direction\nof the electromagnetic field picked up by the sensors.\nLoose dipole orientations\nForcing the source dipoles to be strictly orthogonal to the cortex makes the\nsource estimate sensitive to the spacing of the dipoles along the cortex,\nsince the curvature of the cortex changes within each ~10 square mm patch.\nFurthermore, misalignment of the MEG/EEG and MRI coordinate frames is more\ncritical when the source dipole orientations are strictly constrained [2]_.\nTo lift the restriction on the orientation of the dipoles, the inverse\noperator has the ability to place not one, but three dipoles at each\nlocation defined by the source space. These three dipoles are placed\northogonally to form a Cartesian coordinate system. Let's visualize this:",
"fig = mne.viz.create_3d_figure(size=(600, 400))\n\n# Plot the cortex\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans,\n surfaces='white', coord_frame='head', fig=fig)\n\n# Show the three dipoles defined at each location in the source space\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans, fwd=fwd,\n surfaces='white', coord_frame='head', fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)",
"When computing the source estimate, the activity at each of the three dipoles\nis collapsed into the XYZ components of a single vector, which leads to the\nfollowing source estimate for the sample data:",
"# Make an inverse operator with loose dipole orientations\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,\n loose=1.0)\n\n# Compute the source estimate, indicate that we want a vector solution\nstc = apply_inverse(left_auditory, inv, pick_ori='vector')\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.magnitude().get_peak(hemi='lh')\nbrain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,\n time_unit='s', size=(600, 400), overlay_alpha=0)",
"Limiting orientations, but not fixing them\nOften, the best results will be obtained by allowing the dipoles to have\nsomewhat free orientation, but not stray too far from a orientation that is\nperpendicular to the cortex. The loose parameter of the\n:func:mne.minimum_norm.make_inverse_operator allows you to specify a value\nbetween 0 (fixed) and 1 (unrestricted or \"free\") to indicate the amount the\norientation is allowed to deviate from the surface normal.",
"# Set loose to 0.2, the default value\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,\n loose=0.2)\nstc = apply_inverse(left_auditory, inv, pick_ori='vector')\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.magnitude().get_peak(hemi='lh')\nbrain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,\n time_unit='s', size=(600, 400), overlay_alpha=0)",
"Discarding dipole orientation information\nOften, further analysis of the data does not need information about the\norientation of the dipoles, but rather their magnitudes. The pick_ori\nparameter of the :func:mne.minimum_norm.apply_inverse function allows you\nto specify whether to return the full vector solution ('vector') or\nrather the magnitude of the vectors (None, the default) or only the\nactivity in the direction perpendicular to the cortex ('normal').",
"# Only retain vector magnitudes\nstc = apply_inverse(left_auditory, inv, pick_ori=None)\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.get_peak(hemi='lh')\nbrain = stc.plot(surface='white', subjects_dir=subjects_dir,\n initial_time=time_max, time_unit='s', size=(600, 400))",
"References\n.. [1] Hämäläinen, M. S., Hari, R., Ilmoniemi, R. J., Knuutila, J., &\n Lounasmaa, O. V. \"Magnetoencephalography - theory, instrumentation, and\n applications to noninvasive studies of the working human brain\", Reviews\n of Modern Physics, 1993. https://doi.org/10.1103/RevModPhys.65.413\n.. [2] Lin, F. H., Belliveau, J. W., Dale, A. M., & Hämäläinen, M. S. (2006).\n Distributed current estimates using cortical orientation constraints.\n Human Brain Mapping, 27(1), 1–13. http://doi.org/10.1002/hbm.20155"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cavestruz/MLPipeline | notebooks/anomaly_detection/sample_anomaly_detection.ipynb | mit | [
"Let us first explore an example that falls under novelty detection. Here, we train a model on data with some distribution and no outliers. The test data, has some \"novel\" subset of data that does not follow that distribution.",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import svm\n%matplotlib inline",
"Use the np.random module to generate a normal distribution of 1,000 data points in two dimensions (e.g. x, y) - choose whatever mean and sigma^2 you like. Generate another 1,000 data points with a normal distribution in two dimensions that are well separated from the first set. You now have two \"clusters\". Concatenate them so you have 2,000 data points in two dimensions. Plot the points. This will be the training set.\nPlot the points.\nGenerate 100 data points with the same distribution as your first random normal 2-d set, and 100 data points with the same distribution as your second random normal 2-d set. This will be the test set labeled X_test_normal.\nGenerate 100 data points with a random uniform distribution. This will be the test set labeled X_test_uniform.\nDefine a model classifier with the svm.OneClassSVM",
"model = svm.OneClassSVM()",
"Fit the model to the training data.\nUse the trained model to predict whether X_test_normal data point are in the same distributions. Calculate the fraction of \"false\" predictions.\nUse the trained model to predict whether X_test_uniform is in the same distribution. Calculate the fraction of \"false\" predictions.\nUse the trained model to see how well it recovers the training data. (Predict on the training data, and calculate the fraction of \"false\" predictions.)\nCreate another instance of the model classifier, but change the kwarg value for nu. Hint: Use help to figure out what the kwargs are.\nRedo the prediction on the training set, prediction on X_test_random, and prediction on X_test.\nPlot in scatter points the X_train in blue, X_test_normal in red, and X_test_uniform in black. Overplot the trained model decision function boundary for the first instance of the model classifier.\nDo the same for the second instance of the model classifier.",
"from sklearn.covariance import EllipticEnvelope",
"Test how well EllipticEnvelope predicts the outliers when you concatenate the training data with the X_test_uniform data.\nCompute and plot the mahanalobis distances of X_test, X_train_normal, X_train_uniform"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mzszym/oedes | examples/scl/scl-trapping.ipynb | agpl-3.0 | [
"Steady-state space-charge-limited current with traps\nThis example shows how to simulate effects of a single trap level on current-voltage characteristics of a single carrier device.",
"%matplotlib inline\nimport matplotlib.pylab as plt\nimport oedes\nimport numpy as np\noedes.init_notebook() # for displaying progress bars ",
"Model and parameters\nElectron only device is simulated, without contact barrier. Note that more trap levels can be included by modifying traps= argument below. Each trap level should have unique name.",
"L = 200e-9 # device thickness, m\nmodel = oedes.models.std.electrononly(L, traps=['trap'])\n\nparams = {\n 'T': 300, # K\n 'electrode0.workfunction': 0, # eV\n 'electrode1.workfunction': 0, # eV\n 'electron.energy': 0, # eV\n 'electron.mu': 1e-9, # m2/(Vs)\n 'electron.N0': 2.4e26, # 1/m^3\n 'electron.trap.energy': 0, # eV\n 'electron.trap.trate': 1e-22, # 1/(m^3 s)\n 'electron.trap.N0': 6.2e22, # 1/m^3\n 'electrode0.voltage': 0, # V\n 'electrode1.voltage': 0, # V\n 'epsilon_r': 3. # 1\n}",
"Sweep parameters\nFor simplicity, the case of absent traps is modeled by putting trap level 1 eV above transport level. This makes trap states effectively unoccupied.",
"trapenergy_sweep = oedes.sweep('electron.trap.energy',np.asarray([-0.45, -0.33, -0.21, 1.]))\nvoltage_sweep = oedes.sweep('electrode0.voltage', np.logspace(-3, np.log10(20.), 100))",
"Result",
"c=oedes.context(model)\n\nfor tdepth,ct in c.sweep(params, trapenergy_sweep):\n for _ in ct.sweep(ct.params, voltage_sweep):\n pass\n v,j = ct.teval(voltage_sweep.parameter_name,'J')\n oedes.testing.store(j, rtol=1e-3) # for automatic testing\n if tdepth < 0:\n label = 'no traps'\n else:\n label = 'trap depth %s eV' % tdepth\n plt.plot(v,j,label=label)\nplt.xscale('log')\nplt.yscale('log')\nplt.xlabel('V')\nplt.ylabel(r'$\\mathrm{A/m^2}$')\nplt.legend(loc=0,frameon=False);",
"This file is a part of oedes, an open source organic electronic device \nsimulator. For more information, see https://www.github.com/mzszym/oedes."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ptpro3/ptpro3.github.io | Projects/Project2/Project2_Prashant.ipynb | mit | [
"Project: Project 2: Luther\nDate: 02/03/2017\nName: Prashant Tatineni\nProject Overview\nFor Project Luther, I gathered the set of all films listed under movie franchises on boxofficemojo.com. My goal was to predict the success of a movie sequel (i.e., domestic gross in USD) based on the performance of other sequels, and especially based on previous films in that particular franchise. I saw some linear correlation between certain variables, like number of theaters, and the total domestic gross, but the predictions from my final model were not entirely reasonable. More time could be spent on better addressing the various outliers in the dataset.\nSummary of Solution Steps\n\nRetrieve data from boxofficemojo.com.\nClean up data and reduce to a set of predictor variables, with \"Adjusted Gross\" as the target for prediction.\nRun Linear Regression model.\nReview model performance.",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom IPython.display import Image\nimport requests\nfrom bs4 import BeautifulSoup\nimport dateutil.parser\nimport statsmodels.api as sm\nimport patsy\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import PolynomialFeatures\nimport sys, sklearn\nfrom sklearn import linear_model, preprocessing\nfrom sklearn import metrics\n\n%matplotlib inline",
"Step 1\nI started with the \"Franchises\" list on Boxofficemojo.com. Within each franchise page, I scraped each movie's information and enter it into a Python dictionary. If it's already in the dictionary, the entry will be overwritten, except with a different Franchise name. But note below that the url for \"Franchises\" list was sorted Ascending, so this conveniently rolls \"subfranchises\" into their \"parent\" franchise.\nE.g., \"Fantastic Beasts\" and the \"Harry Potter\" movies have their own separate Franchises, but they will all be tagged as the \"JKRowling\" franchise, i.e. \"./chart/?id=jkrowling.htm\"\nAlso, because I was comparing sequels to their predecessors, I focused on Domestic Gross, adjusted for ticket price inflation.",
"url = 'http://www.boxofficemojo.com/franchises/?view=Franchise&sort=nummovies&order=ASC&p=.htm'\nresponse = requests.get(url)\npage = response.text\nsoup = BeautifulSoup(page,\"lxml\")\ntables = soup.find_all(\"table\")\nrows = [row for row in tables[3].find_all('tr')]\nrows = rows[1:]\n\n# Initialize empty dictionary of movies\nmovies = {}\n\nfor row in rows:\n items = row.find_all('td')\n franchise = items[0].find('a')['href']\n franchiseurl = 'http://www.boxofficemojo.com/franchises/' + franchise[2:]\n response = requests.get(franchiseurl)\n \n franchise_page = response.text\n franchise_soup = BeautifulSoup(franchise_page,\"lxml\")\n franchise_tables = franchise_soup.find_all(\"table\")\n franchise_gross = [row for row in franchise_tables[4].find_all('tr')]\n franchise_gross = franchise_gross[1:len(franchise_gross)-2]\n franchise_adjgross = [row for row in franchise_tables[5].find_all('tr')]\n franchise_adjgross = franchise_adjgross[1:len(franchise_adjgross)-2]\n\n # Assign movieurl as key\n # Add title, franchise, inflation-adjusted gross, release date.\n for row in franchise_adjgross:\n movie_info = row.find_all('td')\n movieurl = movie_info[1].find('a')['href']\n title = movie_info[1]\n adjgross = movie_info[3]\n release = movie_info[5]\n movies[movieurl] = [title.text]\n movies[movieurl].append(franchise) \n movies[movieurl].append(adjgross.text) \n movies[movieurl].append(release.text)\n \n # Add number of theaters for the above movies\n for row in franchise_gross:\n movie_info = row.find_all('td')\n movieurl = movie_info[1].find('a')['href']\n theaters = movie_info[4]\n if movieurl in movies.keys():\n movies[movieurl].append(theaters.text)\n\ndf = pd.DataFrame(movies.values())\ndf.columns = ['Title','Franchise', 'AdjGross', 'Release', 'Theaters']\ndf.head()\n\ndf.shape",
"Step 2\nClean up data.",
"# Remove movies that were re-issues, special editions, or separate 3D or IMAX versions.\ndf['Ignore'] = df['Title'].apply(lambda x: 're-issue' in x.lower() or 're-release' in x.lower() or 'special edition' in x.lower() or '3d)' in x.lower() or 'imax' in x.lower())\ndf = df[(df.Ignore == False)]\ndel df['Ignore']\ndf.shape\n\n# Convert Adjusted Gross to a number\ndf['AdjGross'] = df['AdjGross'].apply(lambda x: int(x.replace('$','').replace(',','')))\n\n# Convert Date string to dateobject. Need to prepend '19' for dates > 17 because Python treats '/60' as year '2060'\ndf['Release'] = df['Release'].apply(lambda x: (x[:-2] + '19' + x[-2:]) if int(x[-2:]) > 17 else x)\ndf['Release'] = df['Release'].apply(lambda x: dateutil.parser.parse(x))",
"The films need to be grouped by franchise so that franchise-related data can be included as featured for each observation.\n- The Average Adjusted Gross of all previous films in the franchise\n- The Adjusted Gross of the very first film in the franchise\n- The Release Date of the previous film in the franchise\n- The Release Date of the very first film in the franchise\n- The Series Number of the film in that franchise\n-- I considered using the film's number in the franchise as a rank value that could be split into indicator variables, but it's useful as a linear value because the total accrued sum of $ earned by the franchise is a linear combination of \"SeriesNum\" and \"PrevAvgGross\"",
"df = df.sort_values(['Franchise','Release'])\ndf['CumGross'] = df.groupby(['Franchise'])['AdjGross'].apply(lambda x: x.cumsum())\ndf['SeriesNum'] = df.groupby(['Franchise'])['Release'].apply(lambda x: x.rank())\ndf['PrevAvgGross'] = (df['CumGross'] - df['AdjGross'])/(df['SeriesNum'] - 1)",
"Number of Theaters in which the film showed\n-- Where this number was unavailable, replaced '-' with 0; the 0 will later be replaced with the mean number of theaters for the other films in the same franchise. I chose the average as a reasonable estimate.",
"df.Theaters = df.Theaters.replace('-','0')\ndf['Theaters'] = df['Theaters'].apply(lambda x: int(x.replace(',','')))\n\ndf['PrevRelease'] = df['Release'].shift()\n\n# Create a second dataframe with franchise group-related information.\ndf_group = pd.DataFrame(df.groupby(['Franchise'])['Title'].apply(lambda x: x.count()))\ndf_group['FirstGross'] = df.groupby(['Franchise'])['AdjGross'].first()\ndf_group['FirstRelease'] = df.groupby(['Franchise'])['Release'].first()\ndf_group['SumTheaters'] = df.groupby(['Franchise'])['Theaters'].apply(lambda x: x.sum())\n\ndf_group.columns = ['NumOfFilms','FirstGross','FirstRelease','SumTheaters']\ndf_group['AvgTheaters'] = df_group['SumTheaters']/df_group['NumOfFilms']\n\ndf_group['Franchise'] = df.groupby(['Franchise'])['Franchise'].first()\n\ndf = df.merge(df_group, on='Franchise')\n\ndf.head()\n\ndf['Theaters'] = df.Theaters.replace(0,df.AvgTheaters)\n\n# Drop rows with NaN. Drops all first films, but I've already stored first film information within other features.\ndf = df.dropna()\ndf.shape\n\ndf['DaysSinceFirstFilm'] = df.Release - df.FirstRelease\ndf['DaysSinceFirstFilm'] = df['DaysSinceFirstFilm'].apply(lambda x: x.days)\n\ndf['DaysSincePrevFilm'] = df.Release - df.PrevRelease\ndf['DaysSincePrevFilm'] = df['DaysSincePrevFilm'].apply(lambda x: x.days)\n\ndf.sort_values('Release',ascending=False).head()",
"For the regression model, I decided to keep data for films released through 2016, but drop the 3 films released this year; because of their recent release date, their gross earnings will not yet be representative.",
"films17 = df.loc[[530,712,676]]\n\n# Grabbing columns for regression model and dropping 2017 films\ndfreg = df[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]\ndfreg = dfreg.drop([530,712,676])\ndfreg.shape",
"Step 3\nApply Linear Regression.",
"dfreg.corr()\n\nsns.pairplot(dfreg);\n\nsns.regplot((dfreg.PrevAvgGross), (dfreg.AdjGross));\n\nsns.regplot(np.log(dfreg.Theaters), np.log(dfreg.AdjGross));",
"In the pairplot we can see that 'AdjGross' may have some correlation with the variables, particularly 'Theaters' and 'PrevAvgGross'. However, it looks like a polynomial model, or natural log / some other transformation will be required before fitting a linear model.",
"y, X = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=dfreg, return_type=\"dataframe\")",
"First try: Initial linear regression model with statsmodels",
"model = sm.OLS(y, X)\nfit = model.fit()\nfit.summary()\n\nfit.resid.plot(style='o');",
"Try Polynomial Regression",
"polyX=PolynomialFeatures(2).fit_transform(X)\n\npolymodel = sm.OLS(y, polyX)\npolyfit = polymodel.fit()\npolyfit.rsquared\n\npolyfit.resid.plot(style='o');\n\npolyfit.rsquared_adj",
"Heteroskedasticity\nThe polynomial regression improved the Adjusted Rsquared and the residual plot, but there's still issues with other statistics including skew. It's worth running the Breusch-Pagan test:",
"hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']\nhettest = sm.stats.diagnostic.het_breushpagan(fit.resid, fit.model.exog)\nzip(hetnames,hettest)\n\nhetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']\nhettest = sm.stats.diagnostic.het_breushpagan(polyfit.resid, fit.model.exog)\nzip(hetnames,hettest)",
"Apply Box-Cox Transformation\nAs seen above the p-values were very low, suggesting the data is indeed tending towards heteroskedasticity. To improve the data we can apply boxcox.",
"dfPolyX = pd.DataFrame(polyX)\nbcPolyX = pd.DataFrame()\nfor i in range(dfPolyX.shape[1]):\n bcPolyX[i] = scipy.stats.boxcox(dfPolyX[i])[0]\n\n# Transformed data with Box-Cox:\nbcPolyX.head()\n\n# Introduce log(y) for target variable:\ny = y.reset_index(drop=True)\nlogy = np.log(y)",
"Try Polynomial Regression again with Log Y and Box-Cox transformed X",
"logPolyModel = sm.OLS(logy, bcPolyX)\nlogPolyFit = logPolyModel.fit()\nlogPolyFit.rsquared_adj",
"Apply Regularization using Elastic Net to optimize this model.",
"X_scaled = preprocessing.scale(bcPolyX)\nen_cv = linear_model.ElasticNetCV(cv=10, normalize=False)\nen_cv.fit(X_scaled, logy)\n\nen_cv.coef_\n\nlogy_en = en_cv.predict(X_scaled)\nmse = metrics.mean_squared_error(logy, logy_en)\n\n# The mean square error for this model\nmse\n\nplt.scatter([x for x in range(540)],(pd.DataFrame(logy_en)[0] - logy['AdjGross']));",
"Step 4\nAs seen above, Polynomial Regression with Elastic Net produces a model with several nonzero coefficients for the given features. I decided to try testing this model on the three new sequels for 2017.",
"films17\n\ndf17 = films17[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]\ny17, X17 = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=df17, return_type=\"dataframe\")\npolyX17 = PolynomialFeatures(2).fit_transform(X17)\n\ndfPolyX17 = pd.DataFrame(polyX17)\nbcPolyX17 = pd.DataFrame()\nfor i in range(dfPolyX17.shape[1]):\n bcPolyX17[i] = scipy.stats.boxcox(dfPolyX17[i])[0]\nX17_scaled = preprocessing.scale(bcPolyX17)\n\n# Run the \"en_cv\" model from above on the 2017 data:\nlogy_en_2017 = en_cv.predict(X17_scaled)\n\n# Predicted Adjusted Gross:\npd.DataFrame(np.exp(logy_en_2017))\n\n# Adjusted Gross as of 2/1:\ny17"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyLCARS/PythonUberHDL | myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb | bsd-3-clause | [
"\\title{myHDL Combinational Logic Elements: Multiplexers (MUXs))}\n\\author{Steven K Armour}\n\\maketitle\n<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><span><a href=\"#Refrances\" data-toc-modified-id=\"Refrances-1\"><span class=\"toc-item-num\">1 </span>Refrances</a></span></li><li><span><a href=\"#Libraries-and-Helper-functions\" data-toc-modified-id=\"Libraries-and-Helper-functions-2\"><span class=\"toc-item-num\">2 </span>Libraries and Helper functions</a></span></li><li><span><a href=\"#Multiplexers\" data-toc-modified-id=\"Multiplexers-3\"><span class=\"toc-item-num\">3 </span>Multiplexers</a></span></li><li><span><a href=\"#2-Channel-Input:1-Channel-Output-multiplexer-in-Gate-Level-Logic\" data-toc-modified-id=\"2-Channel-Input:1-Channel-Output-multiplexer-in-Gate-Level-Logic-4\"><span class=\"toc-item-num\">4 </span>2 Channel Input:1 Channel Output multiplexer in Gate Level Logic</a></span><ul class=\"toc-item\"><li><span><a href=\"#Sympy-Expression\" data-toc-modified-id=\"Sympy-Expression-4.1\"><span class=\"toc-item-num\">4.1 </span>Sympy Expression</a></span></li><li><span><a href=\"#myHDL-Module\" data-toc-modified-id=\"myHDL-Module-4.2\"><span class=\"toc-item-num\">4.2 </span>myHDL Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-4.3\"><span class=\"toc-item-num\">4.3 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-4.4\"><span class=\"toc-item-num\">4.4 </span>Verilog Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog-Testbench-4.5\"><span class=\"toc-item-num\">4.5 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href=\"#PYNQ-Z1-Deployment\" data-toc-modified-id=\"PYNQ-Z1-Deployment-4.6\"><span class=\"toc-item-num\">4.6 </span>PYNQ-Z1 Deployment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Board-Circuit\" data-toc-modified-id=\"Board-Circuit-4.6.1\"><span class=\"toc-item-num\">4.6.1 </span>Board Circuit</a></span></li><li><span><a href=\"#Board-Constraint\" data-toc-modified-id=\"Board-Constraint-4.6.2\"><span class=\"toc-item-num\">4.6.2 </span>Board Constraint</a></span></li><li><span><a href=\"#Video-of-Deployment\" data-toc-modified-id=\"Video-of-Deployment-4.6.3\"><span class=\"toc-item-num\">4.6.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href=\"#4-Channel-Input-:-1-Channel-Output-multiplexer-in-Gate-Level-Logic\" data-toc-modified-id=\"4-Channel-Input-:-1-Channel-Output-multiplexer-in-Gate-Level-Logic-5\"><span class=\"toc-item-num\">5 </span>4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic</a></span><ul class=\"toc-item\"><li><span><a href=\"#Sympy-Expression\" data-toc-modified-id=\"Sympy-Expression-5.1\"><span class=\"toc-item-num\">5.1 </span>Sympy Expression</a></span></li><li><span><a href=\"#myHDL-Module\" data-toc-modified-id=\"myHDL-Module-5.2\"><span class=\"toc-item-num\">5.2 </span>myHDL Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-5.3\"><span class=\"toc-item-num\">5.3 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-5.4\"><span class=\"toc-item-num\">5.4 </span>Verilog Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog-Testbench\" 
data-toc-modified-id=\"myHDL-to-Verilog-Testbench-5.5\"><span class=\"toc-item-num\">5.5 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href=\"#PYNQ-Z1-Deployment\" data-toc-modified-id=\"PYNQ-Z1-Deployment-5.6\"><span class=\"toc-item-num\">5.6 </span>PYNQ-Z1 Deployment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Board-Circuit\" data-toc-modified-id=\"Board-Circuit-5.6.1\"><span class=\"toc-item-num\">5.6.1 </span>Board Circuit</a></span></li><li><span><a href=\"#Board-Constraint\" data-toc-modified-id=\"Board-Constraint-5.6.2\"><span class=\"toc-item-num\">5.6.2 </span>Board Constraint</a></span></li><li><span><a href=\"#Video-of-Deployment\" data-toc-modified-id=\"Video-of-Deployment-5.6.3\"><span class=\"toc-item-num\">5.6.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href=\"#Shannon's-Expansion-Formula-&-Stacking-of-MUXs\" data-toc-modified-id=\"Shannon's-Expansion-Formula-&-Stacking-of-MUXs-6\"><span class=\"toc-item-num\">6 </span>Shannon's Expansion Formula & Stacking of MUXs</a></span></li><li><span><a href=\"#4-Channel-Input:-1-Channel-Output-multiplexer-via-MUX-Stacking\" data-toc-modified-id=\"4-Channel-Input:-1-Channel-Output-multiplexer-via-MUX-Stacking-7\"><span class=\"toc-item-num\">7 </span>4 Channel Input: 1 Channel Output multiplexer via MUX Stacking</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Module\" data-toc-modified-id=\"myHDL-Module-7.1\"><span class=\"toc-item-num\">7.1 </span>myHDL Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-7.2\"><span class=\"toc-item-num\">7.2 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-7.3\"><span class=\"toc-item-num\">7.3 </span>Verilog Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog-Testbench-7.4\"><span class=\"toc-item-num\">7.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href=\"#PYNQ-Z1-Deployment\" data-toc-modified-id=\"PYNQ-Z1-Deployment-7.5\"><span class=\"toc-item-num\">7.5 </span>PYNQ-Z1 Deployment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Board-Circuit\" data-toc-modified-id=\"Board-Circuit-7.5.1\"><span class=\"toc-item-num\">7.5.1 </span>Board Circuit</a></span></li><li><span><a href=\"#Board-Constraint\" data-toc-modified-id=\"Board-Constraint-7.5.2\"><span class=\"toc-item-num\">7.5.2 </span>Board Constraint</a></span></li><li><span><a href=\"#Video-of-Deployment\" data-toc-modified-id=\"Video-of-Deployment-7.5.3\"><span class=\"toc-item-num\">7.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href=\"#Introduction-to-HDL-Behavioral-Modeling\" data-toc-modified-id=\"Introduction-to-HDL-Behavioral-Modeling-8\"><span class=\"toc-item-num\">8 </span>Introduction to HDL Behavioral Modeling</a></span></li><li><span><a href=\"#2:1-MUX-via-Behavioral-IF\" data-toc-modified-id=\"2:1-MUX-via-Behavioral-IF-9\"><span class=\"toc-item-num\">9 </span>2:1 MUX via Behavioral IF</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Module\" data-toc-modified-id=\"myHDL-Module-9.1\"><span class=\"toc-item-num\">9.1 </span>myHDL Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-9.2\"><span class=\"toc-item-num\">9.2 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-9.3\"><span class=\"toc-item-num\">9.3 
</span>Verilog Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog-Testbench-9.4\"><span class=\"toc-item-num\">9.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href=\"#PYNQ-Z1-Deployment\" data-toc-modified-id=\"PYNQ-Z1-Deployment-9.5\"><span class=\"toc-item-num\">9.5 </span>PYNQ-Z1 Deployment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Board-Circuit\" data-toc-modified-id=\"Board-Circuit-9.5.1\"><span class=\"toc-item-num\">9.5.1 </span>Board Circuit</a></span></li><li><span><a href=\"#Board-Constraint\" data-toc-modified-id=\"Board-Constraint-9.5.2\"><span class=\"toc-item-num\">9.5.2 </span>Board Constraint</a></span></li><li><span><a href=\"#Video-of-Deployment\" data-toc-modified-id=\"Video-of-Deployment-9.5.3\"><span class=\"toc-item-num\">9.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href=\"#4:1-MUX-via-Behavioral-if-elif-else\" data-toc-modified-id=\"4:1-MUX-via-Behavioral-if-elif-else-10\"><span class=\"toc-item-num\">10 </span>4:1 MUX via Behavioral if-elif-else</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Module\" data-toc-modified-id=\"myHDL-Module-10.1\"><span class=\"toc-item-num\">10.1 </span>myHDL Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-10.2\"><span class=\"toc-item-num\">10.2 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-10.3\"><span class=\"toc-item-num\">10.3 </span>Verilog Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog-Testbench-10.4\"><span class=\"toc-item-num\">10.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href=\"#PYNQ-Z1-Deployment\" data-toc-modified-id=\"PYNQ-Z1-Deployment-10.5\"><span class=\"toc-item-num\">10.5 </span>PYNQ-Z1 Deployment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Board-Circuit\" data-toc-modified-id=\"Board-Circuit-10.5.1\"><span class=\"toc-item-num\">10.5.1 </span>Board Circuit</a></span></li><li><span><a href=\"#Board-Constraint\" data-toc-modified-id=\"Board-Constraint-10.5.2\"><span class=\"toc-item-num\">10.5.2 </span>Board Constraint</a></span></li><li><span><a href=\"#Video-of-Deployment\" data-toc-modified-id=\"Video-of-Deployment-10.5.3\"><span class=\"toc-item-num\">10.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href=\"#Multiplexer-4:1-Behavioral-via-Bitvectors\" data-toc-modified-id=\"Multiplexer-4:1-Behavioral-via-Bitvectors-11\"><span class=\"toc-item-num\">11 </span>Multiplexer 4:1 Behavioral via Bitvectors</a></span><ul class=\"toc-item\"><li><span><a href=\"#myHDL-Module\" data-toc-modified-id=\"myHDL-Module-11.1\"><span class=\"toc-item-num\">11.1 </span>myHDL Module</a></span></li><li><span><a href=\"#myHDL-Testing\" data-toc-modified-id=\"myHDL-Testing-11.2\"><span class=\"toc-item-num\">11.2 </span>myHDL Testing</a></span></li><li><span><a href=\"#Verilog-Conversion\" data-toc-modified-id=\"Verilog-Conversion-11.3\"><span class=\"toc-item-num\">11.3 </span>Verilog Conversion</a></span></li><li><span><a href=\"#myHDL-to-Verilog-Testbench\" data-toc-modified-id=\"myHDL-to-Verilog-Testbench-11.4\"><span class=\"toc-item-num\">11.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href=\"#PYNQ-Z1-Deployment\" data-toc-modified-id=\"PYNQ-Z1-Deployment-11.5\"><span class=\"toc-item-num\">11.5 </span>PYNQ-Z1 
Deployment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Board-Circuit\" data-toc-modified-id=\"Board-Circuit-11.5.1\"><span class=\"toc-item-num\">11.5.1 </span>Board Circuit</a></span></li><li><span><a href=\"#Board-Constraint\" data-toc-modified-id=\"Board-Constraint-11.5.2\"><span class=\"toc-item-num\">11.5.2 </span>Board Constraint</a></span></li><li><span><a href=\"#Video-of-Deployment\" data-toc-modified-id=\"Video-of-Deployment-11.5.3\"><span class=\"toc-item-num\">11.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li></ul></div>\n\nRefrances\n@misc{xu_2018,\ntitle={Introduction to Digital Systems Supplementary Reading Shannon's Expansion Formulas and Compressed Truth Table},\nauthor={Xu, Xuping},\nyear={Fall 2017}\nsite=http://ecse.bd.psu.edu/cse271/comprttb.pdf\n}\nLibraries and Helper functions",
"#This notebook also uses the `(some) LaTeX environments for Jupyter`\n#https://github.com/ProfFan/latex_envs wich is part of the\n#jupyter_contrib_nbextensions package\n\nfrom myhdl import *\nfrom myhdlpeek import Peeker\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import *\ninit_printing()\n\nimport itertools\n\n#https://github.com/jrjohansson/version_information\n%load_ext version_information\n%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, itertools, SchemDraw\n\n#helper functions to read in the .v and .vhd generated files into python\ndef VerilogTextReader(loc, printresult=True):\n with open(f'{loc}.v', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***Verilog modual from {loc}.v***\\n\\n', VerilogText)\n return VerilogText\n\ndef VHDLTextReader(loc, printresult=True):\n with open(f'{loc}.vhd', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***VHDL modual from {loc}.vhd***\\n\\n', VerilogText)\n return VerilogText\n\ndef ConstraintXDCTextReader(loc, printresult=True):\n with open(f'{loc}.xdc', 'r') as xdcText:\n ConstraintText=xdcText.read()\n if printresult:\n print(f'***Constraint file from {loc}.xdc***\\n\\n', ConstraintText)\n return ConstraintText\n\ndef TruthTabelGenrator(BoolSymFunc):\n \"\"\"\n Function to generate a truth table from a sympy boolian expression\n BoolSymFunc: sympy boolian expression\n return TT: a Truth table stored in a pandas dataframe\n \"\"\"\n colsL=sorted([i for i in list(BoolSymFunc.rhs.atoms())], key=lambda x:x.sort_key())\n colsR=sorted([i for i in list(BoolSymFunc.lhs.atoms())], key=lambda x:x.sort_key())\n bitwidth=len(colsL)\n cols=colsL+colsR; cols\n \n TT=pd.DataFrame(columns=cols, index=range(2**bitwidth))\n \n for i in range(2**bitwidth):\n inputs=[int(j) for j in list(np.binary_repr(i, bitwidth))]\n outputs=BoolSymFunc.rhs.subs({j:v for j, v in zip(colsL, inputs)})\n inputs.append(int(bool(outputs)))\n TT.iloc[i]=inputs\n \n return TT\n \n \n ",
"Multiplexers\n\\begin{definition}\\label{def:MUX}\nA Multiplexer, typically referred to as a MUX, is a Digital(or analog) switching unit that picks one input channel to be streamed to an output via a control input. For single output MUXs with $2^n$ inputs, there are then $n$ input selection signals that make up the control word to select the input channel for output.\nFrom a behavioral standpoint, a MUX can be thought of as an element that performs the same functionality as the if-elif-else (case) control statements found in almost every software language.\n\\end{definition}\n2 Channel Input:1 Channel Output multiplexer in Gate Level Logic\n\\begin{figure}\n\\centerline{\\includegraphics{MUX21Gate.png}}\n\\caption{\\label{fig:M21G} 2:1 MUX Symbol and Gate internals}\n\\end{figure}\nSympy Expression",
"x0, x1, s, y=symbols('x0, x1, s, y')\ny21Eq=Eq(y, (~s&x0) |(s&x1) ); y21Eq\n\nTruthTabelGenrator(y21Eq)[[x1, x0, s, y]]\n\ny21EqN=lambdify([x0, x1, s], y21Eq.rhs, dummify=False)\nSystmaticVals=np.array(list(itertools.product([0,1], repeat=3)))\nprint(SystmaticVals)\ny21EqN(SystmaticVals[:, 1], SystmaticVals[:, 2], SystmaticVals[:, 0]).astype(int)",
"myHDL Module",
"@block\ndef MUX2_1_Combo(x0, x1, s, y):\n \"\"\"\n 2:1 Multiplexer written in full combo\n Input:\n x0(bool): input channel 0\n x1(bool): input channel 1\n s(bool): channel selection input \n Output:\n y(bool): ouput\n \"\"\"\n \n @always_comb\n def logic():\n y.next= (not s and x0) |(s and x1)\n \n return instances()",
"myHDL Testing",
"#generate systmatic and random test values \n#stimules inputs X1 and X2\nTestLen=10\nSystmaticVals=list(itertools.product([0,1], repeat=3))\n\nx0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)\nnp.random.seed(15)\nx0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx1TVs=np.array([i[2] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(16)\nx1TVs=np.append(x1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nsTVs=np.array([i[0] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(17)\nsTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\nTestLen=len(x0TVs)\nx0TVs, x1TVs, sTVs, TestLen\n\nPeeker.clear()\nx0=Signal(bool(0)); Peeker(x0, 'x0')\nx1=Signal(bool(0)); Peeker(x1, 'x1')\ns=Signal(bool(0)); Peeker(s, 's')\ny=Signal(bool(0)); Peeker(y, 'y')\n\nDUT=MUX2_1_Combo(x0, x1, s, y)\n\ndef MUX2_1_Combo_TB():\n \"\"\"\n myHDL only testbench for module `MUX2_1_Combo`\n \"\"\"\n \n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TVs[i])\n x1.next=int(x1TVs[i])\n s.next=int(sTVs[i])\n\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, MUX2_1_Combo_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom('x1', 'x0', 's', 'y')\n\nMUX2_1_ComboData=Peeker.to_dataframe()\nMUX2_1_ComboData=MUX2_1_ComboData[['x1', 'x0', 's', 'y']]\nMUX2_1_ComboData\n\nMUX2_1_ComboData['yRef']=MUX2_1_ComboData.apply(lambda row:y21EqN(row['x0'], row['x1'], row['s']), axis=1).astype(int)\nMUX2_1_ComboData\n\nTest=(MUX2_1_ComboData['y']==MUX2_1_ComboData['yRef']).all()\nprint(f'Module `MUX2_1_Combo` works as exspected: {Test}')",
"Verilog Conversion",
"DUT.convert()\nVerilogTextReader('MUX2_1_Combo');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX2_1_Combo_RTL.png}}\n\\caption{\\label{fig:M21CRTL} MUX2_1_Combo RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX2_1_Combo_SYN.png}}\n\\caption{\\label{fig:M21CSYN} MUX2_1_Combo Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX2_1_Combo_IMP.png}}\n\\caption{\\label{fig:M21CIMP} MUX2_1_Combo Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog Testbench",
"#create BitVectors\nx0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]\nx1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]\nsTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]\n\nx0TVs, bin(x0TVs), x1TVs, bin(x1TVs), sTVs, bin(sTVs)\n\n@block\ndef MUX2_1_Combo_TBV():\n \"\"\"\n myHDL -> Verilog testbench for module `MUX2_1_Combo`\n \"\"\"\n x0=Signal(bool(0))\n x1=Signal(bool(0))\n s=Signal(bool(0))\n y=Signal(bool(0))\n \n @always_comb\n def print_data():\n print(x0, x1, s, y)\n \n #Test Signal Bit Vectors\n x0TV=Signal(x0TVs)\n x1TV=Signal(x1TVs)\n sTV=Signal(sTVs)\n\n\n DUT=MUX2_1_Combo(x0, x1, s, y)\n\n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TV[i])\n x1.next=int(x1TV[i])\n s.next=int(sTV[i])\n yield delay(1)\n \n raise StopSimulation()\n return instances()\n\nTB=MUX2_1_Combo_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('MUX2_1_Combo_TBV'); ",
"PYNQ-Z1 Deployment\nBoard Circuit\n\\begin{figure}\n\\centerline{\\includegraphics[width=5cm]{MUX21PYNQZ1Circ.png}}\n\\caption{\\label{fig:M21Circ} 2:1 MUX PYNQ-Z1 (Non SoC) conceptualized circuit}\n\\end{figure}\nBoard Constraint",
"ConstraintXDCTextReader('MUX2_1');",
"Video of Deployment\nMUX2_1_Combo myHDL PYNQ-Z1 (YouTube)\n4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic\nSympy Expression",
"x0, x1, x2, x3, s0, s1, y=symbols('x0, x1, x2, x3, s0, s1, y')\ny41Eq=Eq(y, (~s0&~s1&x0) | (s0&~s1&x1)| (~s0&s1&x2)|(s0&s1&x3))\ny41Eq\n\nTruthTabelGenrator(y41Eq)[[x3, x2, x1, x0, s1, s0, y]]\n\ny41EqN=lambdify([x0, x1, x2, x3, s0, s1], y41Eq.rhs, dummify=False)\nSystmaticVals=np.array(list(itertools.product([0,1], repeat=6)))\nSystmaticVals\ny41EqN(*[SystmaticVals[:, i] for i in range(6)] ).astype(int)",
"myHDL Module",
"@block\ndef MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y):\n \"\"\"\n 4:1 Multiplexer written in full combo\n Input:\n x0(bool): input channel 0\n x1(bool): input channel 1\n x2(bool): input channel 2\n x3(bool): input channel 3\n s1(bool): channel selection input bit 1\n s0(bool): channel selection input bit 0 \n Output:\n y(bool): ouput\n \"\"\"\n \n @always_comb\n def logic():\n y.next= (not s0 and not s1 and x0) or (s0 and not s1 and x1) or (not s0 and s1 and x2) or (s0 and s1 and x3)\n \n return instances()",
"myHDL Testing",
"#generate systmatic and random test values \nTestLen=5\nSystmaticVals=list(itertools.product([0,1], repeat=6))\n\ns0TVs=np.array([i[0] for i in SystmaticVals]).astype(int)\nnp.random.seed(15)\ns0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\ns1TVs=np.array([i[1] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(16)\ns1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\nx0TVs=np.array([i[2] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(17)\nx0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx1TVs=np.array([i[3] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(18)\nx1TVs=np.append(x1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx2TVs=np.array([i[4] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(19)\nx2TVs=np.append(x2TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx3TVs=np.array([i[5] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(20)\nx3TVs=np.append(x3TVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\n\n\nTestLen=len(x0TVs)\nSystmaticVals, s0TVs, s1TVs, x3TVs, x2TVs, x1TVs, x0TVs, TestLen\n\nPeeker.clear()\nx0=Signal(bool(0)); Peeker(x0, 'x0')\nx1=Signal(bool(0)); Peeker(x1, 'x1')\nx2=Signal(bool(0)); Peeker(x2, 'x2')\nx3=Signal(bool(0)); Peeker(x3, 'x3')\n\ns0=Signal(bool(0)); Peeker(s0, 's0')\ns1=Signal(bool(0)); Peeker(s1, 's1')\ny=Signal(bool(0)); Peeker(y, 'y')\n\nDUT=MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y)\n\ndef MUX4_1_Combo_TB():\n \"\"\"\n myHDL only testbench for module `MUX4_1_Combo`\n \"\"\"\n \n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TVs[i])\n x1.next=int(x1TVs[i])\n x2.next=int(x2TVs[i])\n x3.next=int(x3TVs[i])\n s0.next=int(s0TVs[i])\n s1.next=int(s1TVs[i])\n\n \n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, MUX4_1_Combo_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom()\n\nMUX4_1_ComboData=Peeker.to_dataframe()\nMUX4_1_ComboData=MUX4_1_ComboData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]\nMUX4_1_ComboData\n\nMUX4_1_ComboData['yRef']=MUX4_1_ComboData.apply(lambda row:y41EqN(row['x0'], row['x1'], row['x2'], row['x3'], row['s0'], row['s1']), axis=1).astype(int)\nMUX4_1_ComboData\n\nTest=(MUX4_1_ComboData['y']==MUX4_1_ComboData['yRef']).all()\nprint(f'Module `MUX4_1_Combo` works as exspected: {Test}')",
"Verilog Conversion",
"DUT.convert()\nVerilogTextReader('MUX4_1_Combo');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_Combo_RTL.png}}\n\\caption{\\label{fig:M41CRTL} MUX4_1_Combo RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_Combo_SYN.png}}\n\\caption{\\label{fig:M41CSYN} MUX4_1_Combo Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_Combo_IMP.png}}\n\\caption{\\label{fig:M41CIMP} MUX4_1_Combo Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog Testbench",
"#create BitVectors for MUX4_1_Combo_TBV\nx0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]\nx1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]\nx2TVs=intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:]\nx3TVs=intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:]\n\n\ns0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]\ns1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]\n\n\nx0TVs, bin(x0TVs), x1TVs, bin(x1TVs), x2TVs, bin(x2TVs), x3TVs, bin(x3TVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)\n\n@block\ndef MUX4_1_Combo_TBV():\n \"\"\"\n myHDL -> Verilog testbench for module `MUX4_1_Combo`\n \"\"\"\n \n x0=Signal(bool(0))\n x1=Signal(bool(0))\n x2=Signal(bool(0))\n x3=Signal(bool(0))\n y=Signal(bool(0))\n s0=Signal(bool(0))\n s1=Signal(bool(0))\n\n \n @always_comb\n def print_data():\n print(x0, x1, x2, x3, s0, s1, y)\n \n #Test Signal Bit Vectors\n x0TV=Signal(x0TVs)\n x1TV=Signal(x1TVs)\n x2TV=Signal(x2TVs)\n x3TV=Signal(x3TVs)\n s0TV=Signal(s0TVs)\n s1TV=Signal(s1TVs)\n\n\n DUT=MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y)\n\n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TV[i])\n x1.next=int(x1TV[i])\n x2.next=int(x2TV[i])\n x3.next=int(x3TV[i])\n s0.next=int(s0TV[i])\n s1.next=int(s1TV[i])\n yield delay(1)\n \n raise StopSimulation()\n return instances()\n\nTB=MUX4_1_Combo_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('MUX4_1_Combo_TBV'); ",
"PYNQ-Z1 Deployment\nBoard Circuit\n\\begin{figure}\n\\centerline{\\includegraphics[width=5cm]{MUX41PYNQZ1Circ.png}}\n\\caption{\\label{fig:M41Circ} 4:1 MUX PYNQ-Z1 (Non SoC) conceptualized circuit}\n\\end{figure}\nBoard Constraint",
"ConstraintXDCTextReader('MUX4_1');",
"Video of Deployment\nMUX4_1_MS myHDL PYNQ-Z1 (YouTube)\nShannon's Expansion Formula & Stacking of MUXs\nClaude Shannon, of the famed Shannon-Nyquist theorem, discovered that any boolean expression $F(x_0, x_1, \\ldots, x_n)$ can be decomposed in a manner akin to polynomials of perfect squares via\n$$\nF(x_0, x_1, \\ldots, x_n)=x_0 \\cdot F(x_0=1, x_1, \\ldots, x_n) +\\overline{x_0} \\cdot F(x_0=0, x_1, \\ldots, x_n)\n$$\nknown as the Sum of Products (SOP) form since when the expansion is completed for all $x_n$ the result is that \n$$\nF(x_0, x_1, \\ldots, x_n)=\\sum^{2^n-1}_{i=0} (m_i \\cdot F(m_i))\n$$ \naka the Sum of all Minterms ($m_i$) belonging to the original boolean expression $F$ factored down to the $i$th of $n$ variables belonging to $F$ and product (&) of $F$ evaluated with the respective minterm as the argument\nThe Dual to the SOP form of Shannon's expansion formula is the Product of Sum (POS) form \n$$\nF(x_0, x_1, \\ldots, x_n)=(x_0+ F(x_0=1, x_1, \\ldots, x_n)) \\cdot (\\overline{x_0} + F(x_0=0, x_1, \\ldots, x_n))\n$$\nthus \n$$F(x_0, x_1, \\ldots, x_n)=\\prod^{2^n-1}_{i=0} (M_i + F(M_i))\n$$\nwith $M_i$ being the $i$th Maxterm\nit is for this reason that Shannon's Expansion Formula is known is further liked to the fundamental theorem of algebra that it is called the \"fundamental theorem of Boolean algebra\"\nSo why then is Shannon's decomposition formula discussed in terms of Multiplexers. Because the general expression for a $2^n:1$ multiplexer is \n$$y_{\\text{MUX}}=\\sum^{2^n-1}_{i=0}m_i\\cdot x_n$$ where then $n$ is the required number of control inputs (referred to in this tutorial as $s_i$). Which is the same as the SOP form of Shannon's Formula for a boolean expression that has been fully decomposed (Factored). And further, if the boolean expression has not been fully factored we can replace $n-1$ parts of the partially factored expression with multiplexers. This then gives way to what is called \"Multiplexer Stacking\" in order to implement large boolean expressions and or large multiplexers\n4 Channel Input: 1 Channel Output multiplexer via MUX Stacking\n\\begin{figure}\n\\centerline{\\includegraphics{MUX41MS.png}}\n\\caption{\\label{fig:M41MS} 4:1 MUX via MUX stacking 2:1MUXs}\n\\end{figure}\nmyHDL Module",
"@block\ndef MUX4_1_MS(x0, x1, x2, x3, s0, s1, y):\n \"\"\"\n 4:1 Multiplexer via 2:1 MUX stacking\n Input:\n x0(bool): input channel 0\n x1(bool): input channel 1\n x2(bool): input channel 2\n x3(bool): input channel 3\n s1(bool): channel selection input bit 1\n s0(bool): channel selection input bit 0 \n Output:\n y(bool): ouput\n \"\"\"\n #create ouput from x0x1 input MUX to y ouput MUX\n x0x1_yWire=Signal(bool(0))\n #create instance of 2:1 mux and wire in inputs\n #a, b, s0 and wire to ouput mux\n x0x1MUX=MUX2_1_Combo(x0, x1, s0, x0x1_yWire)\n \n #create ouput from x2x3 input MUX to y ouput MUX\n x2x3_yWire=Signal(bool(0))\n #create instance of 2:1 mux and wire in inputs\n #c, d, s0 and wire to ouput mux\n x2x3MUX=MUX2_1_Combo(x2, x3, s0, x2x3_yWire)\n \n #create ouput MUX and wire to internal wires, \n #s1 and ouput y\n yMUX=MUX2_1_Combo(x0x1_yWire, x2x3_yWire, s1, y)\n \n return instances()",
"myHDL Testing",
"#generate systmatic and random test values \nTestLen=5\nSystmaticVals=list(itertools.product([0,1], repeat=6))\n\ns0TVs=np.array([i[0] for i in SystmaticVals]).astype(int)\nnp.random.seed(15)\ns0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\ns1TVs=np.array([i[1] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(16)\ns1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\nx0TVs=np.array([i[2] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(17)\nx0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx1TVs=np.array([i[3] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(18)\nx1TVs=np.append(x1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx2TVs=np.array([i[4] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(19)\nx2TVs=np.append(x2TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx3TVs=np.array([i[5] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(20)\nx3TVs=np.append(x3TVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\n\n\nTestLen=len(x0TVs)\nSystmaticVals, s0TVs, s1TVs, x3TVs, x2TVs, x1TVs, x0TVs, TestLen\n\nPeeker.clear()\nx0=Signal(bool(0)); Peeker(x0, 'x0')\nx1=Signal(bool(0)); Peeker(x1, 'x1')\nx2=Signal(bool(0)); Peeker(x2, 'x2')\nx3=Signal(bool(0)); Peeker(x3, 'x3')\n\ns0=Signal(bool(0)); Peeker(s0, 's0')\ns1=Signal(bool(0)); Peeker(s1, 's1')\ny=Signal(bool(0)); Peeker(y, 'y')\n\nDUT=MUX4_1_MS(x0, x1, x2, x3, s0, s1, y)\n\ndef MUX4_1_MS_TB():\n \"\"\"\n myHDL only testbench for module `MUX4_1_MS`\n \"\"\"\n \n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TVs[i])\n x1.next=int(x1TVs[i])\n x2.next=int(x2TVs[i])\n x3.next=int(x3TVs[i])\n s0.next=int(s0TVs[i])\n s1.next=int(s1TVs[i])\n\n \n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, MUX4_1_MS_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom()\n\nMUX4_1_MSData=Peeker.to_dataframe()\nMUX4_1_MSData=MUX4_1_MSData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]\nMUX4_1_MSData\n\nTest=MUX4_1_ComboData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]==MUX4_1_MSData\nTest=Test.all().all()\nprint(f'Module `MUX4_1_MS` works as exspected: {Test}')",
"Verilog Conversion",
"DUT.convert()\nVerilogTextReader('MUX4_1_MS');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_MS_RTL.png}}\n\\caption{\\label{fig:M41MSRTL} MUX4_1_MS RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_MS_SYN.png}}\n\\caption{\\label{fig:M41MSSYN} MUX4_1_MS Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_MS_IMP.png}}\n\\caption{\\label{fig:M41MSIMP} MUX4_1_MS Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog Testbench",
"#create BitVectors \nx0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]\nx1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]\nx2TVs=intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:]\nx3TVs=intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:]\n\n\ns0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]\ns1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]\n\n\nx0TVs, bin(x0TVs), x1TVs, bin(x1TVs), x2TVs, bin(x2TVs), x3TVs, bin(x3TVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)\n\n@block\ndef MUX4_1_MS_TBV():\n \"\"\"\n myHDL -> Verilog testbench for module `MUX4_1_MS`\n \"\"\"\n \n x0=Signal(bool(0))\n x1=Signal(bool(0))\n x2=Signal(bool(0))\n x3=Signal(bool(0))\n y=Signal(bool(0))\n s0=Signal(bool(0))\n s1=Signal(bool(0))\n\n \n @always_comb\n def print_data():\n print(x0, x1, x2, x3, s0, s1, y)\n \n #Test Signal Bit Vectors\n x0TV=Signal(x0TVs)\n x1TV=Signal(x1TVs)\n x2TV=Signal(x2TVs)\n x3TV=Signal(x3TVs)\n s0TV=Signal(s0TVs)\n s1TV=Signal(s1TVs)\n\n\n DUT=MUX4_1_MS(x0, x1, x2, x3, s0, s1, y)\n\n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TV[i])\n x1.next=int(x1TV[i])\n x2.next=int(x2TV[i])\n x3.next=int(x3TV[i])\n s0.next=int(s0TV[i])\n s1.next=int(s1TV[i])\n yield delay(1)\n \n raise StopSimulation()\n return instances()\n\nTB=MUX4_1_MS_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('MUX4_1_MS_TBV'); ",
"PYNQ-Z1 Deployment\nBoard Circuit\nSee Board Circuit for \"4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic\"\nBoard Constraint\nuses same 'MUX4_1.xdc' as \"4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic\"\nVideo of Deployment\nMUX4_1_MS myHDL PYNQ-Z1 (YouTube)\nIntroduction to HDL Behavioral Modeling\nHDL behavioral modeling is a \"High\" level, though not at the HLS level, HDL syntax where the intended hardware element is modeled via its intended abstract algorithm behavior. Thus the common computer science (and mathematician)tool of abstraction is borrowed and incorporated into the HDL syntax. The abstraction that follows has, like all things, its pros and cons. \nAs a pro, this means that the Hard Ware Designer is no longer consumed by the manuchia of implementing boolean algebra for every device and can instead focus on implementing the intended algorithm in hardware. And it is thanks to this blending of Software and Hardware that the design of digital devices has grown as prolific as it has. However, there is quite a cache for using behavioral modeling. First off HDL now absolutely requires synthesis tools that can map the behavioral statements to hardware. And even when the behavioral logic is mapped at least to the RTL level there is no escaping two points. 1. At the end of the day, the RTL will be implemented via Gate level devices in some form or another. 2. the way the synthesis tool has mapped the abstract behavioral to RTL may not be physical implementable especially in ASIC implementations. \nFor these reasons it as Hardware Developers using Behavioral HDL we have to be able to still be able to implement the smallest indivisible units of our HDL at the gate level. Must know what physical limits our target architecture (FPGA, ASIC, etc) has and keep within these limits when writing our HDL code. And lastly, we can not grow lazy in writing behavioral HDL, but must always see at least down to the major RTL elements that our behavioral statements are embodying.\n2:1 MUX via Behavioral IF\nmyHDL Module",
"@block\ndef MUX2_1_B(x0, x1, s, y):\n \"\"\"\n 2:1 Multiplexer written via behavioral if\n Input:\n x0(bool): input channel 0\n x1(bool): input channel 1\n s(bool): channel selection input \n Output:\n y(bool): ouput\n \"\"\"\n \n @always_comb\n def logic():\n if s:\n y.next=x1\n else:\n y.next=x0\n \n return instances()",
"myHDL Testing",
"#generate systmatic and random test values \nTestLen=10\nSystmaticVals=list(itertools.product([0,1], repeat=3))\n\nx0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)\nnp.random.seed(15)\nx0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx1TVs=np.array([i[2] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(16)\nx1TVs=np.append(x1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nsTVs=np.array([i[0] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(17)\nsTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\nTestLen=len(x0TVs)\nx0TVs, x1TVs, sTVs, TestLen\n\nPeeker.clear()\nx0=Signal(bool(0)); Peeker(x0, 'x0')\nx1=Signal(bool(0)); Peeker(x1, 'x1')\ns=Signal(bool(0)); Peeker(s, 's')\ny=Signal(bool(0)); Peeker(y, 'y')\n\nDUT=MUX2_1_B(x0, x1, s, y)\n\ndef MUX2_1_B_TB():\n \"\"\"\n myHDL only testbench for module `MUX2_1_B`\n \"\"\"\n \n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TVs[i])\n x1.next=int(x1TVs[i])\n s.next=int(sTVs[i])\n\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, MUX2_1_B_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom('x1', 'x0', 's', 'y')\n\nMUX2_1_BData=Peeker.to_dataframe()\nMUX2_1_BData=MUX2_1_BData[['x1', 'x0', 's', 'y']]\nMUX2_1_BData\n\nTest=MUX2_1_ComboData[['x1', 'x0', 's', 'y']]==MUX2_1_BData\nTest=Test.all().all()\nprint(f'`MUX2_1_B` Behavioral is Eqivlint to `MUX2_1_Combo`: {Test}')",
"Verilog Conversion",
"DUT.convert()\nVerilogTextReader('MUX2_1_B');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX2_1_B_RTL.png}}\n\\caption{\\label{fig:M21BRTL} MUX2_1_B RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX2_1_B_SYN.png}}\n\\caption{\\label{fig:M21BSYN} MUX2_1_B Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX2_1_B_IMP.png}}\n\\caption{\\label{fig:M21BIMP} MUX2_1_B Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog Testbench",
"#create BitVectors \nx0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]\nx1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]\nsTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]\n\nx0TVs, bin(x0TVs), x1TVs, bin(x1TVs), sTVs, bin(sTVs)\n\n@block\ndef MUX2_1_B_TBV():\n \"\"\"\n myHDL -> Verilog testbench for module `MUX2_1_B`\n \"\"\"\n x0=Signal(bool(0))\n x1=Signal(bool(0))\n s=Signal(bool(0))\n y=Signal(bool(0))\n \n @always_comb\n def print_data():\n print(x0, x1, s, y)\n \n #Test Signal Bit Vectors\n x0TV=Signal(x0TVs)\n x1TV=Signal(x1TVs)\n sTV=Signal(sTVs)\n\n\n DUT=MUX2_1_B(x0, x1, s, y)\n\n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TV[i])\n x1.next=int(x1TV[i])\n s.next=int(sTV[i])\n yield delay(1)\n \n raise StopSimulation()\n return instances()\n\nTB=MUX2_1_B_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('MUX2_1_B_TBV'); ",
"PYNQ-Z1 Deployment\nBoard Circuit\nSee Board Circuit for \"2 Channel Input:1 Channel Output multiplexer in Gate Level Logic\"\nBoard Constraint\nuses the same MUX2_1.xdc as \"2 Channel Input:1 Channel Output multiplexer in Gate Level Logic\"\nVideo of Deployment\nMUX2_1_B myHDL PYNQ-Z1 (YouTube)\n4:1 MUX via Behavioral if-elif-else\nmyHDL Module",
"@block\ndef MUX4_1_B(x0, x1, x2, x3, s0, s1, y):\n \"\"\"\n 4:1 Multiblexer written in if-elif-else Behavioral\n Input:\n x0(bool): input channel 0\n x1(bool): input channel 1\n x2(bool): input channel 2\n x3(bool): input channel 3\n s1(bool): channel selection input bit 1\n s0(bool): channel selection input bit 0 \n Output:\n y(bool): ouput\n \"\"\"\n \n @always_comb\n def logic():\n if s0==0 and s1==0:\n y.next=x0\n elif s0==1 and s1==0:\n y.next=x1\n elif s0==0 and s1==1:\n y.next=x2\n else:\n y.next=x3\n \n return instances()",
"myHDL Testing",
"#generate systmatic and random test values \nTestLen=5\nSystmaticVals=list(itertools.product([0,1], repeat=6))\n\ns0TVs=np.array([i[0] for i in SystmaticVals]).astype(int)\nnp.random.seed(15)\ns0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\ns1TVs=np.array([i[1] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(16)\ns1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\nx0TVs=np.array([i[2] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(17)\nx0TVs=np.append(x0TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx1TVs=np.array([i[3] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(18)\nx1TVs=np.append(x1TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx2TVs=np.array([i[4] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(19)\nx2TVs=np.append(x2TVs, np.random.randint(0,2, TestLen)).astype(int)\n\nx3TVs=np.array([i[5] for i in SystmaticVals]).astype(int)\n#the random genrator must have a differint seed beween each generation\n#call in order to produce differint values for each call\nnp.random.seed(20)\nx3TVs=np.append(x3TVs, np.random.randint(0,2, TestLen)).astype(int)\n\n\n\n\nTestLen=len(x0TVs)\nSystmaticVals, s0TVs, s1TVs, x3TVs, x2TVs, x1TVs, x0TVs, TestLen\n\nPeeker.clear()\nx0=Signal(bool(0)); Peeker(x0, 'x0')\nx1=Signal(bool(0)); Peeker(x1, 'x1')\nx2=Signal(bool(0)); Peeker(x2, 'x2')\nx3=Signal(bool(0)); Peeker(x3, 'x3')\n\ns0=Signal(bool(0)); Peeker(s0, 's0')\ns1=Signal(bool(0)); Peeker(s1, 's1')\ny=Signal(bool(0)); Peeker(y, 'y')\n\nDUT=MUX4_1_B(x0, x1, x2, x3, s0, s1, y)\n\ndef MUX4_1_B_TB():\n \"\"\"\n myHDL only testbench for module `MUX4_1_B`\n \"\"\"\n \n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TVs[i])\n x1.next=int(x1TVs[i])\n x2.next=int(x2TVs[i])\n x3.next=int(x3TVs[i])\n s0.next=int(s0TVs[i])\n s1.next=int(s1TVs[i])\n\n \n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, MUX4_1_B_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom()\n\nMUX4_1_BData=Peeker.to_dataframe()\nMUX4_1_BData=MUX4_1_BData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]\nMUX4_1_BData\n\nTest=MUX4_1_ComboData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]==MUX4_1_BData\nTest=Test.all().all()\nprint(f'Module `MUX4_1_B` works as exspected: {Test}')",
"Verilog Conversion",
"DUT.convert()\nVerilogTextReader('MUX4_1_B');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_B_RTL.png}}\n\\caption{\\label{fig:M41BRTL} MUX4_1_B RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_B_SYN.png}}\n\\caption{\\label{fig:M41BSYN} MUX4_1_B Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_B_IMP.png}}\n\\caption{\\label{fig:M41BIMP} MUX4_1_B Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog Testbench",
"#create BitVectors \nx0TVs=intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]\nx1TVs=intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]\nx2TVs=intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:]\nx3TVs=intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:]\n\n\ns0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]\ns1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]\n\n\nx0TVs, bin(x0TVs), x1TVs, bin(x1TVs), x2TVs, bin(x2TVs), x3TVs, bin(x3TVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)\n\n@block\ndef MUX4_1_B_TBV():\n \"\"\"\n myHDL -> Verilog testbench for module `MUX4_1_B`\n \"\"\"\n \n x0=Signal(bool(0))\n x1=Signal(bool(0))\n x2=Signal(bool(0))\n x3=Signal(bool(0))\n y=Signal(bool(0))\n s0=Signal(bool(0))\n s1=Signal(bool(0))\n\n \n @always_comb\n def print_data():\n print(x0, x1, x2, x3, s0, s1, y)\n \n #Test Signal Bit Vectors\n x0TV=Signal(x0TVs)\n x1TV=Signal(x1TVs)\n x2TV=Signal(x2TVs)\n x3TV=Signal(x3TVs)\n s0TV=Signal(s0TVs)\n s1TV=Signal(s1TVs)\n\n\n DUT=MUX4_1_B(x0, x1, x2, x3, s0, s1, y)\n\n @instance\n def stimules():\n for i in range(TestLen):\n x0.next=int(x0TV[i])\n x1.next=int(x1TV[i])\n x2.next=int(x2TV[i])\n x3.next=int(x3TV[i])\n s0.next=int(s0TV[i])\n s1.next=int(s1TV[i])\n yield delay(1)\n \n raise StopSimulation()\n return instances()\n\nTB=MUX4_1_B_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('MUX4_1_B_TBV'); ",
"PYNQ-Z1 Deployment\nBoard Circuit\nSee Board Circuit for \"4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic\"\nBoard Constraint\nuses same 'MUX4_1.xdc' as \"4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic\"\nVideo of Deployment\nMUX4_1_B myHDL PYNQ-Z1 (YouTube)\nMultiplexer 4:1 Behavioral via Bitvectors\nmyHDL Module",
"@block\ndef MUX4_1_BV(X, S, y):\n \"\"\"\n 4:1 Multiblexerwritten in behvioral \"if-elif-else\"(case)\n with BitVector inputs\n Input:\n X(4bitBV):input bit vector; min=0, max=15\n S(2bitBV):selection bit vector; min=0, max=3\n Output:\n y(bool): ouput\n \"\"\"\n \n @always_comb\n def logic():\n if S==0:\n y.next=X[0]\n elif S==1:\n y.next=X[1]\n elif S==2:\n y.next=X[2]\n else:\n y.next=X[3]\n \n return instances()",
"myHDL Testing",
"XTVs=np.array([1,2,4,8])\nXTVs=np.append(XTVs, np.random.choice([1,2,4,8], 6)).astype(int)\nTestLen=len(XTVs)\n\nnp.random.seed(12)\nSTVs=np.arange(0,4)\nSTVs=np.append(STVs, np.random.randint(0,4, 5))\nTestLen, XTVs, STVs\n\nPeeker.clear()\nX=Signal(intbv(0)[4:]); Peeker(X, 'X')\nS=Signal(intbv(0)[2:]); Peeker(S, 'S')\ny=Signal(bool(0)); Peeker(y, 'y')\n\nDUT=MUX4_1_BV(X, S, y)\n\ndef MUX4_1_BV_TB():\n \n @instance\n def stimules():\n for i in STVs:\n for j in XTVs:\n S.next=int(i)\n X.next=int(j)\n yield delay(1)\n \n raise StopSimulation()\n \n return instances()\n\nsim=Simulation(DUT, MUX4_1_BV_TB(), *Peeker.instances()).run() \n\nPeeker.to_wavedrom('X', 'S', 'y', start_time=0, stop_time=2*TestLen+2)\n\nMUX4_1_BVData=Peeker.to_dataframe()\nMUX4_1_BVData=MUX4_1_BVData[['X', 'S', 'y']]\nMUX4_1_BVData\n\nMUX4_1_BVData['x0']=None; MUX4_1_BVData['x1']=None; MUX4_1_BVData['x2']=None; MUX4_1_BVData['x3']=None\nMUX4_1_BVData[['x3', 'x2', 'x1', 'x0']]=MUX4_1_BVData[['X']].apply(lambda bv: [int(i) for i in bin(bv, 4)], axis=1, result_type='expand')\n\nMUX4_1_BVData['s0']=None; MUX4_1_BVData['s1']=None\nMUX4_1_BVData[['s1', 's0']]=MUX4_1_BVData[['S']].apply(lambda bv: [int(i) for i in bin(bv, 2)], axis=1, result_type='expand')\n\nMUX4_1_BVData=MUX4_1_BVData[['X', 'x0', 'x1', 'x2', 'x3', 'S', 's0', 's1', 'y']]\nMUX4_1_BVData\n\nMUX4_1_BVData['yRef']=MUX4_1_BVData.apply(lambda row:y41EqN(row['x0'], row['x1'], row['x2'], row['x3'], row['s0'], row['s1']), axis=1).astype(int)\nMUX4_1_BVData\n\nTest=(MUX4_1_BVData['y']==MUX4_1_BVData['yRef']).all()\nprint(f'Module `MUX4_1_BVData` works as exspected: {Test}')",
"Verilog Conversion",
"DUT.convert()\nVerilogTextReader('MUX4_1_BV');",
"\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_BV_RTL.png}}\n\\caption{\\label{fig:M41BVRTL} MUX4_1_BV RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_BV_SYN.png}}\n\\caption{\\label{fig:M41BVSYN} MUX4_1_BV Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{MUX4_1_BV_IMP.png}}\n\\caption{\\label{fig:M41BVIMP} MUX4_1_BV Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nmyHDL to Verilog Testbench\nWill Do later\nPYNQ-Z1 Deployment\nBoard Circuit\nSee Board Circuit for \"4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic\"\nBoard Constraint\nnotice that in get_ports the pin is set to the a single bit of the bitvector via bitvector indexing",
"ConstraintXDCTextReader('MUX4_1_BV');",
"Video of Deployment\nMUX4_1_BV myHDL PYNQ-Z1 (YouTube)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dfm/emcee3 | docs/user/parallel.ipynb | mit | [
"Parallelization\nemcee supports parallelization out of the box. The algorithmic details are given in the paper but the implementation is very simple. The parallelization is applied across the walkers in the ensemble at each step and it must therefore be synchronized after each iteration. This means that you will really only benefit from this feature when your probability function is relatively expensive to compute.\nThe recommended method is to use IPython's parallel feature but it's possible to use other \"mappers\" like the Python standard library's multiprocessing.Pool. The only requirement of the mapper is that it exposes a map method.\nUsing multiprocessing\nAs mentioned above, it's possible to parallelize your model using the standard library's multiprocessing package. Instead, I would recommend the pools.InterruptiblePool that is included with emcee because it is a simple thin wrapper around multiprocessing.Pool with support for a keyboard interrupt (^C)... you'll thank me later! If we wanted to use this pool, the final few lines from the example on the front page would become the following:",
"import emcee3\nimport numpy as np\n\ndef log_prob(x):\n return -0.5 * np.sum(x ** 2)\n\nndim, nwalkers = 10, 100\nwith emcee3.pools.InterruptiblePool() as pool:\n ensemble = emcee3.Ensemble(log_prob, np.random.randn(nwalkers, ndim), pool=pool)\n sampler = emcee3.Sampler()\n sampler.run(ensemble, 1000)",
"Using MPI\nTo distribute emcee3 across nodes on a cluster, you'll need to use MPI. This can be done with the MPIPool from schwimmbad. To use this, you'll need to install the dependency mpi4py. Otherwise, the code is almost the same as the multiprocessing example above – the main change is the definition of the pool:\nThe if not pool.is_master() block is crucial otherwise the code will hang at the end of execution. To run this code, you would execute something like the following: \nUsing ipyparallel\nipyparallel is a\nflexible and powerful framework for running distributed computation in Python.\nIt works on a single machine with multiple cores in the same way as it does on\na huge compute cluster and in both cases it is very efficient!\nTo use IPython parallel, make sure that you have a recent version of IPython\ninstalled (ipyparallel docs) and start up the cluster\nby running:\nThen, run the following:",
"# Connect to the cluster.\nfrom ipyparallel import Client\nrc = Client()\ndv = rc.direct_view()\n\n# Run the imports on the cluster too.\nwith dv.sync_imports():\n import emcee3\n import numpy\n\n# Define the model.\ndef log_prob(x):\n return -0.5 * numpy.sum(x ** 2)\n\n# Distribute the model to the nodes of the cluster.\ndv.push(dict(log_prob=log_prob), block=True)\n\n# Set up the ensemble with the IPython \"DirectView\" as the pool.\nndim, nwalkers = 10, 100\nensemble = emcee3.Ensemble(log_prob, numpy.random.randn(nwalkers, ndim), pool=dv)\n\n# Run the sampler in the same way as usual.\nsampler = emcee3.Sampler()\nensemble = sampler.run(ensemble, 1000)",
"There is a significant overhead incurred when using any of these\nparallelization methods so for this simple example, the parallel version is\nactually slower but this effect will be quickly offset if your probability\nfunction is computationally expensive.\nOne major benefit of using ipyparallel is that it can also be used\nidentically on a cluster with MPI if you have a really big problem. The Python\ncode would look identical and the only change that you would have to make is\nto start the cluster using:\nTake a look at the documentation for more details of all of the features available in ipyparallel."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
whitead/numerical_stats | unit_7/hw_2018/Homework_7_Key.ipynb | gpl-3.0 | [
"Homework 7 Key\nCHE 116: Numerical Methods and Statistics\n3/8/2018",
"%matplotlib inline\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy\nimport scipy.stats\nimport seaborn as sns\nplt.style.use('seaborn-whitegrid')\n\nimport pydataset",
"1. Conceptual Questions (8 Points)\nAnswer these in Markdown\n\n[1 point] In problem 4 from HW 3 we discussed probabilities of having HIV and results of a test being positive. What was the sample space for this problem? \n[4 points] One of the notations in the answer key is a random variable $H$ which indicated if a person has HIV. Make a table showing this functions inputs and outputs for the sample space. Making Markdown Tables\n[1 point] A probability density function is used for what types of probability distributions?\n[2 points] What is the probability of $t > 4$ in an exponential distribution with $\\lambda = 1$? Leave your answer in terms of an exponential.\n\n1.1\nFirst element is HIV and second is test\n$$\n{ (0,0), (1,0), (0,1), (1,1)}\n$$\n1.2\n|$x$|$H$|\n|---|---:|\n|(0,0)| 0|\n|(0,1)| 0|\n|(1,0)| 1|\n|(1,1)| 1|\n1.3\nContinuous\n1.4\n$$\n\\int_4^{\\infty} e^{-t} \\, dt = \\left. -e^{-t}\\right]_4^{\\infty} = 0 - - e^{-4} = e^{-4}\n$$\n2. The Nile (10 Points)\nAnswer in Python\n\n\n[4 points] Load the Nile dataset and convert to a numpy array. It contains measurements of the annual flow of the river Nile at Aswan. Make a scatter plot of the year vs flow rate. If you get an error when loading pydataset that says No Module named 'pydataset', then execute this code in a new cell once: !pip install pydataset\n\n\n[2 points] Report the correlation coefficient between year and flow rate.\n\n\n[4 points] Create a histogram of the flow rates and show the median with a vertical line. Labels your axes and make a legend indicating what the vertical line is.",
"#2.1\nnile = pydataset.data('Nile').as_matrix()\nplt.plot(nile[:,0], nile[:,1], '-o')\nplt.xlabel('Year')\nplt.ylabel('Nile Flow Rate')\nplt.show()\n\n#2.2\nprint('{:.3}'.format(np.corrcoef(nile[:,0], nile[:,1])[0,1]))\n\n#2.3 ok to distplot or plt.hist\nsns.distplot(nile[:,1])\nplt.axvline(np.mean(nile[:,1]), color='C2', label='Mean')\nplt.legend()\nplt.xlabel('Flow Rate')\nplt.show()",
"2. Insect Spray (10 Points)\nAnswer in Python\n\n\n[2 points] Load the 'InsectSpray' dataset, convert to a numpy array and print the number of rows and columns. Recall that numpy arrays can only hold one type of data (e.g., string, float, int). What is the data type of the loaded dataset?\n\n\n[2 points] Using np.unique, print out the list of insect spray used. This data is a count insects on a crop field with various insect sprays.\n\n\n[4 points] Create a violin plot of the data. Label your axes.\n\n\n[2 points] Which insect spray worked best? What is the mean number of insects for the best insect spray?",
"#1.1\ninsect = pydataset.data('InsectSprays').as_matrix()\nprint(insect.shape, 'string or object is acceptable')\n\n\n#1.2\nprint(np.unique(insect[:,1]))\n\n#1.3\nlabels = np.unique(insect[:,1])\nldata = []\n#slice out each set of rows that matches label\n#and add to list\nfor l in labels:\n ldata.append(insect[insect[:,1] == l, 0].astype(float))\nsns.violinplot(data=ldata)\nplt.xticks(range(len(labels)), labels)\nplt.xlabel('Insecticide Type')\nplt.ylabel('Insect Count')\nplt.show()\n\n#1.4\nprint('C is best and its mean is {:.2}'.format(np.mean(ldata[2])))",
"3. NY Air Quality (6 Points)\nLoad the 'airquality' dataset and convert into to a numpy array. Make a scatter plot of wind (column 2, mph) and ozone concentration (column 0, ppb). Using the plt.text command, display the correlation coefficient in the plot. This data as nan, which means \"not a number\". You can select non-nans by using x[~numpy.isnan(x)]. You'll need to remove these to calculate correlation coefficient.",
"nyair = pydataset.data('airquality').as_matrix()\nplt.plot(nyair[:,2], nyair[:,0], 'o')\nplt.xlabel('Wind [mph]')\nplt.ylabel('Ozone [ppb]')\nnans = np.isnan(nyair[:,0])\nr = np.corrcoef(nyair[~nans,2], nyair[~nans,0])[0,1]\nplt.text(10, 130, 'Correlation Coefficient = {:.2}'.format(r))\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thewtex/SimpleITK-Notebooks | 62_Registration_Tuning.ipynb | apache-2.0 | [
"<h1 align=\"center\">Registration Settings: Choices, Choices, Choices</h1>\n\nThe performance of most registration algorithms is dependent on a large number of parameter settings. For optimal performance you will need to customize your settings, turning all the knobs to their \"optimal\" position:<br>\n<img src=\"knobs.jpg\" style=\"width:700px\"/>\n<font size=\"1\"> [This image was originally posted to Flickr and downloaded from wikimedia commons https://commons.wikimedia.org/wiki/File:TASCAM_M-520_knobs.jpg]</font>\nThis notebook illustrates the use of reference data (a.k.a \"gold\" standard) to empirically tune a registration framework for specific usage. This is dependent on the characteristics of your images (anatomy, modality, image's physical spacing...) and on the clinical needs.\nAlso keep in mind that the defintion of optimal settings does not necessarily correspond to those that provide the most accurate results. \nThe optimal settings are task specific and should provide:\n<ul>\n<li>Sufficient accuracy in the Region Of Interest (ROI).</li>\n<li>Complete the computation in the alloted time.</li>\n</ul>\n\nWe will be using the training data from the Retrospective Image Registration Evaluation (<a href=\"http://www.insight-journal.org/rire/\">RIRE</a>) project.",
"import SimpleITK as sitk\n\n# Utility method that either downloads data from the network or\n# if already downloaded returns the file name for reading from disk (cached data).\nfrom downloaddata import fetch_data as fdata\n\n# Always write output to a separate directory, we don't want to pollute the source directory. \nOUTPUT_DIR = 'Output'\n\nimport registration_callbacks as rc\nimport registration_utilities as ru\n\n%matplotlib inline",
"Read the RIRE data and generate a larger point set as a reference",
"fixed_image = sitk.ReadImage(fdata(\"training_001_ct.mha\"), sitk.sitkFloat32)\nmoving_image = sitk.ReadImage(fdata(\"training_001_mr_T1.mha\"), sitk.sitkFloat32) \nfixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(fdata(\"ct_T1.standard\"))\n\n# Estimate the reference_transform defined by the RIRE fiducials and check that the FRE makes sense (low) \nR, t = ru.absolute_orientation_m(fixed_fiducial_points, moving_fiducial_points)\nreference_transform = sitk.Euler3DTransform()\nreference_transform.SetMatrix(R.flatten())\nreference_transform.SetTranslation(t)\nreference_errors_mean, reference_errors_std, _, reference_errors_max,_ = ru.registration_errors(reference_transform, fixed_fiducial_points, moving_fiducial_points)\nprint('Reference data errors (FRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(reference_errors_mean, reference_errors_std, reference_errors_max))\n\n# Generate a reference dataset from the reference transformation \n# (corresponding points in the fixed and moving images).\nfixed_points = ru.generate_random_pointset(image=fixed_image, num_points=100)\nmoving_points = [reference_transform.TransformPoint(p) for p in fixed_points] \n\n# Compute the TRE prior to registration.\npre_errors_mean, pre_errors_std, pre_errors_min, pre_errors_max, _ = ru.registration_errors(sitk.Euler3DTransform(), fixed_points, moving_points, display_errors = True)\nprint('Before registration, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(pre_errors_mean, pre_errors_std, pre_errors_max))",
"Initial Alignment\nWe use the CenteredTransformInitializer. Should we use the GEOMETRY based version or the MOMENTS based one?",
"initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelIDValue()), \n moving_image, \n sitk.Euler3DTransform(), \n sitk.CenteredTransformInitializerFilter.GEOMETRY)\n\ninitial_errors_mean, initial_errors_std, initial_errors_min, initial_errors_max, _ = ru.registration_errors(initial_transform, fixed_points, moving_points, min_err=pre_errors_min, max_err=pre_errors_max, display_errors=True)\nprint('After initialization, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max))",
"Registration\nPossible choices for simple rigid multi-modality registration framework (<b>300</b> component combinations, in addition to parameter settings for each of the components):\n<ul>\n<li>Similarity metric, 2 options (Mattes MI, JointHistogram MI):\n<ul>\n <li>Number of histogram bins.</li>\n <li>Sampling strategy, 3 options (NONE, REGULAR, RANDOM)</li>\n <li>Sampling percentage.</li>\n</ul>\n</li>\n<li>Interpolator, 10 options (sitkNearestNeighbor, sitkLinear, sitkGaussian, sitkBSpline,...)</li>\n<li>Optimizer, 5 options (GradientDescent, GradientDescentLineSearch, RegularStepGradientDescent...): \n<ul>\n <li>Number of iterations.</li>\n <li>learning rate (step size along parameter space traversal direction).</li>\n</ul>\n</li>\n</ul>\n\nIn this example we will plot the similarity metric's value and more importantly the TREs for our reference data. A good choice for the former should be reflected by the later. That is, the TREs should go down as the similarity measure value goes down (not necessarily at the same rates).\nFinally, we are also interested in timing our registration. Ipython allows us to do this with minimal effort using the <a href=\"http://ipython.org/ipython-doc/stable/interactive/magics.html?highlight=timeit#magic-timeit\">timeit</a> cell magic (Ipython has a set of predefined functions that use a command line syntax, and are referred to as magic functions).",
"#%%timeit -r1 -n1\n# to time this cell uncomment the line above\n#the arguments to the timeit magic specify that this cell should only be run once. running it multiple \n#times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy \n#results from multiple runs you will have to modify the code to save them instead of just printing them out.\n\nregistration_method = sitk.ImageRegistrationMethod()\nregistration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\nregistration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\nregistration_method.SetMetricSamplingPercentage(0.01)\nregistration_method.SetInterpolator(sitk.sitkNearestNeighbor) #2. Replace with sitkLinear\nregistration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #1. Increase to 1000\nregistration_method.SetOptimizerScalesFromPhysicalShift() \n \n# Don't optimize in-place, we would like to run this cell multiple times\nregistration_method.SetInitialTransform(initial_transform, inPlace=False)\n\n# Add callbacks which will display the similarity measure value and the reference data during the registration process\nregistration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)\nregistration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)\nregistration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))\n\nfinal_transform_single_scale = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32), \n sitk.Cast(moving_image, sitk.sitkFloat32))\n\nprint('Final metric value: {0}'.format(registration_method.GetMetricValue()))\nprint('Optimizer\\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))\nfinal_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, min_err=initial_errors_min, max_err=initial_errors_max, display_errors=True)\nprint('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))",
"In some cases visual comparison of the registration errors using the same scale is not informative, as seen above [all points are grey/black]. We therefor set the color scale to the min-max error range found in the current data and not the range from the previous stage.",
"final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, display_errors=True)",
"Now using the built in multi-resolution framework\nPerform registration using the same settings as above, but take advantage of the multi-resolution framework which provides a significant speedup with minimal effort (3 lines of code).\nIt should be noted that when using this framework the similarity metric value will not necessarily decrease between resolutions, we are only ensured that it decreases per resolution. This is not an issue, as we are actually observing the values of a different function at each resolution. \nThe example below shows that registration is improving even though the similarity value increases when changing resolution levels.",
"%%timeit -r1 -n1\n#the arguments to the timeit magic specify that this cell should only be run once. running it multiple \n#times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy \n#results from multiple runs you will have to modify the code to save them instead of just printing them out.\n\nregistration_method = sitk.ImageRegistrationMethod()\nregistration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\nregistration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\nregistration_method.SetMetricSamplingPercentage(0.1)\nregistration_method.SetInterpolator(sitk.sitkLinear) #2. Replace with sitkLinear\nregistration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) \nregistration_method.SetOptimizerScalesFromPhysicalShift() \n \n# Don't optimize in-place, we would like to run this cell multiple times\nregistration_method.SetInitialTransform(initial_transform, inPlace=False)\n\n# Add callbacks which will display the similarity measure value and the reference data during the registration process\nregistration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)\nregistration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)\nregistration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))\n\nregistration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])\nregistration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])\nregistration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n\nfinal_transform = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32), \n sitk.Cast(moving_image, sitk.sitkFloat32))\n\nprint('Final metric value: {0}'.format(registration_method.GetMetricValue()))\nprint('Optimizer\\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))\nfinal_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform, fixed_points, moving_points, True)\n\nprint('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))",
"Sufficient accuracy <u>inside</u> the ROI\nUp to this point our accuracy evaluation has ignored the content of the image and is likely overly conservative. We have been looking at the registration errors inside the volume, but not necesserily in the smaller ROI.\nTo see the difference you will have to <b>comment out the timeit magic in the code above</b>, run it again, and then run the following cell.",
"# Threshold the original fixed, CT, image at 0HU (water), resulting in a binary labeled [0,1] image.\nroi = fixed_image> 0\n\n# Our ROI consists of all voxels with a value of 1, now get the bounding box surrounding the head.\nlabel_shape_analysis = sitk.LabelShapeStatisticsImageFilter()\nlabel_shape_analysis.SetBackgroundValue(0)\nlabel_shape_analysis.Execute(roi)\nbounding_box = label_shape_analysis.GetBoundingBox(1)\n\n# Bounding box in physical space.\nsub_image_min = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0],bounding_box[1], bounding_box[2]))\nsub_image_max = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0]+bounding_box[3]-1,\n bounding_box[1]+bounding_box[4]-1, \n bounding_box[2]+bounding_box[5]-1))\n# Only look at the points inside our bounding box.\nsub_fixed_points = []\nsub_moving_points = []\nfor fixed_pnt, moving_pnt in zip(fixed_points, moving_points):\n if sub_image_min[0]<=fixed_pnt[0]<=sub_image_max[0] and \\\n sub_image_min[1]<=fixed_pnt[1]<=sub_image_max[1] and \\\n sub_image_min[2]<=fixed_pnt[2]<=sub_image_max[2] : \n sub_fixed_points.append(fixed_pnt)\n sub_moving_points.append(moving_pnt)\n\nfinal_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform, sub_fixed_points, sub_moving_points, True)\nprint('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bloomberg/bqplot | examples/Interactions/Interaction Layer.ipynb | apache-2.0 | [
"import pandas as pd\nimport numpy as np\n\nsymbol = \"Security 1\"\nsymbol2 = \"Security 2\"\n\nprice_data = pd.DataFrame(\n np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.4], [0.4, 1.0]]), axis=0) + 100,\n columns=[\"Security 1\", \"Security 2\"],\n index=pd.date_range(start=\"01-01-2007\", periods=150),\n)\n\ndates_actual = price_data.index.values\nprices = price_data[symbol].values\n\nfrom bqplot import DateScale, LinearScale, Axis, Lines, Scatter, Bars, Hist, Figure\nfrom bqplot.interacts import (\n FastIntervalSelector,\n IndexSelector,\n BrushIntervalSelector,\n BrushSelector,\n MultiSelector,\n LassoSelector,\n PanZoom,\n HandDraw,\n)\nfrom traitlets import link\n\nfrom ipywidgets import ToggleButtons, VBox, HTML",
"Line Chart Selectors\nFast Interval Selector",
"## First we define a Figure\ndt_x_fast = DateScale()\nlin_y = LinearScale()\n\nx_ax = Axis(label=\"Index\", scale=dt_x_fast)\nx_ay = Axis(label=(symbol + \" Price\"), scale=lin_y, orientation=\"vertical\")\nlc = Lines(\n x=dates_actual, y=prices, scales={\"x\": dt_x_fast, \"y\": lin_y}, colors=[\"orange\"]\n)\nlc_2 = Lines(\n x=dates_actual[50:],\n y=prices[50:] + 2,\n scales={\"x\": dt_x_fast, \"y\": lin_y},\n colors=[\"blue\"],\n)\n\n## Next we define the type of selector we would like\nintsel_fast = FastIntervalSelector(scale=dt_x_fast, marks=[lc, lc_2])\n\n## Now, we define a function that will be called when the FastIntervalSelector is interacted with\ndef fast_interval_change_callback(change):\n db_fast.value = \"The selected period is \" + str(change.new)\n\n## Now we connect the selectors to that function\nintsel_fast.observe(fast_interval_change_callback, names=[\"selected\"])\n\n## We use the HTML widget to see the value of what we are selecting and modify it when an interaction is performed\n## on the selector\ndb_fast = HTML()\ndb_fast.value = \"The selected period is \" + str(intsel_fast.selected)\n\nfig_fast_intsel = Figure(\n marks=[lc, lc_2],\n axes=[x_ax, x_ay],\n title=\"Fast Interval Selector Example\",\n interaction=intsel_fast,\n) # This is where we assign the interaction to this particular Figure\n\nVBox([db_fast, fig_fast_intsel])",
"Index Selector",
"db_index = HTML(value=\"[]\")\n\n## Now we try a selector made to select all the y-values associated with a single x-value\nindex_sel = IndexSelector(scale=dt_x_fast, marks=[lc, lc_2])\n\n## Now, we define a function that will be called when the selectors are interacted with\ndef index_change_callback(change):\n db_index.value = \"The selected date is \" + str(change.new)\n\nindex_sel.observe(index_change_callback, names=[\"selected\"])\n\nfig_index_sel = Figure(\n marks=[lc, lc_2],\n axes=[x_ax, x_ay],\n title=\"Index Selector Example\",\n interaction=index_sel,\n)\nVBox([db_index, fig_index_sel])",
"Returning indexes of selected values",
"from datetime import datetime as py_dtime\n\ndt_x_index = DateScale(min=np.datetime64(py_dtime(2006, 6, 1)))\nlin_y2 = LinearScale()\n\nlc2_index = Lines(x=dates_actual, y=prices, scales={\"x\": dt_x_index, \"y\": lin_y2})\n\nx_ax1 = Axis(label=\"Date\", scale=dt_x_index)\nx_ay2 = Axis(label=(symbol + \" Price\"), scale=lin_y2, orientation=\"vertical\")\n\nintsel_date = FastIntervalSelector(scale=dt_x_index, marks=[lc2_index])\n\ndb_date = HTML()\ndb_date.value = str(intsel_date.selected)\n\n## Now, we define a function that will be called when the selectors are interacted with - a callback\ndef date_interval_change_callback(change):\n db_date.value = str(change.new)\n\n## Notice here that we call the observe on the Mark lc2_index rather than on the selector intsel_date\nlc2_index.observe(date_interval_change_callback, names=[\"selected\"])\n\nfig_date_mark = Figure(\n marks=[lc2_index],\n axes=[x_ax1, x_ay2],\n title=\"Fast Interval Selector Selected Indices Example\",\n interaction=intsel_date,\n)\n\nVBox([db_date, fig_date_mark])",
"Brush Selector\nWe can do the same with any type of selector",
"## Defining a new Figure\ndt_x_brush = DateScale(min=np.datetime64(py_dtime(2006, 6, 1)))\nlin_y2_brush = LinearScale()\n\nlc3_brush = Lines(x=dates_actual, y=prices, scales={\"x\": dt_x_brush, \"y\": lin_y2_brush})\n\nx_ax_brush = Axis(label=\"Date\", scale=dt_x_brush)\nx_ay_brush = Axis(label=(symbol + \" Price\"), scale=lin_y2_brush, orientation=\"vertical\")\n\ndb_brush = HTML(value=\"[]\")\n\nbrushsel_date = BrushIntervalSelector(\n scale=dt_x_brush, marks=[lc3_brush], color=\"FireBrick\"\n)\n\n## Now, we define a function that will be called when the selectors are interacted with - a callback\ndef date_brush_change_callback(change):\n db_brush.value = str(change.new)\n\nlc3_brush.observe(date_brush_change_callback, names=[\"selected\"])\n\nfig_brush_sel = Figure(\n marks=[lc3_brush],\n axes=[x_ax_brush, x_ay_brush],\n title=\"Brush Selector Selected Indices Example\",\n interaction=brushsel_date,\n)\n\nVBox([db_brush, fig_brush_sel])",
"Scatter Chart Selectors\nBrush Selector",
"date_fmt = \"%m-%d-%Y\"\n\nsec2_data = price_data[symbol2].values\ndates = price_data.index.values\n\nsc_x = LinearScale()\nsc_y = LinearScale()\n\nscatt = Scatter(x=prices, y=sec2_data, scales={\"x\": sc_x, \"y\": sc_y})\n\nsc_xax = Axis(label=(symbol), scale=sc_x)\nsc_yax = Axis(label=(symbol2), scale=sc_y, orientation=\"vertical\")\n\nbr_sel = BrushSelector(x_scale=sc_x, y_scale=sc_y, marks=[scatt], color=\"red\")\n\ndb_scat_brush = HTML(value=\"[]\")\n\n## call back for the selector\ndef brush_callback(change):\n db_scat_brush.value = str(br_sel.selected)\n\nbr_sel.observe(brush_callback, names=[\"brushing\"])\n\nfig_scat_brush = Figure(\n marks=[scatt],\n axes=[sc_xax, sc_yax],\n title=\"Scatter Chart Brush Selector Example\",\n interaction=br_sel,\n)\n\nVBox([db_scat_brush, fig_scat_brush])",
"Brush Selector with Date Values",
"sc_brush_dt_x = DateScale(date_format=date_fmt)\nsc_brush_dt_y = LinearScale()\n\nscatt2 = Scatter(\n x=dates_actual, y=sec2_data, scales={\"x\": sc_brush_dt_x, \"y\": sc_brush_dt_y}\n)\n\nbr_sel_dt = BrushSelector(x_scale=sc_brush_dt_x, y_scale=sc_brush_dt_y, marks=[scatt2])\n\ndb_brush_dt = HTML(value=str(br_sel_dt.selected))\n\n## call back for the selector\ndef brush_dt_callback(change):\n db_brush_dt.value = str(br_sel_dt.selected)\n\nbr_sel_dt.observe(brush_dt_callback, names=[\"brushing\"])\n\nsc_xax = Axis(label=(symbol), scale=sc_brush_dt_x)\nsc_yax = Axis(label=(symbol2), scale=sc_brush_dt_y, orientation=\"vertical\")\nfig_brush_dt = Figure(\n marks=[scatt2],\n axes=[sc_xax, sc_yax],\n title=\"Brush Selector with Dates Example\",\n interaction=br_sel_dt,\n)\n\nVBox([db_brush_dt, fig_brush_dt])",
"Histogram Selectors",
"## call back for selectors\ndef interval_change_callback(name, value):\n db3.value = str(value)\n\n\n## call back for the selector\ndef brush_callback(change):\n if not br_intsel.brushing:\n db3.value = str(br_intsel.selected)\n\nreturns = np.log(prices[1:]) - np.log(prices[:-1])\nhist_x = LinearScale()\nhist_y = LinearScale()\nhist = Hist(sample=returns, scales={\"sample\": hist_x, \"count\": hist_y})\n\nbr_intsel = BrushIntervalSelector(scale=hist_x, marks=[hist])\nbr_intsel.observe(brush_callback, names=[\"selected\"])\nbr_intsel.observe(brush_callback, names=[\"brushing\"])\n\ndb3 = HTML()\ndb3.value = str(br_intsel.selected)\n\nh_xax = Axis(\n scale=hist_x, label=\"Returns\", grids=\"off\", set_ticks=True, tick_format=\"0.2%\"\n)\nh_yax = Axis(scale=hist_y, label=\"Freq\", orientation=\"vertical\", grid_lines=\"none\")\n\nfig_hist = Figure(\n marks=[hist],\n axes=[h_xax, h_yax],\n title=\"Histogram Selection Example\",\n interaction=br_intsel,\n)\nVBox([db3, fig_hist])",
"Multi Selector\n\nThis selector provides the ability to have multiple brush selectors on the same graph.\nThe first brush works like a regular brush.\nCtrl + click creates a new brush, which works like the regular brush.\nThe active brush has a Green border while all the inactive brushes have a Red border.\nShift + click deactivates the current active brush. Now, click on any inactive brush to make it active.\nCtrl + Alt + Shift + click clears and resets all the brushes.",
"def multi_sel_callback(change):\n if not multi_sel.brushing:\n db4.value = str(multi_sel.selected)\n\nline_x = LinearScale()\nline_y = LinearScale()\nline = Lines(\n x=np.arange(100), y=np.random.randn(100), scales={\"x\": line_x, \"y\": line_y}\n)\n\nmulti_sel = MultiSelector(scale=line_x, marks=[line])\nmulti_sel.observe(multi_sel_callback, names=[\"selected\"])\nmulti_sel.observe(multi_sel_callback, names=[\"brushing\"])\n\ndb4 = HTML()\ndb4.value = str(multi_sel.selected)\n\nh_xax = Axis(scale=line_x, label=\"Returns\", grid_lines=\"none\")\nh_yax = Axis(scale=hist_y, label=\"Freq\", orientation=\"vertical\", grid_lines=\"none\")\n\nfig_multi = Figure(\n marks=[line],\n axes=[h_xax, h_yax],\n title=\"Multi-Selector Example\",\n interaction=multi_sel,\n)\nVBox([db4, fig_multi])\n\n# changing the names of the intervals.\nmulti_sel.names = [\"int1\", \"int2\", \"int3\"]",
"Multi Selector with Date X",
"def multi_sel_dt_callback(change):\n if not multi_sel_dt.brushing:\n db_multi_dt.value = str(multi_sel_dt.selected)\n\nline_dt_x = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))\nline_dt_y = LinearScale()\nline_dt = Lines(\n x=dates_actual, y=sec2_data, scales={\"x\": line_dt_x, \"y\": line_dt_y}, colors=[\"red\"]\n)\n\nmulti_sel_dt = MultiSelector(scale=line_dt_x)\nmulti_sel_dt.observe(multi_sel_dt_callback, names=[\"selected\"])\nmulti_sel_dt.observe(multi_sel_dt_callback, names=[\"brushing\"])\n\ndb_multi_dt = HTML()\ndb_multi_dt.value = str(multi_sel_dt.selected)\n\nh_xax_dt = Axis(scale=line_dt_x, label=\"Returns\", grid_lines=\"none\")\nh_yax_dt = Axis(\n scale=line_dt_y, label=\"Freq\", orientation=\"vertical\", grid_lines=\"none\"\n)\n\nfig_multi_dt = Figure(\n marks=[line_dt],\n axes=[h_xax_dt, h_yax_dt],\n title=\"Multi-Selector with Date Example\",\n interaction=multi_sel_dt,\n)\nVBox([db_multi_dt, fig_multi_dt])",
"Lasso Selector",
"lasso_sel = LassoSelector()\n\nxs, ys = LinearScale(), LinearScale()\ndata = np.arange(20)\nline_lasso = Lines(x=data, y=data, scales={\"x\": xs, \"y\": ys})\nscatter_lasso = Scatter(x=data, y=data, scales={\"x\": xs, \"y\": ys}, colors=[\"skyblue\"])\nbar_lasso = Bars(x=data, y=data / 2.0, scales={\"x\": xs, \"y\": ys})\nxax_lasso, yax_lasso = Axis(scale=xs, label=\"X\"), Axis(\n scale=ys, label=\"Y\", orientation=\"vertical\"\n)\nfig_lasso = Figure(\n marks=[scatter_lasso, line_lasso, bar_lasso],\n axes=[xax_lasso, yax_lasso],\n title=\"Lasso Selector Example\",\n interaction=lasso_sel,\n)\nlasso_sel.marks = [scatter_lasso, line_lasso]\nfig_lasso\n\nscatter_lasso.selected, line_lasso.selected",
"Pan Zoom",
"xs_pz = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))\nys_pz = LinearScale()\nline_pz = Lines(\n x=dates_actual, y=sec2_data, scales={\"x\": xs_pz, \"y\": ys_pz}, colors=[\"red\"]\n)\n\npanzoom = PanZoom(scales={\"x\": [xs_pz], \"y\": [ys_pz]})\nxax = Axis(scale=xs_pz, label=\"Date\", grids=\"off\")\nyax = Axis(scale=ys_pz, label=\"Price\", orientation=\"vertical\", grid_lines=\"none\")\n\nFigure(marks=[line_pz], axes=[xax, yax], interaction=panzoom)",
"Hand Draw",
"xs_hd = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))\nys_hd = LinearScale()\nline_hd = Lines(\n x=dates_actual, y=sec2_data, scales={\"x\": xs_hd, \"y\": ys_hd}, colors=[\"red\"]\n)\n\nhanddraw = HandDraw(lines=line_hd)\nxax = Axis(scale=xs_hd, label=\"Date\", grid_lines=\"none\")\nyax = Axis(scale=ys_hd, label=\"Price\", orientation=\"vertical\", grid_lines=\"none\")\n\nFigure(marks=[line_hd], axes=[xax, yax], interaction=handdraw)",
"Unified Figure with All Interactions",
"dt_x = DateScale(date_format=date_fmt, min=py_dtime(2007, 1, 1))\nlc1_x = LinearScale()\nlc2_y = LinearScale()\n\nlc2 = Lines(\n x=np.linspace(0.0, 10.0, len(prices)),\n y=prices * 0.25,\n scales={\"x\": lc1_x, \"y\": lc2_y},\n display_legend=True,\n labels=[\"Security 1\"],\n)\n\nlc3 = Lines(\n x=dates_actual,\n y=sec2_data,\n scales={\"x\": dt_x, \"y\": lc2_y},\n colors=[\"red\"],\n display_legend=True,\n labels=[\"Security 2\"],\n)\n\nlc4 = Lines(\n x=np.linspace(0.0, 10.0, len(prices)),\n y=sec2_data * 0.75,\n scales={\"x\": LinearScale(min=5, max=10), \"y\": lc2_y},\n colors=[\"green\"],\n display_legend=True,\n labels=[\"Security 2 squared\"],\n)\n\nx_ax1 = Axis(label=\"Date\", scale=dt_x)\nx_ax2 = Axis(label=\"Time\", scale=lc1_x, side=\"top\", grid_lines=\"none\")\nx_ay2 = Axis(label=(symbol + \" Price\"), scale=lc2_y, orientation=\"vertical\")\n\n\nfig = Figure(marks=[lc2, lc3, lc4], axes=[x_ax1, x_ax2, x_ay2])\n\n## declaring the interactions\nmulti_sel = MultiSelector(scale=dt_x, marks=[lc2, lc3])\nbr_intsel = BrushIntervalSelector(scale=lc1_x, marks=[lc2, lc3])\nindex_sel = IndexSelector(scale=dt_x, marks=[lc2, lc3])\nint_sel = FastIntervalSelector(scale=dt_x, marks=[lc3, lc2])\n\nhd = HandDraw(lines=lc2)\nhd2 = HandDraw(lines=lc3)\npz = PanZoom(scales={\"x\": [dt_x], \"y\": [lc2_y]})\n\ndeb = HTML()\ndeb.value = \"[]\"\n\n## Call back handler for the interactions\ndef test_callback(change):\n deb.value = str(change.new)\n\n\nmulti_sel.observe(test_callback, names=[\"selected\"])\nbr_intsel.observe(test_callback, names=[\"selected\"])\nindex_sel.observe(test_callback, names=[\"selected\"])\nint_sel.observe(test_callback, names=[\"selected\"])\n\nfrom collections import OrderedDict\n\nselection_interacts = ToggleButtons(\n options=OrderedDict(\n [\n (\"HandDraw1\", hd),\n (\"HandDraw2\", hd2),\n (\"PanZoom\", pz),\n (\"FastIntervalSelector\", int_sel),\n (\"IndexSelector\", index_sel),\n (\"BrushIntervalSelector\", br_intsel),\n (\"MultiSelector\", multi_sel),\n (\"None\", None),\n ]\n )\n)\n\nlink((selection_interacts, \"value\"), (fig, \"interaction\"))\nVBox([deb, fig, selection_interacts], align_self=\"stretch\")\n\n# Set the scales of lc4 to the ones of lc2 and check if panzoom pans the two.\nlc4.scales = lc2.scales"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
crystalzhaizhai/cs207_yi_zhai | homeworks/HW6/HW6_finished.ipynb | mit | [
"Homework 6\nDue: Tuesday, October 10 at 11:59 PM\nProblem 1: Bank Account Revisited\nWe are going to rewrite the bank account closure problem we had a few assignments ago, only this time developing a formal class for a Bank User and Bank Account to use in our closure (recall previously we just had a nonlocal variable amount that we changed). \nSome Preliminaries:\nFirst we are going to define two types of bank accounts. Use the code below to do this:",
"from enum import Enum\nclass AccountType(Enum):\n SAVINGS = 1\n CHECKING = 2",
"An Enum stands for an enumeration, it's a convenient way for you to define lists of things. Typing:",
"AccountType.SAVINGS",
"returns a Python representation of an enumeration. You can compare these account types:",
"AccountType.SAVINGS == AccountType.SAVINGS\n\nAccountType.SAVINGS == AccountType.CHECKING",
"To get a string representation of an Enum, you can use:",
"AccountType.SAVINGS.name",
"Part 1: Create a BankAccount class with the following specification:\nConstructor is BankAccount(self, owner, accountType) where owner is a string representing the name of the account owner and accountType is one of the AccountType enums\nMethods withdraw(self, amount) and deposit(self, amount) to modify the account balance of the account\nOverride methods __str__ to write an informative string of the account owner and the type of account, and __len__ to return the balance of the account",
"class BankAccount():\n def __init__(self,owner,accountType):\n self.owner=owner\n self.accountType=accountType\n self.balance=0\n def withdraw(self,amount):\n if amount<0:\n raise ValueError(\"amount<0\")\n if self.balance<amount:\n raise ValueError(\"withdraw more than balance\")\n self.balance-=amount\n def deposit(self,amount):\n if amount<0:\n raise ValueError(\"amount<0\")\n self.balance+=amount\n def __str__(self):\n return \"owner:{!s} account type:{!s}\".format(self.owner,self.accountType.name)\n def __len__(self):\n return self.balance\n\nmyaccount=BankAccount(\"zhaizhai\",AccountType.CHECKING)\n\n\nprint(myaccount.balance)\n",
"Part 2: Write a class BankUser with the following specification:\nConstructor BankUser(self, owner) where owner is the name of the account.\nMethod addAccount(self, accountType) - to start, a user will have no accounts when the BankUser object is created. addAccount will add a new account to the user of the accountType specified. Only one savings/checking account per user, return appropriate error otherwise\nMethods getBalance(self, accountType), deposit(self, accountType, amount), and withdraw(self, accountType, amount) for a specific AccountType.\nOverride __str__ to have an informative summary of user's accounts.",
"class BankUser():\n def __init__(self,owner):\n self.owner=owner\n self.SavingAccount=None\n self.CheckingAccount=None\n def addAccount(self,accountType):\n if accountType==AccountType.SAVINGS:\n if self.SavingAccount==None:\n self.SavingAccount=BankAccount(self.owner,accountType)\n else:\n print(\"more than one saving account!\")\n raise AttributeError(\"more than one saving account!\")\n elif accountType==AccountType.CHECKING:\n if self.CheckingAccount==None:\n self.CheckingAccount=BankAccount(self.owner,accountType)\n else:\n print(\"more than one checking account!\")\n raise AttributeError(\"more than one checking account!\")\n else:\n print(\"no such account type!\")\n raise ValueError(\"no such account type!\")\n def getBalance(self,accountType):\n if accountType==AccountType.SAVINGS:\n if self.SavingAccount==None:\n print(\"saving account not exist\")\n raise AttributeError(\"saving account not exist\")\n else:\n return self.SavingAccount.balance\n elif accountType==AccountType.CHECKING:\n if self.CheckingAccount==None:\n print(\"checking account not exist\")\n raise AttributeError(\"checking account not exist\")\n else:\n return self.CheckingAccount.balance\n else:\n print(\"no such account type!\")\n raise AttributeError(\"no such account type!\")\n \n def deposit(self,accountType,amount):\n if accountType==AccountType.SAVINGS:\n if self.SavingAccount==None:\n print(\"saving account not exist\")\n raise AttributeError(\"saving account not exist\")\n else:\n return self.SavingAccount.deposit(amount)\n elif accountType==AccountType.CHECKING:\n if self.CheckingAccount==None:\n print(\"checking account not exist\")\n raise AttributeError(\"checking account not exist\")\n else:\n return self.CheckingAccount.deposit(amount)\n else:\n print(\"no such account type!\")\n raise AttributeError(\"no such account type!\")\n\n \n def withdraw(self,accountType,amount):\n if accountType==AccountType.SAVINGS:\n if self.SavingAccount==None:\n print(\"saving account not exist\")\n raise AttributeError(\"saving account not exist\")\n else:\n return self.SavingAccount.withdraw(amount)\n elif accountType==AccountType.CHECKING:\n if self.CheckingAccount==None:\n print(\"checking account not exist\")\n raise AttributeError(\"checking account not exist\")\n else:\n return self.CheckingAccount.withdraw(amount)\n else:\n print(\"no such account type!\")\n raise AttributeError(\"no such account type!\")\n \n def __str__(self):\n s=\"owner:{!s}\".format(self.owner)\n if self.SavingAccount!=None:\n s=s+\"account type: Saving balance:{:.2f}\".format(self.SavingAccount.balance)\n if self.CheckingAccount!=None:\n s=s+\"account type: Checking balance:{:.2f}\".format(self.CheckingAccount.balance)\n return s\n \n\nnewuser=BankUser(\"zhaizhai\")\nprint(newuser)\nnewuser.addAccount(AccountType.SAVINGS)\nprint(newuser)\nnewuser.deposit(AccountType.SAVINGS,2)\nnewuser.withdraw(AccountType.SAVINGS,1)\nprint(newuser)\nnewuser.withdraw(AccountType.CHECKING,1)",
"Write some simple tests to make sure this is working. Think of edge scenarios a user might try to do.\nPart 3: ATM Closure\nFinally, we are going to rewrite a closure to use our bank account. We will make use of the input function which takes user input to decide what actions to take.\nWrite a closure called ATMSession(bankUser) which takes in a BankUser object. Return a method called Interface that when called, would provide the following interface:\nFirst screen for user will look like:\nEnter Option:\n1)Exit\n2)Create Account\n3)Check Balance\n4)Deposit\n5)Withdraw\nPressing 1 will exit, any other option will show the options:\nEnter Option:\n1)Checking\n2)Savings\nIf a deposit or withdraw was chosen, then there must be a third screen:\nEnter Integer Amount, Cannot Be Negative:\nThis is to keep the code relatively simple, if you'd like you can also curate the options depending on the BankUser object (for example, if user has no accounts then only show the Create Account option), but this is up to you. In any case, you must handle any input from the user in a reasonable way that an actual bank would be okay with, and give the user a proper response to the action specified.\nUpon finishing a transaction or viewing balance, it should go back to the original screen",
"def ATMSession(bankUser):\n def Interface():\n option1=input(\"Enter Options:\\\n 1)Exit\\\n 2)Creat Account\\\n 3)Check Balance\\\n 4)Deposit\\\n 5)Withdraw\")\n if option1==\"1\":\n Interface()\n return\n option2=input(\"Enter Options:\\\n 1)Checking\\\n 2)Saving\")\n if option1==\"2\":\n if option2==\"1\":\n bankUser.addAccount(AccountType.CHECKING)\n Interface()\n return\n elif option2==\"2\":\n bankUser.addAccount(AccountType.SAVINGS)\n Interface()\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n if option1==\"3\":\n if option2==\"1\":\n print(bankUser.getBalance(AccountType.CHECKING))\n Interface()\n return\n elif option2==\"2\":\n print(bankUser.getBalance(AccountType.SAVINGS))\n Interface()\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n \n if option1==\"4\":\n option3=input(\"Enter Interger Amount, Cannot be Negative:\")\n if option2==\"1\":\n bankUser.deposit(AccountType.CHECKING,int(option3))\n Interface()\n return\n elif option2==\"2\":\n bankUser.deposit(AccountType.SAVINGS,int(option3))\n Interface()\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n \n if option1==\"5\":\n option3=input(\"Enter Interger Amount, Cannot be Negative:\")\n if option2==\"1\":\n bankUser.withdraw(AccountType.CHECKING,int(option3))\n Interface()\n return\n elif option2==\"2\":\n bankUser.withdraw(AccountType.SAVINGS,int(option3))\n Interface()\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n print(\"no such operation\")\n raise AttributeError(\"no such operation\")\n \n return Interface\n\nmyATM=ATMSession(newuser)\nmyATM()\n\nprint(newuser)",
"Part 4: Put everything in a module Bank.py\nWe will be grading this problem with a test suite. Put the enum, classes, and closure in a single file named Bank.py. It is very important that the class and method specifications we provided are used (with the same capitalization), otherwise you will receive no credit.",
"%%file bank.py\nfrom enum import Enum\nclass AccountType(Enum):\n SAVINGS = 1\n CHECKING = 2\n \nclass BankAccount():\n def __init__(self,owner,accountType):\n self.owner=owner\n self.accountType=accountType\n self.balance=0\n def withdraw(self,amount):\n if type(amount)!=int:\n raise ValueError(\"not integer amount\")\n if amount<0:\n raise ValueError(\"amount<0\")\n if self.balance<amount:\n raise ValueError(\"withdraw more than balance\")\n self.balance-=amount\n def deposit(self,amount):\n if type(amount)!=int:\n raise ValueError(\"not integer amount\")\n if amount<0:\n raise ValueError(\"amount<0\")\n self.balance+=amount\n def __str__(self):\n return \"owner:{!s} account type:{!s}\".format(self.owner,self.accountType.name)\n def __len__(self):\n return self.balance\n \n \ndef ATMSession(bankUser):\n def Interface():\n option1=input(\"Enter Options:\\\n 1)Exit\\\n 2)Creat Account\\\n 3)Check Balance\\\n 4)Deposit\\\n 5)Withdraw\")\n if option1==\"1\":\n return\n option2=input(\"Enter Options:\\\n 1)Checking\\\n 2)Saving\")\n if option1==\"2\":\n if option2==\"1\":\n bankUser.addAccount(AccountType.CHECKING)\n return\n elif option2==\"2\":\n bankUser.addAccount(AccountType.SAVINGS)\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n if option1==\"3\":\n if option2==\"1\":\n print(bankUser.getBalance(AccountType.CHECKING))\n return\n elif option2==\"2\":\n print(bankUser.getBalance(AccountType.SAVINGS))\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n \n if option1==\"4\":\n option3=input(\"Enter Interger Amount, Cannot be Negative:\")\n if option2==\"1\":\n bankUser.deposit(AccountType.CHECKING,int(option3))\n return\n elif option2==\"2\":\n bankUser.deposit(AccountType.SAVINGS,int(option3))\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n \n if option1==\"5\":\n option3=input(\"Enter Interger Amount, Cannot be Negative:\")\n if option2==\"1\":\n bankUser.withdraw(AccountType.CHECKING,int(option3))\n return\n elif option2==\"2\":\n bankUser.withdraw(AccountType.SAVINGS,int(option3))\n return\n else:\n print(\"no such account type\")\n raise AttributeError(\"no such account type\")\n print(\"no such operation\")\n raise AttributeError(\"no such operation\")\n return Interface",
"Problem 2: Linear Regression Class\nLet's say you want to create Python classes for three related types of linear regression: Ordinary Least Squares Linear Regression, Ridge Regression, and Lasso Regression. \nConsider the multivariate linear model:\n$$y = X\\beta + \\epsilon$$\nwhere $y$ is a length $n$ vector, $X$ is an $m \\times p$ matrix, and $\\beta$\nis a $p$ length vector of coefficients.\nOrdinary Least Squares Linear Regression\nOLS Regression seeks to minimize the following cost function:\n$$\\|y - \\beta\\mathbf {X}\\|^{2}$$\nThe best fit coefficients can be obtained by:\n$$\\hat{\\beta} = (X^T X)^{-1}X^Ty$$\nwhere $X^T$ is the transpose of the matrix $X$ and $X^{-1}$ is the inverse of the matrix $X$.\nRidge Regression\nRidge Regression introduces an L2 regularization term to the cost function:\n$$\\|y - \\beta\\mathbf {X}\\|^{2}+\\|\\Gamma \\mathbf {x} \\|^{2}$$\nWhere $\\Gamma = \\alpha I$ for some constant $\\alpha$ and the identity matrix $I$.\nThe best fit coefficients can be obtained by:\n$$\\hat{\\beta} = (X^T X+\\Gamma^T\\Gamma)^{-1}X^Ty$$\nLasso Regression\nLasso Regression introduces an L1 regularization term and restricts the total number of predictor variables in the model.\nThe following cost function:\n$${\\displaystyle \\min {\\beta {0},\\beta }\\left{{\\frac {1}{m}}\\left\\|y-\\beta {0}-X\\beta \\right\\|{2}^{2}\\right}{\\text{ subject to }}\\|\\beta \\|_{1}\\leq \\alpha.}$$\ndoes not have a nice closed form solution. For the sake of this exercise, you may use the sklearn.linear_model.Lasso class, which uses a coordinate descent algorithm to find the best fit. You should only use the class in the fit() method of this exercise (ie. do not re-use the sklearn for other methods in your class).\n$R^2$ score\nThe $R^2$ score is defined as:\n$${R^{2} = {1-{SS_E \\over SS_T}}}$$\nWhere:\n$$SS_T=\\sum_i (y_i-\\bar{y})^2, SS_R=\\sum_i (\\hat{y_i}-\\bar{y})^2, SS_E=\\sum_i (y_i - \\hat{y_i})^2$$\nwhere ${y_i}$ are the original data values, $\\hat{y_i}$ are the predicted values, and $\\bar{y_i}$ is the mean of the original data values.\nPart 1: Base Class\nWrite a class called Regression with the following methods:\n$fit(X, y)$: Fits linear model to $X$ and $y$.\n$get_params()$: Returns $\\hat{\\beta}$ for the fitted model. The parameters should be stored in a dictionary.\n$predict(X)$: Predict new values with the fitted model given $X$.\n$score(X, y)$: Returns $R^2$ value of the fitted model.\n$set_params()$: Manually set the parameters of the linear model.\nThis parent class should throw a NotImplementedError for methods that are intended to be implemented by subclasses.",
"class Regression():\n def __init__(self,X,y):\n self.X=X\n self.y=y\n self.alpha=0.1\n def fit(self,X,y):\n return\n def get_params(self):\n return self.beta\n def predict(self,X):\n import numpy as np\n return np.dot(X,self.beta) \n def score(self,X,y):\n return 1-np.sum((y-self.predict(X))**2)/np.sum((y-np.mean(y))**2)\n def set_params(self,alpha):\n self.alpha=alpha",
"Part 2: OLS Linear Regression\nWrite a class called OLSRegression that implements the OLS Regression model described above and inherits the Regression class.",
"class OLSRegression(Regression):\n def fit(self):\n import numpy as np\n X=self.X\n y=self.y\n self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y)\n\n\nols1=OLSRegression([[2],[3]],[[1],[2]])\nols1.fit()\nols1.predict([[2],[3]])\n\n\nX=[[2],[3]]\ny=[[1],[2]]\nbeta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y)\n\n",
"Part 3: Ridge Regression\nWrite a class called RidgeRegression that implements Ridge Regression and inherits the OLSRegression class.",
"class RidgeRegression(Regression):\n def fit(self):\n import numpy as np\n X=self.X\n y=self.y\n self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)+self.alpha**2),np.transpose(X)),y)\n return\n\nridge1=RidgeRegression([[2],[3]],[[1],[2]])\nridge1.fit()\nridge1.predict([[2],[3]])\n\nridge1.score([[2],[3]],[[1],[2]])\n",
"Part 3: Lasso Regression\nWrite a class called LassoRegression that implements Lasso Regression and inherits the OLSRegression class. You should only use Lasso(), Lasso.fit(), Lasso.coef_, and Lasso._intercept from the sklearn.linear_model.Lasso class.",
"class LassoRegression(Regression):\n def fit(self):\n from sklearn.linear_model import Lasso\n myLs=Lasso(self.alpha)\n myLs.fit(self.X,self.y) \n self.beta=myLs.coef_.reshape((-1,1))\n self.beta0=myLs.intercept_ \n return\n def predict(self,X):\n import numpy as np\n return np.dot(X,self.beta)+self.beta0\n\nlasso1=LassoRegression([[2],[3]],[[1],[2]])\nlasso1.fit()\nlasso1.predict([[2],[3]])\n\nlasso1.score([[2],[3]],[[1],[2]])\n\nfrom sklearn.linear_model import Lasso\nmyLs=Lasso(alpha=0.1)\nmyLs.fit([[2],[3]],[[1],[1]])\nbeta=np.array(myLs.coef_)\nprint(beta.reshape((-1,1)))\nbeta0=myLs.intercept_\nprint(beta0)",
"Part 4: Model Scoring\nYou will use the Boston dataset for this part.\nInstantiate each of the three models above. Using a for loop, fit (on the training data) and score (on the testing data) each model on the Boston dataset. \nPrint out the $R^2$ value for each model and the parameters for the best model using the get_params() method. Use an $\\alpha$ value of 0.1.\nHint: You can consider using the sklearn.model_selection.train_test_split method to create the training and test datasets.",
"from sklearn.datasets import load_boston\nfrom sklearn.model_selection import KFold\nfrom sklearn.metrics import r2_score\nimport statsmodels.api as sm\nimport numpy as np\nboston=load_boston()\nboston_x=boston.data\nboston_y=boston.target\n\nkf=KFold(n_splits=2)\nkf.get_n_splits(boston)\nols1_m=0\nridge1_m=0\nlasso1_m=0\n\nfor train_index, test_index in kf.split(boston_x):\n \n X_train, X_test = boston_x[train_index], boston_x[test_index]\n y_train, y_test = boston_y[train_index], boston_y[test_index]\n \n y_train=y_train.reshape(-1,1)\n y_test=y_test.reshape(-1,1)\n \n \n\n ols1=OLSRegression(sm.add_constant(X_train),y_train)\n ols1.fit()\n ols1_m+=ols1.score(sm.add_constant(X_test),y_test)\n print(\"OLS score:\",ols1.score(sm.add_constant(X_test),y_test))\n\n ridge1=RidgeRegression(sm.add_constant(X_train),y_train)\n ridge1.fit()\n ridge1_m+=ridge1.score(sm.add_constant(X_test),y_test)\n print(\"ridge score:\",ridge1.score(sm.add_constant(X_test),y_test))\n \n lasso1=LassoRegression(X_train,y_train)\n lasso1.fit()\n lasso1_m+=lasso1.score(X_test,y_test)\n print(\"lasso score:\",lasso1.score(X_test,y_test))\n \n break\n \nprint(ols1_m,ridge1_m,lasso1_m)\n \nols1.get_params() \n",
"Part 5: Visualize Model Performance\nWe can evaluate how the models perform for various values of $\\alpha$. Calculate the $R^2$ scores for each model for $\\alpha \\in [0.05, 1]$ and plot the three lines on the same graph. To change the parameters, use the set_params() method. Be sure to label each line and add axis labels.",
"ols_r=[]\nridge_r=[]\nlasso_r=[]\nalpha_l=[]\nfor alpha_100 in range(5,100,5):\n alpha=alpha_100/100\n alpha_l.append(alpha)\n for train_index, test_index in kf.split(boston_x):\n \n X_train, X_test = boston_x[train_index], boston_x[test_index]\n y_train, y_test = boston_y[train_index], boston_y[test_index]\n \n y_train=y_train.reshape(-1,1)\n y_test=y_test.reshape(-1,1)\n\n ols1=OLSRegression(sm.add_constant(X_train),y_train)\n ols1.set_params(alpha)\n ols1.fit()\n ols_r.append(ols1.score(sm.add_constant(X_test),y_test))\n \n ridge1=RidgeRegression(sm.add_constant(X_train),y_train)\n ridge1.set_params(alpha)\n ridge1.fit()\n ridge_r.append(ridge1.score(sm.add_constant(X_test),y_test))\n \n \n lasso1=LassoRegression(X_train,y_train)\n lasso1.set_params(alpha)\n lasso1.fit()\n lasso_r.append(lasso1.score(X_test,y_test))\n \n break\n \n\nimport matplotlib.pyplot as plt\nplt.plot(alpha_l,ols_r,label=\"linear regression\")\nplt.plot(alpha_l,ridge_r,label=\"ridge\")\nplt.plot(alpha_l,lasso_r,label=\"lasso\")\nplt.xlabel(\"alpha\")\nplt.ylabel(\"$R^{2}$\")\nplt.title(\"the relation of R squared with alpha\")\nplt.legend()\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
VUInformationRetrieval/IR2016_2017 | 04_analysis.ipynb | gpl-2.0 | [
"Mini-Assignment 4: Link Analysis\nIn this mini-assignment, we will exploit graph algorithms to improve search results. For our dataset of scientific papers, we look at two graphs in particular: the co-authorship network and the citation network.\nThe citation network is similar to the link network of the web: Citations are like web links pointing to other documents. We can therefore apply the same network-based ranking methods.\nCode from previous exercises",
"import pickle, bz2\nfrom collections import defaultdict, namedtuple, Counter\nfrom math import log10, sqrt\nfrom IPython.display import display, HTML\nimport matplotlib.pyplot as plt\n\n# show plots inline within the notebook\n%matplotlib inline\n# set plots' resolution\nplt.rcParams['savefig.dpi'] = 100\n\nIds_file = 'data/malaria__Ids.pkl.bz2'\nSummaries_file = 'data/malaria__Summaries.pkl.bz2'\nCitations_file = 'data/malaria__Citations.pkl.bz2'\nAbstracts_file = 'data/malaria__Abstracts.pkl.bz2'\n\nIds = pickle.load( bz2.BZ2File( Ids_file, 'rb' ) )\nSummaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )\nCitations = pickle.load( bz2.BZ2File( Citations_file, 'rb' ) )\nAbstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )\n\npaper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )\n\nfor (id, paper_info) in Summaries.items():\n Summaries[id] = paper( *paper_info )\n\ndef display_summary( id, show_abstract=False, show_id=True, extra_text='' ):\n \"\"\"\n Function for printing a paper's summary through IPython's Rich Display System.\n Trims long author lists, and adds a link to the paper's DOI (when available).\n \"\"\"\n s = Summaries[id]\n lines = []\n title = s.title\n if s.doi != '':\n title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title)\n title = '<strong>' + title + '</strong>'\n lines.append(title)\n authors = ', '.join( s.authors[:20] ) + ('' if len(s.authors) <= 20 else ', ...')\n lines.append(str(s.year) + '. ' + authors)\n if (show_abstract):\n lines.append('<small><strong>Abstract:</strong> <em>%s</em></small>' % Abstracts[id])\n if (show_id):\n lines.append('[ID: %d]' % id)\n if (extra_text != ''):\n lines.append(extra_text)\n display( HTML('<br>'.join(lines)) )\n\ndef tokenize(text):\n return text.split(' ')\n\ndef preprocess(tokens):\n result = []\n for token in tokens:\n result.append(token.lower())\n return result\n\ninverted_index = defaultdict(set)\n\nfor (id, abstract) in Abstracts.items():\n for term in preprocess(tokenize(abstract)):\n inverted_index[term].add(id)\n\ntf_matrix = defaultdict(Counter)\nlength_values = defaultdict(int)\n\nfor (doc_id, abstract) in Abstracts.items():\n tokens = preprocess(tokenize(abstract))\n tf_matrix[doc_id] = Counter(tokens)\n l = 0\n for t in tf_matrix[doc_id].keys():\n l += tf_matrix[doc_id][t] ** 2\n length_values[doc_id] = sqrt(l)\n\ndef tf(t,d):\n return float(tf_matrix[d][t])\n\ndef df(t):\n return float(len(inverted_index[t]))\n \ndef num_documents():\n return float(len(Abstracts))\n\ndef length_tf(d):\n return length_values[d]\n\ndef tfidf(t,d):\n return tf(t,d) * log10(num_documents()/df(t))",
"Co-authorship network\nWe start by building a mapping from authors to the set of identifiers of papers they authored. We'll be using Python's sets again for that purpose.",
"papers_of_author = defaultdict(set)\n\nfor (id, p) in Summaries.items():\n for a in p.authors:\n papers_of_author[a].add(id)",
"Let's try it out:",
"papers_of_author['Clauset A']\n\nfor id in papers_of_author['Clauset A']:\n display_summary(id)",
"We can now build a co-authorship network, that is a graph linking authors to the set of co-authors they have published with:",
"coauthors = defaultdict(set)\n\nfor p in Summaries.values():\n for a in p.authors:\n coauthors[a].update(p.authors)\n\n# The code above results in each author being listed as having co-authored with himself/herself.\n# We remove these self-references here:\nfor (a, ca) in coauthors.items():\n ca.remove(a)",
"And let's try it out again:",
"print(', '.join( coauthors['Clauset A'] ))",
"Now we can have a look at some basic statistics about our graph:",
"print('Number of nodes (authors): ', len(coauthors))\n\ncoauthor_rel_count = sum( len(c) for c in coauthors.values() )\nprint('Number of links (co-authorship relations): ', coauthor_rel_count)",
"With this data at hand, we can plot the degree distribution by showing the number of collaborators a scientist has published with:",
"plt.hist( x=[ len(ca) for ca in coauthors.values() ], bins=range(60) )\nplt.xlabel('number of co-authors')\nplt.ylabel('number of researchers')\nplt.xlim(0,51);",
"Citations network\nNext, we can look at the citation network. We'll start by expanding the our data about citations into two mappings: \n\npapers_citing[id]: papers citing a given paper\ncited_by[id]: papers cited by a given paper (in other words: its list of references)\n\npapers_citing will give us the list of a node's incoming links, whereas cited_by will give us the list of its outgoing links.",
"papers_citing = Citations # no changes needed, this is what we are storing already in the Citations dataset\n\ncited_by = defaultdict(list)\n\nfor ref, papers_citing_ref in papers_citing.items():\n for id in papers_citing_ref:\n cited_by[ id ].append( ref )\n\ndisplay_summary(24130474)",
"As we are dealing with a subset of the data, papers_citing can contain references to papers outside of our subset. On the other hand, the way we created cited_by, it will only contain backward references from within our dataset, meaning that it is incomplete with respect to the whole dataset. Nethertheless, we can use this citation network on our subset of malaria-related papers to implement link analysis techniques.\nLet us now look at an exemlary paper, let's say the one with identifier 24130474. We can now use the cited_by mapping to retrieve its (incomplete) list of references:",
"paper_id = 24130474\nrefs = { id : Summaries[id].title for id in cited_by[paper_id] }\nprint(len(refs), 'references found for paper', paper_id)\nrefs",
"If we lookup the same paper in papers_citing, we now see that some of the cited papers are themselves in our dataset, but others are not (shown below as '??'):",
"{ id : Summaries.get(id,['??'])[0] for id in papers_citing[paper_id] }",
"Paper 25122340, for example, is not in our dataset and we do not have any direct information about it, but its repeated occurrence in other papers' citation lists does allow us to reconstruct some of its references. Below is the list of papers in our dataset cited by that paper:",
"paper_id2 = 25122340\nrefs2 = { id : Summaries[id].title for id in cited_by[paper_id2] }\nprint(len(refs2), 'references identified for the paper with id', paper_id2)\nrefs2",
"Now that we have a better understanding about the data we're dealing with, let us obtain again some basic statistics about our graph.",
"n = len(Ids)\nprint('Number of papers in our subset: %d (%.2f %%)' % (n, 100.0) )\n\nwith_citation = [ id for id in Ids if papers_citing[id] != [] ]\nwith_citation_rel = 100. * len(with_citation) / n\nprint('Number of papers cited at least once: %d (%.2f %%)' % (len(with_citation), with_citation_rel) )\n\nisolated = set( id for id in Ids if papers_citing[id] == [] and id not in cited_by )\nisolated_rel = 100. * len(isolated) / n\nprint('Number of isolated nodes: %d (%.2f %%)' % (len(isolated), isolated_rel) )\n\nid_set = set( Ids )\nciting_set = set( cited_by.keys() )\n\noutsiders = citing_set - id_set # set difference\nnodes = citing_set | id_set # set union\nnon_isolated = nodes - isolated # set difference\n\nprint('Overall number of nodes: %d (%.2f %%)' % (len(nodes), 100.0) )\n\nnon_isolated_rel = 100. * len(non_isolated) / len(nodes)\nprint('Number of non-isolated nodes: %d (%.2f %%)' % (len(non_isolated), non_isolated_rel) )\n\noutsiders_rel = 100. * len(outsiders) / len(nodes)\nprint('Number of nodes outside our subset: %d (%.2f %%)' % ( len(outsiders), outsiders_rel ) )\n\nall_citations = [ c for citing in papers_citing.values() for c in citing ]\noutsider_citations = [ c for citing in papers_citing.values() for c in citing if c in outsiders ]\n\nprint('Overal number of links (citations): %d (%.2f %%)' % (len(all_citations), 100.0) )\n\noutsider_citations_rel = 100. * len(outsider_citations) / len(all_citations)\nprint('Citations from outside the subset: %d (%.2f %%)' % (len(outsider_citations), outsider_citations_rel) )",
"Let us now find which 10 papers are the most cited in our dataset.",
"citation_count_per_paper = [ (id, len(citations)) for (id,citations) in papers_citing.items() ]\nsorted_by_citation_count = sorted(citation_count_per_paper, key=lambda i:i[1], reverse=True)\n\nfor (id, c) in sorted_by_citation_count[:10]:\n display_summary(id, extra_text = 'Citation count: ' + str(c))",
"Link Analysis for Search Engines\nIn order to use the citation network, we need to be able to perform some complex graph algorithms on it. To make our lives easier, we will use NetworkX, a Python package for dealing with complex networks. You might have to install the NetworkX package first.",
"import networkx as nx\n\nG = nx.DiGraph(cited_by)",
"We now have a NetworkX Directed Graph stored in G, where a node represents a paper, and an edge represents a citation. This means we can now apply the algorithms and functions of NetworkX to our graph:",
"print(nx.info(G))\nprint('Directed graph:', nx.is_directed(G))\nprint('Density of graph:', nx.density(G))",
"As this graph was generated from citations only, we need to add all isolated nodes (nodes that are not cited and do not cite other papers) as well:",
"G.add_nodes_from(isolated)",
"And now we get slightly different values:",
"print(nx.info(G))\nprint('Directed graph:', nx.is_directed(G))\nprint('Density of graph:', nx.density(G))",
"Assignments\nYour name: ...\nTask 1\nPlot the in-degree distribution (the distribution of the number of incoming links) for the citation network. What can you tell about the shape of this distribution, and what does this tell us about the network?",
"# Add your code here",
"Answer: [Write your answer text here]\nTask 2\nUsing the Link Analysis algorithms provided by NetworkX, calculate the PageRank score for each node in the citation network, and store them in a variable. Print out the PageRank values for the two example papers given below.\nYou can also use the pagerank_scipy implementation, which tends to be considerably faster than its regular pagerank counterpart (but you have to install the SciPy package for that). To print and compare PageRank values, you might want to use commands like print('%.6f' % var) to use regular decimal notation with a fixed number of decimal places.",
"# Add your code here\n\n# print PageRank for paper 10399593\n# print PageRank for paper 23863622",
"Task 3\nWhy do the two papers above have such different PageRank values? Write code below to investigate and show the cause of this, and then explain the cause of this difference based on the results generated by your code.",
"# Add your code here",
"Answer: [Write your answer text here]\nTask 4\nCopy the scoring function score_ntn from Task 4 of mini-assignment 3. Rename it to score_ntn_pagerank and change its code to incorporate a paper's PageRank score in it's final score, in addition to tf-idf. In other words, the new function should return a single value that is calculated based on both scores (PageRank and tf-idf). Explain your decision on how to combine the two scores.",
"# Add your code here",
"Answer: [Write your answer text here]\nTask 5\nCopy the query function query_ntn from Task 4 of mini-assignment 3. Rename it to query_ntn_pagerank and change the code to use our new scoring function score_ntn_pagerank from task 4 above. Demonstrate these functions with an example query that returns our paper 10399593 from above as the top result.",
"# Add your code here"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
shashank14/Asterix | 1-Python Crash course/Python-Crash-Course/Python Crash Course Exercises .ipynb | apache-2.0 | [
"Python Crash Course Exercises\nThis is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp\nExercises\nAnswer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.\n What is 7 to the power of 4?",
"7**4",
"Split this string:\ns = \"Hi there Sam!\"\n\ninto a list.",
"s = \"Hi there Sam!\"\n\ns.split()",
"Given the variables:\nplanet = \"Earth\"\ndiameter = 12742\n\n Use .format() to print the following string: \nThe diameter of Earth is 12742 kilometers.",
"planet = \"Earth\"\ndiameter = 12742\n\nprint(\"The diameter of {} is {} kilometers.\".format(planet,diameter))",
"Given this nested list, use indexing to grab the word \"hello\"",
"lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]\n\nlst[3][1][2]",
"Given this nested dictionary grab the word \"hello\". Be prepared, this will be annoying/tricky",
"d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}",
"What is the main difference between a tuple and a list?",
"# Tuple is immutable\nna = \"[email protected]\"\nna.split(\"@\")[1]",
"Create a function that grabs the email website domain from a string in the form: \[email protected]\n\nSo for example, passing \"[email protected]\" would return: domain.com",
"def domainGet(name):\n return name.split(\"@\")[1]\n\ndomainGet('[email protected]')",
"Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.",
"def findDog(sentence):\n x = sentence.split()\n for item in x:\n if item == \"dog\":\n return True\n\n \n \n\nfindDog('Is there a dog here?')",
"Create a function that counts the number of times the word \"dog\" occurs in a string. Again ignore edge cases.",
"countDog('This dog runs faster than the other dog dude!')",
"Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:\nseq = ['soup','dog','salad','cat','great']\n\nshould be filtered down to:\n['soup','salad']",
"seq = ['soup','dog','salad','cat','great']",
"Final Problem\nYou are driving a little too fast, and a police officer stops you. Write a function\n to return one of 3 possible results: \"No ticket\", \"Small ticket\", or \"Big Ticket\". \n If your speed is 60 or less, the result is \"No Ticket\". If speed is between 61 \n and 80 inclusive, the result is \"Small Ticket\". If speed is 81 or more, the result is \"Big Ticket\". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all \n cases.",
"def caught_speeding(speed, is_birthday):\n if s_birthday == False:\n if speed <= 60: \n return \"No ticket\"\n elif speed >= 61 and speed <=80:\n return \"small ticket\"\n elif speed >81:\n return \"Big ticket\"\n else:\n return \"pass\"\n\ncaught_speeding(81,False)\n\ncaught_speeding(81,False)\n\nlst = [\"7:00\",\"7:30\"]\n",
"Great job!",
"lst\n\ntype(lst)\n\ntype(lst[1])\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chusine/dlnd | autoencoder/Convolutional_Autoencoder.ipynb | mit | [
"Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\n\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\n\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called \"transpose convolution\" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. \nHowever, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\n\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.",
"learning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, shape = (None, 28, 28, 1), name = 'inputs')\ntargets_ = tf.placeholder(tf.float32, shape = (None, 28, 28, 1), name = 'targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding = 'same')\n# Now 14x14x16\nconv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding = 'same')\n# Now 7x7x8\nconv3 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding = 'same')\n# Now 4x4x8\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))\n# Now 7x7x8\nconv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 7x7x8\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))\n# Now 14x14x8\nconv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 14x14x8\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))\n# Now 28x28x8\nconv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 28x28x16\n\nlogits = tf.layers.conv2d(conv6, 1, (3, 3), padding = 'same', activation = None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits, name = 'decoded')\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)",
"Training\nAs before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.",
"sess = tf.Session()\n\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n\n if ii % 100 == 0:\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\n\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\n\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\n\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.",
"learning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 28x28x32\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding = 'same')\n# Now 14x14x32\nconv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 14x14x32\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding = 'same')\n# Now 7x7x32\nconv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 7x7x16\nencoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding = 'same')\n# Now 4x4x16\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))\n# Now 7x7x16\nconv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 7x7x16\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))\n# Now 14x14x16\nconv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 14x14x32\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))\n# Now 28x28x32\nconv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding = 'same', activation = tf.nn.relu)\n# Now 28x28x32\n\nlogits = tf.layers.conv2d(conv6, 1, (3, 3), padding = 'same', activation = None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits, name = 'decoded')\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)\n\nsess = tf.Session()\n\nepochs = 10\nbatch_size = 200\n# Set's how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n\n if ii % 100 == 0:\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\n\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OSGeoLabBp/tutorials | english/data_processing/lessons/ml_clustering.ipynb | cc0-1.0 | [
"<a href=\"https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/ml_clustering.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nClustering with Machine Learning\nWhat is Machine Learning?\nNowadays Machine Learning algorithms are widely used. This technology is behind chatbots, language translation apps, the shows Netflix suggest to you and how your social media feeds are presented. This is the basis of the idea of autonomous vehicles and machines too.\nMachine Learning (ML) is a subfield of Artificial Intelligence (AI). The basic idea of the ML is to teach computers to 'learn' information directly from data with computational methods. \nThere are three subcategories of Machine Learning:\n\nIn the following we are going to focus on an unsupervised learning method, within that clustering methods.\nClustering\nClustering or cluster analysis is an unsupervised learning problem. There are many types of clustering algorithm. Most of these use similarity or distance measures between points.\nSome of the clustering algorithms require to specify or guess at the number of clusters to discover in the data, whereas others require the specification of some minimum distance between observations in which examples may be considered “close” or “connected.”\nCluster analysis is an iterative process where subjective evaluation of the identified clusters is fed back into changes to algorithm configuration until a desired or appropriate result is achieved.\nThere are several clustering algorithm to choose from: \n- Affinity Propagation\n- Agglomerative Clustering\n- BIRCH\n- DBSCAN\n- K-Means\n- Mini-Batch K-Means\n- Mean Shift\n- OPTICS\n- Spectral Clustering\n- Mixture of Gaussians\n- etc...\nIn the following we are going to check on some of these with the help of scikit-learn library. \nScikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities.\nLet's import the modules!",
"# modules\nimport sklearn\nfrom numpy import where\nfrom sklearn.datasets import make_classification\nfrom matplotlib import pyplot\n",
"To test different clustering methods we need a sample data. In the scikit-learining module there are built-in functions to create it. We will use make_classification() to create a dataset of 1000 points with 2 clusters.",
"# define dataset\nX, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=1, random_state=4)\n\n# create scatter plot for samples from each class\nfor class_value in range(2):\n\t# get row indexes for samples with this class\n\trow_ix = where(y == class_value)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1]) \n# show the plot\npyplot.title('The generated dataset')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()",
"Now let's apply the different clustering algorithms on the dataset!\nAffinity propagation\nThe method takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges.",
"from sklearn.cluster import AffinityPropagation\nfrom numpy import unique\n\n# define the model\nmodel = AffinityPropagation(damping=0.9)\n# fit the model\nmodel.fit(X)\n# assign a cluster to each example\nyhat = model.predict(X)\n# retrieve unique clusters\nclusters = unique(yhat)\n# create scatter plot for samples from each cluster\nfor cluster in clusters:\n\t# get row indexes for samples with this cluster\n\trow_ix = where(yhat == cluster)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1])\n# show the plot\npyplot.title('Affinity propagation clustering')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()\n",
"Agglomerative clustering\nIt is type of hierarchical clustering, which is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree. The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample.\nAgglomerative clustering performs\nusing a bottom up approach: each observation starts in its own cluster, and clusters are successively merged together. The merging continues until the desired number of clusters is achieved.\nThe merge strategy contains the following steps:\n- minimizes the sum of squared differences within all clusters\n- minimizes the maximum distance between observations of pairs of clusters\n- minimizes the average of the distances between all observations of pairs of clusters\n- minimizes the distance between the closest observations of pairs of clusters\nTo use agglomerative clustering the number of clusters have to be defined.",
"from sklearn.cluster import AgglomerativeClustering\n\n# define the model\nmodel = AgglomerativeClustering(n_clusters=2)\n# fit model and predict clusters\nyhat = model.fit_predict(X)\n# retrieve unique clusters\nclusters = unique(yhat)\n# create scatter plot for samples from each cluster\nfor cluster in clusters:\n\t# get row indexes for samples with this cluster\n\trow_ix = where(yhat == cluster)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1])\n# show the plot\npyplot.title('Agglomerative clustering')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()",
"BIRCH\nBIRCH clustering (Balanced Iterative Reducing and Clustering using\nHierarchies) involves constructing a tree structure from which cluster centroids are extracted.\nBRICH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources. This is the first clustering algorothm that handle noise effectively. It is also effective on \nlarge datasets like point clouds.\nTo use this method the threshold and number of clusters values have to be deifned.",
"from sklearn.cluster import Birch\n\nmodel = Birch(threshold=0.01, n_clusters=2)\n# fit the model\nmodel.fit(X)\n# assign a cluster to each example\nyhat = model.predict(X)\n# retrieve unique clusters\nclusters = unique(yhat)\n# create scatter plot for samples from each cluster\nfor cluster in clusters:\n\t# get row indexes for samples with this cluster\n\trow_ix = where(yhat == cluster)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1])\n# show the plot\npyplot.title('BRICH clustering')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()",
"DBSCAN\nDBSCAN clustering (Density-Based Spatial Clustering of Applications with Noise) involves finding high-density areas in the domain and expanding those areas of the feature space around them as clusters.\nIt can be used on large databases with good efficiency. The usage of the DBSCAN is not complicated, it requires only one parameter. The number of clusters are determined by the algorithm.",
"from sklearn.cluster import DBSCAN\nfrom matplotlib import pyplot\n\n# define the model\nmodel = DBSCAN(eps=0.30, min_samples=9)\n# fit model and predict clusters\nyhat = model.fit_predict(X)\n# retrieve unique clusters\nclusters = unique(yhat)\n# create scatter plot for samples from each cluster\nfor cluster in clusters:\n\t# get row indexes for samples with this cluster\n\trow_ix = where(yhat == cluster)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1])\n# show the plot\npyplot.title('DBSCAN clustering')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()",
"k-Means clustering\nMay be the most widely known clustering method. During the creation of the clusters the algorithm trys to minimize the variance within each cluster.\nTo use it we have to define the number of clusters.",
"from sklearn.cluster import KMeans\n\n# define the model\nmodel = KMeans(n_clusters=2)\n\n# fit the model\nmodel.fit(X)\n# assign a cluster to each example\nyhat = model.predict(X)\n# retrieve unique clusters\nclusters = unique(yhat)\n# create scatter plot for samples from each cluster\nfor cluster in clusters:\n\t# get row indexes for samples with this cluster\n\trow_ix = where(yhat == cluster)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1])\n# show the plot\npyplot.title('k-Means clustering')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()\n",
"There is a modified version of k-Means, which is called Mini-Batch K-Means clustering. The difference between the two that updated vesion using mini-batches of samples rather than the entire dataset. It makes faster for large datasets, and more robust to statistical noise. \nMean shift clustering\nThe algorithm is finding and adapting centroids based on the density of examples in the feature space.\nTo apply it we don't have to define any parameters.",
"from sklearn.cluster import MeanShift\n\n# define the model\nmodel = MeanShift()\n# fit model and predict clusters\nyhat = model.fit_predict(X)\n# retrieve unique clusters\nclusters = unique(yhat)\n# create scatter plot for samples from each cluster\nfor cluster in clusters:\n\t# get row indexes for samples with this cluster\n\trow_ix = where(yhat == cluster)\n\t# create scatter of these samples\n\tpyplot.scatter(X[row_ix, 0], X[row_ix, 1])\n# show the plot\npyplot.title('Mean shift clustering')\npyplot.xlabel('x')\npyplot.ylabel('y')\npyplot.show()",
"The main characteristics of the clustering algorithms\n\nTask\n - Test the different clustering algorithms on different datasets!\n - Check and use scikit-learn's documentation to compare the algorithms! \nApplying ML based clustering algorithm on point cloud\nThe presented culstering method can be useful when we would like to separate group of points in a point cloud. \nMost cases when we would like to apply clustering on a point cloud the number of clusters is unknown, but as we have seen above there are severeal algorithms (like DBSCAN, OPTICS, mean shift) where the number of clusters don't have to be defined. \nTherefore, in the following section we are going to apply one of these, the DBSCAN clustering algorithm to separate roof points of buildings.\nFirst, let's download the point cloud!",
"!wget -q https://github.com/OSGeoLabBp/tutorials/raw/master/english/data_processing/lessons/code/barnag_roofs.ply",
"Let's install Open3D!",
"!pip install open3d -q",
"After the installation import modules and display the point cloud!",
"import open3d as o3d\nimport numpy as np\n\nfrom numpy import unique\nfrom numpy import where\nfrom sklearn.datasets import make_classification\nfrom sklearn.cluster import DBSCAN\nfrom matplotlib import pyplot\n\npc = o3d.io.read_point_cloud('barnag_roofs.ply',format='ply')\nxyz = np.asarray(pc.points)\n\n# display the point cloud\npyplot.scatter(xyz[:, 0], xyz[:, 1])\npyplot.title('The point cloud of the roofs')\npyplot.xlabel('y_EOV [m]')\npyplot.ylabel('x_EOV [m]')\npyplot.axis('equal')\npyplot.show()\n\n'''\n3d display TODO\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.scatter(xyz[:, 0], xyz[:, 1],xyz[:, 2])\nax.view_init(30, 70)\n'''\n# define the model\nmodel = DBSCAN(eps=0.30, min_samples=100)\n\n# fit model and predict clusters\nyhat = model.fit_predict(xyz)\n#print(yhat)\n\n# retrieve unique clusters\nclusters = unique(yhat)\nprint('Number of clusters: '+str(clusters))\n\n",
"Let's use DBSCAN on the imported point cloud.",
"# Save clusters as \nfor cluster in clusters:\n # get row indexes for samples with this cluster\n row_ix = where(yhat == cluster)\n\n # create scatter of these samples\n pyplot.scatter(xyz[row_ix, 0], xyz[row_ix, 1], label=str(cluster)+' cluster')\n\n # export the clusters as a point cloud\n xyz_cluster = xyz[row_ix]\n pc_cluster = o3d.geometry.PointCloud()\n pc_cluster.points = o3d.utility.Vector3dVector(xyz_cluster)\n if cluster >= 0:\n o3d.io.write_point_cloud('cluster_' + str(cluster) + '.ply', pc_cluster) # export .ply format\n else:\n o3d.io.write_point_cloud('noise.ply', pc_cluster) # export noise \n\n# show the plot\npyplot.title('Point cloud clusters')\npyplot.xlabel('y_EOV [m]')\npyplot.ylabel('x_EOV [m]')\npyplot.axis('equal')\npyplot.show()\n",
"Task for practice\n- Use other clustering algorithms on point clouds!\n- Compare the built-in Open3D and scikit-learn DBSCAN algorithm!\nSources\n\nhttps://scikit-learn.org/stable/index.html\nhttps://machinelearningmastery.com/clustering-algorithms-with-python/\nhttps://uk.mathworks.com/content/dam/mathworks/ebook/gated/machine-learning-ebook-all-chapters.pdf\nhttps://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eds-uga/csci4360-fa17 | workshops/w7/Workshop6_ Auto-Differentiation.ipynb | mit | [
"Autograd",
"import time",
"Have to install autograd module first: pip install autograd",
"import autograd.numpy as np # Thinly-wrapped version of Numpy\nfrom autograd import grad",
"EX1, Normal Numpy",
"def tanh(x):\n y = np.exp(-x)\n return (1.0 - y) / (1.0 + y)\n\nstart = time.time()\n\ngrad_tanh = grad(tanh)\nprint (\"Gradient at x = 1.0\\n\", grad_tanh(1.0))\n\nend = time.time()\nprint(\"Operation time:\\n\", end-start)",
"EX2-1, Taylor approximation to sine function",
"def taylor_sine(x): \n ans = currterm = x\n i = 0\n while np.abs(currterm) > 0.001:\n currterm = -currterm * x**2 / ((2 * i + 3) * (2 * i + 2))\n ans = ans + currterm\n i += 1\n return ans\n\nstart = time.time()\n\ngrad_sine = grad(taylor_sine)\nprint (\"Gradient of sin(pi):\\n\", grad_sine(np.pi))\n\nend = time.time()\nprint(\"Operation time:\\n\", end-start)",
"EX2-2, Second-order gradient",
"start = time.time()\n\n#second-order\nggrad_sine = grad(grad_sine)\nprint (\"Gradient of second-order:\\n\", ggrad_sine(np.pi))\n\nend = time.time()\nprint(\"Operation time:\\n\", end-start)",
"EX3, Logistic Regression\nA common use case for automatic differentiation is to train a probabilistic model. <br>\nA Simple (but complete) example of specifying and training a logistic regression model for binary classification:",
"def sigmoid(x):\n return 0.5*(np.tanh(x) + 1)\n\ndef logistic_predictions(weights, inputs):\n # Outputs probability of a label being true according to logistic model.\n return sigmoid(np.dot(inputs, weights))\n\ndef training_loss(weights):\n # Training loss is the negative log-likelihood of the training labels.\n preds = logistic_predictions(weights, inputs)\n label_probabilities = preds * targets + (1 - preds) * (1 - targets)\n return -np.sum(np.log(label_probabilities))\n\n# Build a toy dataset.\ninputs = np.array([[0.52, 1.12, 0.77],\n [0.88, -1.08, 0.15],\n [0.52, 0.06, -1.30],\n [0.74, -2.49, 1.39]])\ntargets = np.array([True, True, False, True])\n\n# Define a function that returns gradients of training loss using autograd.\ntraining_gradient_fun = grad(training_loss)\n\n# Optimize weights using gradient descent.\nweights = np.array([0.0, 0.0, 0.0])\nprint (\"Initial loss:\", training_loss(weights))\n\nfor i in range(100):\n weights -= training_gradient_fun(weights) * 0.01\n\nprint (\"Trained loss:\", training_loss(weights))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ioam/scipy-2017-holoviews-tutorial | notebooks/08-deploying-bokeh-apps.ipynb | bsd-3-clause | [
"<a href='http://www.holoviews.org'><img src=\"assets/hv+bk.png\" alt=\"HV+BK logos\" width=\"40%;\" align=\"left\"/></a>\n<div style=\"float:right;\"><h2>08. Deploying Bokeh Apps</h2></div>\n\nIn the previous sections we discovered how to use a HoloMap to build a Jupyter notebook with interactive visualizations that can be exported to a standalone HTML file, as well as how to use DynamicMap and Streams to set up dynamic interactivity backed by the Jupyter Python kernel. However, frequently we want to package our visualization or dashboard for wider distribution, backed by Python but run outside of the notebook environment. Bokeh Server provides a flexible and scalable architecture to deploy complex interactive visualizations and dashboards, integrating seamlessly with Bokeh and with HoloViews.\nFor a detailed background on Bokeh Server see the bokeh user guide. In this tutorial we will discover how to deploy the visualizations we have created so far as a standalone bokeh server app, and how to flexibly combine HoloViews and Bokeh APIs to build highly customized apps. We will also reuse a lot of what we have learned so far---loading large, tabular datasets, applying datashader operations to them, and adding linked streams to our app.\nA simple bokeh app\nThe preceding sections of this tutorial focused solely on the Jupyter notebook, but now let's look at a bare Python script that can be deployed using Bokeh Server:",
"with open('./apps/server_app.py', 'r') as f:\n print(f.read())",
"Of the three parts of this app, part 2 should be very familiar by now -- load some taxi dropoff locations, declare a Points object, datashade them, and set some plot options.\nStep 1 is new: Instead of loading the bokeh extension using hv.extension('bokeh'), we get a direct handle on a bokeh renderer using the hv.renderer function. This has to be done at the top of the script, to be sure that options declared are passed to the Bokeh renderer. \nStep 3 is also new: instead of typing app to see the visualization as we would in the notebook, here we create a Bokeh document from it by passing the HoloViews object to the renderer.server_doc method. \nSteps 1 and 3 are essentially boilerplate, so you can now use this simple skeleton to turn any HoloViews object into a fully functional, deployable Bokeh app!\nDeploying the app\nAssuming that you have a terminal window open with the hvtutorial environment activated, in the notebooks/ directory, you can launch this app using Bokeh Server:\nbokeh serve --show apps/server_app.py\nIf you don't already have a favorite way to get a terminal, one way is to open it from within Jupyter, then make sure you are in the notebooks directory, and activate the environment using source activate hvtutorial (or activate tutorial on Windows). You can also open the app script file in the inbuilt text editor, or you can use your own preferred editor.",
"# Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve\n# Tip: Refer to the previous notebook\n",
"Iteratively building a bokeh app in the notebook\nThe above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations in the notebook, deploying it as a standalone app only once we are happy with it.\nTo illustrate this process, let's quickly go through such a workflow. As before we will set up our imports, load the extension, and load the taxi dataset:",
"import holoviews as hv\nimport geoviews as gv\nimport dask.dataframe as dd\n\nfrom holoviews.operation.datashader import datashade, aggregate, shade\nfrom bokeh.models import WMTSTileSource\n\nhv.extension('bokeh', logo=False)\n\nusecols = ['tpep_pickup_datetime', 'dropoff_x', 'dropoff_y']\nddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'], usecols=usecols)\nddf['hour'] = ddf.tpep_pickup_datetime.dt.hour\nddf = ddf.persist()",
"Next we define a Counter stream which we will use to select taxi trips by hour.",
"stream = hv.streams.Counter()\npoints = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])\ndmap = hv.DynamicMap(lambda counter: points.select(hour=counter%24).relabel('Hour: %s' % (counter % 24)),\n streams=[stream])\nshaded = datashade(dmap)\n\nhv.opts('RGB [width=800, height=600, xaxis=None, yaxis=None]')\n\nurl = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'\nwmts = gv.WMTS(WMTSTileSource(url=url))\n\noverlay = wmts * shaded",
"Up to this point, we have a normal HoloViews notebook that we could display using Jupyter's rich display of overlay, as we would with an any notebook. But having come up with the objects we want interactively in this way, we can now display the result as a Bokeh app, without leaving the notebook. To do that, first edit the following cell to change \"8888\" to whatever port your jupyter session is using, in case your URL bar doesn't say \"localhost:8888/\".\nThen run this cell to launch the Bokeh app within this notebook:",
"renderer = hv.renderer('bokeh')\nserver = renderer.app(overlay, show=True, websocket_origin='localhost:8888')",
"We could stop here, having launched an app, but so far the app will work just the same as in the normal Jupyter notebook, responding to user inputs as they occur. Having defined a Counter stream above, let's go one step further and add a series of periodic events that will let the visualization play on its own even without any user input:",
"dmap.periodic(1)",
"You can stop this ongoing process by clearing the cell displaying the app.\nNow let's open the text editor again and make this edit to a separate app, which we can then launch using Bokeh Server separately from this notebook.",
"# Exercise: Copy the example above into periodic_app.py and modify it so it can be run with bokeh serve\n# Hint: Use hv.renderer and renderer.server_doc\n# Note that you have to run periodic **after** creating the bokeh document\n",
"Combining HoloViews with bokeh models\nNow for a last hurrah let's put everything we have learned to good use and create a bokeh app with it. This time we will go straight to a Python script containing the app. If you run the app with bokeh serve --show ./apps/player_app.py from your terminal you should see something like this:\n<img src=\"./assets/tutorial_app.gif\"></img>\nThis more complex app consists of several components:\n\nA datashaded plot of points for the indicated hour of the daty (in the slider widget)\nA linked PointerX stream, to compute a cross-section\nA set of custom bokeh widgets linked to the hour-of-day stream\n\nWe have already covered 1. and 2. so we will focus on 3., which shows how easily we can combine a HoloViews plot with custom Bokeh models. We will not look at the precise widgets in too much detail, instead let's have a quick look at the callback defined for slider widget updates:\npython\ndef slider_update(attrname, old, new):\n stream.event(hour=new)\nWhenever the slider value changes this will trigger a stream event updating our plots. The second part is how we combine HoloViews objects and Bokeh models into a single layout we can display. Once again we can use the renderer to convert the HoloViews object into something we can display with Bokeh:\npython\nrenderer = hv.renderer('bokeh')\nplot = renderer.get_plot(hvobj, doc=curdoc())\nThe plot instance here has a state attribute that represents the actual Bokeh model, which means we can combine it into a Bokeh layout just like any other Bokeh model:\npython\nlayout = layout([[plot.state], [slider, button]], sizing_mode='fixed')\ncurdoc().add_root(layout)",
"# Advanced Exercise: Add a histogram to the bokeh layout next to the datashaded plot\n# Hint: Declare the histogram like this: hv.operation.histogram(aggregated, bin_range=(0, 20))\n# then use renderer.get_plot and hist_plot.state and add it to the layout\n",
"Onwards\nAlthough the code above is more complex than in previous sections, it's actually providing a huge range of custom types of interactivity, which if implemented in Bokeh alone would have required far more than a notebook cell of code. Hopefully it is clear that arbitrarily complex collections of visualizations and interactive controls can be built from the components provided by HoloViews, allowing you to make simple analyses very easily and making it practical to make even quite complex apps when needed. The user guide, gallery, and reference gallery should have all the information you need to get started with all this power on your own datasets and tasks. Good luck!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tanmay987/deepLearning | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | [
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32 , [None, real_dim] )\n inputs_z = tf.placeholder(tf.float32 , [None, z_dim] )\n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\nExercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.",
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope ('generator',reuse=reuse): # finish this\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha*h1,h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1,out_dim,activation=None)\n out = tf.tanh(logits)\n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope ('discriminator',reuse=reuse):# finish this\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1,1, activation=None)\n out = tf.sigmoid(logits)\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Build the model\ng_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)\n# g_model is the generator output\n\nd_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.",
"# Calculate losses\nd_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))\n\nd_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake, labels=tf.zeros_like(d_logits_fake) ))\n\nd_loss = d_loss_real+d_model_fake\ng_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = \ng_train_opt = ",
"Training",
"batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zuphilip/ocropy | doc/line-normalization.ipynb | apache-2.0 | [
"Line Normalization (dewarping)\n( These notes are based on: https://github.com/tmbdev/ocropy/blob/758e023f808d88e5995af54034c155621eb087b2/OLD/normalization-api.ipynb from 2014 )\nThe line normalization is performed before the actual text recognition and before the actual training. Therefore, the same line normalization should be used in the recognition as it is used in the training. The line normalization tries to dewarp the line image and normalize its height. Previously different methods were explored, but nowadays the default method should work well. This notes will give some background information.",
"%pylab inline\nfrom pylab import imshow\nfrom scipy.ndimage import filters,interpolation\nimport ocrolib\nfrom ocrolib import lineest\n\n#Configure the size of the inline figures\nfigsize(8,8)",
"Generate distorted image\nFirst, we generate a distorted image from an example line.",
"image = 1-ocrolib.read_image_gray(\"../tests/010030.bin.png\")\nimage = interpolation.affine_transform(image,array([[0.5,0.015],[-0.015,0.5]]),offset=(-30,0),output_shape=(200,1400),order=0)\n\nimshow(image,cmap=cm.gray)\nprint image.shape",
"Load Normalizer and measure the image",
"#reload(lineest)\nmv = ocrolib.lineest.CenterNormalizer()\nmv.measure(image)\n\nprint mv.r\nplot(mv.center)\nplot(mv.center+mv.r)\nplot(mv.center-mv.r)\nimshow(image,cmap=cm.gray)",
"Dewarp\nThe dewarping of the text line (first image) tries to find the center (blue curve) and then cut out slices with some fixed radius around the center. See this illustration <img width=\"50%\" src=\"https://cloud.githubusercontent.com/assets/5199995/25406275/6905c7ce-2a06-11e7-89e0-ca740cd8a21c.png\"/>",
"dewarped = mv.dewarp(image)\n\nprint dewarped.shape\nimshow(dewarped,cmap=cm.gray)\n\nimshow(dewarped[:,:320],cmap=cm.gray,interpolation='nearest')",
"Normalize\nThis will also dewarp the image but additionally normalize the image size (default x_height is 48).",
"normalized = mv.normalize(image,order=0)\n\nprint normalized.shape\nimshow(normalized,cmap=cm.gray)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsacademybr/PythonFundamentos | Cap05/Notebooks/DSA-Python-Cap05-02-Objetos.ipynb | gpl-3.0 | [
"<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font>\nDownload: http://github.com/dsacademybr",
"# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())",
"Objetos\nEm Python, tudo é objeto!",
"# Criando uma lista\nlst_num = [\"Data\", \"Science\", \"Academy\", \"Nota\", 10, 10]\n\n# A lista lst_num é um objeto, uma instância da classe lista em Python\ntype(lst_num)\n\nlst_num.count(10)\n\n# Usamos a função type, para verificar o tipo de um objeto\nprint(type(10))\nprint(type([]))\nprint(type(()))\nprint(type({}))\nprint(type('a'))\n\n# Criando um novo tipo de objeto chamado Carro\nclass Carro(object):\n pass\n\n# Instância do Carro\npalio = Carro()\n\nprint(type(palio))\n\n# Criando uma classe\nclass Estudantes:\n def __init__(self, nome, idade, nota):\n self.nome = nome\n self.idade = idade\n self.nota = nota\n\n# Criando um objeto chamado Estudante1 a partir da classe Estudantes\nEstudante1 = Estudantes(\"Pele\", 12, 9.5)\n\n# Atributo da classe Estudante, utilizado por cada objeto criado a partir desta classe\nEstudante1.nome\n\n# Atributo da classe Estudante, utilizado por cada objeto criado a partir desta classe\nEstudante1.idade\n\n# Atributo da classe Estudante, utilizado por cada objeto criado a partir desta classe\nEstudante1.nota\n\n# Criando uma classe\nclass Funcionarios:\n def __init__(self, nome, salario):\n self.nome = nome\n self.salario = salario\n\n def listFunc(self):\n print(\"O nome do funcionário é \" + self.nome + \" e o salário é R$\" + str(self.salario))\n\n# Criando um objeto chamado Func1 a partir da classe Funcionarios\nFunc1 = Funcionarios(\"Obama\", 20000)\n\n# Usando o método da classe\nFunc1.listFunc()\n\nprint(\"**** Usando atributos *****\")\n\nhasattr(Func1, \"nome\")\n\nhasattr(Func1, \"salario\")\n\nsetattr(Func1, \"salario\", 4500)\n\nhasattr(Func1, \"salario\")\n\ngetattr(Func1, \"salario\")\n\ndelattr(Func1, \"salario\")\n\nhasattr(Func1, \"salario\")",
"Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
marcinofulus/ProgramowanieRownolegle | CUDA/iCSE_PR_map2d.ipynb | gpl-3.0 | [
"Zastosowanie indeksowania wielowymiarowego\nZadanie: Oblicz wartości funkcji $\\sin(x^2+y^2)$ na siatce w zadanym obszarze.\nKrok pierwszy\nNapiszemy jądro obliczające wartości funkcji $\\sin(x^2)$ na zadanym wektorze danych. Jest to zadanie, które można by wykonać używając gpuarray, ale dla celów dydaktycznych wykonamy je własnym kernelem.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nprint(\"Ok\")\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\n\nmod = SourceModule(\"\"\"\n __global__ void sin1d(float *x,float *y)\n {\n int idx = threadIdx.x + blockDim.x*blockIdx.x;\n y[idx] = sinf(powf(x[idx],2.0f));\n }\n \"\"\")\n\nNx = 128\nx = np.linspace(-3,3,Nx).astype(np.float32)\ny = np.empty_like(x)\nfunc = mod.get_function(\"sin1d\")\nfunc(cuda.In(x),cuda.Out(y),block=(Nx,1,1),grid=(1,1,1))\nplt.plot(x,y)",
"Krok drugi\nNie będziemy teraz przesyłać wartości $x$ do jądra, ale obliczymy je w locie, ze wzoru:\n$$ x = x_0 + i \\frac{\\Delta x}{N}$$",
"import pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\n\nmod = SourceModule(\"\"\"\n __global__ void sin1da(float *y)\n {\n int idx = threadIdx.x + blockDim.x*blockIdx.x;\n float x = -3.0f+6.0f*float(idx)/blockDim.x;\n y[idx] = sinf(powf(x,2.0f));\n }\n \"\"\")\n\nNx = 128\nx = np.linspace(-3,3,Nx).astype(np.float32)\ny = np.empty_like(x)\nfunc = mod.get_function(\"sin1da\")\nfunc(cuda.Out(y),block=(Nx,1,1),grid=(1,1,1))\nplt.plot(x,y,'r')",
"Krok trzeci\nWykonamy probkowanie funkcji dwóch zmiennych, korzystając z wywołania jądra, które zawiera $N_x$ wątków w bloku i $N_y$ bloków. \nProszę zwrócić szczególną uwagę na linie:\nint idx = threadIdx.x;\nint idy = blockIdx.x;\n\nzawierające wykorzystanie odpowiednich indeksów na CUDA, oraz sposób obliczania globalnego indeksu tablicy wartości, która jest w formacie \"row-major\"!\nint gid = idx + blockDim.x*idy;",
"import pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\n\nmod = SourceModule(\"\"\"\n __global__ void sin2d(float *z)\n {\n int idx = threadIdx.x;\n int idy = blockIdx.x;\n\n int gid = idx + blockDim.x*idy;\n\n float x = -4.0f+6.0f*float(idx)/blockDim.x;\n float y = -3.0f+6.0f*float(idy)/gridDim.x;\n \n z[gid] = sinf(powf(x,2.0f)+powf(y,2.0f));\n }\n \"\"\")\n\nNx = 128\nNy = 64\nx = np.linspace(-4,2,Nx).astype(np.float32)\ny = np.linspace(-3,3,Ny).astype(np.float32)\nXX,YY = np.meshgrid(x,y)\nz = np.zeros(Nx*Ny).astype(np.float32)\n\nfunc = mod.get_function(\"sin2d\")\nfunc(cuda.Out(z),block=(Nx,1,1),grid=(Ny,1,1))\n",
"Porównajmy wyniki:",
"plt.contourf(XX,YY,z.reshape(Ny,Nx) )\n\nplt.contourf(XX,YY,np.sin(XX**2+YY**2))",
"Krok czwarty\nAlgorytm ten nie jest korzystny, gdyż rozmiar bloku determinuje rozmiar siatki na, której próbkujemy funkcje. \nOptymalnie było by wykonywać operacje w blokach o zadanym rozmiarze, niezależnie od ilości próbek danego obszaru. \nPoniższy przykład wykorzystuje dwuwymiarową strukturę zarówno bloku jak i gridu. Dzielimy wątki tak by w obrębie jednego bloku były wewnatrz kwadratu o bokach 4x4.",
"import pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\n\nmod = SourceModule(\"\"\"\n __global__ void sin2da(float *z)\n {\n int ix = threadIdx.x + blockIdx.x * blockDim.x;\n int iy = threadIdx.y + blockIdx.y * blockDim.y;\n \n int gid = ix + iy * blockDim.x * gridDim.x;\n float x = -4.0f+6.0f*float(ix)/(blockDim.x*gridDim.x);\n float y = -3.0f+6.0f*float(iy)/(blockDim.y*gridDim.y);\n \n z[gid] = sinf(powf(x,2.0f)+powf(y,2.0f));\n }\n \"\"\")\n\nblock_size = 4\nNx = 32*block_size\nNy = 32*block_size\nx = np.linspace(-4,2,Nx).astype(np.float32)\ny = np.linspace(-3,3,Ny).astype(np.float32)\nXX,YY = np.meshgrid(x,y)\nz = np.zeros(Nx*Ny).astype(np.float32)\n\nfunc = mod.get_function(\"sin2da\")\nfunc(cuda.Out(z),\\\n block=(block_size,block_size,1),\\\n grid=(Nx//block_size,Ny//block_size,1) )\n\n\nplt.contourf(XX,YY,z.reshape(Ny,Nx) )"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
btw2111/intro-numerical-methods | 0_intro_numerical_methods.ipynb | mit | [
"Introduction and Motivation: Modeling and methods for scientific computing\nWhy are we here?\nCannot solve everything\n$$x^5 + 3x^2+ 2x + 3 = 0$$\n$$f(x,y,z,t) = 0$$\nProblems can be too big...\n\nActually want an answer...\nNumerics compliment analytical methods\nWhy should I care?\nThe Retirement Problem\n$$A = \\frac{P}{r} \\left((1+r)^n - 1 \\right)$$\n$P$ is the incremental payment\n$r$ is the interest rate per payment period\n$n$ is the number of payments\n$A$ is the total amount after n payments\nNote that these can all be functions of $r$, $n$, and time",
"%matplotlib inline\nimport numpy\nimport matplotlib.pyplot as plt\n\ndef A(P, r, n):\n return P / r * ((1 + r)**n - 1)\n\nn = numpy.linspace(0, 20, 100)\ntarget = 5000\nplt.hold(True)\nfor r in [0.02, 0.05, 0.08, 0.1, 0.12]:\n plt.plot(n, A(100, r, n))\nplt.plot(n, numpy.ones(n.shape) * target, 'k--')\nplt.legend([\"r = 0.02\", \"r = 0.05\", \"r = 0.08\", \"r = 0.1\", \"r = 0.12\", \"Target\"], loc=2)\nplt.xlabel(\"Years\")\nplt.ylabel(\"Annuity Value (Dollars)\")\nplt.show()",
"Boat race\nGiven a river (say a sinusoid) find the total length actually rowed over a given interval\n$$f(x) = A \\sin x$$",
"x = numpy.linspace(0, 4 * numpy.pi)\nplt.plot(x, 2.0 * numpy.sin(x))\nplt.title(\"River Sine\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.axis([0, 4*numpy.pi, -2, 2])\nplt.show()",
"We need to calculate the function $f(x)$'s arc-length from $[0, 4 \\pi]$\n$$L = \\int_0^{4 \\pi} \\sqrt{1 + |f'(x)|^2} dx$$\nIn general need numerical quadrature.\nNon-Linear population growth\nLotka-Volterra Predator-Prey model\n$$\\frac{d R}{dt} = R \\cdot (a - b \\cdot F)$$\n$$\\frac{d F}{dt} = F \\cdot (c \\cdot R + d)$$\n\nWhere are the steady states?\nHow do we solve the initial value problem?\nHow do we understand the non-linear dynamics?\nHow do we evaluate whether this is a good model?\n\nInterpolation and Data Fitting\nFinding trends in real data represented without a closed form (analytical form).\nSunspot counts",
"data = numpy.loadtxt(\"./data/sunspot.dat\")\ndata.shape\nplt.plot(data[:, 0], data[:, 1])\nplt.xlabel(\"Year\")\nplt.ylabel(\"Number\")\nplt.title(\"Number of Sunspots\")\nplt.show()",
"Why Python?\n(Based on Jake Vanderplas and extended)\nC, C++, Fortran\nPros:\n\nPerformance and legacy computing codes available\n\nCons:\n\nSyntax not optimized for casual programming\nNo interactive facilities\nDifficult visualization, text processing, etc.\n\nIDL, Matlab, Mathematica, etc.\nPros:\n\nInteractive with easy visualization tools\nExtensive scientific and engineering libraries available\n\nCons:\n\nCostly and proprietary\nUnpleasant for large-scale computing and non-mathematical tasks\n\nPython\nPros:\n\nPython is free (BSD license) and highly portable (Windows, Mac OS X, Linux, etc.)\nInteractive interpreter\nReadability\nSimple\nExtensive documentation\nMemory management is (mostly) transparent\nClean and object-oriented\nBuilt-in types\n\nPros:\n\nComprehensive standard library\nWell-established 3rd-party packages (NumPy, SciPy, matplotlib, etc.)\nEasily wraps existing legacy code in C, C++ and Fortran\nPython mastery is marketable\nScalability\nInteractive experimentation\nCode can be one-line scripts or million-line projects\nUsed by novices and full-time professionals alike\n\nCons:\n\nCan be slow\nPackaging system is a bit crufty\nToo many Monty Python jokes (not really a con)\n\nIPython Notebooks\nThe notebook environment gives us a convenient means for working with code while annotating it. We will only cover the key concepts here and hope that you will explore on your own the environments.\nToolbar\n\nNotebooks are modal, they have an edit mode (editing cells) and command mode.\nHighly recommend taking the tour and checking out the help menu \n\nContent types\n\nMarkdown\nLaTeX $x^2$\nPython\nNumPy, SciPy, and other packages\n\nObtaining the Notebooks\nAll notebooks are found on github.\nHighly recommend obtaining a github account if you do not already have one. Will allow you start to become comfortable with git.\nClone the repository\n$> git clone git://github.com/mandli/intro-numerical-methods\nPull in new changes\n$> git pull\nPush new changes (you do not have permission to do this\n$> git push\nAlso note that you can watch what changes were made and submit issues to the github project page if you find mistakes (PLEASE DO THIS!).\nInstallation\nA few options\n 1. Install on your own machine\n 2. Use a cloud service\nYour own machine\nThe easiest way to install all the components you will need for the class is to use Continuum Analytics' Anaconda distribution. We will be using python 2.7.x for all in class demos and homework so I strongly suggest you do not get the Python 3.4 version.\nAlternatives to using Anaconda also exist in the form of Enthought's Canopy distribution which provides all the tools you will need as well along with an IDE (development environment).\nThe \"cloud\"\nInstead of running things locally on your machine there are a number of cloud services that you are welcome to use in order to get everything running easily.\n 1. Sage-Math-Cloud - Create an account on Sage-Math-Cloud and interact with python via the provided terminal or Ipython notebook inteface.\n 1. Wakari - Continuum also has a free cloud service called Wakari that you can sign up for which provides an Anaconda installation along with similar tools to Sage-Math-Cloud."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hande-qmc/hande | tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb | lgpl-2.1 | [
"This demonstration shows how CCMC [1] data (analysis) results can be analysed in a more customised way. \nThis applies to FCIQMC [2] as well.",
"from pyhande.data_preparing.hande_ccmc_fciqmc import PrepHandeCcmcFciqmc\nfrom pyhande.extracting.extractor import Extractor\nfrom pyhande.error_analysing.blocker import Blocker\nfrom pyhande.error_analysing.hybrid_ana import HybridAna\nfrom pyhande.results_viewer.get_results import get_results, analyse_data",
"Part A\nFor now, still using the default, quick get_results function but this time specify merge_type to not merging\n(no effect here as calculations are independent, the default is merge using UUIDs btw),\nthe analyser to hybrid [3] (not blocking [4] as by default) and while we don't specify analysis start MC iterations,\nwe specify that the MSER find starting iteration function should be used to automatically find them\n(the default is 'blocking' for the blocking find starting iteration function).",
"results = get_results([\"data/0.01_ccsd.out.gz\", \"data/0.002_ccsd.out.gz\"], merge_type='no', analyser='hybrid', start_its='mser')",
"The summary table shows the analysed data by the analyser.\nThe hybrid analyser analyses the instantaneous projected energy (as prepared by the preparator object).",
"results.summary",
"The hybrid analyser's output can be viewed.",
"results.analyser.opt_block\n\nprint(results.analyser.start_its) # Used starting iterations, found using MSER find starting iteration function.\nprint(results.analyser.end_its) # Used end iterations, the last iteration by default.",
"Part B\nNow, we don't use get_results to get the results object but define the extractor, preparator and analyser objects ourselves.\nEven though it doesn't have an effect here as there is no calculation to merge, we state that we want to merge using\nthe 'legacy' way, i.e. don't use UUID for merging but simply determine whether iterations from one output file to the next\n(order matters here) are consecutive. If shift is already varying across that continuation, don't merge if 'shift_damping'\ndiffers from one output file to the next ('md_shift' specifies that this restriction only applies when shift is already varying,\notherwise use 'md_always' for this restriction to always hold).\nSince no merge is possibly, these options are ignored and just shown here for demonstration purposes.",
"extra = Extractor(merge={'type': 'legacy', 'md_shift': ['qmc:shift_damping'], 'shift_key': 'Shift'})",
"Define preparator object. It contains the hard coded mapping of column name meaning to column name, i.e. 'ref_key' : 'N_0,\nfor the case of HANDE CCMC/FCIQMC. If you use a different package, you'll need to create your own preparator class.",
"prep = PrepHandeCcmcFciqmc()",
"Define analyser. Use class method inst_hande_ccmc_fciqmc to pre-set what should be analysed (inst. projected energy), name\nof iteration key ('iterations'), etc.\nUse 'blocking' start iteration finder and specify that a graph should be shown by the start iteration finder.",
"ana = HybridAna.inst_hande_ccmc_fciqmc(start_its = 'blocking', find_start_kw_args={'show_graph': True})",
"Now, we can execute those three objects. 'analyse_data' is a handy helper to call their .exe() methods.\nFor each calculation, a graph is shown by the find starting iteration method.",
"results2 = analyse_data([\"data/0.01_ccsd.out.gz\", \"data/0.002_ccsd.out.gz\"], extra, prep, ana)",
"Have used different starting iteration finder, so these will be different.",
"results2.analyser.start_its",
"But results are comparable.",
"results2.summary_pretty",
"But what if we want to analyse the shift instead of the instantaneous projected energy with hybrid analysis?\n-> BEWARE this is untested. Only used for illustration here!\nDon't use class method for analyser instantiation anymore.\nKeep default settings (find start iterations using 'mser' etc).\nNote that when doing blocking [4], not hybrid [3], the order is a bit different, the columns to be analysed are 'cols'\nfor blocking [4] and 'hybrid_col' for hybrid analysis [3]. You might need to define both for a given analyser if you\nare using the starting iteration function of the other type ('blocking' with start_its='mser' or 'hybrid' with\nstart_its='blocking'). Consult the docstring.",
"ana2 = HybridAna('iterations', 'Shift', 'replica id')\n\nresults3 = analyse_data([\"data/0.01_ccsd.out.gz\", \"data/0.002_ccsd.out.gz\"], extra, prep, ana2)\n\nresults3.summary_pretty",
"[1] - A. J. W. Thom (2010), Phys. Rev. Lett. 105, 236004.\n[2] - G. H. Booth et al. (2009), J. Chem. Phys. 131, 054106; Cleland, et al. (2010), J. Chem. Phys. 132, 041103.\n[3] - T. Ichibha et al., [arXiv:1904.09934 [physics.comp-ph]].\n[4] - H. Flyvbjerg, H. G. Petersen (1989), J. Chem. Phys. 91, 461 (1989)."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ComputationalModeling/spring-2017-danielak | past-semesters/fall_2016/day-by-day/day15-Schelling-1-dimensional-segregation-day2/Day_15_Pre_Class_Notebook.ipynb | agpl-3.0 | [
"Getting ready to implement the Schelling model\nGoal for this assignment\nThe goal of this assignment is to finish up the two functions that you started in class on the first day of this project, to ensure that you're ready to hit the ground running when you get back to together with your group. \nYou are welcome to work with your group on this pre-class assignment - just make sure to list who you worked with below. Also, everybody needs to turn in their own solutions!\nYour name\n// put your name here!\nFunction 1: Creating a game board\nFunction 1: Write a function that creates a one-dimensional game board composed of agents of two different types (0 and 1, X and O, stars and pluses... whatever you want), where the agents are assigned to spots randomly with a 50% chance of being either type. As arguments to the function, take in (1) the number of spots in the game board (setting the default to 32) and (2) a random seed that you will use to initialize the board (again with some default number), and return your game board. (Hint: which makes more sense to describe the game board, a list or a Numpy array? What are the tradeoffs?) Show that your function is behaving correctly by printing out the returned game board.",
"# Put your code here, using additional cells if necessary.\n\n\n",
"Function 2: deciding if an agent is happy\nWrite a function that takes the game board generated by the function you wrote above and determines whether an agent at position i in the game board of a specified type is happy for a game board of any size and a neighborhood of size N (i.e., from position i-N to i+N), and returns that information. Make sure to check that position i is actually inside the game board (i.e., make sure the request makes sense), and ensure that it behaves correctly for agents near the edges of the game board. Show that your function is behaving correctly by giving having it check every position in the game board you generated previously, and decide whether the agent in each spot is happy or not. Verify by eye that it's behaving correctly. (Hint: You're going to use this later, when you're trying to decide where to put an agent. Should you write the function assuming that the agent is already in the board, or that you're testing to see whether or not you've trying to decide whether to put it there?)",
"# Put your code here, using additional cells if necessary.\n\n\n",
"Assignment wrapup\nPlease fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!",
"from IPython.display import HTML\nHTML(\n\"\"\"\n<iframe \n\tsrc=\"https://goo.gl/forms/M7YCyE1OLzyOK7gH3?embedded=true\" \n\twidth=\"80%\" \n\theight=\"1200px\" \n\tframeborder=\"0\" \n\tmarginheight=\"0\" \n\tmarginwidth=\"0\">\n\tLoading...\n</iframe>\n\"\"\"\n)",
"Congratulations, you're done!\nSubmit this assignment by uploading it to the course Desire2Learn web page. Go to the \"Pre-class assignments\" folder, find the dropbox link for Day 15, and upload it there.\nSee you in class!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
UWSEDS/LectureNotes | Fall2018/09_UnitTests/unit-tests.ipynb | bsd-2-clause | [
"import numpy as np",
"Unit Tests\nOverview and Principles\nTesting is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test. \nThere are two parts to writing tests.\n1. invoking the code under test so that it is exercised in a particular way;\n1. evaluating the results of executing code under test to determine if it behaved as expected.\nThe collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.\nFor dynamical languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.\nTest cases can be of several types. Below are listed some common classifications of test cases.\n- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.\n- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.\n- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurrs.\n- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.\nAnother principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.\nA best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.\nExamples of Test Cases\nThis section presents examples of test cases. The code under test is the calculation of entropy.\nEntropy of a set of probabilities\n$$\nH = -\\sum_i p_i \\log(p_i)\n$$\nwhere $\\sum_i p_i = 1$.",
"import numpy as np\n# Code Under Test\ndef entropy(ps):\n items = ps * np.log(ps)\n return np.abs(-np.sum(items))\n\n# Smoke test\nentropy([0.2, 0.8])",
"Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.\nWhat is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!",
"# One-shot test. Need to know the correct answer.\nentries = [\n [0, [1]],\n]\n\nfor entry in entries:\n ans = entry[0]\n prob = entry[1]\n if not np.isclose(entropy(prob), ans):\n print(\"Test failed!\")\nprint (\"Test completed!\")",
"Question: What is an example of another one-shot test? (Hint: You need to know the expected result.)\nOne edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1.",
"# Edge test. This is something that should cause an exception.\nentropy([-0.5])",
"Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \\frac{1}{n}$.\n$$\nH = -\\sum_{i=1}^{n} p_i \\log(p_i) \n= -\\sum_{i=1}^{n} \\frac{1}{n} \\log(\\frac{1}{n}) \n= n (-\\frac{1}{n} \\log(\\frac{1}{n}) )\n= -\\log(\\frac{1}{n})\n$$\nFor example, entropy([0.5, 0.5]) should be $-log(0.5)$.",
"# Pattern test\ndef test_equal_probabilities(n):\n prob = 1.0/n\n ps = np.repeat(prob , n)\n if np.isclose(entropy(ps), -np.log(prob)):\n print(\"Worked!\")\n else:\n import pdb; pdb.set_trace()\n print (\"Bad result.\")\n \n \n# Run a test\ntest_equal_probabilities(100000)",
"You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.\nUnittest Infrastructure\nThere are several reasons to use a test infrastructure:\n- If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code.\n- The infrastructure provides a uniform way to report test results, and to handle test failures.\n- A test infrastructure can tell you about coverage so you know what tests to add.\nWe'll be using the unittest framework. This is a separate Python package. Using this infrastructure, requires the following:\n1. import the unittest module\n1. define a class that inherits from unittest.TestCase\n1. write methods that run the code to be tested and check the outcomes.\nThe last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with \"test\".\nSecond, the \"test methods\" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.",
"import unittest\n\n# Define a class in which the tests will run\nclass UnitTests(unittest.TestCase):\n\n # Each method in the class to execute a test\n def test_success(self):\n self.assertEqual(1, 1)\n \n def test_success1(self):\n self.assertTrue(1 == 1)\n\n def test_failure(self):\n self.assertLess(1, 2)\n \nsuite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)\n_ = unittest.TextTestRunner().run(suite)\n\n\n# Function the handles test loading\n#def test_setup(argument ?):\n \n",
"Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedidogical reasons. It is NOT not something you should do in practice, except as an intermediate exploratory approach. \nAs expected, the first test passes, but the second test fails.\nExercise\n\nRewrite the above one-shot test for entropy using the unittest infrastructure.",
"# Implementating a pattern test. Use functions in the test.\nimport unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \n def test_equal_probability(self):\n def test(count):\n \"\"\"\n Invokes the entropy function for a number of values equal to count\n that have the same probability.\n :param int count:\n \"\"\"\n raise RuntimeError (\"Not implemented.\")\n #\n test(2)\n test(20)\n test(200)\n\n#test_setup(TestEntropy)\n\nimport unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \"\"\"Write the full set of tests.\"\"\"",
"Testing For Exceptions\nEdge test cases often involves handling exceptions. One approach is to code this directly.",
"import unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \n def test_invalid_probability(self):\n try:\n entropy([0.1, 0.5])\n self.assertTrue(False)\n except ValueError:\n self.assertTrue(True)\n \n#test_setup(TestEntropy)",
"unittest provides help with testing exceptions.",
"import unittest\n\n# Define a class in which the tests will run\nclass TestEntropy(unittest.TestCase):\n \n def test_invalid_probability(self):\n with self.assertRaises(ValueError):\n entropy([0.1, 0.5])\n \nsuite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)\n_ = unittest.TextTestRunner().run(suite)\n",
"Test Files\nAlthough I presented the elements of unittest in a notebook. your tests should be in a file. If the name of module with the code under test is foo.py, then the name of the test file should be test_foo.py.\nThe structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.\nDiscussion\nQuestion: What tests would you write for a plotting function?\nTest Driven Development\nStart by writing the tests. Then write the code.\nWe illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.",
"import unittest\n\n# Define a class in which the tests will run\nclass TestEntryopy(unittest.TestCase):\n \n def test_oneshot(self):\n self.assertEqual(geomean([1,1]), 1)\n \n def test_oneshot2(self):\n self.assertEqual(geomean([3, 3, 3]), 3)\n \n#test_setup(TestGeomean)\n\n#def geomean(argument?):\n# return ?",
"Exercise\n\nCreate a python module entropy.py with the entropy function\nCreate a python module test_entropy.py with the test codes\nTry running python test_entropy.py.\nTry using nosetests to get coverage information (nosetests --with-coverage test_entropy.py). \nYou can install nosetests with conda install nose\nYou may have to install the coverage module (look at https://stackoverflow.com/questions/14488601/how-to-fix-python-nose-coverage-not-available-unable-to-import-coverage-module).\n\nOther infrastructures\n\npytest\nnose\nUse binary functions that being with \"test\"\n\nReferences\nhttps://www.youtube.com/watch?v=GEqM9uJi64Q (Pydata 2015)\nhttps://www.youtube.com/watch?v=yACtdj1_IxE (Pycon 2017)\nThe first talk mentions some packages:\nengarde - https://github.com/TomAugspurger/engarde\nHypothesis - https://hypothesis.readthedocs.io/en/latest/\nFeature Forge - https://github.com/machinalis/featureforge\nDetlef Nauck talk: \nhttp://ukkdd.org.uk/2017/info/talks/nauck.pdf\nHe also had a list of R tools but I could not find the slides form the talk I saw.\nTest Driven Data Analysis:\nhttps://www.youtube.com/watch?v=TGwZnZYg0jw\nProfiling for Pandas:\nhttps://github.com/pandas-profiling/pandas-profiling"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
xmnlab/notebooks | jupyter/Introducción.ipynb | mit | [
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Introducción-a-Jupyter-Notebook\" data-toc-modified-id=\"Introducción-a-Jupyter-Notebook-1\"><span class=\"toc-item-num\">1 </span>Introducción a Jupyter Notebook</a></div><div class=\"lev2 toc-item\"><a href=\"#¿Qué-es-Jupyter-Notebook?\" data-toc-modified-id=\"¿Qué-es-Jupyter-Notebook?-11\"><span class=\"toc-item-num\">1.1 </span>¿Qué es Jupyter Notebook?</a></div><div class=\"lev2 toc-item\"><a href=\"#¿Qué-es-un-notebook-science?\" data-toc-modified-id=\"¿Qué-es-un-notebook-science?-12\"><span class=\"toc-item-num\">1.2 </span>¿Qué es un notebook science?</a></div><div class=\"lev2 toc-item\"><a href=\"#Estructura-del-notebook-science\" data-toc-modified-id=\"Estructura-del-notebook-science-13\"><span class=\"toc-item-num\">1.3 </span>Estructura del notebook science</a></div><div class=\"lev2 toc-item\"><a href=\"#Comando-mágicos\" data-toc-modified-id=\"Comando-mágicos-14\"><span class=\"toc-item-num\">1.4 </span>Comando mágicos</a></div><div class=\"lev3 toc-item\"><a href=\"#Ejecutar-códigos-con-otros-kernels\" data-toc-modified-id=\"Ejecutar-códigos-con-otros-kernels-141\"><span class=\"toc-item-num\">1.4.1 </span>Ejecutar códigos con otros kernels</a></div><div class=\"lev2 toc-item\"><a href=\"#Cargar-datos\" data-toc-modified-id=\"Cargar-datos-15\"><span class=\"toc-item-num\">1.5 </span>Cargar datos</a></div><div class=\"lev2 toc-item\"><a href=\"#Gráficos\" data-toc-modified-id=\"Gráficos-16\"><span class=\"toc-item-num\">1.6 </span>Gráficos</a></div><div class=\"lev2 toc-item\"><a href=\"#Widgets\" data-toc-modified-id=\"Widgets-17\"><span class=\"toc-item-num\">1.7 </span>Widgets</a></div><div class=\"lev2 toc-item\"><a href=\"#Help\" data-toc-modified-id=\"Help-18\"><span class=\"toc-item-num\">1.8 </span>Help</a></div><div class=\"lev2 toc-item\"><a href=\"#LaTeX\" data-toc-modified-id=\"LaTeX-19\"><span class=\"toc-item-num\">1.9 </span>LaTeX</a></div><div class=\"lev2 toc-item\"><a href=\"#Instalación\" data-toc-modified-id=\"Instalación-110\"><span class=\"toc-item-num\">1.10 </span>Instalación</a></div><div class=\"lev1 toc-item\"><a href=\"#Referencias\" data-toc-modified-id=\"Referencias-2\"><span class=\"toc-item-num\">2 </span>Referencias</a></div>\n\n# Introducción a Jupyter Notebook\n\n## ¿Qué es Jupyter Notebook?\n\n> El IPython Notebook es ahora conocido como el Jupyter Notebook. Es un entorno computacional interactivo, en el que se pueden combinar ejecución de código, texto enriquecido, matemáticas, gráficos y contenidos multimedia [1].\n\n## ¿Qué es un notebook science?\n\n> El Open Notebook Science es la práctica de hacer que el registro primario completo de un proyecto de investigación este disponible públicamente, en línea y tal como está registrado. Esto consiste en colocar el notebook personal, de laboratorio o del investigador en línea junto con todos los datos crudos y procesados así como cualquier material asociado, tal como se genera este material. El enfoque puede resumirse por el lema \"ninguna información privilegiada\". Este es el extremo lógico de enfoques transparentes a la investigación e incluye explícitamente la puesta a disposición de experimentos inéditos fallidos, menos importantes y de otra índole; llamados \"datos oscuros\". 
La práctica de la ciencia del cuaderno abierto, aunque no es la norma de la comunidad académica, ha ganado considerable atención en la investigación reciente general, and peer-reviewed y de medios de comunicación revisados como parte de una tendencia general hacia enfoques más abiertos en la práctica de la investigación y publicación. La Ciencia del cuaderno abierto, por tanto, puede ser descrito como parte de un más amplio movimiento de ciencia abierta que incluye la promoción y adopción de la publicación de acceso abierto, datos abiertos, datos de crowdsourcing y ciencia ciudadana. Está inspirado en parte por el éxito del software de código abierto y se basa en muchas de sus ideas [2].\n\n## Estructura del notebook science\n\nUna manera sencilla de probar Jupyter sin tenerlo instalado en la computadora \nes por medio de un servicio web gratuito:\n\nhttps://try.jupyter.org/\n\n\nBasicamente el notebook está compuesto por celdas [3]. \nHay algunos tipos de celdas:\n\n* texto;\n* código;\n* resultado;\n\nEn la celda de texto podemos utilizar formatación con Markdown. Ej:\n\n```md\n\n**negrito**;\n*itálico*;\n***negrito y itálico***;\n```\n\nA parte de formatación de texto, con Markdown podemos inserir imágenes:\n\n\n\n```md\n\n\n```\n\n```python\nprint(a+b)\n```\n\n## Comando mágicos\n\nLos comando mágicos de Jupyter son comando própios del entorno Jupyter,\ny empiezan por % o %%. \n\nEjemplo de algunos comandos mágicos [4]:\n\n* Ejecutar códigos python (%run)\n* Inserir código desde archivo (%load)\n* Listar todas variables del escopo global (%who)\n* Visualizar tiempo de ejecución (%%time)\n* Visualizar tiempo de ejecución (%%timeit)\n* Profile (%prun)\n* Depuración (%pdb)\n* Comandos shell (!)*",
"a = 1\nb = 2.2\nc = 3\nd = 'a'\n\n%who\n\ndef f1(n):\n for x in range(n):\n pass\n\n%%time \nf1(100)\n\n%%timeit\nf1(100)",
"Ejecutar códigos con otros kernels\nEn la celda de código también es posible ejecutar códigos de otras lenguajes.\nA continuación, algunas comandos mágicos para ejecutar comandos de otros \nlenguajes:\n\n%%bash\n%%HTML\n%%python2\n%%python3\n%%ruby\n%%perl",
"%%bash\n\nls -lah",
"Cargar datos",
"import pandas as pd\n\ndf = pd.read_csv('data/kaggle-titanic.csv')\n\ndf.head()\n\ndf.info()\n\ndf.describe()",
"Gráficos",
"from matplotlib import pyplot as plt\n\ndf.Survived.value_counts().plot(kind='bar')\nplt.show()\n\nimport pixiedust\n\ndisplay(df)",
"Widgets",
"import numpy as np\n\nπ = np.pi\n\ndef show_wave(A, f, φ):\n ω = 2*π*f\n t = np.linspace(0, 1, 10000)\n f = A*np.sin(ω*t+φ)\n \n plt.grid(True)\n plt.plot(t, f)\n plt.show()\n\nshow_wave(A=5, f=5, φ=2)\n\nimport ipywidgets as widgets\nfrom IPython.display import display\n\nparams = dict(value=1, min=1, max=100, step=1, continuous_update=False)\n\nwA = widgets.IntSlider(**params)\nwf = widgets.IntSlider(**params)\nwφ = widgets.IntSlider(value=0, min=0, max=10, step=1, continuous_update=False)\n\nwidgets.interact(show_wave, A=wA, f=wf, φ=wφ);",
"Para más informaciones sobre ipywidgets, consulte el manual de usuario [6].\nHelp\nPara ver la documentación de una determinada función o clase puedes ejecutar el comando:\n?str.replace()\nEste comando abrirá una sección en la página con la documentación deseada.\nOtro modo de ver la documentación es usando la función help, ej.:\nhelp(str.replace)",
"?str.replace()\n\nhelp(str.replace)",
"LaTeX\nCon Jupyter Notebook, también se puede escribir en las celdas de texto en formato LaTeX.\nPor ejemplo:\n```latex\n$$\n\\begin{equation}\n\\omega = \\alpha + \\beta + \\sum_{n=1}^{\\infty} 2^{-n}\n\\end{equation}\n$$\n```\nY su resutado es:\n$$\n\\begin{equation}\n\\omega = \\alpha + \\beta + \\sum_{n=1}^{\\infty} 2^{-n}\n\\end{equation}\n$$\nPara más informaciones sobre el uso de LaTeX en Jupyter, pueden ver en [6].\nInstalación\nPara utilizar Jupyter notebook, necesitamos instalarlo. \nLa mejor manera para instalar librerías es hacerlo en un entorno separado \n(del entorno python del sistema operativo). Una distribución Python que está\nenfocada en temas científicos es Anaconda y se puede bajar su instalador en https://www.continuum.io/downloads [7].\nEn el entorno principal (root) de Anaconda, Jupyter ya viene instalado. Caso empiecemos un entorno nuevo, podemos instalar Jupyter con el comando:\nsh\nconda install jupyter\nReferencias\n\n[1] http://ipython.org/notebook.html\n[2] https://es.wikipedia.org/wiki/Open_notebook_science\n[3] https://es.slideshare.net/jileon/introduccion-a-jupyter-antes-i-python-notebook\n[4] https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/\n[5] http://jupyter.readthedocs.io/en/latest/install.html\n[6] http://ipywidgets.readthedocs.io/en/latest/user_guide.html\n[7] http://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/latex_envs/README.html\n\nTutoriales\n\nhttps://www.youtube.com/watch?v=nktmFUFWpO0\nhttps://www.youtube.com/watch?v=fTRkm3d6ebw\nhttps://pybonacci.es/2013/05/16/entrevista-a-fernando-perez-creador-de-ipython/\nhttps://jupyter-notebook-beginner-guide.readthedocs.io"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yl565/statsmodels | examples/notebooks/ols.ipynb | bsd-3-clause | [
"Ordinary Least Squares",
"%matplotlib inline\n\nfrom __future__ import print_function\nimport numpy as np\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom statsmodels.sandbox.regression.predstd import wls_prediction_std\n\nnp.random.seed(9876789)",
"OLS estimation\nArtificial data:",
"nsample = 100\nx = np.linspace(0, 10, 100)\nX = np.column_stack((x, x**2))\nbeta = np.array([1, 0.1, 10])\ne = np.random.normal(size=nsample)",
"Our model needs an intercept so we add a column of 1s:",
"X = sm.add_constant(X)\ny = np.dot(X, beta) + e",
"Fit and summary:",
"model = sm.OLS(y, X)\nresults = model.fit()\nprint(results.summary())",
"Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:",
"print('Parameters: ', results.params)\nprint('R2: ', results.rsquared)",
"OLS non-linear curve but linear in parameters\nWe simulate artificial data with a non-linear relationship between x and y:",
"nsample = 50\nsig = 0.5\nx = np.linspace(0, 20, nsample)\nX = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))\nbeta = [0.5, 0.5, -0.02, 5.]\n\ny_true = np.dot(X, beta)\ny = y_true + sig * np.random.normal(size=nsample)",
"Fit and summary:",
"res = sm.OLS(y, X).fit()\nprint(res.summary())",
"Extract other quantities of interest:",
"print('Parameters: ', res.params)\nprint('Standard errors: ', res.bse)\nprint('Predicted values: ', res.predict())",
"Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.",
"prstd, iv_l, iv_u = wls_prediction_std(res)\n\nfig, ax = plt.subplots(figsize=(8,6))\n\nax.plot(x, y, 'o', label=\"data\")\nax.plot(x, y_true, 'b-', label=\"True\")\nax.plot(x, res.fittedvalues, 'r--.', label=\"OLS\")\nax.plot(x, iv_u, 'r--')\nax.plot(x, iv_l, 'r--')\nax.legend(loc='best');",
"OLS with dummy variables\nWe generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.",
"nsample = 50\ngroups = np.zeros(nsample, int)\ngroups[20:40] = 1\ngroups[40:] = 2\n#dummy = (groups[:,None] == np.unique(groups)).astype(float)\n\ndummy = sm.categorical(groups, drop=True)\nx = np.linspace(0, 20, nsample)\n# drop reference category\nX = np.column_stack((x, dummy[:,1:]))\nX = sm.add_constant(X, prepend=False)\n\nbeta = [1., 3, -3, 10]\ny_true = np.dot(X, beta)\ne = np.random.normal(size=nsample)\ny = y_true + e",
"Inspect the data:",
"print(X[:5,:])\nprint(y[:5])\nprint(groups)\nprint(dummy[:5,:])",
"Fit and summary:",
"res2 = sm.OLS(y, X).fit()\nprint(res2.summary())",
"Draw a plot to compare the true relationship to OLS predictions:",
"prstd, iv_l, iv_u = wls_prediction_std(res2)\n\nfig, ax = plt.subplots(figsize=(8,6))\n\nax.plot(x, y, 'o', label=\"Data\")\nax.plot(x, y_true, 'b-', label=\"True\")\nax.plot(x, res2.fittedvalues, 'r--.', label=\"Predicted\")\nax.plot(x, iv_u, 'r--')\nax.plot(x, iv_l, 'r--')\nlegend = ax.legend(loc=\"best\")",
"Joint hypothesis test\nF test\nWe want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \\times \\beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups:",
"R = [[0, 1, 0, 0], [0, 0, 1, 0]]\nprint(np.array(R))\nprint(res2.f_test(R))",
"You can also use formula-like syntax to test hypotheses",
"print(res2.f_test(\"x2 = x3 = 0\"))",
"Small group effects\nIf we generate artificial data with smaller group effects, the T test can no longer reject the Null hypothesis:",
"beta = [1., 0.3, -0.0, 10]\ny_true = np.dot(X, beta)\ny = y_true + np.random.normal(size=nsample)\n\nres3 = sm.OLS(y, X).fit()\n\nprint(res3.f_test(R))\n\nprint(res3.f_test(\"x2 = x3 = 0\"))",
"Multicollinearity\nThe Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.",
"from statsmodels.datasets.longley import load_pandas\ny = load_pandas().endog\nX = load_pandas().exog\nX = sm.add_constant(X)",
"Fit and summary:",
"ols_model = sm.OLS(y, X)\nols_results = ols_model.fit()\nprint(ols_results.summary())",
"Condition number\nOne way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:",
"norm_x = X.values\nfor i, name in enumerate(X):\n if name == \"const\":\n continue\n norm_x[:,i] = X[name]/np.linalg.norm(X[name])\nnorm_xtx = np.dot(norm_x.T,norm_x)",
"Then, we take the square root of the ratio of the biggest to the smallest eigen values.",
"eigs = np.linalg.eigvals(norm_xtx)\ncondition_number = np.sqrt(eigs.max() / eigs.min())\nprint(condition_number)",
"Dropping an observation\nGreene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:",
"ols_results2 = sm.OLS(y.ix[:14], X.ix[:14]).fit()\nprint(\"Percentage change %4.2f%%\\n\"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))",
"We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.",
"infl = ols_results.get_influence()",
"In general we may consider DBETAS in absolute value greater than $2/\\sqrt{N}$ to be influential observations",
"2./len(X)**.5\n\nprint(infl.summary_frame().filter(regex=\"dfb\"))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/cloudml-samples | notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb | apache-2.0 | [
"XGBoost Training on AI Platform\nThis notebook uses the Census Income Data Set to demonstrate how to train a model on Ai Platform.\nHow to bring your model to AI Platform\nGetting your model ready for training can be done in 3 steps:\n1. Create your python model file\n 1. Add code to download your data from Google Cloud Storage so that AI Platform can use it\n 1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model\n1. Prepare a package\n1. Submit the training job\nPrerequisites\nBefore you jump in, let’s cover some of the different tools you’ll be using to get online prediction up and running on AI Platform. \nGoogle Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.\nAI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.\nGoogle Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.\nCloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.\nPart 0: Setup\n\nCreate a project on GCP\nCreate a Google Cloud Storage Bucket\nEnable AI Platform Training and Prediction and Compute Engine APIs\nInstall Cloud SDK\n[Optional] Install XGBoost\n[Optional] Install scikit-learn\n[Optional] Install pandas\n[Optional] Install Google API Python Client\n\nThese variables will be needed for the following steps.\n* TRAINER_PACKAGE_PATH <./census_training> - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.\n* MAIN_TRAINER_MODULE <census_training.train> - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name>\n* JOB_DIR <gs://$BUCKET_ID/xgb_job_dir> - The path to a Google Cloud Storage location to use for job output.\n* RUNTIME_VERSION <1.9> - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.\n* PYTHON_VERSION <3.5> - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.\n Replace: \n* PROJECT_ID <YOUR_PROJECT_ID> - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.\n* BUCKET_ID <YOUR_BUCKET_ID> - with the bucket id you created above.\n* JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir> - with the bucket id you created above.\n* REGION <REGION> - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.",
"%env PROJECT_ID <YOUR_PROJECT_ID>\n%env BUCKET_ID <YOUR_BUCKET_ID>\n%env REGION <REGION>\n%env TRAINER_PACKAGE_PATH ./census_training\n%env MAIN_TRAINER_MODULE census_training.train\n%env JOB_DIR <gs://YOUR_BUCKET_ID/xgb_job_dir>\n%env RUNTIME_VERSION 1.9\n%env PYTHON_VERSION 3.5\n! mkdir census_training",
"The data\nThe Census Income Data Set that this sample\nuses for training is provided by the UC Irvine Machine Learning\nRepository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/census/data/. \n\nTraining file is adult.data.csv\nEvaluation file is adult.test.csv (not used in this notebook)\n\nNote: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.\nDisclaimer\nThis dataset is provided by a third party. Google provides no representation,\nwarranty, or other guarantees about the validity or any other aspects of this dataset.\nPart 1: Create your python model file\nFirst, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a XGBoost model. However, there are two key differences:\n1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.\n1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.\nThe code in this file loads the data into a pandas DataFrame and pre-processes the data with scikit-learn. This data is then loaded into a DMatrix and used to train a model. Lastly, the model is saved to a file that can be uploaded to AI Platform's prediction service.\nREPLACE Line 18: BUCKET_ID = 'true-ability-192918' with your GCS BUCKET_ID\nNote: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.",
"%%writefile ./census_training/train.py\n# [START setup]\nimport datetime\nimport os\nimport subprocess\n\nfrom sklearn.preprocessing import LabelEncoder\nimport pandas as pd\nfrom google.cloud import storage\nimport xgboost as xgb\n\n\n# TODO: REPLACE 'BUCKET_CREATED_ABOVE' with your GCS BUCKET_ID\nBUCKET_ID = 'torryyang-xgb-models'\n# [END setup]\n\n# ---------------------------------------\n# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).\n# AI Platform will then be able to use the data when training your model.\n# ---------------------------------------\n# [START download-data]\ncensus_data_filename = 'adult.data.csv'\n\n# Public bucket holding the census data\nbucket = storage.Client().bucket('cloud-samples-data')\n\n# Path to the data inside the public bucket\ndata_dir = 'ml-engine/census/data/'\n\n# Download the data\nblob = bucket.blob(''.join([data_dir, census_data_filename]))\nblob.download_to_filename(census_data_filename)\n# [END download-data]\n\n# ---------------------------------------\n# This is where your model code would go. Below is an example model using the census dataset.\n# ---------------------------------------\n\n# [START define-and-load-data]\n\n# these are the column labels from the census data files\nCOLUMNS = (\n 'age',\n 'workclass',\n 'fnlwgt',\n 'education',\n 'education-num',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'capital-gain',\n 'capital-loss',\n 'hours-per-week',\n 'native-country',\n 'income-level'\n)\n# categorical columns contain data that need to be turned into numerical values before being used by XGBoost\nCATEGORICAL_COLUMNS = (\n 'workclass',\n 'education',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'native-country'\n)\n\n# Load the training census dataset\nwith open(census_data_filename, 'r') as train_data:\n raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)\n# remove column we are trying to predict ('income-level') from features list\ntrain_features = raw_training_data.drop('income-level', axis=1)\n# create training labels list\ntrain_labels = (raw_training_data['income-level'] == ' >50K')\n\n# [END define-and-load-data]\n\n# [START categorical-feature-conversion]\n# Since the census data set has categorical features, we need to convert\n# them to numerical values. \n# convert data in categorical columns to numerical values\nencoders = {col:LabelEncoder() for col in CATEGORICAL_COLUMNS}\nfor col in CATEGORICAL_COLUMNS:\n train_features[col] = encoders[col].fit_transform(train_features[col])\n# [END categorical-feature-conversion]\n\n# [START load-into-dmatrix-and-train]\n# load data into DMatrix object\ndtrain = xgb.DMatrix(train_features, train_labels)\n# train model\nbst = xgb.train({}, dtrain, 20)\n# [END load-into-dmatrix-and-train]\n\n# ---------------------------------------\n# 2. Export and save the model to GCS\n# ---------------------------------------\n# [START export-to-gcs]\n# Export the model to a file\nmodel = 'model.bst'\nbst.save_model(model)\n\n# Upload the model to GCS\nbucket = storage.Client().bucket(BUCKET_ID)\nblob = bucket.blob('{}/{}'.format(\n datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),\n model))\nblob.upload_from_filename(model)\n# [END export-to-gcs]",
"Part 2: Create Trainer Package\nBefore you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. You can find more info here",
"%%writefile ./census_training/__init__.py\n# Note that __init__.py can be an empty file.\n",
"Part 3: Submit Training Job\nNext we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags:\n\njob-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +\"%Y%m%d_%H%M%S\")\njob-dir - The path to a Google Cloud Storage location to use for job output.\npackage-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.\nmodule-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.\nregion - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.\nruntime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.\npython-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.\nscale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.\n\nNote: Check to make sure gcloud is set to the current PROJECT_ID",
"! gcloud config set project $PROJECT_ID",
"Submit the training job.",
"! gcloud ml-engine jobs submit training census_training_$(date +\"%Y%m%d_%H%M%S\") \\\n --job-dir $JOB_DIR \\\n --package-path $TRAINER_PACKAGE_PATH \\\n --module-name $MAIN_TRAINER_MODULE \\\n --region $REGION \\\n --runtime-version=$RUNTIME_VERSION \\\n --python-version=$PYTHON_VERSION \\\n --scale-tier BASIC",
"[Optional] StackDriver Logging\nYou can view the logs for your training job:\n1. Go to https://console.cloud.google.com/\n1. Select \"Logging\" in left-hand pane\n1. Select \"Cloud ML Job\" resource from the drop-down\n1. In filter by prefix, use the value of $JOB_NAME to view the logs\n[Optional] Verify Model File in GCS\nView the contents of the destination model folder to verify that model file has indeed been uploaded to GCS.\nNote: The model can take a few minutes to train and show up in GCS.",
"! gsutil ls gs://$BUCKET_ID/census_*",
"Next Steps:\nThe AI Platform online prediction service manages computing resources in the cloud to run your models. Check out the documentation pages that describe the process to get online predictions from these exported models using AI Platform."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/jax-cfd | notebooks/ml_model_inference_demo.ipynb | apache-2.0 | [
"! pip install -U -q jax-cfd[complete]==0.1.0\n\ndataset_name = 'kolmogorov_re_1000' #@param ['kolmogorov_re_1000', 'decaying', 'kolmogorov_re_4000'] {type: \"string\"}\n\n%time ! gsutil -m cp gs://gresearch/jax-cfd/public_eval_datasets/{dataset_name}/eval_*.nc /content\n\n%time ! gsutil -m cp gs://gresearch/jax-cfd/public_models/*.pkl /content\n\n! ls /content\n\n#@title Imports { form-width: \"30%\" }\n\nimport warnings\nwarnings.simplefilter('ignore')\n\nimport os\nimport functools\nimport pickle\n\nimport gin\nimport jax\nimport jax.numpy as jnp\nimport numpy as np\nimport haiku as hk\n\nimport xarray\nimport seaborn\nimport matplotlib.pyplot as plt\n\nimport jax_cfd.base as cfd\nimport jax_cfd.data as cfd_data\nimport jax_cfd.ml as ml\n\nmodel_builder = ml.model_builder\nmodel_utils = ml.model_utils\noptimizer_modules = ml.optimizer_modules\nphysics_specifications = ml.physics_specifications\n\n#@title Helper functions\n\nshape_structure = lambda tree: jax.tree_map(lambda x: x.shape, tree)\n\n\ndef xarray_open(path):\n return xarray.open_dataset(path, chunks={'time': '100MB'})\n\n\ndef strip_imports(s):\n out_lines = []\n for line in s.splitlines():\n if not line.startswith('import'):\n out_lines.append(line)\n return '\\n'.join(out_lines)",
"Selecting evaluation dataset",
"#@title Paths to evaluation datasets\n\nbase_path = '/content/'\n\nkolmogorov_re_1000 = {\n f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_64x64.nc')\n for i in [64, 128, 256, 512, 1024, 2048]\n}\ndecaying = {\n f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_64x64.nc')\n for i in [64, 128, 256, 512, 1024, 2048]\n}\nkolmogorov_re_4000 = {\n f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_128x128.nc')\n for i in [128, 256, 512, 1024, 2048, 4096]\n}\n\nall_datasets = {\n 'kolmogorov_re_1000': kolmogorov_re_1000,\n 'decaying': decaying,\n 'kolmogorov_re_4000': kolmogorov_re_4000,\n}\n\nreference_names = {\n 'kolmogorov_re_1000': 'baseline_2048x2048',\n 'decaying': 'baseline_2048x2048',\n 'kolmogorov_re_4000': 'baseline_4096x4096',\n}\n\n! ls /content/\n\n#@title Loading evaluation dataset {run: \"auto\"}\n\ndataset_paths = all_datasets[dataset_name]\ndatasets = {k: xarray_open(v) for k, v in dataset_paths.items()}\nreference_ds = datasets[reference_names[dataset_name]]\n\ngrid = cfd_data.xarray_utils.grid_from_attrs(reference_ds.attrs)\n\n#@title Selecting initial conditions and baseline trajectories.\n\nsample_id = 0\ntime_id = 0\nlength = 200 # length of the trajectory.\ninner_steps = 10 # since we deal with subsampled datasets\n\ninitial_conditions = tuple(\n reference_ds[velocity_name].isel(\n sample=slice(sample_id, sample_id + 1),\n time=slice(time_id, time_id + 1)\n ).values\n for velocity_name in cfd_data.xarray_utils.XR_VELOCITY_NAMES[:grid.ndim]\n)\n\ntarget_ds = reference_ds.isel(\n sample=slice(sample_id, sample_id + 1),\n time=slice(time_id, time_id + length))\n\n\ndatasets = {\n k: v.isel(sample=slice(sample_id, sample_id + 1),\n time=slice(time_id, time_id + length))\n for k, v in datasets.items()\n}",
"Selecting model checkpoint to load",
"class CheckpointState:\n \"\"\"Object to package up the state we load and restore.\"\"\"\n\n def __init__(self, **kwargs):\n for name, value in kwargs.items():\n setattr(self, name, value)\n\ncheckpoint_paths = {\n 'LI': \"/content/LI_ckpt.pkl\",\n 'LC': \"/content/LC_ckpt.pkl\",\n 'EPD': \"/content/EPD_ckpt.pkl\",\n}\n\n#@title selecting model to evaluate {run: \"auto\"}\n\nmodel_name = \"LI\" #@param ['LI', 'LC', 'EPD',] {type: \"string\"}\n\n#@title Loading the checkpoint\n\nckpt_path = checkpoint_paths[model_name]\nwith open(ckpt_path, 'rb') as f:\n ckpt = pickle.load(f)\nparams = ckpt.eval_params\n\nshape_structure(params)",
"Model inference",
"#@title Setting up model configuration from the checkpoint;\n\ngin.clear_config()\ngin.parse_config(ckpt.model_config_str)\ngin.parse_config(strip_imports(reference_ds.attrs['physics_config_str']))\ndt = ckpt.model_time_step\nphysics_specs = physics_specifications.get_physics_specs()\nmodel_cls = model_builder.get_model_cls(grid, dt, physics_specs)\n\n\ndef compute_trajectory_fwd(x):\n solver = model_cls()\n x = solver.encode(x)\n final, trajectory = solver.trajectory(\n x, length, inner_steps, start_with_input=True, post_process_fn=solver.decode)\n return trajectory\n\n\nmodel = hk.without_apply_rng(hk.transform(compute_trajectory_fwd))\ntrajectory_fn = functools.partial(model.apply, params)\ntrajectory_fn = jax.vmap(trajectory_fn) # predict a batch of trajectories;\n\n#@title Running inference;\n\nprediction = trajectory_fn(initial_conditions)\nprediction_ds = cfd_data.xarray_utils.velocity_trajectory_to_xarray(\n prediction, grid, samples=True)\n\n# roundoff error in coordinates sometimes leads to wrong alignment results;\nprediction_ds.coords['x'] = target_ds.coords['x']\nprediction_ds.coords['y'] = target_ds.coords['y']\nprediction_ds.coords['time'] = target_ds.coords['time']\n\ndatasets[model_name] = prediction_ds",
"Computing summaries\nNote: Evaluations in this notebook are demonstrative and performed over a single sample and shorter times than those used in the paper;",
"summary = xarray.concat([\n cfd_data.evaluation.compute_summary_dataset(ds, target_ds)\n for ds in datasets.values()\n], dim='model')\nsummary.coords['model'] = list(datasets.keys())\n\ncorrelation = summary.vorticity_correlation.compute()\nspectrum = summary.energy_spectrum_mean.mean('time').compute()\n\nbaseline_palette = seaborn.color_palette('YlGnBu', n_colors=7)[1:]\nmodels_color = seaborn.xkcd_palette(['burnt orange', 'taupe', 'greenish blue'])\npalette = baseline_palette + models_color[:(len(datasets.keys()) - 6)]\n\n#@title Vorticity correlation as a function of time\n\nplt.figure(figsize=(7, 6))\nfor color, model in zip(palette, summary['model'].data):\n style = '-' if 'baseline' in model else '--'\n correlation.sel(model=model).plot.line(\n color=color, linestyle=style, label=model, linewidth=3);\nplt.axhline(y=0.95, xmin=0, xmax=20, color='gray')\nplt.legend();\nplt.title('')\nplt.xlim(0, 15)\n\n#@title Energy spectrum\n\nplt.figure(figsize=(10, 6))\nfor color, model in zip(palette, summary['model'].data):\n style = '-' if 'baseline' in model else '--'\n (spectrum.k ** 5 * spectrum).sel(model=model).plot.line(\n color=color, linestyle=style, label=model, linewidth=3);\nplt.legend();\nplt.yscale('log')\nplt.xscale('log')\nplt.title('')\nplt.xlim(3.5, None)\nif dataset_name == 'kolmogorov_re_4000':\n plt.ylim(5e8, None)\nelif dataset_name == 'kolmogorov_re_1000':\n plt.ylim(1e9, None)\nelif dataset_name == 'decaying':\n plt.ylim(2e8, None)\nelse:\n raise ValueError('Unrecognized dataset')\n\nvorticities = xarray.concat(\n [cfd_data.xarray_utils.vorticity_2d(ds) for ds in datasets.values()],\n dim='model'\n).to_dataset()\nvorticities.coords['model'] = list(datasets.keys())\n\n#@title Visualizing model unrolls { form-width: \"30%\", run: \"auto\"}\ntime_range = {'min': 0, 'max': vorticities.sizes['time'], 'step': 1}\n\nlast_step_to_plot = 200 #@param {type: \"slider\", min: 1, max: 200 , step: 5}\nnum_to_show = 5 #@param {type: \"slider\", min: 1, max: 10, step: 1}\ntime_slice = slice(None, last_step_to_plot, last_step_to_plot // num_to_show)\n\n(vorticities.isel({'time': time_slice, 'sample': 0})['vorticity']\n .plot.imshow(row='model', col='time', cmap=seaborn.cm.icefire, robust=True))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb | apache-2.0 | [
"Extract Datasets and Establish Benchmark\nLearning Objectives\n- Divide into Train, Evaluation and Test datasets\n- Understand why we need each\n- Pull data out of BigQuery and into CSV\n- Establish Rules Based Benchmark\nIntroduction\nIn the previous notebook we demonstrated how to do ML in BigQuery. However BQML is limited to linear models.\nFor advanced ML we need to pull the data out of BigQuery and load it into a ML Framework, in our case TensorFlow.\nWhile TensorFlow can read from BigQuery directly, the performance is slow. The best practice is to first stage the BigQuery files as .csv files, and then read the .csv files into TensorFlow. \nThe .csv files can reside on local disk if we're training locally, but if we're training in the cloud we'll need to move the .csv files to the cloud, in our case Google Cloud Storage.\nSet up environment variables and load necessary libraries",
"PROJECT = \"cloud-training-demos\" # Replace with your PROJECT\nREGION = \"us-central1\" # Choose an available region for Cloud MLE\n\nimport os\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"REGION\"] = REGION\n\n!pip freeze | grep google-cloud-bigquery==1.21.0 || pip install google-cloud-bigquery==1.21.0\n\n%load_ext google.cloud.bigquery",
"Review\nIn the a_sample_explore_clean notebook we came up with the following query to extract a repeatable and clean sample: \n<pre>\n#standardSQL\nSELECT\n (tolls_amount + fare_amount) AS fare_amount, -- label\n pickup_datetime,\n pickup_longitude, \n pickup_latitude, \n dropoff_longitude, \n dropoff_latitude\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n -- Clean Data\n trip_distance > 0\n AND passenger_count > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n -- repeatable 1/5000th sample\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1\n </pre>\n\nWe will use the same query with one change. Instead of using pickup_datetime as is, we will extract dayofweek and hourofday from it. This is to give us some categorical features in our dataset so we can illustrate how to deal with them when we get to feature engineering. The new query will be:\n<pre>\nSELECT\n (tolls_amount + fare_amount) AS fare_amount, -- label\n EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,\n EXTRACT(HOUR from pickup_datetime) AS hourofday,\n pickup_longitude, \n pickup_latitude, \n dropoff_longitude, \n dropoff_latitude\n-- rest same as before\n</pre>\n\nSplit into train, evaluation, and test sets\nFor ML modeling we need not just one, but three datasets.\nTrain: This is what our model learns on\nEvaluation (aka Validation): We shouldn't evaluate our model on the same data we trained on because then we couldn't know whether it was memorizing the input data or whether it was generalizing. Therefore we evaluate on the evaluation dataset, aka validation dataset.\nTest: We use our evaluation dataset to tune our hyperparameters (we'll cover hyperparameter tuning in a future lesson). We need to know that our chosen set of hyperparameters will work well for data we haven't seen before because in production, that will be the case. For this reason, we create a third dataset that we never use during the model development process. We only evaluate on this once our model development is finished. Data scientists don't always create a test dataset (aka holdout dataset), but to be thorough you should.\nWe can divide our existing 1/5000th sample three ways 70%/15%/15% (or whatever split we like) with some modulo math demonstrated below.\nBecause we are using a hash function these results are deterministic, we'll get the same exact split every time the query is run (assuming the underlying data hasn't changed)\nExercise 1\nThe create_query function below returns a query string that we will pass to BigQuery to collect our data. It takes as arguments the phase (TRAIN, VALID, or TEST) and the sample_size (relating to the fraction of the data we wish to sample). Complete the code below so that when the phase is set as VALID or TEST a new 15% split of the data will be created.",
"def create_query(phase, sample_size):\n basequery = \"\"\"\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,\n EXTRACT(HOUR from pickup_datetime) AS hourofday,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1\n \"\"\"\n\n if phase == \"TRAIN\":\n subsample = \"\"\"\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)\n AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)\n \"\"\"\n elif phase == \"VALID\":\n subsample = \"\"\"\n # TODO: Your code goes here\n \"\"\"\n elif phase == \"TEST\":\n subsample = \"\"\"\n # TODO: Your code goes here\n \"\"\"\n\n query = basequery + subsample\n return query.replace(\"EVERY_N\", sample_size)",
"Write to CSV\nNow let's execute a query for train/valid/test and write the results to disk in csv format. We use Pandas's .to_csv() method to do so.\nExercise 2\nThe for loop below will generate the TRAIN/VALID/TEST sampled subsets of our dataset. Complete the code in the cell below to 1) create the BigQuery query_string using the create_query function you completed above, taking our original 1/5000th of the dataset and 2) load the BigQuery results of that query_string to a DataFrame labeled df. \nThe remaining lines of code write that DataFrame to a csv file with the appropriate naming.",
"from google.cloud import bigquery\nbq = bigquery.Client(project=PROJECT)\n\nfor phase in [\"TRAIN\", \"VALID\", \"TEST\"]:\n # 1. Create query string\n query_string = # TODO: Your code goes here\n\n # 2. Load results into DataFrame\n df = # TODO: Your code goes here\n\n # 3. Write DataFrame to CSV\n df.to_csv(\"taxi-{}.csv\".format(phase.lower()), index_label = False, index = False)\n print(\"Wrote {} lines to {}\".format(len(df), \"taxi-{}.csv\".format(phase.lower())))",
"Note that even with a 1/5000th sample we have a good amount of data for ML. 150K training examples and 30K validation.\n<h3> Verify that datasets exist </h3>",
"!ls -l *.csv",
"Preview one of the files",
"!head taxi-train.csv",
"Looks good! We now have our ML datasets and are ready to train ML models, validate them and test them.\nEstablish rules-based benchmark\nBefore we start building complex ML models, it is a good idea to come up with a simple rules based model and use that as a benchmark. After all, there's no point using ML if it can't beat the traditional rules based approach!\nOur rule is going to be to divide the mean fare_amount by the mean estimated distance to come up with a rate and use that to predict. \nRecall we can't use the actual trip_distance because we won't have that available at prediction time (depends on the route taken), however we do know the users pick up and drop off location so we can use euclidean distance between those coordinates.\nExercise 3\nIn the code below, we create a rules-based benchmark and measure the Root Mean Squared Error against the label. The function euclidean_distance takes as input a Pandas dataframe and should measure the straight line distance between the pickup location and the dropoff location. Complete the code so that the function returns Euclidean distance between the pickup and dropoff location. \nThe compute_rmse funciton takes the actual (label) value and the predicted value and computes the Root Mean Squared Error between the the two. Complete the code below for the compute_rmse function.",
"import pandas as pd\n\ndef euclidean_distance(df):\n return # TODO: Your code goes here\n\ndef compute_rmse(actual, predicted):\n return # TODO: Your code goes here\n\ndef print_rmse(df, rate, name):\n print(\"{} RMSE = {}\".format(compute_rmse(df[\"fare_amount\"], rate * euclidean_distance(df)), name))\n\ndf_train = pd.read_csv(\"taxi-train.csv\")\ndf_valid = pd.read_csv(\"taxi-valid.csv\")\n\nrate = df_train[\"fare_amount\"].mean() / euclidean_distance(df_train).mean()\n\nprint_rmse(df_train, rate, \"Train\")\nprint_rmse(df_valid, rate, \"Valid\") ",
"The simple distance-based rule gives us an RMSE of <b>$7.70</b> on the validation dataset. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat. \nYou don't want to set a goal on the test dataset because you'll want to tweak your hyperparameters and model architecture to get the best validation error. Then, you can evaluate ONCE on the test data.\nChallenge exercise\nLet's say that you want to predict whether a Stackoverflow question will be acceptably answered. Using this public dataset of questions, create a machine learning dataset that you can use for classification.\n<p>\nWhat is a reasonable benchmark for this problem?\nWhat features might be useful?\n<p>\nIf you got the above easily, try this harder problem: you want to predict whether a question will be acceptably answered within 2 days. How would you create the dataset?\n<p>\nHint (highlight to see):\n<p style='color:white' linkstyle='color:white'> \nYou will need to do a SQL join with the table of [answers]( https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_answers) to determine whether the answer was within 2 days.\n</p>\n\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_sensors_decoding.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Decoding sensor space data\nDecoding, a.k.a MVPA or supervised machine learning applied to MEG\ndata in sensor space. Here the classifier is applied to every time\npoint.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.cross_validation import StratifiedKFold\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.decoding import TimeDecoding, GeneralizationAcrossTime\n\ndata_path = sample.data_path()\n\nplt.close('all')",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.2, 0.5\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(2, None) # replace baselining with high-pass\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=None, preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))\n\nepochs_list = [epochs[k] for k in event_id]\nmne.epochs.equalize_epoch_counts(epochs_list)\ndata_picks = mne.pick_types(epochs.info, meg=True, exclude='bads')",
"Temporal decoding\nWe'll use the default classifer for a binary classification problem\nwhich is a linear Support Vector Machine (SVM).",
"td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)\n\n# Fit\ntd.fit(epochs)\n\n# Compute accuracy\ntd.score(epochs)\n\n# Plot scores across time\ntd.plot(title='Sensor space decoding')",
"Generalization Across Time\nThis runs the analysis used in [1] and further detailed in [2]\nHere we'll use a stratified cross-validation scheme.",
"# make response vector\ny = np.zeros(len(epochs.events), dtype=int)\ny[epochs.events[:, 2] == 3] = 1\ncv = StratifiedKFold(y=y) # do a stratified cross-validation\n\n# define the GeneralizationAcrossTime object\ngat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1,\n cv=cv, scorer=roc_auc_score)\n\n# fit and score\ngat.fit(epochs, y=y)\ngat.score(epochs)\n\n# let's visualize now\ngat.plot()\ngat.plot_diagonal()",
"Exercise\n\nCan you improve the performance using full epochs and a common spatial\n pattern (CSP) used by most BCI systems?\nExplore other datasets from MNE (e.g. Face dataset from SPM to predict\n Face vs. Scrambled)\n\nHave a look at the example\nsphx_glr_auto_examples_decoding_plot_decoding_csp_space.py\nReferences\n.. [1] Jean-Remi King, Alexandre Gramfort, Aaron Schurger, Lionel Naccache\n and Stanislas Dehaene, \"Two distinct dynamic modes subtend the\n detection of unexpected sounds\", PLOS ONE, 2013,\n http://www.ncbi.nlm.nih.gov/pubmed/24475052\n.. [2] King & Dehaene (2014) 'Characterizing the dynamics of mental\n representations: the temporal generalization method', Trends In\n Cognitive Sciences, 18(4), 203-210.\n http://www.ncbi.nlm.nih.gov/pubmed/24593982"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
karlstroetmann/Artificial-Intelligence | Python/6 Classification/Polynomial-Logistic-Regression.ipynb | gpl-2.0 | [
"from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)",
"Polynomial Logistic Regression",
"import numpy as np\nimport pandas as pd",
"The data we want to investigate is stored in the file 'fake-data.csv'. It is data that I have found somewhere. I am not sure whether this data is real or fake. Therefore, I won't discuss the attributes of the data. The point of the data is that it is a classification problem that can not be solved with \nordinary logistic regression. We will introduce <em style=\"color:blue;\">polynomial logistic regression</em> to solve this problem.",
"DF = pd.read_csv('fake-data.csv')\nDF.head()",
"We extract the features from the data frame and convert it into a NumPy <em style=\"color:blue;\">feature matrix</em>.",
"X = np.array(DF[['x','y']])",
"We extract the target column and convert it into a NumPy array.",
"Y = np.array(DF['class'])",
"In order to plot the instances according to their class we divide the feature matrix $X$ into two parts. $\\texttt{X_pass}$ contains those examples that have class $1$, while $\\texttt{X_fail}$ contains those examples that have class $0$.",
"X_pass = X[Y == 1.0]\nX_fail = X[Y == 0.0]",
"Let us plot the data.",
"import matplotlib.pyplot as plt\nimport seaborn as sns\n\nplt.figure(figsize=(15, 10))\nsns.set(style='darkgrid')\nplt.title('A Classification Problem')\nplt.axvline(x=0.0, c='k')\nplt.axhline(y=0.0, c='k')\nplt.xlabel('x axis')\nplt.ylabel('y axis')\nplt.xticks(np.arange(-0.9, 1.1, step=0.1))\nplt.yticks(np.arange(-0.8, 1.2, step=0.1))\nplt.scatter(X_pass[:,0], X_pass[:,1], color='b') \nplt.scatter(X_fail[:,0], X_fail[:,1], color='r') ",
"We want to split the data into a training set and a test set.\nThe training set will be used to compute the parameters of our model, while the\ntesting set is only used to check the accuracy. SciKit-Learn has a predefined method\ntrain_test_split that can be used to randomly split data into a training set and a test set.",
"from sklearn.model_selection import train_test_split",
"We will split the data at a ratio of $4:1$, i.e. $80\\%$ of the data will be used for training, while the remaining $20\\%$ is used to test the accuracy.",
"X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1)",
"In order to build a <em style=\"color:blue;\">logistic regression</em> classifier, we import the module linear_model from SciKit-Learn.",
"import sklearn.linear_model as lm",
"The function $\\texttt{logistic_regression}(\\texttt{X_train}, \\texttt{Y_train}, \\texttt{X_test}, \\texttt{Y_test})$ takes a feature matrix $\\texttt{X_train}$ and a corresponding vector $\\texttt{Y_train}$ and computes a logistic regression model $M$ that best fits these data. Then, the accuracy of the model is computed using the test data $\\texttt{X_test}$ and $\\texttt{Y_test}$.",
"def logistic_regression(X_train, Y_train, X_test, Y_test, reg=10000):\n M = lm.LogisticRegression(C=reg, tol=1e-6)\n M.fit(X_train, Y_train)\n train_score = M.score(X_train, Y_train)\n yPredict = M.predict(X_test)\n accuracy = np.sum(yPredict == Y_test) / len(Y_test)\n return M, train_score, accuracy",
"We use this function to build a model for our data. Initially, we will take all the available data to create the model.",
"M, score, accuracy = logistic_regression(X, Y, X, Y)\nscore, accuracy",
"Given that there are only two classes, the accuracy of our first model is quite poor.\nLet us extract the coefficients so we can plot the <em style=\"color:blue;\">decision boundary</em>.",
"ϑ0 = M.intercept_[0]\nϑ1, ϑ2 = M.coef_[0]\n\nplt.figure(figsize=(15, 10))\nsns.set(style='darkgrid')\nplt.title('A Classification Problem')\nplt.axvline(x=0.0, c='k')\nplt.axhline(y=0.0, c='k')\nplt.xlabel('x axis')\nplt.ylabel('y axis')\nplt.xticks(np.arange(-0.9, 1.1, step=0.1))\nplt.yticks(np.arange(-0.8, 1.2, step=0.1))\nplt.scatter(X_pass[:,0], X_pass[:,1], color='b') \nplt.scatter(X_fail[:,0], X_fail[:,1], color='r') \nH = np.arange(-0.8, 1.0, 0.05)\nP = -(ϑ0 + ϑ1 * H)/ϑ2\nplt.plot(H, P, color='green')",
"Clearly, pure logistic regression is not working for this example. The reason is, that a linear decision boundary is not able to separate the positive examples from the negative examples. Let us add polynomial features. This enables us to create more complex decision boundaries.\nThe function $\\texttt{extend}(X)$ takes a feature matrix $X$ that is supposed to contain two features $x$ and $y$. It creates the new features $x^2$, $y^2$ and $x\\cdot y$ and returns a new feature matrix that also contains these additional features.",
"def extend(X):\n n = len(X)\n fx = np.reshape(X[:,0], (n, 1)) # extract first column\n fy = np.reshape(X[:,1], (n, 1)) # extract second column\n return np.hstack([fx, fy, fx*fx, fy*fy, fx*fy]) # stack everthing horizontally\n\nX_train_quadratic = extend(X_train)\nX_test_quadratic = extend(X_test)\n\nM, score, accuracy = logistic_regression(X_train_quadratic, Y_train, X_test_quadratic, Y_test)\nscore, accuracy",
"This seems to work better. Let us compute the decision boundary and plot it.",
"ϑ0 = M.intercept_[0]\nϑ1, ϑ2, ϑ3, ϑ4, ϑ5 = M.coef_[0]",
"The decision boundary is now given by the following equation:\n$$ \\vartheta_0 + \\vartheta_1 \\cdot x + \\vartheta_2 \\cdot y + \\vartheta_3 \\cdot x^2 + \\vartheta_4 \\cdot y^2 + \\vartheta_5 \\cdot x \\cdot y = 0$$\nThis is the equation of an ellipse. Let us plot the decision boundary with the data.",
"a = np.arange(-1.0, 1.0, 0.005)\nb = np.arange(-1.0, 1.0, 0.005)\nA, B = np.meshgrid(a,b)\nA\n\nB\n\nZ = ϑ0 + ϑ1 * A + ϑ2 * B + ϑ3 * A * A + ϑ4 * B * B + ϑ5 * A * B \nZ\n\nplt.figure(figsize=(15, 10))\nsns.set(style='darkgrid')\nplt.title('A Classification Problem')\nplt.axvline(x=0.0, c='k')\nplt.axhline(y=0.0, c='k')\nplt.xlabel('x axis')\nplt.ylabel('y axis')\nplt.xticks(np.arange(-0.9, 1.1, step=0.1))\nplt.yticks(np.arange(-0.8, 1.2, step=0.1))\nplt.scatter(X_pass[:,0], X_pass[:,1], color='b') \nplt.scatter(X_fail[:,0], X_fail[:,1], color='r') \nCS = plt.contour(A, B, Z, 0, colors='green')",
"Let us try to add <em style=\"color:blue;\">quartic features</em> next. These are features like $x^4$, $x^2\\cdot y^2$, etc.\nLuckily, SciKit-Learn has function that can automize this process.",
"from sklearn.preprocessing import PolynomialFeatures\n\nquartic = PolynomialFeatures(4, include_bias=False)\nX_train_quartic = quartic.fit_transform(X_train)\nX_test_quartic = quartic.fit_transform(X_test)\nprint(quartic.get_feature_names(['x', 'y']))",
"Let us fit the quartic model.",
"M, score, accuracy = logistic_regression(X_train_quartic, Y_train, X_test_quartic, Y_test)\nscore, accuracy",
"The accuracy on the training set has increased, but we observe that the accuracy on the training set is actually not improving. Again, we proceed to plot the decision boundary.",
"ϑ0 = M.intercept_[0]\nϑ1, ϑ2, ϑ3, ϑ4, ϑ5, ϑ6, ϑ7, ϑ8, ϑ9, ϑ10, ϑ11, ϑ12, ϑ13, ϑ14 = M.coef_[0]",
"Plotting the decision boundary starts to get tedious.",
"a = np.arange(-1.0, 1.0, 0.005)\nb = np.arange(-1.0, 1.0, 0.005)\nA, B = np.meshgrid(a,b)\nZ = ϑ0 + ϑ1 * A + ϑ2 * B + \\\n ϑ3 * A**2 + ϑ4 * A * B + ϑ5 * B**2 + \\\n ϑ6 * A**3 + ϑ7 * A**2 * B + ϑ8 * A * B**2 + ϑ9 * B**3 + \\\n ϑ10 * A**4 + ϑ11 * A**3 * B + ϑ12 * A**2 * B**2 + ϑ13 * A * B**3 + ϑ14 * B**4 \n\nplt.figure(figsize=(15, 10))\nsns.set(style='darkgrid')\nplt.title('A Classification Problem')\nplt.axvline(x=0.0, c='k')\nplt.axhline(y=0.0, c='k')\nplt.xlabel('x axis')\nplt.ylabel('y axis')\nplt.xticks(np.arange(-0.9, 1.1, step=0.1))\nplt.yticks(np.arange(-0.8, 1.2, step=0.1))\nplt.scatter(X_pass[:,0], X_pass[:,1], color='b') \nplt.scatter(X_fail[:,0], X_fail[:,1], color='r') \nCS = plt.contour(A, B, Z, 0, colors='green')",
"The decision boundary looks strange. Let's get bold and try to add features of a higher power.\nHowever, in order to understand what is happening, we will only plot the training data.",
"X_pass_train = X_train[Y_train == 1.0]\nX_fail_train = X_train[Y_train == 0.0]",
"In order to automatize the process, we define some auxiliary functions.\n$\\texttt{polynomial}(n)$ creates a polynomial in the variables A and B that contains all terms of the form $\\Theta[k] \\cdot A^i \\cdot B^j$ where $i+j \\leq n$.",
"def polynomial(n):\n sum = 'Θ[0]' \n cnt = 0\n for k in range(1, n+1):\n for i in range(0, k+1):\n cnt += 1\n sum += f' + Θ[{cnt}] * A**{k-i} * B**{i}'\n print('number of features:', cnt)\n return sum",
"Let's check this out for $n=4$.",
"polynomial(4)",
"The function $\\texttt{polynomial_grid}(n, M)$ takes a number $n$ and a model $M$. It returns a meshgrid that can be used to plot the decision boundary of the model.",
"def polynomial_grid(n, M):\n Θ = [M.intercept_[0]] + list(M.coef_[0])\n a = np.arange(-1.0, 1.0, 0.005)\n b = np.arange(-1.0, 1.0, 0.005)\n A, B = np.meshgrid(a,b)\n return eval(polynomial(n))",
"The function $\\texttt{plot_nth_degree_boundary}(n)$ creates a polynomial logistic regression model of degree $n$. It plots both the training data and the decision boundary.",
"def plot_nth_degree_boundary(n, C=10000):\n poly = PolynomialFeatures(n, include_bias=False)\n X_train_poly = poly.fit_transform(X_train)\n X_test_poly = poly.fit_transform(X_test)\n M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)\n print('The accuracy on the training set is:', score)\n print('The accuracy on the test set is:', accuracy)\n Z = polynomial_grid(n, M)\n plt.figure(figsize=(15, 10))\n sns.set(style='darkgrid')\n plt.title('A Classification Problem')\n plt.axvline(x=0.0, c='k')\n plt.axhline(y=0.0, c='k')\n plt.xlabel('x axis')\n plt.ylabel('y axis')\n plt.xticks(np.arange(-0.9, 1.11, step=0.1))\n plt.yticks(np.arange(-0.8, 1.21, step=0.1))\n plt.scatter(X_pass_train[:,0], X_pass_train[:,1], color='b') \n plt.scatter(X_fail_train[:,0], X_fail_train[:,1], color='r') \n CS = plt.contour(A, B, Z, 0, colors='green')",
"Let us test this for the polynomial logistic regression model of degree $4$.",
"plot_nth_degree_boundary(4)",
"This seems to be the same shape that we have seen earlier. It looks like the function $\\texttt{plot_nth_degree_boundary}(n)$ is working. Let's try higher degree polynomials.",
"plot_nth_degree_boundary(5)",
"The score on the training set has improved. What happens if we try still higher degrees?",
"plot_nth_degree_boundary(6)",
"We captured one more of the training examples. Let's get bold, we want a $100\\%$ training accuracy.",
"plot_nth_degree_boundary(14)",
"The model is getting more complicated, but it is not getting better, as the accuracy on the test set has not improved.",
"X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=2)\nX_pass_train = X_train[Y_train == 1.0]\nX_fail_train = X_train[Y_train == 0.0]",
"Let us check whether regularization can help. Below, the regularization parameter prevents the decision boundary from becoming to wiggly and thus the accuracy on the test set can increase. The function below plots all the data.",
"def plot_nth_degree_boundary_all(n, C):\n poly = PolynomialFeatures(n, include_bias=False)\n X_train_poly = poly.fit_transform(X_train)\n X_test_poly = poly.fit_transform(X_test)\n M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)\n print('The accuracy on the training set is:', score)\n print('The accuracy on the test set is:', accuracy)\n Z = polynomial_grid(n, M)\n plt.figure(figsize=(15, 10))\n sns.set(style='darkgrid')\n plt.title('A Classification Problem')\n plt.axvline(x=0.0, c='k')\n plt.axhline(y=0.0, c='k')\n plt.xlabel('x axis')\n plt.ylabel('y axis')\n plt.xticks(np.arange(-0.9, 1.11, step=0.1))\n plt.yticks(np.arange(-0.8, 1.21, step=0.1))\n plt.scatter(X_pass[:,0], X_pass[:,1], color='b') \n plt.scatter(X_fail[:,0], X_fail[:,1], color='r') \n CS = plt.contour(A, B, Z, 0, colors='green')\n\nplot_nth_degree_boundary_all(14, 100.0)\n\nplot_nth_degree_boundary_all(20, 100000.0)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io | machine-learning/.ipynb_checkpoints/loading_scikit-learns_digits-dataset-checkpoint.ipynb | mit | [
"Title: Loading Scikit-Learn's Digits Dataset\nSlug: loading_scikit-learns_digits-dataset\nSummary: Loading the built-in digits datasets of Scikit-Learn. \nDate: 2016-08-31 12:00\nCategory: Machine Learning\nTags: Basics\nAuthors: Chris Albon \nPreliminaries",
"# Load libraries\nfrom sklearn import datasets\nimport matplotlib.pyplot as plt ",
"Load Digits Dataset\nDigits is a dataset of handwritten digits. Each feature is the intensity of one pixel of an 8 x 8 image.",
"# Load digits dataset\ndigits = datasets.load_digits()\n\n# Create feature matrix\nX = digits.data\n\n# Create target vector\ny = digits.target\n\n# View the first observation's feature values\nX[0]",
"The observation's feature values are presented as a vector. However, by using the images method we can load the the same feature values as a matrix and then visualize the actual handwritten character:",
"# View the first observation's feature values as a matrix\ndigits.images[0]\n\n# Visualize the first observation's feature values as an image\nplt.gray() \nplt.matshow(digits.images[0]) \nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/en-snapshot/tfx/tutorials/tfx/penguin_simple.ipynb | apache-2.0 | [
"Copyright 2021 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Simple TFX Pipeline Tutorial using Penguin dataset\nA Short tutorial to run a simple TFX pipeline.\nNote: We recommend running this tutorial in a Colab notebook, with no setup required! Just click \"Run in Google Colab\".\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"/>View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/penguin_simple.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/penguin_simple.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n<td><a href=\"https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/penguin_simple.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a></td>\n</table></div>\n\nIn this notebook-based tutorial, we will create and run a TFX pipeline\nfor a simple classification model.\nThe pipeline will consist of three essential TFX components: ExampleGen,\nTrainer and Pusher. The pipeline includes the most minimal ML workflow like\nimporting data, training a model and exporting the trained model.\nPlease see\nUnderstanding TFX Pipelines\nto learn more about various concepts in TFX.\nSet Up\nWe first need to install the TFX Python package and download\nthe dataset which we will use for our model.\nUpgrade Pip\nTo avoid upgrading Pip in a system when running locally,\ncheck to make sure that we are running in Colab.\nLocal systems can of course be upgraded separately.",
"try:\n import colab\n !pip install --upgrade pip\nexcept:\n pass",
"Install TFX",
"!pip install -U tfx",
"Did you restart the runtime?\nIf you are using Google Colab, the first time that you run\nthe cell above, you must restart the runtime by clicking\nabove \"RESTART RUNTIME\" button or using \"Runtime > Restart\nruntime ...\" menu. This is because of the way that Colab\nloads packages.\nCheck the TensorFlow and TFX versions.",
"import tensorflow as tf\nprint('TensorFlow version: {}'.format(tf.__version__))\nfrom tfx import v1 as tfx\nprint('TFX version: {}'.format(tfx.__version__))",
"Set up variables\nThere are some variables used to define a pipeline. You can customize these\nvariables as you want. By default all output from the pipeline will be\ngenerated under the current directory.",
"import os\n\nPIPELINE_NAME = \"penguin-simple\"\n\n# Output directory to store artifacts generated from the pipeline.\nPIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)\n# Path to a SQLite DB file to use as an MLMD storage.\nMETADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')\n# Output directory where created models from the pipeline will be exported.\nSERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)\n\nfrom absl import logging\nlogging.set_verbosity(logging.INFO) # Set default logging level.",
"Prepare example data\nWe will download the example dataset for use in our TFX pipeline. The dataset we\nare using is\nPalmer Penguins dataset\nwhich is also used in other\nTFX examples.\nThere are four numeric features in this dataset:\n\nculmen_length_mm\nculmen_depth_mm\nflipper_length_mm\nbody_mass_g\n\nAll features were already normalized to have range [0,1]. We will build a\nclassification model which predicts the species of penguins.\nBecause TFX ExampleGen reads inputs from a directory, we need to create a\ndirectory and copy dataset to it.",
"import urllib.request\nimport tempfile\n\nDATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.\n_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'\n_data_filepath = os.path.join(DATA_ROOT, \"data.csv\")\nurllib.request.urlretrieve(_data_url, _data_filepath)",
"Take a quick look at the CSV file.",
"!head {_data_filepath}",
"You should be able to see five values. species is one of 0, 1 or 2, and all\nother features should have values between 0 and 1.\nCreate a pipeline\nTFX pipelines are defined using Python APIs. We will define a pipeline which\nconsists of following three components.\n- CsvExampleGen: Reads in data files and convert them to TFX internal format\nfor further processing. There are multiple\nExampleGens for various\nformats. In this tutorial, we will use CsvExampleGen which takes CSV file input.\n- Trainer: Trains an ML model.\nTrainer component requires a\nmodel definition code from users. You can use TensorFlow APIs to specify how to\ntrain a model and save it in a saved_model format.\n- Pusher: Copies the trained model outside of the TFX pipeline.\nPusher component can be thought\nof as a deployment process of the trained ML model.\nBefore actually define the pipeline, we need to write a model code for the\nTrainer component first.\nWrite model training code\nWe will create a simple DNN model for classification using TensorFlow Keras\nAPI. This model training code will be saved to a separate file.\nIn this tutorial we will use\nGeneric Trainer\nof TFX which support Keras-based models. You need to write a Python file\ncontaining run_fn function, which is the entrypoint for the Trainer\ncomponent.",
"_trainer_module_file = 'penguin_trainer.py'\n\n%%writefile {_trainer_module_file}\n\nfrom typing import List\nfrom absl import logging\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow_transform.tf_metadata import schema_utils\n\nfrom tfx import v1 as tfx\nfrom tfx_bsl.public import tfxio\nfrom tensorflow_metadata.proto.v0 import schema_pb2\n\n_FEATURE_KEYS = [\n 'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'\n]\n_LABEL_KEY = 'species'\n\n_TRAIN_BATCH_SIZE = 20\n_EVAL_BATCH_SIZE = 10\n\n# Since we're not generating or creating a schema, we will instead create\n# a feature spec. Since there are a fairly small number of features this is\n# manageable for this dataset.\n_FEATURE_SPEC = {\n **{\n feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)\n for feature in _FEATURE_KEYS\n },\n _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)\n}\n\n\ndef _input_fn(file_pattern: List[str],\n data_accessor: tfx.components.DataAccessor,\n schema: schema_pb2.Schema,\n batch_size: int = 200) -> tf.data.Dataset:\n \"\"\"Generates features and label for training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n schema: schema of the input data.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n tfxio.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_LABEL_KEY),\n schema=schema).repeat()\n\n\ndef _build_keras_model() -> tf.keras.Model:\n \"\"\"Creates a DNN Keras model for classifying penguin data.\n\n Returns:\n A Keras Model.\n \"\"\"\n # The model below is built with Functional API, please refer to\n # https://www.tensorflow.org/guide/keras/overview for all API options.\n inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]\n d = keras.layers.concatenate(inputs)\n for _ in range(2):\n d = keras.layers.Dense(8, activation='relu')(d)\n outputs = keras.layers.Dense(3)(d)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n optimizer=keras.optimizers.Adam(1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()])\n\n model.summary(print_fn=logging.info)\n return model\n\n\n# TFX Trainer will call this function.\ndef run_fn(fn_args: tfx.components.FnArgs):\n \"\"\"Train the model based on given args.\n\n Args:\n fn_args: Holds args used to train the model as name/value pairs.\n \"\"\"\n\n # This schema is usually either an output of SchemaGen or a manually-curated\n # version provided by pipeline author. A schema can also derived from TFT\n # graph if a Transform component is used. 
In the case when either is missing,\n # `schema_from_feature_spec` could be used to generate schema from very simple\n # feature_spec, but the schema returned would be very primitive.\n schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)\n\n train_dataset = _input_fn(\n fn_args.train_files,\n fn_args.data_accessor,\n schema,\n batch_size=_TRAIN_BATCH_SIZE)\n eval_dataset = _input_fn(\n fn_args.eval_files,\n fn_args.data_accessor,\n schema,\n batch_size=_EVAL_BATCH_SIZE)\n\n model = _build_keras_model()\n model.fit(\n train_dataset,\n steps_per_epoch=fn_args.train_steps,\n validation_data=eval_dataset,\n validation_steps=fn_args.eval_steps)\n\n # The result of the training should be saved in `fn_args.serving_model_dir`\n # directory.\n model.save(fn_args.serving_model_dir, save_format='tf')",
"Now you have completed all preparation steps to build a TFX pipeline.\nWrite a pipeline definition\nWe define a function to create a TFX pipeline. A Pipeline object\nrepresents a TFX pipeline which can be run using one of the pipeline\norchestration systems that TFX supports.",
"def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,\n module_file: str, serving_model_dir: str,\n metadata_path: str) -> tfx.dsl.Pipeline:\n \"\"\"Creates a three component penguin pipeline with TFX.\"\"\"\n # Brings data into the pipeline.\n example_gen = tfx.components.CsvExampleGen(input_base=data_root)\n\n # Uses user-provided Python function that trains a model.\n trainer = tfx.components.Trainer(\n module_file=module_file,\n examples=example_gen.outputs['examples'],\n train_args=tfx.proto.TrainArgs(num_steps=100),\n eval_args=tfx.proto.EvalArgs(num_steps=5))\n\n # Pushes the model to a filesystem destination.\n pusher = tfx.components.Pusher(\n model=trainer.outputs['model'],\n push_destination=tfx.proto.PushDestination(\n filesystem=tfx.proto.PushDestination.Filesystem(\n base_directory=serving_model_dir)))\n\n # Following three components will be included in the pipeline.\n components = [\n example_gen,\n trainer,\n pusher,\n ]\n\n return tfx.dsl.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n metadata_connection_config=tfx.orchestration.metadata\n .sqlite_metadata_connection_config(metadata_path),\n components=components)",
"Run the pipeline\nTFX supports multiple orchestrators to run pipelines.\nIn this tutorial we will use LocalDagRunner which is included in the TFX\nPython package and runs pipelines on local environment.\nWe often call TFX pipelines \"DAGs\" which stands for directed acyclic graph.\nLocalDagRunner provides fast iterations for development and debugging.\nTFX also supports other orchestrators including Kubeflow Pipelines and Apache\nAirflow which are suitable for production use cases.\nSee\nTFX on Cloud AI Platform Pipelines\nor\nTFX Airflow Tutorial\nto learn more about other orchestration systems.\nNow we create a LocalDagRunner and pass a Pipeline object created from the\nfunction we already defined.\nThe pipeline runs directly and you can see logs for the progress of the pipeline including ML model training.",
"tfx.orchestration.LocalDagRunner().run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n data_root=DATA_ROOT,\n module_file=_trainer_module_file,\n serving_model_dir=SERVING_MODEL_DIR,\n metadata_path=METADATA_PATH))",
"You should see \"INFO:absl:Component Pusher is finished.\" at the end of the\nlogs if the pipeline finished successfully. Because Pusher component is the\nlast component of the pipeline.\nThe pusher component pushes the trained model to the SERVING_MODEL_DIR which\nis the serving_model/penguin-simple directory if you did not change the\nvariables in the previous steps. You can see the result from the file browser\nin the left-side panel in Colab, or using the following command:",
"# List files in created model directory.\n!find {SERVING_MODEL_DIR}",
"Next steps\nYou can find more resources on https://www.tensorflow.org/tfx/tutorials.\nPlease see\nUnderstanding TFX Pipelines\nto learn more about various concepts in TFX."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | 0.17/_downloads/d25fdfa446b06c82b756855681845935/plot_mne_dspm_source_localization.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Source localization with MNE/dSPM/sLORETA/eLORETA\nThe aim of this tutorial is to teach you how to compute and apply a linear\ninverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data.",
"# sphinx_gallery_thumbnail_number = 10\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse",
"Process MEG data",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname) # already has an average reference\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nevent_id = dict(aud_l=1) # event trigger and conditions\ntmin = -0.2 # start of each epoch (200ms before the trigger)\ntmax = 0.5 # end of each epoch (500ms after the trigger)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n exclude='bads')\nbaseline = (None, 0) # means from the first instant to t = 0\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,\n baseline=baseline, reject=reject)",
"Compute regularized noise covariance\nFor more details see tut_compute_covariance.",
"noise_cov = mne.compute_covariance(\n epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)\n\nfig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)",
"Compute the evoked response\nLet's just use MEG channels for simplicity.",
"evoked = epochs.average().pick_types(meg=True)\nevoked.plot(time_unit='s')\nevoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',\n time_unit='s')\n\n# Show whitening\nevoked.plot_white(noise_cov, time_unit='s')\n\ndel epochs # to save memory",
"Inverse modeling: MNE/dSPM on evoked and raw data",
"# Read the forward solution and compute the inverse operator\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'\nfwd = mne.read_forward_solution(fname_fwd)\n\n# make an MEG inverse operator\ninfo = evoked.info\ninverse_operator = make_inverse_operator(info, fwd, noise_cov,\n loose=0.2, depth=0.8)\ndel fwd\n\n# You can write it to disk with::\n#\n# >>> from mne.minimum_norm import write_inverse_operator\n# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',\n# inverse_operator)",
"Compute inverse solution",
"method = \"dSPM\"\nsnr = 3.\nlambda2 = 1. / snr ** 2\nstc, residual = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None,\n return_residual=True, verbose=True)",
"Visualization\nView activation time-series",
"plt.figure()\nplt.plot(1e3 * stc.times, stc.data[::100, :].T)\nplt.xlabel('time (ms)')\nplt.ylabel('%s value' % method)\nplt.show()",
"Examine the original data and the residual after fitting:",
"fig, axes = plt.subplots(2, 1)\nevoked.plot(axes=axes)\nfor ax in axes:\n ax.texts = []\n for line in ax.lines:\n line.set_color('#98df81')\nresidual.plot(axes=axes)",
"Here we use peak getter to move visualization to the time point of the peak\nand draw a marker at the maximum peak vertex.",
"vertno_max, time_max = stc.get_peak(hemi='rh')\n\nsubjects_dir = data_path + '/subjects'\nsurfer_kwargs = dict(\n hemi='rh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=5)\nbrain = stc.plot(**surfer_kwargs)\nbrain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',\n scale_factor=0.6, alpha=0.5)\nbrain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',\n font_size=14)",
"Morph data to average brain",
"# setup source morph\nmorph = mne.compute_source_morph(\n src=inverse_operator['src'], subject_from=stc.subject,\n subject_to='fsaverage', spacing=5, # to ico-5\n subjects_dir=subjects_dir)\n# morph data\nstc_fsaverage = morph.apply(stc)\n\nbrain = stc_fsaverage.plot(**surfer_kwargs)\nbrain.add_text(0.1, 0.9, 'Morphed to fsaverage', 'title', font_size=20)\ndel stc_fsaverage",
"Dipole orientations\nThe pick_ori parameter of the\n:func:mne.minimum_norm.apply_inverse function controls\nthe orientation of the dipoles. One useful setting is pick_ori='vector',\nwhich will return an estimate that does not only contain the source power at\neach dipole, but also the orientation of the dipoles.",
"stc_vec = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori='vector')\nbrain = stc_vec.plot(**surfer_kwargs)\nbrain.add_text(0.1, 0.9, 'Vector solution', 'title', font_size=20)\ndel stc_vec",
"Note that there is a relationship between the orientation of the dipoles and\nthe surface of the cortex. For this reason, we do not use an inflated\ncortical surface for visualization, but the original surface used to define\nthe source space.\nFor more information about dipole orientations, see\nsphx_glr_auto_tutorials_plot_dipole_orientations.py.\nNow let's look at each solver:",
"for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]),\n ('sLORETA', [3, 5, 7]),\n ('eLORETA', [0.75, 1.25, 1.75]),)):\n surfer_kwargs['clim']['lims'] = lims\n stc = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None)\n brain = stc.plot(figure=mi, **surfer_kwargs)\n brain.add_text(0.1, 0.9, method, 'title', font_size=20)\n del stc"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyPSA/PyPSA | examples/notebooks/scigrid-sclopf.ipynb | mit | [
"Security-Constrained Optimisation\nIn this example, the dispatch of generators is optimised using the security-constrained linear OPF, to guaranteed that no branches are overloaded by certain branch outages.",
"import pypsa, os\nimport numpy as np\n\nnetwork = pypsa.examples.scigrid_de(from_master=True)",
"There are some infeasibilities without line extensions.",
"for line_name in [\"316\", \"527\", \"602\"]:\n network.lines.loc[line_name, \"s_nom\"] = 1200\n\nnow = network.snapshots[0]",
"Performing security-constrained linear OPF",
"branch_outages = network.lines.index[:15]\nnetwork.sclopf(now, branch_outages=branch_outages, solver_name=\"cbc\")",
"For the PF, set the P to the optimised P.",
"network.generators_t.p_set = network.generators_t.p_set.reindex(\n columns=network.generators.index\n)\nnetwork.generators_t.p_set.loc[now] = network.generators_t.p.loc[now]\n\nnetwork.storage_units_t.p_set = network.storage_units_t.p_set.reindex(\n columns=network.storage_units.index\n)\nnetwork.storage_units_t.p_set.loc[now] = network.storage_units_t.p.loc[now]",
"Check no lines are overloaded with the linear contingency analysis",
"p0_test = network.lpf_contingency(now, branch_outages=branch_outages)\np0_test",
"Check loading as per unit of s_nom in each contingency",
"max_loading = (\n abs(p0_test.divide(network.passive_branches().s_nom, axis=0)).describe().loc[\"max\"]\n)\nmax_loading\n\nnp.allclose(max_loading, np.ones((len(max_loading))))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dominikgrimm/ridge_and_svm | Toy-Example-Solution.ipynb | mit | [
"Toy Example: Ridge Regression vs. SVM\n<p></p>\n\n<div style=\"text-align:justify\">\n In this toy example we will compare two machine learning models: <em>Ridge Regression</em> and <em>C-SVM</em>. The data is generated <em>in silico</em> and is only used to illustrate how to use <em>Ridge Regression</em> and <em>C-SVM</em>.\n</div>\n\nProblem Description of the Toy Example\n<p></p>\n\n<div style=\"text-align:justify\">\nA new cancer drug was developed for therapy. During the clinical trail the researchers releaized that the drug had a faster response for a certain subgroup of the patients, while it was less responsive in the others. In addition, the researchers recognized that the drug leads to severe side-effects the longer the patient is treated with the drug. The goal should be to reduce the side effects by treating only those patients that are predicted to have a fast response when taking the drug.\n</div>\n<br>\n<div style=\"text-align:justify\">\nThe researches believe that different genetic mutations in the genomes of the individual patients might play a role for the differences in response times.\n</div>\n<br>\n<div style=\"text-align:justify\">\n The researches contacted the <em>machine learning</em> lab to build a predictive model. The model should predict the individual response time of the drug based on the individual genetic backgrounds of a patient.\n</div>\n<br>\n<div style=\"text-align:justify\">\nFor this purpose, we get a dataset of 400 patients. For each patient a panel of 600 genetic mutations was measured. In addition, the researchers measured how many days it took until the drug showed a positive response.\n</div>\n\n1. Using Ridge Regression to predict the response time\n<div style=\"text-align:justify\">\n To predict the response time of the drug for new patients, we will train a <em>Ridge Regression</em> model. The target variable for this task is the response time in days. The features are the 600 genetic mutations measured for each of the 400 patients. To avoid overfitting we will use a nested-crossvalidation to determine the optimal hyperparamter.\n</div>\n1.1 Data Preprocessing",
"%matplotlib inline\nimport scipy as sp\nimport matplotlib\nimport pylab as pl\nmatplotlib.rcParams.update({'font.size': 15})\n\nfrom sklearn.linear_model import Ridge\nfrom sklearn.svm import SVC\nfrom sklearn.model_selection import KFold, StratifiedKFold, GridSearchCV,StratifiedShuffleSplit\nfrom sklearn.model_selection import cross_val_score, train_test_split\nfrom sklearn.metrics import accuracy_score, mean_squared_error, mean_absolute_error\nfrom sklearn.metrics import roc_curve, auc\n\ndef visualized_variance_bias_tradeoff(hyperp, line_search, optimal_hyperp,classification=False):\n pl.figure(figsize=(18,7))\n if classification:\n factor=1\n else:\n factor=-1\n pl.plot(hyperp,line_search.cv_results_['mean_train_score']*factor,label=\"Training Error\",color=\"#e67e22\")\n pl.fill_between(hyperp,\n line_search.cv_results_['mean_train_score']*factor-line_search.cv_results_['std_train_score'],\n line_search.cv_results_['mean_train_score']*factor+line_search.cv_results_['std_train_score'],\n alpha=0.3,color=\"#e67e22\")\n pl.plot(hyperp,line_search.cv_results_['mean_test_score']*factor,label=\"Validation Error\",color=\"#2980b9\")\n pl.fill_between(hyperp,\n line_search.cv_results_['mean_test_score']*factor-line_search.cv_results_['std_test_score'],\n line_search.cv_results_['mean_test_score']*factor+line_search.cv_results_['std_test_score'],\n alpha=0.3,color=\"#2980b9\")\n pl.xscale(\"log\")\n if classification:\n pl.ylabel(\"Accuracy\")\n else:\n pl.ylabel(\"Mean Squared Error\")\n pl.xlabel(\"Hyperparameter\")\n pl.legend(frameon=True)\n pl.grid(True)\n pl.axvline(x=optimal_hyperp,color='r',linestyle=\"--\")\n pl.title(\"Training- vs. Validation-Error (Optimal Hyperparameter = %.1e)\"%optimal_hyperp);\n\nrandom_state = 42\n\n#Load Data\ndata = sp.loadtxt(\"data/X.txt\")\nbinary_target = sp.loadtxt(\"data/y_binary.txt\")\ncontinuous_target = sp.loadtxt(\"data/y.txt\")\n\n#Summary of the Data\nprint(\"Orginal Data\")\nprint(\"Number Patients:\\t%d\"%data.shape[0])\nprint(\"Number Features:\\t%d\"%data.shape[1])\nprint()\n\n#Split Data into Training and Testing data\ntrain_test_data = train_test_split(data,\n continuous_target,\n test_size=0.2,\n random_state=random_state)\ntraining_data = train_test_data[0]\ntesting_data = train_test_data[1]\ntraining_target = train_test_data[2]\ntesting_target = train_test_data[3]\n\nprint(\"Training Data\")\nprint(\"Number Patients:\\t%d\"%training_data.shape[0])\nprint(\"Number Features:\\t%d\"%training_data.shape[1])\nprint()\nprint(\"Testing Data\")\nprint(\"Number Patients:\\t%d\"%testing_data.shape[0])\nprint(\"Number Features:\\t%d\"%testing_data.shape[1])",
"1.2 Train Ridge Regression on training data\nThe first step is to train the ridge regression model on the training data with a 5-fold cross-validation with an internal line-search to find the optimal hyperparameter $\\alpha$. We will plot the training errors against the validation errors, to illustrate the effect of different $\\alpha$ values.",
"#Initialize different alpha values for the Ridge Regression model\nalphas = sp.logspace(-2,8,11)\nparam_grid = dict(alpha=alphas)\n\n#5-fold cross-validation (outer-loop)\nouter_cv = KFold(n_splits=5,shuffle=True,random_state=random_state)\n\n#Line-search to find the optimal alpha value (internal-loop)\n#Model performance is measured with the negative mean squared error\nline_search = GridSearchCV(Ridge(random_state=random_state,solver=\"cholesky\"),\n param_grid=param_grid,\n scoring=\"neg_mean_squared_error\",\n return_train_score=True)\n\n#Execute nested cross-validation and compute mean squared error\nscore = cross_val_score(line_search,X=training_data,y=training_target,cv=outer_cv,scoring=\"neg_mean_squared_error\")\n\nprint(\"5-fold nested cross-validation\")\nprint(\"Mean-Squared-Error:\\t\\t%.2f (-+ %.2f)\"%(score.mean()*(-1),score.std()))\nprint()\n\n#Estimate optimal alpha on the full training data\nline_search.fit(training_data,training_target)\noptimal_alpha = line_search.best_params_['alpha']\n\n#Visualize training and validation error for different alphas\nvisualized_variance_bias_tradeoff(alphas, line_search, optimal_alpha)",
"1.3 Train Ridge Regression with optimal $\\alpha$ and evaluate model in test data\nNext we retrain the ridge regresssion model with the optimal $\\alpha$ (from the last section). After re-training we will test the model on the not used test data to evaluate the model performance on unseen data.",
"#Train Ridge Regression on the full training data with optimal alpha\nmodel = Ridge(alpha=optimal_alpha,solver=\"cholesky\")\nmodel.fit(training_data,training_target)\n\n#Use trained model the predict new instances in test data\npredictions = model.predict(testing_data)\nprint(\"Prediction results on test data\")\nprint(\"MSE (test data, alpha=optimal):\\t%.2f \"%(mean_squared_error(testing_target,predictions)))\nprint(\"Optimal Alpha:\\t\\t\\t%.2f\"%optimal_alpha)\nprint()",
"<div style=\"text-align:justify\">\n Using 5-fold cross-validation on the training data leads to a mean squared error (MSE) of $MSE=587.09 \\pm 53.54$. On the test data we get an error of $MSE=699.56$ ($\\sim 26.5$ days). That indicates that the ridge regression model performs rather mediocre (even with hyperparameter optimization).\n One reason might be that the target variable (number of days until the drug shows a positive response) is insufficently described by the given features (genetic mutations).\n</div>\n\n2. Prediction of patients with slow and fast response times using a Support-Vector-Machine\n<div style=\"text-align:justify\">\n Due to the rather bad results with the ridge regession model the machine learning lab returned to the researchers to discuss potential issues. The researches than mentioned that it might not be necessarily important to predict the exact number of days. It might be even better to only predict if a patient reacts fast or slowly on the drug. Based on some prior experiments the researchers observed, that most of the patients showed severe side-effects after 50 days of treatment. Thus we can binarise the data, such that all patients below 50 days are put into class 0 and all others into class 1. This leads to a classical classification problem for which a support vector machine could be used. \n</div>\n\n2.1 Data Preprocessing",
"#Split data into training and testing splits, stratified by class-ratios\nstratiefied_splitter = StratifiedShuffleSplit(n_splits=1,test_size=0.2,random_state=42)\nfor train_index,test_index in stratiefied_splitter.split(data,binary_target):\n training_data = data[train_index,:]\n training_target = binary_target[train_index]\n testing_data = data[test_index,:]\n testing_target = binary_target[test_index]\n\nprint(\"Training Data\")\nprint(\"Number Patients:\\t\\t%d\"%training_data.shape[0])\nprint(\"Number Features:\\t\\t%d\"%training_data.shape[1])\nprint(\"Number Patients Class 0:\\t%d\"%(training_target==0).sum())\nprint(\"Number Patients Class 1:\\t%d\"%(training_target==1).sum())\nprint()\nprint(\"Testing Data\")\nprint(\"Number Patients:\\t\\t%d\"%testing_data.shape[0])\nprint(\"Number Features:\\t\\t%d\"%testing_data.shape[1])\nprint(\"Number Patients Class 0:\\t%d\"%(testing_target==0).sum())\nprint(\"Number Patients Class 1:\\t%d\"%(testing_target==1).sum())",
"2.2 Classification with a linear SVM",
"Cs = sp.logspace(-7, 1, 9)\nparam_grid = dict(C=Cs)\n\ngrid = GridSearchCV(SVC(kernel=\"linear\",random_state=random_state),\n param_grid=param_grid,\n scoring=\"accuracy\",\n n_jobs=4,\n return_train_score=True)\nouter_cv = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state)\n\n#Perform 5 Fold cross-validation with internal line-search and report average Accuracy\nscore = cross_val_score(grid,X=training_data,y=training_target,cv=outer_cv,scoring=\"accuracy\")\n\nprint(\"5-fold nested cross-validation on training data\")\nprint(\"Average(Accuracy):\\t\\t\\t%.2f (-+ %.2f)\"%(score.mean(),score.std()))\nprint()\ngrid.fit(training_data,training_target)\noptimal_C = grid.best_params_['C']\n\n#Plot variance bias tradeoff\nvisualized_variance_bias_tradeoff(Cs, grid, optimal_C,classification=True)\n\n#retrain model with optimal C and evaluate on test data\nmodel = SVC(C=optimal_C,random_state=random_state,kernel=\"linear\")\nmodel.fit(training_data,training_target)\npredictions = model.predict(testing_data)\nprint(\"Prediction with optimal C\")\nprint(\"Accuracy (Test data, C=Optimal):\\t%.2f \"%(accuracy_score(testing_target,predictions)))\nprint(\"Optimal C:\\t\\t\\t\\t%.2e\"%optimal_C)\nprint()\n\n#Compute ROC FPR, TPR and AUC\nfpr, tpr, _ = roc_curve(testing_target, model.decision_function(testing_data))\nroc_auc = auc(fpr, tpr)\n\n#Plot ROC Curve\npl.figure(figsize=(8,8))\npl.plot(fpr, tpr, color='darkorange',\n lw=3, label='ROC curve (AUC = %0.2f)' % roc_auc)\npl.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')\npl.xlim([-0.01, 1.0])\npl.ylim([0.0, 1.05])\npl.xlabel('False Positive Rate (1-Specificity)',fontsize=18)\npl.ylabel('True Positive Rate (Sensitivity)',fontsize=18)\npl.title('Receiver Operating Characteristic (ROC) Curve',fontsize=18)\npl.legend(loc=\"lower right\",fontsize=18)",
"2.3 Classification with SVM and RBF kernel",
"Cs = sp.logspace(-4, 4, 9)\ngammas = sp.logspace(-7, 1, 9)\nparam_grid = dict(C=Cs,gamma=gammas)\n\ngrid = GridSearchCV(SVC(kernel=\"rbf\",random_state=42),\n param_grid=param_grid,\n scoring=\"accuracy\",\n n_jobs=4,\n return_train_score=True)\n\nouter_cv = StratifiedKFold(n_splits=5,shuffle=True,random_state=random_state)\n\n#Perform 5 Fold cross-validation with internal line-search and report average Accuracy\nscore = cross_val_score(grid,X=training_data,y=training_target,cv=outer_cv,scoring=\"accuracy\")\n\nprint(\"5-fold nested cross-validation on training data\")\nprint(\"Average(Accuracy):\\t\\t\\t%.2f (-+ %.2f)\"%(score.mean(),score.std()))\nprint()\n\ngrid.fit(training_data,training_target)\noptimal_C = grid.best_params_['C']\noptimal_gamma = grid.best_params_['gamma']\n\n#Retrain and test\nmodel = SVC(C=optimal_C,gamma=optimal_gamma,random_state=42,kernel=\"rbf\")\nmodel.fit(training_data,training_target)\npredictions = model.predict(testing_data)\nprint(\"Prediction with optimal C and Gamma\")\nprint(\"Accuracy (Test Data, C=Optimal):\\t%.2f \"%(accuracy_score(testing_target,predictions)))\nprint(\"Optimal C:\\t\\t\\t\\t%.2e\"%optimal_C)\nprint(\"Optimal Gamma:\\t\\t\\t\\t%.2e\"%optimal_gamma)\nprint()\n\n#Compute ROC FPR, TPR and AUC\nfpr, tpr, _ = roc_curve(testing_target, model.decision_function(testing_data))\nroc_auc = auc(fpr, tpr)\n\n#Plot ROC Curve\npl.figure(figsize=(8,8))\npl.plot(fpr, tpr, color='darkorange',\n lw=3, label='ROC curve (AUC = %0.2f)' % roc_auc)\npl.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')\npl.xlim([-0.01, 1.0])\npl.ylim([0.0, 1.05])\npl.xlabel('False Positive Rate (1-Specificity)',fontsize=18)\npl.ylabel('True Positive Rate (Sensitivity)',fontsize=18)\npl.title('Receiver Operating Characteristic (ROC) Curve',fontsize=18)\npl.legend(loc=\"lower right\",fontsize=18)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
catalystcomputing/DSIoT-Python-sessions | Session3/code/03 Supervised Learning - 00 Python basics and Logistic Regression.ipynb | apache-2.0 | [
"# Here we introduce Data science by starting with a common regression model(logistic regression). The example uses the Iris Dataset\n# We also introduce Python as we develop the model. (The Iris dataset section is adatped from an example from Analyics Vidhya) \n# Python uses some libraries which we load first. \n# numpy is used for Array operations\n# mathplotlib is used for visualization\n\nimport numpy as np\nimport matplotlib as mp\nfrom sklearn import datasets\nfrom sklearn import metrics\nfrom sklearn.linear_model import LogisticRegression\n\ndataset = datasets.load_iris()\n\n# Display the data\ndataset\n\n# first we need to understand the data\n\nfrom IPython.display import Image\nfrom IPython.core.display import HTML\nImage(\"https://upload.wikimedia.org/wikipedia/commons/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg\")\n\nImage(\"http://www.opengardensblog.futuretext.com/wp-content/uploads/2016/01/iris-dataset-sample.jpg\")\n\n# In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y \n# and one or more explanatory variables (or independent variables) denoted X. There are differnt types of regressions that model the\n# relationship between the independent and the dependent variables \n\n# In linear regression, the relationships are modeled using linear predictor functions whose unknown model \n# parameters are estimated from the data. Such models are called linear models.\n\n# In mathematics, a linear combination is an expression constructed from a set of terms by multiplying \n# each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the \n# form ax + by, where a and b are constants)\n\n# Linear regression\nImage(\"https://www.biomedware.com/files/documentation/spacestat/Statistics/Multivariate_Modeling/Regression/regression_line.png\")\n\nImage(url=\"http://31.media.tumblr.com/e00b481257fac723638b32271e611a2f/tumblr_inline_ntui2ohGy41sfzcxh_500.gif\")",
"We use the <b> Iris dataset </b> \nhttps://en.m.wikipedia.org/wiki/Iris_flower_data_set\nThe data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other.\nlogistic regression\nWhile logistic regression gives each predictor (independent variable) a coefficient ‘b’ which measures its independent contribution to variations in the dependent variable, the dependent variable can only take on one of the two values: 0 or 1. What we want to predict from knowledge of relevant independent variables and coefficients is therefore not a numerical value of a dependent variable as in linear regression, but rather the probability (p) that it is 1 rather than 0 (belonging to one group rather than the other). \nThe outcome of the regression is not a prediction of a Y value, as in linear regression, but a probability of belonging to one of two conditions of Y, which can take on any value between 0 and 1 rather than just 0 and 1. \nThe crucial limitation of linear regression is that it cannot deal with dependent variable’s that are dichotomous and categorical. Many interesting variables are dichotomous: for example, consumers make a decision to buy or not buy, a product may pass or fail quality control, there are good or poor credit risks, an employee may be promoted or not. A range of regression techniques have been developed for analysing data with categorical dependent variables, including logistic regression and discriminant analysis.",
"model = LogisticRegression()\nmodel.fit(dataset.data, dataset.target)\n\nexpected = dataset.target\npredicted = model.predict(dataset.data)\n\n# classification metrics report builds a text report showing the main classification metrics\n# In pattern recognition and information retrieval with binary classification, \n# precision (also called positive predictive value) is the fraction of retrieved instances that are relevant, \n# while recall (also known as sensitivity) is the fraction of relevant instances that are retrieved. \n# Both precision and recall are therefore based on an understanding and measure of relevance. \n# Suppose a computer program for recognizing dogs in scenes from a video identifies 7 dogs in a scene containing 9 dogs \n# and some cats. If 4 of the identifications are correct, but 3 are actually cats, the program's precision is 4/7 \n# while its recall is 4/9.\n\n# In statistical analysis of binary classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy. \n# It considers both the precision p and the recall r of the test to compute the score: \n# p is the number of correct positive results divided by the number of all positive results, \n# and r is the number of correct positive results divided by the number of positive results that should have been returned. \n# The F1 score can be interpreted as a weighted average of the precision and recall\n\nprint(metrics.classification_report(expected, predicted))\n\n# Confusion matrix \n# https://en.wikipedia.org/wiki/Confusion_matrix\n# In the field of machine learning, a confusion matrix is a table layout that allows visualization of the performance \n# of an algorithm, typically a supervised learning one. \n# Each column of the matrix represents the instances in a predicted class \n# while each row represents the instances in an actual class (or vice-versa)\n\n\n# If a classification system has been trained to distinguish between cats, dogs and rabbits, \n# a confusion matrix will summarize the results of testing the algorithm for further inspection. \n# Assuming a sample of 27 animals — 8 cats, 6 dogs, and 13 rabbits, the resulting confusion matrix \n# could look like the table below:\n\nImage(\"http://www.opengardensblog.futuretext.com/wp-content/uploads/2016/01/confusion-matrix.jpg\")\n\n# In this confusion matrix, of the 8 actual cats, the system predicted that three were dogs, \n# and of the six dogs, it predicted that one was a rabbit and two were cats. \n# We can see from the matrix that the system in question has trouble distinguishing between cats and dogs, \n# but can make the distinction between rabbits and other types of animals pretty well. \n# All correct guesses are located in the diagonal of the table, so it's easy to visually \n# inspect the table for errors, as they will be represented by values outside the diagonal.\n\nprint (metrics.confusion_matrix(expected, predicted))\n\nimport pandas as pd",
"We typically need the following libraries:\n<b> NumPy </b> Numerical Python - mainly used for n-dimensional array(which is absent in traditional Python).\nAlso contains basic linear algebra functions, Fourier transforms, advanced random number capabilities and tools for integration with other low level languages like Fortran, C and C++\n<b>SciPy</b> Scientific Python (built on NumPy). Contains a variety of high level science and engineering modules like discrete Fourier transform, Linear Algebra, Optimization and Sparse matrices.\n<b> Matplotlib </b> for plotting vast variety of graphs ex histograms, line plots and heat maps.\n<b> Pandas </b> for structured data operations and data manipulation. It is extensively used for pre processing. \n<b> Scikit Learn </b> for machine learning. Built on NumPy, SciPy and matplotlib, this library contains a lot of effiecient tools for machine learning and statistical modeling including classification, regression, clustering and dimensionality reduction.\nStatsmodels for statistical modeling. Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests. An extensive list of descriptive statistics, statistical tests, plotting functions, and result statistics are available for different types of data and each estimator.\nSeaborn for statistical data visualization. Seaborn is a library for making attractive and informative statistical graphics in Python. It is based on matplotlib. Seaborn aims to make visualization a central part of exploring and understanding data.\n<b>Additional libraries, you might need:</b>\nurllib for web based operations like opening URLs and performing operations\nos for Operating system and file operations\nnetworkx and igraph for graph based data manipulations\nregular expressions for finding patterns in text data\nBeautifulSoup for scrapping web",
"integers_list = [1,3,5,7,9] # lists are seperated by square brackets\nprint(integers_list)\ntuple_integers = 1,3,5,7,9 #tuples are seperated by commas and are immutable\nprint(tuple_integers)\ntuple_integers[0] = 11\n\n#Python strings can be in single or double quotes\nstring_ds = \"Data Science\"\n\nstring_iot = \"Internet of Things\"\n\nstring_dsiot = string_ds + \" for \" + string_iot\n\nprint (string_dsiot)\n\nlen(string_dsiot)\n\n# sets are unordered collections with no duplicate elements\nprog_languages = set(['Python', 'Java', 'Scala'])\nprog_languages\n\n# Dictionaies are comma seperated key value pairs seperated by braces\ndict_marks = {'John':95, 'Mark': 100, 'Anna': 99}\n\ndict_marks['John']"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
kmunve/APS | aps/notebooks/ml_varsom/linear_regression.ipynb | mit | [
"LINEAR REGRESSION\n\nis the simplest machine learning model\nis used for finding linear relationship between target and one or more predictors\nthere are two types of linear regression:\nSimple (one feature)\nMultiple (two or more features) \n\n\nThe main idea of linear regression is to obtain a line that best fits the data. \nThat means finding the one line for which total prediction error (for all data points) are as small as possible. (Error is the distance between actual values and values predicted using regression line.)\n\nFirst linear regression model\nFirst we'll create a simple linear regression model - we saw that LSTAT and RM are two variables that are highly correlated with target. We will see how good predicteions we can get with just one feature - and how to decide which one of these features is better for estimating median house price? \nStep one is to divide our dataset into training and testing part - it is important to test our model against data that has never been used for training – that tells us how the model might perform against data that it has not yet seen and it is meant to be representative of how the model might perform in the real world.\nThat's why we will use only 70% of our data to train the model and then we'll use the rest of data (30%) to evaluate our model.",
"import pandas as pd\nimport numpy as np\nimport json\nimport graphviz\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model\n\npd.set_option(\"display.max_rows\",6)\n\n%matplotlib inline\n\ndf_data = pd.read_csv('varsom_ml_preproc.csv', index_col=0)\n\nX = df_data.filter(['mountain_weather_wind_speed_num', 'mountain_weather_precip_most_exposed'])#, 'ZN', 'INDUS', 'CHAS', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT'])\ny = df_data['danger_level']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 222, test_size = 0.3) # split the data\n\nlm = linear_model.LinearRegression()\nmodel_lr = lm.fit(X_train, y_train) # train the model\n\npredictions_lr = model_lr.predict(X_test) # predict values for test dataset\n\nprint(f'{model_lr.intercept_:.2f}, {model_lr.coef_}')\n\nplt.scatter(y, X['mountain_weather_precip_most_exposed'], c=X['mountain_weather_wind_speed_num'])\n\nprint(\"Our third model: \\n \\ny = {0:.2f}\".format(model_lr.intercept_) + \" {0:.2f}\".format(model_lr.coef_[0]) + \" * CRIM\"\n + \" + {0:.2f}\".format(model_lr.coef_[1]) + \" * ZN\" + \" + {0:.2f}\".format(model_lr.coef_[2]) + \" * INDUS\"\n + \" + {0:.2f}\".format(model_lr.coef_[3]) + \" + * CHAS\" + \" {0:.2f}\".format(model_lr.coef_[4]) + \" * RM\" \n + \" + {0:.2f}\".format(model_lr.coef_[5]) + \" * AGE\" + \" + {0:.2f}\".format(model_lr.coef_[6]) + \" * RAD\"\n + \"\\n {0:.2f}\".format(model_lr.coef_[7]) + \" * TAX\" + \" {0:.2f}\".format(model_lr.coef_[8]) + \" * PTRATIO\"\n + \" + {0:.2f}\".format(model_lr.coef_[9]) + \" * B\" + \" {0:.2f}\".format(model_lr.coef_[10]) + \" * LSTAT\")\n\nfrom sklearn.model_selection import train_test_split\n\nX_train_1, X_test_1, y_train_1, y_test_1 = train_test_split(df_data, random_state = 222, test_size = 0.3)\n\n # we are importing machine learning model we'll use\n\nlm1 = linear_model.LinearRegression()\n\nmodel_1 = lm1.fit(X_train_1, y_train_1) # we have just created a model! :) \n\n# as we said before, the model in this simple case is a line that has two parameters\n\n# so we ask: what are our estimated parameters? (alpha and beta?)\n\nprint(\"Our first model: y = {0:.2f}\".format(model_1.intercept_) + \" {0:.2f}\".format(model_1.coef_[0]) + \" * x\")\n\nprint(\"Intercept: {0:.2f}\".format(model_1.intercept_))\nprint(\"Extra price per extra unit of LSTAT: {0:.2f}\".format(model_1.coef_[0]))\n\n# now we'd like is to predict house price for test data (data that model hasn't seen yet)\n\npredictions_1 = model_1.predict(X_test_1)\n\npredictions_1[0:5]\n\n# let's visualize our regression line\n\nplt.plot(X_test_1, y_test_1, 'o')\nplt.plot(X_test_1, predictions_1, color = 'red')\nplt.xlabel('% of lower status of the population')\nplt.ylabel('Median home value in $1000s')",
"Evaluation of your model",
"# let's try to visualize the estimated and real house values for all data points in test dataset\n\n\nfig, ax = plt.subplots(figsize=(15, 5))\n\nplt.subplot(1, 2, 1)\nplt.plot(X_test_1,predictions_1, 'o')\nplt.xlabel('% of lower status of the population')\nplt.ylabel('Estimated home value in $1000s')\n\n\nplt.subplot(1, 2, 2)\nplt.plot(X_test_1,y_test_1, 'o')\nplt.xlabel('% of lower status of the population')\nplt.ylabel('Median home value in $1000s')\n\nplt.tight_layout()\n\nplt.show()",
"To evaulate the performance of the model, we can compute the error between the real house value (y_test_1) and the predicted values we got form our model (predictions_1).\nOne such metric is called the residual sum of squares (RSS):",
"# first we define our RSS function\n\ndef RSS(y, p):\n return sum((y - p)**2)\n\n# then we calculate RSS: \n\nRSS_model_1 = RSS(y_test_1, predictions_1)\n\nRSS_model_1",
"This number doesn't tell us much - is 7027 good? Is it bad? \nUnfortunatelly, there is no right answer - it depends on the data. Sometimes RSS of 7000 indicates very bad model, and sometimes 7000 is as good as it gets. \nThat's why we use RSS when comparing models - the model with lowest RSS is the best. \nThe other metrics we can use to evaluate our model is called coefficient of determination. \nIt's denoted as $R^{2}$ and it is the proportion of the variance in the dependent variable that is predictable from the independent variable(s).\nTo calculate it, we use .score function in Python.",
"lm1.score(X_test_1,y_test_1)",
"This means that only 51% of variability is explained by our model. \nIn general, $R^{2}$ is a number between 0 and 1 - the closer it is to 1, the better the model is. \nSince we got only 0.51, we can conclude that this is not a very good model. \nBut we can try to build a model with second variable - RM - and check if we can get better result. \nMore linear regression models",
"# we just repeat everything as before \n\nX_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(boston_data[['RM']], boston_data.MEDV, \n random_state = 222, test_size = 0.3) # split the data\n\nlm = linear_model.LinearRegression()\nmodel_2 = lm.fit(X_train_2, y_train_2) # train the model\n\npredictions_2 = model_2.predict(X_test_2) # predict values for test dataset\n\nprint(\"Our second model: y = {0:.2f}\".format(model_2.intercept_) + \" + {0:.2f}\".format(model_2.coef_[0]) + \" * x\")\n\n# let's visualize our regression line\n\nplt.plot(X_test_2, y_test_2, 'o')\nplt.plot(X_test_2, predictions_2, color = 'red')\nplt.xlabel('Average number of rooms')\nplt.ylabel('Median home value in $1000s')\n\n# let's calculate RSS and R^2\n\nprint (RSS(y_test_2, predictions_2)) \n\nprint (lm.score(X_test_2, y_test_2))\n\n# now we can compare our models \n\nprint(\"RSS for first model is {0:.2f}\".format(RSS(y_test_1, predictions_1)) \n + \", and RSS for second model is {0:.2f}\".format(RSS(y_test_2, predictions_2)) + '\\n' + '\\n' \n + \"R^2 for first model is {0:.2f}\".format(lm1.score(X_test_1, y_test_1)) \n + \", and R^2 for second model is {0:.2f}\".format(lm.score(X_test_2, y_test_2)))",
"Since RSS is lower for second modell (and lower the RSS, better the model) and $R^{2}$ is higher for second modell (and we want $R^{2}$ as close to 1 as possible), both measures tells us that second model is better.\nHowever, difference is not big - out second model performs slightly better, but we still can't say it fits our data well. \nNext thing we can try is to build a model with all features we have available and see if using multiple features improves performace of the model.",
"X = boston_data[['CRIM', 'ZN', 'INDUS', 'CHAS', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']]\ny = boston_data[\"MEDV\"]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 222, test_size = 0.3) # split the data\n\nlm = linear_model.LinearRegression()\nmodel_lr = lm.fit(X_train, y_train) # train the model\n\npredictions_lr = model_lr.predict(X_test) # predict values for test dataset\n\nprint(\"Our third model: \\n \\ny = {0:.2f}\".format(model_lr.intercept_) + \" {0:.2f}\".format(model_lr.coef_[0]) + \" * CRIM\"\n + \" + {0:.2f}\".format(model_lr.coef_[1]) + \" * ZN\" + \" + {0:.2f}\".format(model_lr.coef_[2]) + \" * INDUS\"\n + \" + {0:.2f}\".format(model_lr.coef_[3]) + \" + * CHAS\" + \" {0:.2f}\".format(model_lr.coef_[4]) + \" * RM\" \n + \" + {0:.2f}\".format(model_lr.coef_[5]) + \" * AGE\" + \" + {0:.2f}\".format(model_lr.coef_[6]) + \" * RAD\"\n + \"\\n {0:.2f}\".format(model_lr.coef_[7]) + \" * TAX\" + \" {0:.2f}\".format(model_lr.coef_[8]) + \" * PTRATIO\"\n + \" + {0:.2f}\".format(model_lr.coef_[9]) + \" * B\" + \" {0:.2f}\".format(model_lr.coef_[10]) + \" * LSTAT\")\n\n# let's evaluate the model\n\nprint(\"RSS for the third model is {0:.2f}\".format(RSS(y_test, predictions_lr)) + '\\n' + '\\n' \n + \"R^2 for the third model is {0:.2f}\".format(lm.score(X_test, y_test)) )",
"Now we can see improvement - RSS is 2000 less than for second model, and $R^{2}$ is 0.24 higher than for second model.\nSo out of the three models we tested, we can see that third one (with multiple features) is performing the best. \nOf course, linear regression is not the only method we can use to solve this problems - there are more advanced methods like decision trees, random forests and gradient boosted trees."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/cas/cmip6/models/sandbox-3/ocean.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: CAS\nSource ID: SANDBOX-3\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cas', 'sandbox-3', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
EmuKit/emukit | notebooks/Emukit-tutorial-bayesian-optimization-external-objective-evaluation.ipynb | apache-2.0 | [
"External objective function evaluation in Bayesian optimization with Emukit\nOverview\nThe Bayesian optimization component of Emukit allows for objective functions to be evaluated externally. If users opt for this approach, they can use Emukit to suggest the next point for evaluation, and then evaluate the objective function themselves as well as decide on the stopping criteria of the evaluation loop. This notebook shall demonstrate how to carry out this procedure. The main benefit of using Emukit in this manner is that you can externally manage issues such as parallelizing the computation of the objective function, which is convenient in many scenarios.",
"### General imports\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors as mcolors\n%pylab inline\n\n### --- Figure config\ncolors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)\nLEGEND_SIZE = 15\nTITLE_SIZE = 25\nAXIS_SIZE = 15",
"Navigation\n\n\nHandling the loop yourself\n\n\nComparing with the high level API\n\n\n1. Handling the loop yourself\nFor the purposes of this notebook we are going to use one of the predefined objective functions that come with GPyOpt. However, the key thing to realize is that the function could be anything (e.g., the results of a physical experiment). As long as users are able to externally evaluate the suggested points and provide GPyOpt with the results, the library has options for setting the objective function's origin.",
"from emukit.test_functions import forrester_function\nfrom emukit.core.loop import UserFunctionWrapper\n\ntarget_function, space = forrester_function()",
"First we are going to run the optimization loop outside of Emukit, and only use the library to get the next point at which to evaluate our function.\nThere are two things to pay attention to when creating the main optimization object:\n\n\nSince we recreate the object anew for each iteration, we need to pass data about all previous iterations to it.\n\n\nThe model gets optimized from the scratch in every iteration but the parameters of the model could be saved and used to update the state (TODO).\n\n\nWe start with three initial points at which we evaluate the objective function.",
"X = np.array([[0.1],[0.6],[0.9]])\nY = target_function(X)",
"And we run the loop externally.",
"from emukit.examples.gp_bayesian_optimization.single_objective_bayesian_optimization import GPBayesianOptimization\nfrom emukit.core.loop import UserFunctionResult\n\nnum_iterations = 10\n\nbo = GPBayesianOptimization(variables_list=space.parameters, X=X, Y=Y)\nresults = None\n\nfor _ in range(num_iterations):\n X_new = bo.get_next_points(results)\n Y_new = target_function(X_new)\n results = [UserFunctionResult(X_new[0], Y_new[0])]\n\nX = bo.loop_state.X\nY = bo.loop_state.Y",
"Let's visualize the results. The size of the marker denotes the order in which the point was evaluated - the bigger the marker the later was the evaluation.",
"x = np.arange(0.0, 1.0, 0.01)\ny = target_function(x)\n\nplt.figure()\nplt.plot(x, y)\nfor i, (xs, ys) in enumerate(zip(X, Y)):\n plt.plot(xs, ys, 'ro', markersize=10 + 10 * (i+1)/len(X))\n\nX",
"2. Comparing with the high level API\nTo compare the results, let's now execute the whole loop with Emukit.",
"X = np.array([[0.1],[0.6],[0.9]])\nY = target_function(X)\n\nbo_loop = GPBayesianOptimization(variables_list=space.parameters, X=X, Y=Y)\nbo_loop.run_optimization(target_function, num_iterations)",
"Now let's print the results of this optimization and compare it to the previous external evaluation run. As before, the size of the marker corresponds to its evaluation order.",
"x = np.arange(0.0, 1.0, 0.01)\ny = target_function(x)\n\nplt.figure()\nplt.plot(x, y)\nfor i, (xs, ys) in enumerate(zip(bo_loop.model.model.X, bo_loop.model.model.Y)):\n plt.plot(xs, ys, 'ro', markersize=10 + 10 * (i+1)/len(bo_loop.model.model.X))",
"It can be observed that we obtain the same result as before but now the objective function is handled internally."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cdt15/lingam | examples/VARMALiNGAM.ipynb | mit | [
"VARMALiNGAM\nImport and settings\nIn this example, we need to import numpy, pandas, and graphviz in addition to lingam.",
"import numpy as np\nimport pandas as pd\nimport graphviz\nimport lingam\nfrom lingam.utils import make_dot, print_causal_directions, print_dagc\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nprint([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])\n\nnp.set_printoptions(precision=3, suppress=True)\nnp.random.seed(0)",
"Test data\nWe create test data consisting of 5 variables.",
"psi0 = np.array([\n [ 0. , 0. , -0.25, 0. , 0. ],\n [-0.38, 0. , 0.14, 0. , 0. ],\n [ 0. , 0. , 0. , 0. , 0. ],\n [ 0.44, -0.2 , -0.09, 0. , 0. ],\n [ 0.07, -0.06, 0. , 0.07, 0. ]\n])\nphi1 = np.array([\n [-0.04, -0.29, -0.26, 0.14, 0.47],\n [-0.42, 0.2 , 0.1 , 0.24, 0.25],\n [-0.25, 0.18, -0.06, 0.15, 0.18],\n [ 0.22, 0.39, 0.08, 0.12, -0.37],\n [-0.43, 0.09, -0.23, 0.16, 0.25]\n])\ntheta1 = np.array([\n [ 0.15, -0.02, -0.3 , -0.2 , 0.21],\n [ 0.32, 0.12, -0.11, 0.03, 0.42],\n [-0.07, -0.5 , 0.03, -0.27, -0.21],\n [-0.17, 0.35, 0.25, 0.24, -0.25],\n [ 0.09, 0.4 , 0.41, 0.24, -0.31]\n])\ncausal_order = [2, 0, 1, 3, 4]\n\n# data generated from psi0 and phi1 and theta1, causal_order\nX = np.loadtxt('data/sample_data_varma_lingam.csv', delimiter=',')",
"Causal Discovery\nTo run causal discovery, we create a VARMALiNGAM object and call the fit method.",
"model = lingam.VARMALiNGAM(order=(1, 1), criterion=None)\nmodel.fit(X)",
"Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.",
"model.causal_order_",
"Also, using the adjacency_matrices_ properties, we can see the adjacency matrix as a result of the causal discovery.",
"# psi0\nmodel.adjacency_matrices_[0][0]\n\n# psi1\nmodel.adjacency_matrices_[0][1]\n\n# omega0\nmodel.adjacency_matrices_[1][0]",
"Using DirectLiNGAM for the residuals_ properties, we can calculate psi0 matrix.",
"dlingam = lingam.DirectLiNGAM()\ndlingam.fit(model.residuals_)\ndlingam.adjacency_matrix_",
"We can draw a causal graph by utility funciton",
"labels = ['y0(t)', 'y1(t)', 'y2(t)', 'y3(t)', 'y4(t)', 'y0(t-1)', 'y1(t-1)', 'y2(t-1)', 'y3(t-1)', 'y4(t-1)']\nmake_dot(np.hstack(model.adjacency_matrices_[0]), lower_limit=0.3, ignore_shape=True, labels=labels)",
"Independence between error variables\nTo check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.",
"p_values = model.get_error_independence_p_values()\nprint(p_values)",
"Bootstrap\nBootstrapping\nWe call bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap sampling.",
"model = lingam.VARMALiNGAM()\nresult = model.bootstrap(X, n_sampling=100)",
"Causal Directions\nSince BootstrapResult object is returned, we can get the ranking of the causal directions extracted by get_causal_direction_counts() method. In the following sample code, n_directions option is limited to the causal directions of the top 8 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.4 or more.",
"cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.4, split_by_causal_effect_sign=True)",
"We can check the result by utility function.",
"labels = ['y0(t)', 'y1(t)', 'y2(t)', 'y3(t)', 'y4(t)', 'y0(t-1)', 'y1(t-1)', 'y2(t-1)', 'y3(t-1)', 'y4(t-1)', 'e0(t-1)', 'e1(t-1)', 'e2(t-1)', 'e3(t-1)', 'e4(t-1)']\nprint_causal_directions(cdc, 100, labels=labels)",
"Directed Acyclic Graphs\nAlso, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, n_dags option is limited to the dags of the top 3 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.3 or more.",
"dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.3, split_by_causal_effect_sign=True)",
"We can check the result by utility function.",
"print_dagc(dagc, 100, labels=labels)",
"Probability\nUsing the get_probabilities() method, we can get the probability of bootstrapping.",
"prob = result.get_probabilities(min_causal_effect=0.1)\nprint('Probability of psi0:\\n', prob[0])\nprint('Probability of psi1:\\n', prob[1])\nprint('Probability of omega1:\\n', prob[2])",
"Total Causal Effects\nUsing the get_total causal_effects() method, we can get the list of total causal effect. The total causal effects we can get are dictionary type variable.\nWe can display the list nicely by assigning it to pandas.DataFrame. Also, we have replaced the variable index with a label below.",
"causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)\ndf = pd.DataFrame(causal_effects)\n\ndf['from'] = df['from'].apply(lambda x : labels[x])\ndf['to'] = df['to'].apply(lambda x : labels[x])\ndf",
"We can easily perform sorting operations with pandas.DataFrame.",
"df.sort_values('effect', ascending=False).head()",
"And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards y2(t).",
"df[df['to']=='y2(t)'].head()",
"Because it holds the raw data of the causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.",
"import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\n%matplotlib inline\n\nfrom_index = 5 # index of y0(t-1). (index:0)+(n_features:5)*(lag:1) = 5\nto_index = 2 # index of y2(t). (index:2)+(n_features:5)*(lag:0) = 2\nplt.hist(result.total_effects_[:, to_index, from_index])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mdiaz236/DeepLearningFoundations | tensorboard/Anna_KaRNNa.ipynb | mit | [
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]",
"I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n \n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n\n\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n \n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)\n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n \n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN putputs to a softmax layer and calculate the cost\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n\n preds = tf.nn.softmax(logits, name='predictions')\n \n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n\n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"Write out the graph for TensorBoard",
"model = build_rnn(len(vocab),\n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n file_writer = tf.summary.FileWriter('./logs/1', sess.graph)",
"Training\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.",
"!mkdir -p checkpoints/anna\n\nepochs = 1\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')",
"Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/ja/lite/performance/post_training_integer_quant_16x8.ipynb | apache-2.0 | [
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"int16 アクティベーションによるトレーニング後の整数量子化\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a> </td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_integer_quant_16x8.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a> </td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_integer_quant_16x8.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a> </td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lite/performance/post_training_integer_quant_16x8.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\"> ノートブックをダウンロード</a></td>\n</table>\n\n概要\n現在、TensorFlow Lite はモデルを TensorFlow から TensorFlow Lite のフラットバッファ形式に変換する際に、アクティベーションを 16 ビット整数値に、重みを 8 ビット整数値に変換することをサポートしています。このモードは「16x8 量子化モード」と呼ばれています。このモードでは、アクティベーションが量子化の影響を受けやすい場合に、量子化モデルの精度を大幅に向上させ、モデルサイズを約 1/3〜1/4 に縮小することができます。また、この完全に量子化されたモデルは、整数のみのハードウェアアクセラレータで使用できます。\n次のようなモデルでは、このモードのトレーニング後の量子化が有用です。\n\n超解像\nノイズキャンセリングやビームフォーミングなどのオーディオ信号処理\n画像ノイズ除去\n単一の画像からの HDR 再構成\n\nこのチュートリアルでは、MNIST モデルを新規にトレーニングし、TensorFlow でその精度を確認してから、このモードを使用してモデルを Tensorflow Lite フラットバッファに変換します。最後に、変換されたモデルの精度を確認し、元の float32 モデルと比較します。この例は、このモードの使用法を示しており、TensorFlow Liteで利用可能な他の量子化手法と比較した場合の利点は示していません。\nMNIST モデルの構築\nセットアップ",
"import logging\nlogging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n\nimport tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport pathlib",
"16x8 量子化モードが使用可能であることを確認します",
"tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8",
"モデルをトレーニングしてエクスポートする",
"# Load MNIST dataset\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 to 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\n# Define the model architecture\nmodel = keras.Sequential([\n keras.layers.InputLayer(input_shape=(28, 28)),\n keras.layers.Reshape(target_shape=(28, 28, 1)),\n keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n keras.layers.MaxPooling2D(pool_size=(2, 2)),\n keras.layers.Flatten(),\n keras.layers.Dense(10)\n])\n\n# Train the digit classification model\nmodel.compile(optimizer='adam',\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\nmodel.fit(\n train_images,\n train_labels,\n epochs=1,\n validation_data=(test_images, test_labels)\n)",
"この例では、モデルを 1 エポックでトレーニングしたので、トレーニングの精度は 96% 以下になります。\nTensorFlow Lite モデルに変換する\nPython TFLiteConverter を使用して、トレーニング済みモデルを TensorFlow Lite モデルに変換できるようになりました。\n次に、TFliteConverterを使用してモデルをデフォルトの float32 形式に変換します。",
"converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = converter.convert()",
".tfliteファイルに書き込みます。",
"tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\ntflite_models_dir.mkdir(exist_ok=True, parents=True)\n\ntflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\ntflite_model_file.write_bytes(tflite_model)",
"モデルを 16x8 量子化モードに量子化するには、最初にoptimizationsフラグを設定してデフォルトの最適化を使用します。次に、16x8 量子化モードがターゲット仕様でサポートされる必要な演算であることを指定します。",
"converter.optimizations = [tf.lite.Optimize.DEFAULT]\nconverter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]",
"int8 トレーニング後の量子化の場合と同様に、コンバーターオプションinference_input(output)_typeを tf.int16 に設定することで、完全整数量子化モデルを生成できます。\nキャリブレーションデータを設定します。",
"mnist_train, _ = tf.keras.datasets.mnist.load_data()\nimages = tf.cast(mnist_train[0], tf.float32) / 255.0\nmnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)\ndef representative_data_gen():\n for input_value in mnist_ds.take(100):\n # Model has only one input so each data point has one element.\n yield [input_value]\nconverter.representative_dataset = representative_data_gen",
"最後に、通常どおりにモデルを変換します。デフォルトでは、変換されたモデルは呼び出しの便宜上、浮動小数点の入力と出力を引き続き使用します。",
"tflite_16x8_model = converter.convert()\ntflite_model_16x8_file = tflite_models_dir/\"mnist_model_quant_16x8.tflite\"\ntflite_model_16x8_file.write_bytes(tflite_16x8_model)",
"生成されるファイルのサイズが約1/3であることに注目してください。",
"!ls -lh {tflite_models_dir}",
"TensorFlow Lite モデルを実行する\nPython TensorFlow Lite インタープリタを使用して TensorFlow Lite モデルを実行します。\nモデルをインタープリタに読み込む",
"interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\ninterpreter.allocate_tensors()\n\ninterpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))\ninterpreter_16x8.allocate_tensors()",
"1 つの画像でモデルをテストする",
"test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter.get_input_details()[0][\"index\"]\noutput_index = interpreter.get_output_details()[0][\"index\"]\n\ninterpreter.set_tensor(input_index, test_image)\ninterpreter.invoke()\npredictions = interpreter.get_tensor(output_index)\n\nimport matplotlib.pylab as plt\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)\n\ntest_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter_16x8.get_input_details()[0][\"index\"]\noutput_index = interpreter_16x8.get_output_details()[0][\"index\"]\n\ninterpreter_16x8.set_tensor(input_index, test_image)\ninterpreter_16x8.invoke()\npredictions = interpreter_16x8.get_tensor(output_index)\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)",
"モデルを評価する",
"# A helper function to evaluate the TF Lite model using \"test\" dataset.\ndef evaluate_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for test_image in test_images:\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n interpreter.set_tensor(input_index, test_image)\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n # Compare prediction results with ground truth labels to calculate accuracy.\n accurate_count = 0\n for index in range(len(prediction_digits)):\n if prediction_digits[index] == test_labels[index]:\n accurate_count += 1\n accuracy = accurate_count * 1.0 / len(prediction_digits)\n\n return accuracy\n\nprint(evaluate_model(interpreter))",
"16x8 量子化モデルで評価を繰り返します。",
"# NOTE: This quantization mode is an experimental post-training mode,\n# it does not have any optimized kernels implementations or\n# specialized machine learning hardware accelerators. Therefore,\n# it could be slower than the float interpreter.\nprint(evaluate_model(interpreter_16x8))",
"この例では、モデルの精度を低下することなく、16x8 に量子化しました。サイズは 1/3 に縮小されました。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb | apache-2.0 | [
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex SDK: Train & deploy a TensorFlow model with hosted runtimes (aka pre-built containers)\nInstallation\nInstall the latest (preview) version of Vertex SDK.",
"! pip3 install -U google-cloud-aiplatform --user",
"Install the Google cloud-storage library as well.",
"! pip3 install google-cloud-storage",
"Restart the Kernel\nOnce you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"import os\n\nif not os.getenv(\"AUTORUN\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in Google Cloud Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your GCP account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nNote: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.",
"import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Vertex, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.",
"BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION gs://$BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al gs://$BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex SDK\nImport the Vertex SDK into our Python environment.",
"import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Value",
"Vertex AI constants\nSetup up the following constants for Vertex AI:\n\nAPI_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.\nAPI_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction.\nPARENT: The Vertex AI location root path for dataset, model and endpoint resources.",
"# API Endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex AI location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION",
"Clients\nThe Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).\nYou will use several clients in this tutorial, so set them all up upfront.\n\nDataset Service for managed datasets.\nModel Service for managed models.\nPipeline Service for training.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving. Note: Prediction has a different service endpoint.",
"# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"model\"] = create_model_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\nclients[\"job\"] = create_job_client()\n\nfor client in clients.items():\n print(client)",
"Prepare a trainer script\nPackage assembly",
"! rm -rf cifar\n! mkdir cifar\n! touch cifar/README.md\n\nsetup_cfg = \"[egg_info]\\n\\\ntag_build =\\n\\\ntag_date = 0\"\n! echo \"$setup_cfg\" > cifar/setup.cfg\n\nsetup_py = \"import setuptools\\n\\\n# Requires TensorFlow Datasets\\n\\\nsetuptools.setup(\\n\\\n install_requires=[\\n\\\n 'tensorflow_datasets==1.3.0',\\n\\\n ],\\n\\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > cifar/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\\nName: Custom Training CIFAR-10\\n\\\nVersion: 0.0.0\\n\\\nSummary: Demonstration training script\\n\\\nHome-page: www.google.com\\n\\\nAuthor: Google\\n\\\nAuthor-email: [email protected]\\n\\\nLicense: Public\\n\\\nDescription: Demo\\n\\\nPlatform: Vertex AI\"\n! echo \"$pkg_info\" > cifar/PKG-INFO\n\n! mkdir cifar/trainer\n! touch cifar/trainer/__init__.py",
"Task.py contents",
"%%writefile cifar/trainer/task.py\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\n\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default='/tmp/saved_model', type=str, help='Model dir.')\nparser.add_argument('--lr', dest='lr',\n default=0.01, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=200, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef make_datasets_unbatched():\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\nNUM_WORKERS = strategy.num_replicas_in_sync\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(args.model_dir)\n",
"Store training script on your Cloud Storage bucket",
"! rm -f cifar.tar cifar.tar.gz\n! tar cvf cifar.tar cifar\n! gzip cifar.tar\n! gsutil cp cifar.tar.gz gs://$BUCKET_NAME/trainer_cifar.tar.gz",
"Train a model\nprojects.locations.customJobs.create\nRequest",
"JOB_NAME = \"custom_job_TF_\" + TIMESTAMP\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\"\nTRAIN_NGPU = 1\nTRAIN_GPU = aip.AcceleratorType.NVIDIA_TESLA_K80\n\nworker_pool_specs = [\n {\n \"replica_count\": 1,\n \"machine_spec\": {\n \"machine_type\": \"n1-standard-4\",\n \"accelerator_type\": TRAIN_GPU,\n \"accelerator_count\": TRAIN_NGPU,\n },\n \"python_package_spec\": {\n \"executor_image_uri\": TRAIN_IMAGE,\n \"package_uris\": [\"gs://\" + BUCKET_NAME + \"/trainer_cifar.tar.gz\"],\n \"python_module\": \"trainer.task\",\n \"args\": [\n \"--model-dir=\" + \"gs://{}/{}\".format(BUCKET_NAME, JOB_NAME),\n \"--epochs=\" + str(20),\n \"--steps=\" + str(100),\n \"--distribute=\" + \"single\",\n ],\n },\n }\n]\n\ntraining_job = {\n \"display_name\": JOB_NAME,\n \"job_spec\": {\"worker_pool_specs\": worker_pool_specs},\n}\n\nprint(\n MessageToJson(\n aip.CreateCustomJobRequest(parent=PARENT, custom_job=training_job).__dict__[\n \"_pb\"\n ]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"customJob\": {\n \"displayName\": \"custom_job_TF_20210227173057\",\n \"jobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\",\n \"acceleratorType\": \"NVIDIA_TESLA_K80\",\n \"acceleratorCount\": 1\n },\n \"replicaCount\": \"1\",\n \"pythonPackageSpec\": {\n \"executorImageUri\": \"gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210227173057/trainer_cifar.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057\",\n \"--epochs=20\",\n \"--steps=100\",\n \"--distribute=single\"\n ]\n }\n }\n ]\n }\n }\n}\nCall",
"request = clients[\"job\"].create_custom_job(parent=PARENT, custom_job=training_job)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/customJobs/2970106362064797696\",\n \"displayName\": \"custom_job_TF_20210227173057\",\n \"jobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\",\n \"acceleratorType\": \"NVIDIA_TESLA_K80\",\n \"acceleratorCount\": 1\n },\n \"replicaCount\": \"1\",\n \"diskSpec\": {\n \"bootDiskType\": \"pd-ssd\",\n \"bootDiskSizeGb\": 100\n },\n \"pythonPackageSpec\": {\n \"executorImageUri\": \"gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210227173057/trainer_cifar.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057\",\n \"--epochs=20\",\n \"--steps=100\",\n \"--distribute=single\"\n ]\n }\n }\n ]\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-27T17:31:04.494716Z\",\n \"updateTime\": \"2021-02-27T17:31:04.494716Z\"\n}",
"# The full unique ID for the custom training job\ncustom_training_id = request.name\n# The short numeric ID for the custom training job\ncustom_training_short_id = custom_training_id.split(\"/\")[-1]\n\nprint(custom_training_id)",
"projects.locations.customJobs.get\nCall",
"request = clients[\"job\"].get_custom_job(name=custom_training_id)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/customJobs/2970106362064797696\",\n \"displayName\": \"custom_job_TF_20210227173057\",\n \"jobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\",\n \"acceleratorType\": \"NVIDIA_TESLA_K80\",\n \"acceleratorCount\": 1\n },\n \"replicaCount\": \"1\",\n \"diskSpec\": {\n \"bootDiskType\": \"pd-ssd\",\n \"bootDiskSizeGb\": 100\n },\n \"pythonPackageSpec\": {\n \"executorImageUri\": \"gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210227173057/trainer_cifar.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057\",\n \"--epochs=20\",\n \"--steps=100\",\n \"--distribute=single\"\n ]\n }\n }\n ]\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-27T17:31:04.494716Z\",\n \"updateTime\": \"2021-02-27T17:31:04.494716Z\"\n}",
"while True:\n response = clients[\"job\"].get_custom_job(name=custom_training_id)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n break\n else:\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(20)\n\n# model artifact output directory on Google Cloud Storage\nmodel_artifact_dir = (\n response.job_spec.worker_pool_specs[0].python_package_spec.args[0].split(\"=\")[-1]\n)\nprint(\"artifact location \" + model_artifact_dir)",
"Deploy the model\nLoad the saved model",
"import tensorflow as tf\n\nmodel = tf.keras.models.load_model(model_artifact_dir)",
"Serving function for image data",
"CONCRETE_INPUT = \"numpy_inputs\"\n\n\ndef _preprocess(bytes_input):\n decoded = tf.io.decode_jpeg(bytes_input, channels=3)\n decoded = tf.image.convert_image_dtype(decoded, tf.float32)\n resized = tf.image.resize(decoded, size=(32, 32))\n rescale = tf.cast(resized / 255.0, tf.float32)\n return rescale\n\n\[email protected](input_signature=[tf.TensorSpec([None], tf.string)])\ndef preprocess_fn(bytes_inputs):\n decoded_images = tf.map_fn(\n _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False\n )\n return {\n CONCRETE_INPUT: decoded_images\n } # User needs to make sure the key matches model's input\n\n\nm_call = tf.function(model.call).get_concrete_function(\n [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]\n)\n\n\[email protected](input_signature=[tf.TensorSpec([None], tf.string)])\ndef serving_fn(bytes_inputs):\n images = preprocess_fn(bytes_inputs)\n prob = m_call(**images)\n return prob\n\n\ntf.saved_model.save(\n model,\n model_artifact_dir,\n signatures={\n \"serving_default\": serving_fn,\n },\n)",
"Get the serving function signature",
"loaded = tf.saved_model.load(model_artifact_dir)\n\ninput_name = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\n\nprint(\"Serving function input:\", input_name)",
"Example output:\nServing function input: bytes_inputs\nprojects.locations.models.upload\nRequest",
"model = {\n \"display_name\": \"custom_job_TF\" + TIMESTAMP,\n \"metadata_schema_uri\": \"\",\n \"artifact_uri\": model_artifact_dir,\n \"container_spec\": {\n \"image_uri\": \"gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\"\n },\n}\n\nprint(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__[\"_pb\"]))",
"Example output:\n```\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"model\": {\n \"displayName\": \"custom_job_TF20210227173057\",\n \"containerSpec\": {\n \"imageUri\": \"gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\"\n },\n \"artifactUri\": \"gs://migration-ucaip-trainingaip-20210227173057/custom_job_TF_20210227173057\"\n }\n}\n```\nCall",
"request = clients[\"model\"].upload_model(parent=PARENT, model=model)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{\n \"model\": \"projects/116273516712/locations/us-central1/models/8844102097923211264\"\n}",
"# The full unique ID for the model\nmodel_id = result.model\n\nprint(model_id)",
"Make batch predictions\nMake the batch input file\nLet's now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be JSONL.",
"import base64\nimport json\n\nimport cv2\nimport numpy as np\nimport tensorflow as tf\n\n(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()\n\n\ntest_image_1, test_label_1 = x_test[0], y_test[0]\ntest_image_2, test_label_2 = x_test[1], y_test[1]\n\ncv2.imwrite(\"tmp1.jpg\", (test_image_1).astype(np.uint8))\ncv2.imwrite(\"tmp2.jpg\", (test_image_2).astype(np.uint8))\n\ngcs_input_uri = \"gs://\" + BUCKET_NAME + \"/\" + \"test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n bytes = tf.io.read_file(\"tmp1.jpg\")\n b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n f.write(json.dumps({input_name: {\"b64\": b64str}}) + \"\\n\")\n\n bytes = tf.io.read_file(\"tmp2.jpg\")\n b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n f.write(json.dumps({input_name: {\"b64\": b64str}}) + \"\\n\")\n\n! gsutil cat $gcs_input_uri",
"Example output:\n{\"bytes_inputs\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z\"}}\n{\"bytes_inputs\": {\"b64\": 
\"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z\"}}\nprojects.locations.batchPredictionJobs.create\nRequest",
"batch_prediction_job = aip.BatchPredictionJob(\n display_name=\"custom_job_TF\" + TIMESTAMP,\n model=model_id,\n input_config={\n \"instances_format\": \"jsonl\",\n \"gcs_source\": {\"uris\": [gcs_input_uri]},\n },\n model_parameters=ParseDict(\n {\"confidenceThreshold\": 0.5, \"maxPredictions\": 2}, Value()\n ),\n output_config={\n \"predictions_format\": \"jsonl\",\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/batch_output/\"\n },\n },\n dedicated_resources={\n \"machine_spec\": {\"machine_type\": \"n1-standard-2\", \"accelerator_type\": 0},\n \"starting_replica_count\": 1,\n \"max_replica_count\": 1,\n },\n)\n\nprint(\n MessageToJson(\n aip.CreateBatchPredictionJobRequest(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"batchPredictionJob\": {\n \"displayName\": \"custom_job_TF_TF20210227173057\",\n \"model\": \"projects/116273516712/locations/us-central1/models/8844102097923211264\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"maxPredictions\": 10000.0,\n \"confidenceThreshold\": 0.5\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210227173057/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n }\n }\n}\nCall",
"request = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248\",\n \"displayName\": \"custom_job_TF_TF20210227173057\",\n \"model\": \"projects/116273516712/locations/us-central1/models/8844102097923211264\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"maxPredictions\": 10000.0,\n \"confidenceThreshold\": 0.5\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210227173057/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n },\n \"manualBatchTuningParameters\": {},\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-27T18:00:30.887438Z\",\n \"updateTime\": \"2021-02-27T18:00:30.887438Z\"\n}",
"# The fully qualified ID for the batch job\nbatch_job_id = request.name\n# The short numeric ID for the batch job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)",
"projects.locations.batchPredictionJobs.get\nCall",
"request = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/659759753223733248\",\n \"displayName\": \"custom_job_TF_TF20210227173057\",\n \"model\": \"projects/116273516712/locations/us-central1/models/8844102097923211264\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210227173057/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"confidenceThreshold\": 0.5,\n \"maxPredictions\": 10000.0\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210227173057/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n },\n \"manualBatchTuningParameters\": {},\n \"state\": \"JOB_STATE_RUNNING\",\n \"createTime\": \"2021-02-27T18:00:30.887438Z\",\n \"startTime\": \"2021-02-27T18:00:30.938444Z\",\n \"updateTime\": \"2021-02-27T18:00:30.938444Z\"\n}",
"def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n response = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)\n if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", response.state)\n if response.state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n folder = get_latest_predictions(\n response.output_config.gcs_destination.output_uri_prefix\n )\n ! gsutil ls $folder/prediction*\n\n ! gsutil cat $folder/prediction*\n break\n time.sleep(60)",
"Example output:\ngs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.errors_stats-00000-of-00001\ngs://migration-ucaip-trainingaip-20210227173057/batch_output/prediction-custom_job_TF_TF20210227173057-2021_02_27T10_00_30_820Z/prediction.results-00000-of-00001\n{\"instance\": {\"bytes_inputs\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z\"}}, \"prediction\": [0.0407731421, 0.125140116, 0.118551917, 0.100501947, 0.128865793, 0.089787662, 0.157575116, 0.121281914, 0.0312845968, 0.0862377882]}\n{\"instance\": {\"bytes_inputs\": {\"b64\": 
\"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z\"}}, \"prediction\": [0.0406896845, 0.125281364, 0.118567884, 0.100639313, 0.12864624, 0.0898737088, 0.157521054, 0.121037535, 0.0313298739, 0.0864133239]}\nMake online predictions\nprojects.locations.endpoints.create\nRequest",
"endpoint = {\"display_name\": \"custom_job_TF\" + TIMESTAMP}\n\nprint(\n MessageToJson(\n aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"endpoint\": {\n \"displayName\": \"custom_job_TF_TF20210227173057\"\n }\n}\nCall",
"request = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/endpoints/6810814827095654400\"\n}",
"# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)",
"projects.locations.endpoints.deployModel\nRequest",
"deployed_model = {\n \"model\": model_id,\n \"display_name\": \"custom_job_TF\" + TIMESTAMP,\n \"dedicated_resources\": {\n \"min_replica_count\": 1,\n \"machine_spec\": {\"machine_type\": \"n1-standard-4\", \"accelerator_count\": 0},\n },\n}\n\nprint(\n MessageToJson(\n aip.DeployModelRequest(\n endpoint=endpoint_id,\n deployed_model=deployed_model,\n traffic_split={\"0\": 100},\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/6810814827095654400\",\n \"deployedModel\": {\n \"model\": \"projects/116273516712/locations/us-central1/models/8844102097923211264\",\n \"displayName\": \"custom_job_TF_TF20210227173057\",\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"minReplicaCount\": 1\n }\n },\n \"trafficSplit\": {\n \"0\": 100\n }\n}\nCall",
"request = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={\"0\": 100}\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{\n \"deployedModel\": {\n \"id\": \"2064302294823862272\"\n }\n}",
"# The unique ID for the deployed model\ndeployed_model_id = result.deployed_model.id\n\nprint(deployed_model_id)",
"projects.locations.endpoints.predict\nPrepare file for online prediction\nRequest",
"import base64\n\nimport cv2\nimport tensorflow as tf\n\n(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()\ntest_image, test_label = x_test[0], y_test[0]\n\ncv2.imwrite(\"tmp.jpg\", (test_image * 255).astype(np.uint8))\nbytes = tf.io.read_file(\"tmp.jpg\")\nb64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n\ninstances_list = [{\"bytes_inputs\": {\"b64\": b64str}}]\n\nprediction_request = aip.PredictRequest(endpoint=endpoint_id)\nprediction_request.instances.append(instances_list)\n\nprint(MessageToJson(prediction_request.__dict__[\"_pb\"]))",
"Example output:\n```\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/6810814827095654400\",\n \"instances\": [\n [\n {\n \"bytes_inputs\": {\n \"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z\"\n }\n }\n ]\n ]\n}\n```\nCall",
"request = clients[\"prediction\"].predict(endpoint=endpoint_id, instances=instances_list)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"predictions\": [\n [\n 0.0406113081,\n 0.125313938,\n 0.118626907,\n 0.100714684,\n 0.128500372,\n 0.0899592042,\n 0.157601,\n 0.121072263,\n 0.0312432405,\n 0.0863570943\n ]\n ],\n \"deployedModelId\": \"2064302294823862272\"\n}\nprojects.locations.endpoints.undeployModel\nCall",
"request = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{}\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.",
"delete_model = True\ndelete_endpoint = True\ndelete_custom_job = True\ndelete_batchjob = True\ndelete_bucket = True\n\n# Delete the model using the Vertex AI fully qualified identifier for the model\ntry:\n if delete_model:\n clients[\"model\"].delete_model(name=model_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint\ntry:\n if delete_endpoint:\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom training using the Vertex AI fully qualified identifier for the custom training\ntry:\n if delete_custom_job:\n clients[\"job\"].delete_custom_job(name=custom_training_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex AI fully qualified identifier for the batch job\ntry:\n if delete_batchjob:\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r gs://$BUCKET_NAME"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
telescopeuser/uat_shl | rnd03/shl_sm_NoOCR_v010.ipynb | mit | [
"SHL Project\n\nsimulation module: shl_sm\n\nshl_sm required data feeds:\n\nlive bidding price, per second, time series\n\nprediction module parameters/csv\n\n\nparm_si.csv (seasonality index per second)\n\n\nparm_month.csv (parameter like alpha, beta, gamma, etc. per month)\n\n\nSHL Simulation Module: shl_sm\nImport useful reference packages",
"import pandas as pd",
"Import SHL Prediction Module: shl_pm",
"import shl_pm",
"shl_sm parameters:\nshl_sm simulated real time per second price ata, fetch from csv:",
"# which month to predictsimulate?\n\n# shl_sm_parm_ccyy_mm = '2017-04'\n# shl_sm_parm_ccyy_mm_offset = 1647\n\n# shl_sm_parm_ccyy_mm = '2017-05'\n# shl_sm_parm_ccyy_mm_offset = 1708\n\n# shl_sm_parm_ccyy_mm = '2017-06'\n# shl_sm_parm_ccyy_mm_offset = 1769\n\nshl_sm_parm_ccyy_mm = '2017-07'\nshl_sm_parm_ccyy_mm_offset = 1830\n\n#----------------------------------\n\nshl_sm_data = pd.read_csv('shl_sm_data/history_ts.csv') \nshl_sm_data",
"shl_pm Initialization",
"shl_pm.shl_initialize(shl_sm_parm_ccyy_mm)\n\n# Upon receiving 11:29:00 second price, to predict till 11:29:49 <- one-step forward price forecasting\n\nfor i in range(shl_sm_parm_ccyy_mm_offset, shl_sm_parm_ccyy_mm_offset+50): # use csv data as simulatino\n# for i in range(shl_sm_parm_ccyy_mm_offset, shl_sm_parm_ccyy_mm_offset+55): # use csv data as simulatino\n print('\\n<<<< Record No.: %5d >>>>' % i)\n print(shl_sm_data['ccyy-mm'][i]) # format: ccyy-mm\n print(shl_sm_data['time'][i]) # format: hh:mm:ss\n print(shl_sm_data['bid-price'][i]) # format: integer\n \n###################################################################################################################### \n# call prediction function, returned result is in 'list' format, i.e. [89400] \n shl_sm_prediction_list_local_1 = shl_pm.shl_predict_price_k_step(shl_sm_data['time'][i], shl_sm_data['bid-price'][i],1) # <- one-step forward price forecasting\n print(shl_sm_prediction_list_local_1)\n###################################################################################################################### \n\n\n# Upon receiving 11:29:50 second price, to predict till 11:30:00 <- ten-step forward price forecasting\n\nfor i in range(shl_sm_parm_ccyy_mm_offset+50, shl_sm_parm_ccyy_mm_offset+51): # use csv data as simulation\n print('\\n<<<< Record No.: %5d >>>>' % i)\n print(shl_sm_data['ccyy-mm'][i]) # format: ccyy-mm\n print(shl_sm_data['time'][i]) # format: hh:mm:ss\n print(shl_sm_data['bid-price'][i]) # format: integer/boost-trap-float\n \n###################################################################################################################### \n# call prediction function, returned result is in 'list' format, i.e. [89400, 89400, 89400, 89500, 89500, 89500, 89500, 89600, 89600, 89600] \n shl_sm_prediction_list_local_k = shl_pm.shl_predict_price_k_step(shl_sm_data['time'][i], shl_sm_data['bid-price'][i],10) # <- ten-step forward price forecasting\n print(shl_sm_prediction_list_local_k)\n###################################################################################################################### \n\n\nshl_pm.shl_data_pm_1_step\n\n\nshl_pm.shl_data_pm_k_step\n\n\nprint(shl_sm_prediction_list_local_1)\n\nprint(shl_sm_prediction_list_local_k)\n\nshl_pm.shl_data_pm_1_step.tail(11)\n\nshl_pm.shl_data_pm_k_step.tail(20)",
"MISC - Validation",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nshl_data_pm_k_step_local = shl_pm.shl_data_pm_k_step.copy()\nshl_data_pm_k_step_local.index = shl_data_pm_k_step_local.index + 1\nshl_data_pm_k_step_local\n\n# bid is predicted bid-price from shl_pm\nplt.figure(figsize=(12,6))\nplt.plot(shl_pm.shl_data_pm_k_step['f_current_bid'])\n# plt.plot(shl_data_pm_1_step_k_step['f_1_step_pred_price'].shift(1))\nplt.plot(shl_data_pm_k_step_local['f_1_step_pred_price'])\n\n# bid is actual bid-price from raw dataset\nshl_data_actual_bid_local = shl_sm_data[shl_sm_parm_ccyy_mm_offset:shl_sm_parm_ccyy_mm_offset+61].copy()\nshl_data_actual_bid_local.reset_index(inplace=True)\nplt.figure(figsize=(12,6))\nplt.plot(shl_data_actual_bid_local['bid-price'])\nplt.plot(shl_data_pm_k_step_local['f_1_step_pred_price'])\n\nplt.figure(figsize=(12,6))\nplt.plot(shl_data_actual_bid_local['bid-price'])\nplt.plot(shl_data_pm_k_step_local['f_1_step_pred_price_rounded'])\n\n\n# pd.concat([shl_data_actual_bid_local['bid-price'], shl_data_pm_k_step_local['f_1_step_pred_price'], shl_data_pm_k_step_local['f_1_step_pred_price'] - shl_data_actual_bid_local['bid-price']], axis=1, join='inner')\npd.concat([shl_data_actual_bid_local['bid-price'].tail(11), shl_data_pm_k_step_local['f_1_step_pred_price'].tail(11), shl_data_pm_k_step_local['f_1_step_pred_price'].tail(11) - shl_data_actual_bid_local['bid-price'].tail(11)], axis=1, join='inner')\n",
"The End"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jnarhan/Breast_Cancer | src/models/JN_BC_Threshold_Diagnosis.ipynb | mit | [
"<hr>\n<h1>Predicting Benign and Malignant Classes in Mammograms Using Thresholded Data</h1>\n\n<p>Jay Narhan</p>\nJune 2017\nThis is an application of the best performing models but using thresholded data instead of differenced data. See JN_DC_Diff_Diagnosis.ipynb for more background and details on the problem.\n<hr>",
"import os\nimport sys\nimport time\nimport numpy as np\n\nfrom tqdm import tqdm\n\nimport sklearn.metrics as skm\nfrom sklearn import metrics\nfrom sklearn.svm import SVC\nfrom sklearn.model_selection import train_test_split\n\nfrom skimage import color\n\nimport keras.callbacks as cb\nimport keras.utils.np_utils as np_utils\nfrom keras import applications\nfrom keras import regularizers\nfrom keras.models import Sequential\nfrom keras.constraints import maxnorm\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.layers.convolutional import Convolution2D, MaxPooling2D\nfrom keras.layers import Activation, Dense, Dropout, Flatten, GaussianNoise\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = (10,10)\nnp.set_printoptions(precision=2)\n\nsys.path.insert(0, '../helper_modules/')\nimport jn_bc_helper as bc",
"<h2>Reproducible Research</h2>",
"%%python\nimport os\nos.system('python -V')\nos.system('python ../helper_modules/Package_Versions.py')\n\nSEED = 7\nnp.random.seed(SEED)\n\nCURR_DIR = os.getcwd()\nDATA_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/ALL_IMGS/'\nAUG_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/AUG_DIAGNOSIS_IMGS/'\nmeta_file = '../../Meta_Data_Files/meta_data_all.csv'\nPATHO_INX = 6 # Column number of pathology label in meta_file\nFILE_INX = 1 # Column number of File name in meta_file\n\nmeta_data, _ = tqdm( bc.load_meta(meta_file, patho_idx=PATHO_INX, file_idx=FILE_INX,\n balanceByRemoval=False, verbose=False) )\n\n# Minor addition to reserve records in meta data for which we actually have images:\nmeta_data = bc.clean_meta(meta_data, DATA_DIR)\n\n# Only work with benign and malignant classes:\nfor k,v in meta_data.items():\n if v not in ['benign', 'malignant']:\n del meta_data[k]\n\nbc.pprint('Loading data')\ncats = bc.bcLabels(['benign', 'malignant'])\n\n# For smaller images supply tuple argument for a parameter 'imgResize':\n# X_data, Y_data = bc.load_data(meta_data, DATA_DIR, cats, imgResize=(150,150)) \nX_data, Y_data = tqdm( bc.load_data(meta_data, DATA_DIR, cats) )\n\ncls_cnts = bc.get_clsCnts(Y_data, cats)\nbc.pprint('Before Balancing')\nfor k in cls_cnts:\n print '{0:10}: {1}'.format(k, cls_cnts[k])",
"Class Balancing\nHere - I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation, until there is a balance among the categories:",
"datagen = ImageDataGenerator(rotation_range=5, width_shift_range=.01, height_shift_range=0.01,\n data_format='channels_first')\n\nX_data, Y_data = bc.balanceViaSmote(cls_cnts, meta_data, DATA_DIR, AUG_DIR, cats, \n datagen, X_data, Y_data, seed=SEED, verbose=True)",
"Create the Training and Test Datasets",
"X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data,\n test_size=0.20, # deviation given small data set\n random_state=SEED,\n stratify=zip(*Y_data)[0])\n\nprint 'Size of X_train: {:>5}'.format(len(X_train))\nprint 'Size of X_test: {:>5}'.format(len(X_test))\nprint 'Size of Y_train: {:>5}'.format(len(Y_train))\nprint 'Size of Y_test: {:>5}'.format(len(Y_test))\n\nprint X_train.shape\nprint X_test.shape\nprint Y_train.shape\nprint Y_test.shape\n\ndata = [X_train, X_test, Y_train, Y_test]",
"<h2>Support Vector Machine Model</h2>",
"X_train_svm = X_train.reshape( (X_train.shape[0], -1)) \nX_test_svm = X_test.reshape( (X_test.shape[0], -1))\n\nSVM_model = SVC(gamma=0.001)\nSVM_model.fit( X_train_svm, Y_train)\n\npredictOutput = SVM_model.predict(X_test_svm)\nsvm_acc = metrics.accuracy_score(y_true=Y_test, y_pred=predictOutput)\n\nprint 'SVM Accuracy: {: >7.2f}%'.format(svm_acc * 100)\nprint 'SVM Error: {: >10.2f}%'.format(100 - svm_acc * 100)\n\nsvm_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput)\nnumBC = bc.reverseDict(cats)\nclass_names = numBC.values()\n\nplt.figure(figsize=(8,6))\nbc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True, \n title='SVM Normalized Confusion Matrix Using Thresholded \\n')\nplt.tight_layout()\nplt.savefig('../../figures/jn_SVM_Diagnosis_CM_Threshold_20170609.png', dpi=100)\n\nplt.figure(figsize=(8,6))\nbc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=False, \n title='SVM Normalized Confusion Matrix Using Thresholded \\n')\nplt.tight_layout()\n\nbc.cat_stats(svm_matrix)",
"<h2>CNN Modelling Using VGG16 in Transfer Learning</h2>",
"def VGG_Prep(img_data):\n \"\"\"\n :param img_data: training or test images of shape [#images, height, width]\n :return: the array transformed to the correct shape for the VGG network\n shape = [#images, height, width, 3] transforms to rgb and reshapes\n \"\"\"\n images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3])\n for i in range(0, len(img_data)):\n im = (img_data[i] * 255) # Original imagenet images were not rescaled\n im = color.gray2rgb(im)\n images[i] = im\n return(images)\n\ndef vgg16_bottleneck(data, modelPath, fn_train_feats, fn_train_lbls, fn_test_feats, fn_test_lbls):\n # Loading data\n X_train, X_test, Y_train, Y_test = data\n \n print('Preparing the Training Data for the VGG_16 Model.')\n X_train = VGG_Prep(X_train)\n print('Preparing the Test Data for the VGG_16 Model')\n X_test = VGG_Prep(X_test)\n \n print('Loading the VGG_16 Model')\n # \"model\" excludes top layer of VGG16:\n model = applications.VGG16(include_top=False, weights='imagenet') \n \n # Generating the bottleneck features for the training data\n print('Evaluating the VGG_16 Model on the Training Data')\n bottleneck_features_train = model.predict(X_train)\n \n # Saving the bottleneck features for the training data\n featuresTrain = os.path.join(modelPath, fn_train_feats)\n labelsTrain = os.path.join(modelPath, fn_train_lbls)\n print('Saving the Training Data Bottleneck Features.')\n np.save(open(featuresTrain, 'wb'), bottleneck_features_train)\n np.save(open(labelsTrain, 'wb'), Y_train)\n\n # Generating the bottleneck features for the test data\n print('Evaluating the VGG_16 Model on the Test Data')\n bottleneck_features_test = model.predict(X_test)\n \n # Saving the bottleneck features for the test data\n featuresTest = os.path.join(modelPath, fn_test_feats)\n labelsTest = os.path.join(modelPath, fn_test_lbls)\n print('Saving the Test Data Bottleneck Feaures.')\n np.save(open(featuresTest, 'wb'), bottleneck_features_test)\n np.save(open(labelsTest, 'wb'), Y_test)\n\n# Locations for the bottleneck and labels files that we need\ntrain_bottleneck = '2Class_Lesions_VGG16_bottleneck_features_train_threshold.npy'\ntrain_labels = '2Class_Lesions_VGG16_labels_train_threshold.npy'\ntest_bottleneck = '2Class_Lesions_VGG16_bottleneck_features_test_threshold.npy'\ntest_labels = '2Class_Lesions_VGG16_labels_test_threshold.npy'\nmodelPath = os.getcwd()\n\ntop_model_weights_path = './weights/'\n\nnp.random.seed(SEED)\nvgg16_bottleneck(data, modelPath, train_bottleneck, train_labels, test_bottleneck, test_labels)\n\ndef train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64):\n start_time = time.time()\n \n train_bottleneck = os.path.join(model_path, train_feats)\n train_labels = os.path.join(model_path, train_lab)\n test_bottleneck = os.path.join(model_path, test_feats)\n test_labels = os.path.join(model_path, test_lab)\n \n history = bc.LossHistory()\n \n X_train = np.load(train_bottleneck)\n Y_train = np.load(train_labels)\n Y_train = np_utils.to_categorical(Y_train, num_classes=2)\n \n X_test = np.load(test_bottleneck)\n Y_test = np.load(test_labels)\n Y_test = np_utils.to_categorical(Y_test, num_classes=2)\n\n model = Sequential()\n model.add(Flatten(input_shape=X_train.shape[1:]))\n model.add( Dropout(0.7))\n \n model.add( Dense(256, activation='relu', kernel_constraint= maxnorm(3.)) )\n model.add( Dropout(0.5))\n \n # Softmax for probabilities for each class at the output layer\n model.add( Dense(2, activation='softmax'))\n \n 
model.compile(optimizer='rmsprop', # adadelta\n loss='binary_crossentropy', \n metrics=['accuracy'])\n\n model.fit(X_train, Y_train,\n epochs=epoch,\n batch_size=batch,\n callbacks=[history],\n validation_data=(X_test, Y_test),\n verbose=2)\n \n print \"Training duration : {0}\".format(time.time() - start_time)\n score = model.evaluate(X_test, Y_test, batch_size=16, verbose=2)\n\n print \"Network's test score [loss, accuracy]: {0}\".format(score)\n print 'CNN Error: {:.2f}%'.format(100 - score[1] * 100)\n \n bc.save_model(model_save, model, \"jn_VGG16_Diagnosis_top_weights_threshold.h5\")\n \n return model, history.losses, history.acc, score\n\nnp.random.seed(SEED)\n(trans_model, loss_cnn, acc_cnn, test_score_cnn) = train_top_model(train_feats=train_bottleneck,\n train_lab=train_labels, \n test_feats=test_bottleneck, \n test_lab=test_labels,\n model_path=modelPath, \n model_save=top_model_weights_path,\n epoch=100)\nplt.figure(figsize=(10,10))\nbc.plot_losses(loss_cnn, acc_cnn)\nplt.savefig('../../figures/epoch_figures/jn_Transfer_Diagnosis_Threshold_20170609.png', dpi=100)\n\nprint 'Transfer Learning CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100)\nprint 'Transfer Learning CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100)\n\npredictOutput = bc.predict(trans_model, np.load(test_bottleneck))\ntrans_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput)\n\nplt.figure(figsize=(8,6))\nbc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=True,\n title='Transfer CNN Normalized Confusion Matrix Using Thresholded \\n')\nplt.tight_layout()\nplt.savefig('../../figures/TMP_jn_Transfer_Diagnosis_CM_Threshold_20170609.png', dpi=100)\n\nplt.figure(figsize=(8,6))\nbc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=False,\n title='Transfer CNN Normalized Confusion Matrix Using Thresholded \\n')\nplt.tight_layout()\n\nbc.cat_stats(trans_matrix)",
"<h2>Core CNN Modelling</h2>\n\nPrep and package the data for Keras processing:",
"data = [X_train, X_test, Y_train, Y_test]\nX_train, X_test, Y_train, Y_test = bc.prep_data(data, cats)\ndata = [X_train, X_test, Y_train, Y_test]\n\nprint X_train.shape\nprint X_test.shape\nprint Y_train.shape\nprint Y_test.shape",
"Heavy Regularization",
"def diff_model_v7_reg(numClasses, input_shape=(3, 150,150), add_noise=False, noise=0.01, verbose=False):\n model = Sequential()\n if (add_noise):\n model.add( GaussianNoise(noise, input_shape=input_shape))\n model.add( Convolution2D(filters=16, \n kernel_size=(5,5), \n data_format='channels_first',\n padding='same',\n activation='relu'))\n else:\n model.add( Convolution2D(filters=16, \n kernel_size=(5,5), \n data_format='channels_first',\n padding='same',\n activation='relu',\n input_shape=input_shape))\n # model.add( Dropout(0.7))\n model.add( Dropout(0.5))\n \n model.add( Convolution2D(filters=32, kernel_size=(3,3), \n data_format='channels_first', padding='same', activation='relu'))\n model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))\n # model.add( Dropout(0.4))\n model.add( Dropout(0.25))\n model.add( Convolution2D(filters=32, kernel_size=(3,3), \n data_format='channels_first', activation='relu'))\n \n model.add( Convolution2D(filters=64, kernel_size=(3,3), \n data_format='channels_first', padding='same', activation='relu',\n kernel_regularizer=regularizers.l2(0.01)))\n model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))\n model.add( Convolution2D(filters=64, kernel_size=(3,3), \n data_format='channels_first', activation='relu',\n kernel_regularizer=regularizers.l2(0.01)))\n #model.add( Dropout(0.4))\n model.add( Dropout(0.25))\n \n model.add( Convolution2D(filters=128, kernel_size=(3,3), \n data_format='channels_first', padding='same', activation='relu',\n kernel_regularizer=regularizers.l2(0.01)))\n model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))\n \n model.add( Convolution2D(filters=128, kernel_size=(3,3), \n data_format='channels_first', activation='relu',\n kernel_regularizer=regularizers.l2(0.01)))\n #model.add(Dropout(0.4))\n model.add( Dropout(0.25))\n \n model.add( Flatten())\n \n model.add( Dense(128, activation='relu', kernel_constraint= maxnorm(3.)) )\n # model.add( Dropout(0.4))\n model.add( Dropout(0.25))\n \n model.add( Dense(64, activation='relu', kernel_constraint= maxnorm(3.)) )\n # model.add( Dropout(0.4))\n model.add( Dropout(0.25))\n \n # Softmax for probabilities for each class at the output layer\n model.add( Dense(numClasses, activation='softmax'))\n \n if verbose:\n print( model.summary() )\n \n model.compile(loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\n return model\n\ndiff_model7_noise_reg = diff_model_v7_reg(len(cats),\n input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]),\n add_noise=True, verbose=True)\n\nnp.random.seed(SEED)\n\n(cnn_model, loss_cnn, acc_cnn, test_score_cnn) = bc.run_network(model=diff_model7_noise_reg, earlyStop=True,\n data=data, \n epochs=50, batch=64)\nplt.figure(figsize=(10,10))\nbc.plot_losses(loss_cnn, acc_cnn)\nplt.savefig('../../figures/epoch_figures/jn_Core_CNN_Diagnosis_Threshold_20170609.png', dpi=100)\n\nbc.save_model(dir_path='./weights/', model=cnn_model, name='jn_Core_CNN_Diagnosis_Threshold_20170609')\n\nprint 'Core CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100)\nprint 'Core CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100)\n\npredictOutput = bc.predict(cnn_model, X_test)\n\ncnn_matrix = skm.confusion_matrix(y_true=[val.argmax() for val in Y_test], y_pred=predictOutput)\n\nplt.figure(figsize=(8,6))\nbc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=True,\n title='CNN Normalized Confusion Matrix Using Thresholded 
\\n')\nplt.tight_layout()\nplt.savefig('../../figures/jn_Core_CNN_Diagnosis_Threshold_201706090.png', dpi=100)\n\nplt.figure(figsize=(8,6))\nbc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=False,\n title='CNN Raw Confusion Matrix Using Thresholded \\n')\nplt.tight_layout()\n\nbc.cat_stats(cnn_matrix)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chapmanbe/RadNLP | notebooks/radnlp_demo.ipynb | apache-2.0 | [
"RadNLP\nRadiology NLP or\nRad (as in cool) NLP or\n[Fill in the Blank] NLP\n© Brian E. Chapman, PhD\nRadNLP is a package that builds upon the pyConTextNLP algorithm's sentence-level text processing to perform simple document-level classification. The package also contains a number of functions for identifying sections of reports, identifing and eliminating boiler-plate text, etc.\nIn this notebook I will demonstrate radnlp's most basic functionality: given a text of interest create an overall document classification. The classifiction uses a simple maximum function: for each concept marked in a report the maximal schema value occurance is selected to characterize the report for that concept.\nReport Schema and maximal value\nThe document classification is based on schema that combines multiple concepts (e.g. existence, certitude, severity) into a single ordinal scale. The RadNLP GitHub repository includes a knowledge base directory (KBs) contains the schema we ahve previously developed for critical findings projects. It is included below:\n```Python\nLines that start with the # symbol are comments and are ignored\nThe schema consists of a numeric value, followed by a label (e.g. \"AMBIVALENT\"),\nfollowed by a Python express that can evaluate to True or False\nThe Python expression uses LABELS from the rules. processReports.py will substitute\nthe LABEL with any matched values identified from\nthe corresponding rules\n1,AMBIVALENT,DISEASE_STATE == 2\n2,Negative/Certain/Acute,DISEASE_STATE == 0 and CERTAINTY_STATE == 1\n3,Negative/Uncertain/Chronic,DISEASE_STATE == 0 and CERTAINTY_STATE == 0 and ACUTE_STATE == 0\n4,Positive/Uncertain/Chronic,DISEASE_STATE == 1 and CERTAINTY_STATE == 0 and ACUTE_STATE == 0 \n5,Positive/Certain/Chronic,DISEASE_STATE == 1 and CERTAINTY_STATE == 1 and ACUTE_STATE == 0 \n6,Negative/Uncertain/Acute,DISEASE_STATE == 0 and CERTAINTY_STATE == 0 \n7,Positive/Uncertain/Acute,DISEASE_STATE == 1 and CERTAINTY_STATE == 0 and ACUTE_STATE == 1 \n8,Positive/Certain/Acute,DISEASE_STATE == 1 and CERTAINTY_STATE == 1 and ACUTE_STATE == 1 \n```\nA key idea is \"a Python express that can evaluate to True or False\".\nThe radnlp.schema subpackage contains functions for reading schema and applying the schema to the pyConTextNLP findings given rules specified by the user.\nThere are two key functions in radnlp.schema:\n```Python\ndef instantiate_schema(values, rule):\n \"\"\"\n evaluates rule by substituting values into rule and evaluating the resulting literal.\n This is currently insecure\n * \"For security the ast.literal_eval() method should be used.\"\n \"\"\"\n r = rule\n for k in values.keys():\n r = r.replace(k, values[k].str())\n #return ast.literal_eval(r)\n return eval(r)\ndef assign_schema(values, rules):\n \"\"\"\n \"\"\"\n for k in rules.keys():\n if instantiate_schema(values, rules[k][1]):\n return k\n```\nFor any given category (e.g. pulmonary_embolism), the maximal schema score encountered in the report is selected to characterize that report.\nradnlp Rules\nradnlp uses rule files to specify rules that define how particular pyConTextNLP findings relate to radnlp concepts. For example, in the classificationRules3.csv provided in KBs, we provide a rules that state:\n\nThe default disease state is 1. 
\nPROBABLE_EXISTENCE AND DEFINITE_EXISTENCE map to a disease state of 1\n\nRules as currently indicated are not quite general and reflect paraticular use cases we were working on.\nTypes of Rules\nWe currently support three rules:\n\nCLASSIFICAITON_RULE: these are the rules that relate to disease, temporality, and certainty\nCATEGORY_RULE: these are only partially developed concepts that attempt to address combinatorics problems in pyConTextNLP by making default findings more general (e.g. infaract) and then tries to create more specific findings by attaching anatomic locations to the findings (e.g. an infarct becomes a critical finding when attached to an anatomic concept like brain_anatomy or heart_anatomy.\nSEVERITY_RULE: Again, not fully developed but relates to extracting severity concepts (e.g. large or 4.3 cm).\n\n```Python\nLines that start with the # symbol are comments and are ignored,,,,,,,,,,,,,\nprocessReport current has three types of rules: @CLASSIFICATION_RULE, @CATEGORY_RULE, and @SEVERITY_RULE,,,,,,,,,,,\nclassification rules would be for things like disease_state, certainty_state, temporality state,,,,,,,,,,,\nFor each classification_rule set,\" there is a rule label (e.g. \"\"DISEASE_STATE\"\". This must match\",,,,,,,,,,,,\nthe terms used in the schema file,,,,,,,,,,,,,\nEach rule set requires a DEFAULT which is the schema value to be returned if no rule conditions are satisifed,,,,,,,,,,,,,\nEach rule set has zero or more rules consisting of a schema value to be returned if the rule evaluates to true,,,,,,,,,,,,,\nA rule evalutes to true if the target is modified by one or more of the ConText CATEGORIES listed following,,,,,,,,,,,,,\n@CLASSIFICATION_RULE,DISEASE_STATE,RULE,0,DEFINITE_NEGATED_EXISTENCE,PROBABLE_NEGATED_EXISTENCE,FUTURE,INDICATION,PSEUDONEG,,,,,\n@CLASSIFICATION_RULE,DISEASE_STATE,RULE,2,AMBIVALENT_EXISTENCE,,,,,,,,,\n@CLASSIFICATION_RULE,DISEASE_STATE,RULE,1,PROBABLE_EXISTENCE,DEFINITE_EXISTENCE,,,,,,,,\n@CLASSIFICATION_RULE,DISEASE_STATE,DEFAULT,1,,,,,,,,,,\n@CLASSIFICATION_RULE,CERTAINTY_STATE,RULE,0,PROBABLE_NEGATED_EXISTENCE,AMBIVALENT_EXISTENCE,PROBABLE_EXISTENCE,,,,,,,\n@CLASSIFICATION_RULE,CERTAINTY_STATE,DEFAULT,1,,,,,,,,,,\n@CLASSIFICATION_RULE,ACUTE_STATE,RULE,0,HISTORICAL,,,,,,,,,\n@CLASSIFICATION_RULE,ACUTE_STATE,DEFAULT,1,,,,,,,,,,\nCATEGORY_RULE rules specify what Findings (e.g. DVT) can have the category modified by the following ANATOMIC modifies,,,,,,,,,,,,,\n@CATEGORY_RULE,DVT,LOWER_DEEP_VEIN,UPPER_DEEP_VEIN,HEPATIC_VEIN,PORTAL_SYSTEM_VEIN,PULMONARY_VEIN,RENAL_VEIN,SINUS_VEIN,LOWER_SUPERFICIAL_VEIN,UPPER_SUPERFICIAL_VEIN,VARICOCELE,ARTERIAL,NON_VASCULAR\n@CATEGORY_RULE,INFARCT,BRAIN_ANATOMY,HEART_ANATOMY,OTHER_CRITICAL_ANATOMY,,,,,,,,,\n@CATEGORY_RULE,ANEURYSM,AORTIC_ANATOMY,,,,,,,,,,,\nSEVERITY_RUlE specifiy which targets to try to obtain severity measures for,,,,,,,,,,,,,\n@SEVERITY_RULE,AORTIC_ANATOMY_ANEURYSM,SEVERITY,,,,,,,,,,,\n```",
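"As a tiny, self-contained illustration of the schema mechanism described above (a sketch only: the wrapper class simply provides the .str() method that instantiate_schema expects, the single rule is copied from the schema listing, and it assumes the two functions are importable from radnlp.schema as the text states):\n```Python\nimport radnlp.schema as schema\n\nclass ToyValue(object):\n    # minimal stand-in for the value objects expected by instantiate_schema\n    def __init__(self, v):\n        self.v = v\n    def str(self):\n        return str(self.v)\n\ntoy_values = {'DISEASE_STATE': ToyValue(1),\n              'CERTAINTY_STATE': ToyValue(1),\n              'ACUTE_STATE': ToyValue(1)}\ntoy_rules = {8: ('Positive/Certain/Acute',\n                 'DISEASE_STATE == 1 and CERTAINTY_STATE == 1 and ACUTE_STATE == 1')}\n\nprint(schema.assign_schema(toy_values, toy_rules))  # expected output: 8\n```",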
"%matplotlib inline",
"Licensing\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nProgram Description",
"import radnlp.rules as rules\nimport radnlp.schema as schema\nimport radnlp.utils as utils\nimport radnlp.classifier as classifier\nimport radnlp.split as split\n\nfrom IPython.display import clear_output, display, HTML\nfrom IPython.html.widgets import interact, interactive, fixed\nimport io\nfrom IPython.html import widgets # Widget definitions\nimport pyConTextNLP.itemData as itemData\n\nfrom pyConTextNLP.display.html import mark_document_with_html",
"Example Data\nBelow are two example radiology reports pulled from the MIMIC2 demo data set.",
"reports = [\"\"\"1. Pulmonary embolism with filling defects noted within the upper and lower\n lobar branches of the right main pulmonary artery.\n 2. Bilateral pleural effusions, greater on the left.\n 3. Ascites.\n 4. There is edema of the gallbladder wall, without any evidence of\n distention, intra- or extra-hepatic biliary dilatation. This, along with\n stranding within the mesentery, likely represents third spacing of fluid.\n 5. There are several wedge shaped areas of decreased perfusion within the\n spleen, which may represent splenic infarcts.\n \n Results were discussed with Dr. [**First Name8 (NamePattern2) 15561**] [**Last Name (NamePattern1) 13459**] \n at 8 pm on [**3099-11-6**].\"\"\",\n \n \"\"\"1. Filling defects within the subsegmental arteries in the region\n of the left lower lobe and lingula and within the right lower lobe consistent\n with pulmonary emboli.\n \n 2. Small bilateral pleural effusions with associated bibasilar atelectasis.\n \n 3. Left anterior pneumothorax.\n \n 4. No change in the size of the thoracoabdominal aortic aneurysm.\n \n 5. Endotracheal tube 1.8 cm above the carina. NG tube within the stomach,\n although the tip is pointed superiorly toward the fundus.\"\"\",\n \n \"\"\"1. There are no pulmonary emboli observed.\n \n 2. Small bilateral pleural effusions with associated bibasilar atelectasis.\n \n 3. Left anterior pneumothorax.\n \n 4. No change in the size of the thoracoabdominal aortic aneurysm.\n \n 5. Endotracheal tube 1.8 cm above the carina. NG tube within the stomach,\n although the tip is pointed superiorly toward the fundus.\"\"\"\n]\n\n#!python -m textblob.download_corpora",
"Define locations of knowledge, schema, and rules files",
"def getOptions():\n \"\"\"Generates arguments for specifying database and other parameters\"\"\"\n options = {}\n options['lexical_kb'] = [\"https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/lexical_kb_04292013.tsv\", \n \"https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/criticalfinder_generalized_modifiers.tsv\"]\n options['domain_kb'] = [\"https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/pe_kb.tsv\"]#[os.path.join(DATADIR2,\"pe_kb.tsv\")]\n options[\"schema\"] = \"https://raw.githubusercontent.com/chapmanbe/RadNLP/master/KBs/schema2.csv\"#\"file specifying schema\"\n options[\"rules\"] = \"https://raw.githubusercontent.com/chapmanbe/RadNLP/master/KBs/classificationRules3.csv\" # \"file specifying sentence level rules\")\n return options\n",
"Define report analysis\nFor every report we do two steps\n\nMarkup all the sentences in the report based on the provided targets and modifiers\nGiven this markup we apply our rules and schema to generate a document classification.\n\nradnlp provides functions to do both of these steps:\n\nradnlp.utils.mark_report takes lists of modifiers and targets and generates a pyConTextNLP document graph\nradnlp.classify.classify_document_targets takes the document graph, rules, and schema and generates document classification for each identified concept.\n\nBecause pyConTextNLP operates on sentences we split the report into sentences. In this function we use radnlp.split.get_sentences which is simply a wrapper around textblob for splitting the sentences.",
"def analyze_report(report, modifiers, targets, rules, schema):\n \"\"\"\n given an individual radiology report, creates a pyConTextGraph\n object that contains the context markup\n report: a text string containing the radiology reports\n \"\"\"\n \n markup = utils.mark_report(split.get_sentences(report),\n modifiers,\n targets)\n return classifier.classify_document_targets(markup,\n rules[0],\n rules[1],\n rules[2],\n schema)\n\n\n\n\ndef process_report(report):\n \n options = getOptions()\n\n _radnlp_rules = rules.read_rules(options[\"rules\"])\n _schema = schema.read_schema(options[\"schema\"])\n #_schema = readSchema(options[\"schema\"])\n modifiers = itemData.itemData()\n targets = itemData.itemData()\n for kb in options['lexical_kb']:\n modifiers.extend( itemData.instantiateFromCSVtoitemData(kb) )\n for kb in options['domain_kb']:\n targets.extend( itemData.instantiateFromCSVtoitemData(kb) )\n return analyze_report(report, modifiers, targets, _radnlp_rules, _schema)\n\nrslt_0 = process_report(reports[0])",
"radnlp.classifier.classify_document_targets returns a dictionary with keys equal to the target category (e.g. pulmonary_embolism) and the values a 3-tuple with the following values:\n\nThe schema category (e.g. 8 or 2).\nThe XML representation of the maximal schema node\nA list (usually empty (not really implemented yet)) of severity values.",
"for key, value in rslt_0.items():\n print((\"%s\"%key).center(42,\"-\"))\n for v in value:\n print(v)\n\nrslt_1 = main(reports[1])\n\nfor key, value in rslt_1.items():\n print((\"%s\"%key).center(42,\"-\"))\n for v in value:\n print(v)",
"Negative Report\nFor the third report I simply rewrote one of the findings to be negative for PE. We now see a change in the schema classification.",
"rslt_2 = main(reports[2])\n\nfor key, value in rslt_2.items():\n print((\"%s\"%key).center(42,\"-\"))\n for v in value:\n print(v)\n\nkeys = list(pec.markups.keys())\nkeys.sort()\n\npec.reports.insert(pec.reports.columns.get_loc(u'markup')+1,\n \"ConText Coding\",\n [codingKey.get(pec.markups[k][1].get(\"pulmonary_embolism\",[None])[0],\"NA\") for k in keys])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/jax | docs/notebooks/Common_Gotchas_in_JAX.ipynb | apache-2.0 | [
"🔪 JAX - The Sharp Bits 🔪\n\nlevskaya@ mattjj@\nWhen walking about the countryside of Italy, the people will not hesitate to tell you that JAX has \"una anima di pura programmazione funzionale\".\nJAX is a language for expressing and composing transformations of numerical programs. JAX is also able to compile numerical programs for CPU or accelerators (GPU/TPU). \nJAX works great for many numerical and scientific programs, but only if they are written with certain constraints that we describe below.",
"import numpy as np\nfrom jax import grad, jit\nfrom jax import lax\nfrom jax import random\nimport jax\nimport jax.numpy as jnp\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nfrom matplotlib import rcParams\nrcParams['image.interpolation'] = 'nearest'\nrcParams['image.cmap'] = 'viridis'\nrcParams['axes.grid'] = False",
"🔪 Pure functions\nJAX transformation and compilation are designed to work only on Python functions that are functionally pure: all the input data is passed through the function parameters, all the results are output through the function results. A pure function will always return the same result if invoked with the same inputs. \nHere are some examples of functions that are not functionally pure for which JAX behaves differently than the Python interpreter. Note that these behaviors are not guaranteed by the JAX system; the proper way to use JAX is to use it only on functionally pure Python functions.",
"def impure_print_side_effect(x):\n print(\"Executing function\") # This is a side-effect \n return x\n\n# The side-effects appear during the first run \nprint (\"First call: \", jit(impure_print_side_effect)(4.))\n\n# Subsequent runs with parameters of same type and shape may not show the side-effect\n# This is because JAX now invokes a cached compilation of the function\nprint (\"Second call: \", jit(impure_print_side_effect)(5.))\n\n# JAX re-runs the Python function when the type or shape of the argument changes\nprint (\"Third call, different type: \", jit(impure_print_side_effect)(jnp.array([5.])))\n\ng = 0.\ndef impure_uses_globals(x):\n return x + g\n\n# JAX captures the value of the global during the first run\nprint (\"First call: \", jit(impure_uses_globals)(4.))\ng = 10. # Update the global\n\n# Subsequent runs may silently use the cached value of the globals\nprint (\"Second call: \", jit(impure_uses_globals)(5.))\n\n# JAX re-runs the Python function when the type or shape of the argument changes\n# This will end up reading the latest value of the global\nprint (\"Third call, different type: \", jit(impure_uses_globals)(jnp.array([4.])))\n\ng = 0.\ndef impure_saves_global(x):\n global g\n g = x\n return x\n\n# JAX runs once the transformed function with special Traced values for arguments\nprint (\"First call: \", jit(impure_saves_global)(4.))\nprint (\"Saved global: \", g) # Saved global has an internal JAX value",
"A Python function can be functionally pure even if it actually uses stateful objects internally, as long as it does not read or write external state:",
"def pure_uses_internal_state(x):\n state = dict(even=0, odd=0)\n for i in range(10):\n state['even' if i % 2 == 0 else 'odd'] += x\n return state['even'] + state['odd']\n\nprint(jit(pure_uses_internal_state)(5.))",
"It is not recommended to use iterators in any JAX function you want to jit or in any control-flow primitive. The reason is that an iterator is a python object which introduces state to retrieve the next element. Therefore, it is incompatible with JAX functional programming model. In the code below, there are some examples of incorrect attempts to use iterators with JAX. Most of them return an error, but some give unexpected results.",
"import jax.numpy as jnp\nimport jax.lax as lax\nfrom jax import make_jaxpr\n\n# lax.fori_loop\narray = jnp.arange(10)\nprint(lax.fori_loop(0, 10, lambda i,x: x+array[i], 0)) # expected result 45\niterator = iter(range(10))\nprint(lax.fori_loop(0, 10, lambda i,x: x+next(iterator), 0)) # unexpected result 0\n\n# lax.scan\ndef func11(arr, extra):\n ones = jnp.ones(arr.shape) \n def body(carry, aelems):\n ae1, ae2 = aelems\n return (carry + ae1 * ae2 + extra, carry)\n return lax.scan(body, 0., (arr, ones)) \nmake_jaxpr(func11)(jnp.arange(16), 5.)\n# make_jaxpr(func11)(iter(range(16)), 5.) # throws error\n\n# lax.cond\narray_operand = jnp.array([0.])\nlax.cond(True, lambda x: x+1, lambda x: x-1, array_operand)\niter_operand = iter(range(10))\n# lax.cond(True, lambda x: next(x)+1, lambda x: next(x)-1, iter_operand) # throws error",
"🔪 In-Place Updates\nIn Numpy you're used to doing this:",
"numpy_array = np.zeros((3,3), dtype=np.float32)\nprint(\"original array:\")\nprint(numpy_array)\n\n# In place, mutating update\nnumpy_array[1, :] = 1.0\nprint(\"updated array:\")\nprint(numpy_array)",
"If we try to update a JAX device array in-place, however, we get an error! (☉_☉)",
"jax_array = jnp.zeros((3,3), dtype=jnp.float32)\n\n# In place update of JAX's array will yield an error!\ntry:\n jax_array[1, :] = 1.0\nexcept Exception as e:\n print(\"Exception {}\".format(e))",
"Allowing mutation of variables in-place makes program analysis and transformation difficult. JAX requires that programs are pure functions.\nInstead, JAX offers a functional array update using the .at property on JAX arrays.\n️⚠️ inside jit'd code and lax.while_loop or lax.fori_loop the size of slices can't be functions of argument values but only functions of argument shapes -- the slice start indices have no such restriction. See the below Control Flow Section for more information on this limitation.\nArray updates: x.at[idx].set(y)\nFor example, the update above can be written as:",
"updated_array = jax_array.at[1, :].set(1.0)\nprint(\"updated array:\\n\", updated_array)",
"JAX's array update functions, unlike their NumPy versions, operate out-of-place. That is, the updated array is returned as a new array and the original array is not modified by the update.",
"print(\"original array unchanged:\\n\", jax_array)",
"However, inside jit-compiled code, if the input value x of x.at[idx].set(y) is not reused, the compiler will optimize the array update to occur in-place.\nArray updates with other operations\nIndexed array updates are not limited simply to overwriting values. For example, we can perform indexed addition as follows:",
"print(\"original array:\")\njax_array = jnp.ones((5, 6))\nprint(jax_array)\n\nnew_jax_array = jax_array.at[::2, 3:].add(7.)\nprint(\"new array post-addition:\")\nprint(new_jax_array)",
"For more details on indexed array updates, see the documentation for the .at property.\n🔪 Out-of-Bounds Indexing\nIn Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this:",
"try:\n np.arange(10)[11]\nexcept Exception as e:\n print(\"Exception {}\".format(e))",
"However, raising an error from code running on an accelerator can be difficult or impossible. Therefore, JAX must choose some non-error behavior for out of bounds indexing (akin to how invalid floating point arithmetic results in NaN). When the indexing operation is an array index update (e.g. index_add or scatter-like primitives), updates at out-of-bounds indices will be skipped; when the operation is an array index retrieval (e.g. NumPy indexing or gather-like primitives) the index is clamped to the bounds of the array since something must be returned. For example, the last value of the array will be returned from this indexing operation:",
"jnp.arange(10)[11]",
"Note that due to this behavior for index retrieval, functions like jnp.nanargmin and jnp.nanargmax return -1 for slices consisting of NaNs whereas Numpy would throw an error.\nNote also that, as the two behaviors described above are not inverses of each other, reverse-mode automatic differentiation (which turns index updates into index retrievals and vice versa) will not preserve the semantics of out of bounds indexing. Thus it may be a good idea to think of out-of-bounds indexing in JAX as a case of undefined behavior.\n🔪 Non-array inputs: NumPy vs. JAX\nNumPy is generally happy accepting Python lists or tuples as inputs to its API functions:",
"np.sum([1, 2, 3])",
"JAX departs from this, generally returning a helpful error:",
"try:\n jnp.sum([1, 2, 3])\nexcept TypeError as e:\n print(f\"TypeError: {e}\")",
"This is a deliberate design choice, because passing lists or tuples to traced functions can lead to silent performance degradation that might otherwise be difficult to detect.\nFor example, consider the following permissive version of jnp.sum that allows list inputs:",
"def permissive_sum(x):\n return jnp.sum(jnp.array(x))\n\nx = list(range(10))\npermissive_sum(x)",
"The output is what we would expect, but this hides potential performance issues under the hood. In JAX's tracing and JIT compilation model, each element in a Python list or tuple is treated as a separate JAX variable, and individually processed and pushed to device. This can be seen in the jaxpr for the permissive_sum function above:",
"make_jaxpr(permissive_sum)(x)",
"Each entry of the list is handled as a separate input, resulting in a tracing & compilation overhead that grows linearly with the size of the list. To prevent surprises like this, JAX avoids implicit conversions of lists and tuples to arrays.\nIf you would like to pass a tuple or list to a JAX function, you can do so by first explicitly converting it to an array:",
"jnp.sum(jnp.array(x))",
"🔪 Random Numbers\n\nIf all scientific papers whose results are in doubt because of bad \nrand()s were to disappear from library shelves, there would be a \ngap on each shelf about as big as your fist. - Numerical Recipes\n\nRNGs and State\nYou're used to stateful pseudorandom number generators (PRNGs) from numpy and other libraries, which helpfully hide a lot of details under the hood to give you a ready fountain of pseudorandomness:",
"print(np.random.random())\nprint(np.random.random())\nprint(np.random.random())",
"Underneath the hood, numpy uses the Mersenne Twister PRNG to power its pseudorandom functions. The PRNG has a period of $2^{19937}-1$ and at any point can be described by 624 32bit unsigned ints and a position indicating how much of this \"entropy\" has been used up.",
"np.random.seed(0)\nrng_state = np.random.get_state()\n#print(rng_state)\n# --> ('MT19937', array([0, 1, 1812433255, 1900727105, 1208447044,\n# 2481403966, 4042607538, 337614300, ... 614 more numbers..., \n# 3048484911, 1796872496], dtype=uint32), 624, 0, 0.0)",
"This pseudorandom state vector is automagically updated behind the scenes every time a random number is needed, \"consuming\" 2 of the uint32s in the Mersenne twister state vector:",
"_ = np.random.uniform()\nrng_state = np.random.get_state()\n#print(rng_state) \n# --> ('MT19937', array([2443250962, 1093594115, 1878467924,\n# ..., 2648828502, 1678096082], dtype=uint32), 2, 0, 0.0)\n\n# Let's exhaust the entropy in this PRNG statevector\nfor i in range(311):\n _ = np.random.uniform()\nrng_state = np.random.get_state()\n#print(rng_state) \n# --> ('MT19937', array([2443250962, 1093594115, 1878467924,\n# ..., 2648828502, 1678096082], dtype=uint32), 624, 0, 0.0)\n\n# Next call iterates the RNG state for a new batch of fake \"entropy\".\n_ = np.random.uniform()\nrng_state = np.random.get_state()\n# print(rng_state) \n# --> ('MT19937', array([1499117434, 2949980591, 2242547484, \n# 4162027047, 3277342478], dtype=uint32), 2, 0, 0.0)",
"The problem with magic PRNG state is that it's hard to reason about how it's being used and updated across different threads, processes, and devices, and it's very easy to screw up when the details of entropy production and consumption are hidden from the end user.\nThe Mersenne Twister PRNG is also known to have a number of problems, it has a large 2.5Kb state size, which leads to problematic initialization issues. It fails modern BigCrush tests, and is generally slow.\nJAX PRNG\nJAX instead implements an explicit PRNG where entropy production and consumption are handled by explicitly passing and iterating PRNG state. JAX uses a modern Threefry counter-based PRNG that's splittable. That is, its design allows us to fork the PRNG state into new PRNGs for use with parallel stochastic generation.\nThe random state is described by two unsigned-int32s that we call a key:",
"from jax import random\nkey = random.PRNGKey(0)\nkey",
"JAX's random functions produce pseudorandom numbers from the PRNG state, but do not change the state! \nReusing the same state will cause sadness and monotony, depriving the end user of lifegiving chaos:",
"print(random.normal(key, shape=(1,)))\nprint(key)\n# No no no!\nprint(random.normal(key, shape=(1,)))\nprint(key)",
"Instead, we split the PRNG to get usable subkeys every time we need a new pseudorandom number:",
"print(\"old key\", key)\nkey, subkey = random.split(key)\nnormal_pseudorandom = random.normal(subkey, shape=(1,))\nprint(\" \\---SPLIT --> new key \", key)\nprint(\" \\--> new subkey\", subkey, \"--> normal\", normal_pseudorandom)",
"We propagate the key and make new subkeys whenever we need a new random number:",
"print(\"old key\", key)\nkey, subkey = random.split(key)\nnormal_pseudorandom = random.normal(subkey, shape=(1,))\nprint(\" \\---SPLIT --> new key \", key)\nprint(\" \\--> new subkey\", subkey, \"--> normal\", normal_pseudorandom)",
"We can generate more than one subkey at a time:",
"key, *subkeys = random.split(key, 4)\nfor subkey in subkeys:\n print(random.normal(subkey, shape=(1,)))",
"🔪 Control Flow\n✔ python control_flow + autodiff ✔\nIf you just want to apply grad to your python functions, you can use regular python control-flow constructs with no problems, as if you were using Autograd (or Pytorch or TF Eager).",
"def f(x):\n if x < 3:\n return 3. * x ** 2\n else:\n return -4 * x\n\nprint(grad(f)(2.)) # ok!\nprint(grad(f)(4.)) # ok!",
"python control flow + JIT\nUsing control flow with jit is more complicated, and by default it has more constraints.\nThis works:",
"@jit\ndef f(x):\n for i in range(3):\n x = 2 * x\n return x\n\nprint(f(3))",
"So does this:",
"@jit\ndef g(x):\n y = 0.\n for i in range(x.shape[0]):\n y = y + x[i]\n return y\n\nprint(g(jnp.array([1., 2., 3.])))",
"But this doesn't, at least by default:",
"@jit\ndef f(x):\n if x < 3:\n return 3. * x ** 2\n else:\n return -4 * x\n\n# This will fail!\ntry:\n f(2)\nexcept Exception as e:\n print(\"Exception {}\".format(e))",
"What gives!?\nWhen we jit-compile a function, we usually want to compile a version of the function that works for many different argument values, so that we can cache and reuse the compiled code. That way we don't have to re-compile on each function evaluation.\nFor example, if we evaluate an @jit function on the array jnp.array([1., 2., 3.], jnp.float32), we might want to compile code that we can reuse to evaluate the function on jnp.array([4., 5., 6.], jnp.float32) to save on compile time.\nTo get a view of your Python code that is valid for many different argument values, JAX traces it on abstract values that represent sets of possible inputs. There are multiple different levels of abstraction, and different transformations use different abstraction levels.\nBy default, jit traces your code on the ShapedArray abstraction level, where each abstract value represents the set of all array values with a fixed shape and dtype. For example, if we trace using the abstract value ShapedArray((3,), jnp.float32), we get a view of the function that can be reused for any concrete value in the corresponding set of arrays. That means we can save on compile time.\nBut there's a tradeoff here: if we trace a Python function on a ShapedArray((), jnp.float32) that isn't committed to a specific concrete value, when we hit a line like if x < 3, the expression x < 3 evaluates to an abstract ShapedArray((), jnp.bool_) that represents the set {True, False}. When Python attempts to coerce that to a concrete True or False, we get an error: we don't know which branch to take, and can't continue tracing! The tradeoff is that with higher levels of abstraction we gain a more general view of the Python code (and thus save on re-compilations), but we require more constraints on the Python code to complete the trace.\nThe good news is that you can control this tradeoff yourself. By having jit trace on more refined abstract values, you can relax the traceability constraints. For example, using the static_argnums argument to jit, we can specify to trace on concrete values of some arguments. Here's that example function again:",
"def f(x):\n if x < 3:\n return 3. * x ** 2\n else:\n return -4 * x\n\nf = jit(f, static_argnums=(0,))\n\nprint(f(2.))",
"Here's another example, this time involving a loop:",
"def f(x, n):\n y = 0.\n for i in range(n):\n y = y + x[i]\n return y\n\nf = jit(f, static_argnums=(1,))\n\nf(jnp.array([2., 3., 4.]), 2)",
"In effect, the loop gets statically unrolled. JAX can also trace at higher levels of abstraction, like Unshaped, but that's not currently the default for any transformation\n️⚠️ functions with argument-value dependent shapes\nThese control-flow issues also come up in a more subtle way: numerical functions we want to jit can't specialize the shapes of internal arrays on argument values (specializing on argument shapes is ok). As a trivial example, let's make a function whose output happens to depend on the input variable length.",
"def example_fun(length, val):\n return jnp.ones((length,)) * val\n# un-jit'd works fine\nprint(example_fun(5, 4))\n\nbad_example_jit = jit(example_fun)\n# this will fail:\ntry:\n print(bad_example_jit(10, 4))\nexcept Exception as e:\n print(\"Exception {}\".format(e))\n# static_argnums tells JAX to recompile on changes at these argument positions:\ngood_example_jit = jit(example_fun, static_argnums=(0,))\n# first compile\nprint(good_example_jit(10, 4))\n# recompiles\nprint(good_example_jit(5, 4))",
"static_argnums can be handy if length in our example rarely changes, but it would be disastrous if it changed a lot! \nLastly, if your function has global side-effects, JAX's tracer can cause weird things to happen. A common gotcha is trying to print arrays inside jit'd functions:",
"@jit\ndef f(x):\n print(x)\n y = 2 * x\n print(y)\n return y\nf(2)",
"Structured control flow primitives\nThere are more options for control flow in JAX. Say you want to avoid re-compilations but still want to use control flow that's traceable, and that avoids un-rolling large loops. Then you can use these 4 structured control flow primitives:\n\nlax.cond differentiable\nlax.while_loop fwd-mode-differentiable\nlax.fori_loop fwd-mode-differentiable in general; fwd and rev-mode differentiable if endpoints are static.\nlax.scan differentiable\n\ncond\npython equivalent:\npython\ndef cond(pred, true_fun, false_fun, operand):\n if pred:\n return true_fun(operand)\n else:\n return false_fun(operand)",
"from jax import lax\n\noperand = jnp.array([0.])\nlax.cond(True, lambda x: x+1, lambda x: x-1, operand)\n# --> array([1.], dtype=float32)\nlax.cond(False, lambda x: x+1, lambda x: x-1, operand)\n# --> array([-1.], dtype=float32)",
"while_loop\npython equivalent:\ndef while_loop(cond_fun, body_fun, init_val):\n val = init_val\n while cond_fun(val):\n val = body_fun(val)\n return val",
"init_val = 0\ncond_fun = lambda x: x<10\nbody_fun = lambda x: x+1\nlax.while_loop(cond_fun, body_fun, init_val)\n# --> array(10, dtype=int32)",
"fori_loop\npython equivalent:\ndef fori_loop(start, stop, body_fun, init_val):\n val = init_val\n for i in range(start, stop):\n val = body_fun(i, val)\n return val",
"init_val = 0\nstart = 0\nstop = 10\nbody_fun = lambda i,x: x+i\nlax.fori_loop(start, stop, body_fun, init_val)\n# --> array(45, dtype=int32)",
"Summary\n$$\n\\begin{array} {r|rr} \n\\hline \\\n\\textrm{construct} \n& \\textrm{jit} \n& \\textrm{grad} \\\n\\hline \\\n\\textrm{if} & ❌ & ✔ \\\n\\textrm{for} & ✔ & ✔\\\n\\textrm{while} & ✔ & ✔\\\n\\textrm{lax.cond} & ✔ & ✔\\\n\\textrm{lax.while_loop} & ✔ & \\textrm{fwd}\\\n\\textrm{lax.fori_loop} & ✔ & \\textrm{fwd}\\\n\\textrm{lax.scan} & ✔ & ✔\\\n\\hline\n\\end{array}\n$$\n<center>\n$\\ast$ = argument-<b>value</b>-independent loop condition - unrolls the loop\n</center>\n🔪 NaNs\nDebugging NaNs\nIf you want to trace where NaNs are occurring in your functions or gradients, you can turn on the NaN-checker by:\n\n\nsetting the JAX_DEBUG_NANS=True environment variable;\n\n\nadding from jax.config import config and config.update(\"jax_debug_nans\", True) near the top of your main file;\n\n\nadding from jax.config import config and config.parse_flags_with_absl() to your main file, then set the option using a command-line flag like --jax_debug_nans=True;\n\n\nThis will cause computations to error-out immediately on production of a NaN. Switching this option on adds a nan check to every floating point type value produced by XLA. That means values are pulled back to the host and checked as ndarrays for every primitive operation not under an @jit. For code under an @jit, the output of every @jit function is checked and if a nan is present it will re-run the function in de-optimized op-by-op mode, effectively removing one level of @jit at a time.\nThere could be tricky situations that arise, like nans that only occur under a @jit but don't get produced in de-optimized mode. In that case you'll see a warning message print out but your code will continue to execute.\nIf the nans are being produced in the backward pass of a gradient evaluation, when an exception is raised several frames up in the stack trace you will be in the backward_pass function, which is essentially a simple jaxpr interpreter that walks the sequence of primitive operations in reverse. In the example below, we started an ipython repl with the command line env JAX_DEBUG_NANS=True ipython, then ran this:\n```\nIn [1]: import jax.numpy as jnp\nIn [2]: jnp.divide(0., 0.)\nFloatingPointError Traceback (most recent call last)\n<ipython-input-2-f2e2c413b437> in <module>()\n----> 1 jnp.divide(0., 0.)\n.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)\n 343 return floor_divide(x1, x2)\n 344 else:\n--> 345 return true_divide(x1, x2)\n 346\n 347\n.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)\n 332 x1, x2 = _promote_shapes(x1, x2)\n 333 return lax.div(lax.convert_element_type(x1, result_dtype),\n--> 334 lax.convert_element_type(x2, result_dtype))\n 335\n 336\n.../jax/jax/lax.pyc in div(x, y)\n 244 def div(x, y):\n 245 r\"\"\"Elementwise division: :math:x \\over y.\"\"\"\n--> 246 return div_p.bind(x, y)\n 247\n 248 def rem(x, y):\n... stack trace ...\n.../jax/jax/interpreters/xla.pyc in handle_result(device_buffer)\n 103 py_val = device_buffer.to_py()\n 104 if np.any(np.isnan(py_val)):\n--> 105 raise FloatingPointError(\"invalid value\")\n 106 else:\n 107 return DeviceArray(device_buffer, *result_shape)\nFloatingPointError: invalid value\n```\nThe nan generated was caught. By running %debug, we can get a post-mortem debugger. 
This also works with functions under @jit, as the example below shows.\n```\nIn [4]: from jax import jit\nIn [5]: @jit\n ...: def f(x, y):\n ...: a = x * y\n ...: b = (x + y) / (x - y)\n ...: c = a + 2\n ...: return a + b * c\n ...:\nIn [6]: x = jnp.array([2., 0.])\nIn [7]: y = jnp.array([3., 0.])\nIn [8]: f(x, y)\nInvalid value encountered in the output of a jit function. Calling the de-optimized version.\n\nFloatingPointError Traceback (most recent call last)\n<ipython-input-8-811b7ddb3300> in <module>()\n----> 1 f(x, y)\n... stack trace ...\n<ipython-input-5-619b39acbaac> in f(x, y)\n 2 def f(x, y):\n 3 a = x * y\n----> 4 b = (x + y) / (x - y)\n 5 c = a + 2\n 6 return a + b * c\n.../jax/jax/numpy/lax_numpy.pyc in divide(x1, x2)\n 343 return floor_divide(x1, x2)\n 344 else:\n--> 345 return true_divide(x1, x2)\n 346\n 347\n.../jax/jax/numpy/lax_numpy.pyc in true_divide(x1, x2)\n 332 x1, x2 = _promote_shapes(x1, x2)\n 333 return lax.div(lax.convert_element_type(x1, result_dtype),\n--> 334 lax.convert_element_type(x2, result_dtype))\n 335\n 336\n.../jax/jax/lax.pyc in div(x, y)\n 244 def div(x, y):\n 245 r\"\"\"Elementwise division: :math:x \\over y.\"\"\"\n--> 246 return div_p.bind(x, y)\n 247\n 248 def rem(x, y):\n... stack trace ...\n```\nWhen this code sees a nan in the output of an @jit function, it calls into the de-optimized code, so we still get a clear stack trace. And we can run a post-mortem debugger with %debug to inspect all the values to figure out the error.\n⚠️ You shouldn't have the NaN-checker on if you're not debugging, as it can introduce lots of device-host round-trips and performance regressions!\n⚠️ The NaN-checker doesn't work with pmap. To debug nans in pmap code, one thing to try is replacing pmap with vmap.\n🔪 Double (64bit) precision\nAt the moment, JAX by default enforces single-precision numbers to mitigate the Numpy API's tendency to aggressively promote operands to double. This is the desired behavior for many machine-learning applications, but it may catch you by surprise!",
"x = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)\nx.dtype",
"To use double-precision numbers, you need to set the jax_enable_x64 configuration variable at startup. \nThere are a few ways to do this:\n\n\nYou can enable 64bit mode by setting the environment variable JAX_ENABLE_X64=True.\n\n\nYou can manually set the jax_enable_x64 configuration flag at startup:\n\n\npython\n # again, this only works on startup!\n from jax.config import config\n config.update(\"jax_enable_x64\", True)\n\nYou can parse command-line flags with absl.app.run(main)\n\npython\n from jax.config import config\n config.config_with_absl()\n\nIf you want JAX to run absl parsing for you, i.e. you don't want to do absl.app.run(main), you can instead use\n\npython\n from jax.config import config\n if __name__ == '__main__':\n # calls config.config_with_absl() *and* runs absl parsing\n config.parse_flags_with_absl()\nNote that #2-#4 work for any of JAX's configuration options.\nWe can then confirm that x64 mode is enabled:",
"import jax.numpy as jnp\nfrom jax import random\nx = random.uniform(random.PRNGKey(0), (1000,), dtype=jnp.float64)\nx.dtype # --> dtype('float64')",
"Caveats\n⚠️ XLA doesn't support 64-bit convolutions on all backends!\n🔪 Miscellaneous Divergences from NumPy\nWhile jax.numpy makes every attempt to replicate the behavior of numpy's API, there do exist corner cases where the behaviors differ.\nMany such cases are discussed in detail in the sections above; here we list several other known places where the APIs diverge.\n\nFor binary operations, JAX's type promotion rules differ somewhat from those used by NumPy. See Type Promotion Semantics for more details.\nWhen performing unsafe type casts (i.e. casts in which the target dtype cannot represent the input value), JAX's behavior may be backend dependent, and in general may diverge from NumPy's behavior. Numpy allows control over the result in these scenarios via the casting argument (see np.ndarray.astype); JAX does not provide any such configuration, instead directly inheriting the behavior of XLA:ConvertElementType.\n\nHere is an example of an unsafe cast with differing results between NumPy and JAX:\n ```python\n\n\n\nnp.arange(254.0, 258.0).astype('uint8') \n array([254, 255, 0, 1], dtype=uint8)\njnp.arange(254.0, 258.0).astype('uint8') \n DeviceArray([254, 255, 255, 255], dtype=uint8)\n ```\n This sort of mismatch would typically arise when casting extreme values from floating to integer types or vice versa.\n\n\n\nFin.\nIf something's not covered here that has caused you weeping and gnashing of teeth, please let us know and we'll extend these introductory advisos!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jhamilius/chain | notebooks/chain-clustering.ipynb | mit | [
"Clustering and outlier detection\n<hr>\n\n<u>Objectives</u>\n- Test clustering methods on the features extracted from the graph for nodes and transactions\n- Test outlier detection methds on the features extracted from the graph for nodes and transactions\n- Detect if the clustering is splitting some publicy known groups of addresses (exchange, pool, smart contracts etc.)\n- Plot Clustering and outlier detection\n<hr>\n\n0 - Data preparation\nImporting librairies",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom time import time\nfrom joblib import Parallel, delayed\nimport multiprocessing\nimport time\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom sklearn.cluster import MiniBatchKMeans, KMeans\nfrom sklearn.metrics.pairwise import pairwise_distances_argmin\nfrom sklearn.datasets.samples_generator import make_blobs\nfrom scipy.spatial.distance import cdist, pdist\nfrom sklearn import metrics\nfrom sklearn.cluster import KMeans\nfrom sklearn.datasets import load_digits\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import scale\nfrom sklearn.covariance import EmpiricalCovariance, MinCovDet",
"loading different datasets :\n- Publicy known adresses\n- Features dataframe from the graph features generators",
"known = pd.read_csv('../data/known.csv')\nrogues = pd.read_csv('../data/rogues.csv')\ntransactions = pd.read_csv('../data/edges.csv').drop('Unnamed: 0',1)\n#Dropping features and fill na with 0\ndf = pd.read_csv('../data/features_full.csv').drop('Unnamed: 0',1).fillna(0)\ndf = df.set_index(['nodes'])\n#build normalize values\ndata = scale(df.values)\nn_sample = 10000",
"I - Clustering Nodes\n<hr>\n\nExploring clustering methods on the nodes featured dataset\nA - k-means\nFirst a very simple kmeans method",
"#Define estimator / by default clusters = 6 an init = 10\nkmeans = KMeans(init='k-means++', n_clusters=6, n_init=10)\n\nkmeans.fit(data)",
"1 - Parameters Optimization\na - Finding the best k\ncode from http://www.slideshare.net/SarahGuido/kmeans-clustering-with-scikitlearn#notes-panel)",
"%%time\n#Determine your k range\nk_range = range(1,14)\n# Fit the kmeans model for each n_clusters = k\nk_means_var = [KMeans(n_clusters=k).fit(data) for k in k_range]\n# Pull out the centroids for each model\ncentroids = [X.cluster_centers_ for X in k_means_var]\n\n%%time\n# Caluculate the Euclidean distance from each pont to each centroid\nk_euclid=[cdist(data, cent, 'euclidean') for cent in centroids]\ndist = [np.min(ke,axis=1) for ke in k_euclid]\n\n# Total within-cluster sum of squares\nwcss = [sum(d**2) for d in dist]\n\n# The total sum of squares\ntss = sum(pdist(data)**2)/data.shape[0]\n\n#The between-cluster sum of squares\nbss = tss - wcss\n\n%%time\nplt.plot(k_range,bss/tss,'-bo')\nplt.xlabel('number of cluster')\nplt.ylabel('% of variance explained')\nplt.title('Variance explained vs k')\nplt.grid(True)\nplt.show()",
"Difficult to find an elbow criteria\nOther heuristic criteria k = sqrt(n/2)\n\nb - Other heuristic method\n$k=\\sqrt{\\frac{n}{2}}$",
"np.sqrt(data.shape[0]/2)",
"-> Weird\nc - Silhouette Metrics for supervised ?\n2 - Visualize with PCA reduction\ncode from scikit learn",
"##############################################################################\n# Generate sample data\n\n\nbatch_size = 10\n#centers = [[1, 1], [-1, -1], [1, -1]]\nn_clusters = 6\n#X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)\nX = PCA(n_components=2).fit_transform(data)\n\n##############################################################################\n# Compute clustering with Means\n\nk_means = KMeans(init='k-means++', n_clusters=6, n_init=10,random_state=2)\nt0 = time.time()\nk_means.fit(X)\nt_batch = time.time() - t0\nk_means_labels = k_means.labels_\nk_means_cluster_centers = k_means.cluster_centers_\nk_means_labels_unique = np.unique(k_means_labels)\n\n##############################################################################\n# Compute clustering with MiniBatchKMeans\n\nmbk = MiniBatchKMeans(init='k-means++', n_clusters=6, batch_size=batch_size,\n n_init=10, max_no_improvement=10, verbose=0,random_state=2)\nt0 = time.time()\nmbk.fit(X)\nt_mini_batch = time.time() - t0\nmbk_means_labels = mbk.labels_\nmbk_means_cluster_centers = mbk.cluster_centers_\nmbk_means_labels_unique = np.unique(mbk_means_labels)\n\n##############################################################################\n# Plot result\n\nfig = plt.figure(figsize=(15, 5))\ncolors = ['#4EACC5', '#FF9C34', '#4E9A06','#FF0000','#800000','purple']\n#fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)\n\n# We want to have the same colors for the same cluster from the\n# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per\n# closest one.\n\norder = pairwise_distances_argmin(k_means_cluster_centers,\n mbk_means_cluster_centers)\n\n# KMeans\nax = fig.add_subplot(1, 3, 1)\nfor k, col in zip(range(n_clusters), colors):\n my_members = k_means_labels == k\n cluster_center = k_means_cluster_centers[k]\n ax.plot(X[my_members, 0], X[my_members, 1], 'w',\n markerfacecolor=col, marker='.',markersize=10)\n ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=6)\nax.set_title('KMeans')\nax.set_xticks(())\nax.set_yticks(())\n#plt.text(10,10, 'train time: %.2fs\\ninertia: %f' % (\n #t_batch, k_means.inertia_))\n\n# Plot result\n\n\n# MiniBatchKMeans\nax = fig.add_subplot(1, 3, 2)\nfor k, col in zip(range(n_clusters), colors):\n my_members = mbk_means_labels == order[k]\n cluster_center = mbk_means_cluster_centers[order[k]]\n ax.plot(X[my_members, 0], X[my_members, 1], 'w',\n markerfacecolor=col, marker='.', markersize=10)\n ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=6)\nax.set_title('MiniBatchKMeans')\nax.set_xticks(())\nax.set_yticks(())\n#plt.text(-5, 10, 'train time: %.2fs\\ninertia: %f' %\n #(t_mini_batch, mbk.inertia_))\n\n# Plot result\n\n\n\n# Initialise the different array to all False\ndifferent = (mbk_means_labels == 4)\nax = fig.add_subplot(1, 3, 3)\n\nfor l in range(n_clusters):\n different += ((k_means_labels == k) != (mbk_means_labels == order[k]))\n\nidentic = np.logical_not(different)\nax.plot(X[identic, 0], X[identic, 1], 'w',\n markerfacecolor='#bbbbbb', marker='.')\nax.plot(X[different, 0], X[different, 1], 'w',\n markerfacecolor='m', marker='.')\nax.set_title('Difference')\nax.set_xticks(())\nax.set_yticks(())\n\nplt.show()",
"B - Mini batch\nII - Outlier Detection\n<hr>\n\nObjectives : \n- Perform outlier detection on node data\n- Test different methods (with perf metrics) \n- Plot outlier detection\n- Tag transaction\nExplain : Mahalanobis Distance",
"X = PCA(n_components=2).fit_transform(data)\n# compare estimators learnt from the full data set with true parameters\nemp_cov = EmpiricalCovariance().fit(X)\nrobust_cov = MinCovDet().fit(X)\n\n\n###############################################################################\n# Display results\nfig = plt.figure(figsize=(15, 8))\nplt.subplots_adjust(hspace=-.1, wspace=.4, top=.95, bottom=.05)\n\n# Show data set\nsubfig1 = plt.subplot(1, 1, 1)\ninlier_plot = subfig1.scatter(X[:, 0], X[:, 1],\n color='black', label='points')\nsubfig1.set_xlim(subfig1.get_xlim()[0], 11.)\nsubfig1.set_title(\"Mahalanobis distances of a contaminated data set:\")\n\n# Show contours of the distance functions\nxx, yy = np.meshgrid(np.linspace(plt.xlim()[0], plt.xlim()[1], 100),\n np.linspace(plt.ylim()[0], plt.ylim()[1], 100))\nzz = np.c_[xx.ravel(), yy.ravel()]\n\nmahal_emp_cov = emp_cov.mahalanobis(zz)\nmahal_emp_cov = mahal_emp_cov.reshape(xx.shape)\nemp_cov_contour = subfig1.contour(xx, yy, np.sqrt(mahal_emp_cov),\n cmap=plt.cm.PuBu_r,\n linestyles='dashed')\n\nmahal_robust_cov = robust_cov.mahalanobis(zz)\nmahal_robust_cov = mahal_robust_cov.reshape(xx.shape)\nrobust_contour = subfig1.contour(xx, yy, np.sqrt(mahal_robust_cov),\n cmap=plt.cm.YlOrBr_r, linestyles='dotted')\n\n\nplt.xticks(())\nplt.yticks(())\nplt.show()",
"<hr>\n\nIII - Look at the clusters",
"df.head(3)\n\nk_means = KMeans(init='random', n_clusters=6, n_init=10, random_state=2)\nclusters = k_means.fit_predict(data)\n\ndf['clusters'] = clusters\ndf.groupby('clusters').count()\n\ntagged = pd.merge(known,df,left_on='id',how='inner',right_index=True)\n\ntagged.groupby('clusters').count().apply(lambda x: 100*x/float(x.sum()))['id']\n\ndf.groupby('clusters').count().apply(lambda x: 100*x/float(x.sum()))['total_degree']\n\nrogues_tag = pd.merge(rogues,df,left_on='id',how='inner',right_index=True)\n\nrogues_tag.groupby('clusters').count()['total_degree']",
"Rogues and tagged are overrepresnetated in cluster 1",
"df.groupby('clusters').mean()",
"IV - Tag transactions",
"transactions.head(4)\n\ndf.head(20)\n\n#write function\ndef get_cluster(node,df):\n return df.loc[node].clusters\n\nget_cluster('0x037dd056e7fdbd641db5b6bea2a8780a83fae180',df)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/csiro-bom/cmip6/models/sandbox-1/landice.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: CSIRO-BOM\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:55\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adative grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of claving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description if ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
landlab/landlab | notebooks/tutorials/agent_based_modeling/groundwater/landlab_mesa_groundwater_pumping.ipynb | mit | [
"Coupling a Landlab groundwater with a Mesa agent-based model\nThis notebook shows a toy example of how one might couple a simple groundwater model (Landlab's GroundwaterDupuitPercolator, by Litwin et al. (2020)) with an agent-based model (ABM) written using the Mesa Agent-Based Modeling (ABM) package.\nThe purpose of this tutorial is to demonstrate the technical aspects of creating an integrated Landlab-Mesa model. The example is deliberately very simple in terms of the processes and interactions represented, and not meant to be a realistic portrayal of water-resources decision making. But the example does show how one might build a more sophisticated and interesting model using these basic ingredients.\n(Greg Tucker, November 2021; created from earlier notebook example used in May 2020\nworkshop)\nRunning the groundwater model\nThe following section simply illustrates how to create a groundwater model using the GroundwaterDupuitPercolator component.\nImports:",
"from landlab import RasterModelGrid, imshow_grid\nfrom landlab.components import GroundwaterDupuitPercolator\nimport matplotlib.pyplot as plt",
"Set parameters:",
"base_depth = 22.0 # depth of aquifer base below ground level, m\ninitial_water_table_depth = 2.0 # starting depth to water table, m\ndx = 100.0 # cell width, m\npumping_rate = 0.001 # pumping rate, m3/s\nwell_locations = [800, 1200]\nK = 0.001 # hydraulic conductivity, (m/s)\nn = 0.2 # porosity, (-)\ndt = 3600.0 # time-step duration, s\nbackground_recharge = 0.1 / (3600 * 24 * 365.25) # recharge rate from infiltration, m/s",
"Create a grid and add fields:",
"# Raster grid with closed boundaries\n# boundaries = {'top': 'closed','bottom': 'closed','right':'closed','left':'closed'}\ngrid = RasterModelGrid((41, 41), xy_spacing=dx) # , bc=boundaries)\n\n# Topographic elevation field (meters)\nelev = grid.add_zeros(\"topographic__elevation\", at=\"node\")\n\n# Field for the elevation of the top of an impermeable geologic unit that forms\n# the base of the aquifer (meters)\nbase = grid.add_zeros(\"aquifer_base__elevation\", at=\"node\")\nbase[:] = elev - base_depth\n\n# Field for the elevation of the water table (meters)\nwt = grid.add_zeros(\"water_table__elevation\", at=\"node\")\nwt[:] = elev - initial_water_table_depth\n\n# Field for the groundwater recharge rate (meters per second)\nrecharge = grid.add_zeros(\"recharge__rate\", at=\"node\")\nrecharge[:] = background_recharge\nrecharge[well_locations] -= pumping_rate / (\n dx * dx\n) # pumping rate, in terms of recharge",
"Instantiate the component (note use of an array/field instead of a scalar constant for recharge_rate):",
"gdp = GroundwaterDupuitPercolator(\n grid,\n hydraulic_conductivity=K,\n porosity=n,\n recharge_rate=recharge,\n regularization_f=0.01,\n)",
"Define a couple of handy functions to run the model for a day or a year:",
"def run_for_one_day(gdp, dt):\n num_iter = int(3600.0 * 24 / dt)\n for _ in range(num_iter):\n gdp.run_one_step(dt)\n\ndef run_for_one_year(gdp, dt):\n num_iter = int(365.25 * 3600.0 * 24 / dt)\n for _ in range(num_iter):\n gdp.run_one_step(dt)",
"Run for a year and plot the water table:",
"run_for_one_year(gdp, dt)\n\nimshow_grid(grid, wt, colorbar_label=\"Water table elevation (m)\")",
"Aside: calculating a pumping rate in terms of recharge\nThe pumping rate at a particular grid cell (in volume per time, representing pumping from a well at that location) needs to be given in terms of a recharge rate (depth of water equivalent per time) in a given grid cell. Suppose for example you're pumping 16 gallons/minute (horrible units of course). That equates to:\n16 gal/min x 0.00378541 m3/gal x (1/60) min/sec =",
"Qp = 16.0 * 0.00378541 / 60.0\nprint(Qp)",
"...equals about 0.001 m$^3$/s. That's $Q_p$. The corresponding negative recharge in a cell of dimensions $\\Delta x$ by $\\Delta x$ would be\n$R_p = Q_p / \\Delta x^2$",
"Rp = Qp / (dx * dx)\nprint(Rp)",
"A very simple ABM with farmers who drill wells into the aquifer\nFor the sake of illustration, our ABM will be extremely simple. There are $N$ farmers, at random locations, who each pump at a rate $Q_p$ as long as the water table lies above the depth of their well, $d_w$. Once the water table drops below their well, the well runs dry and they switch from crops to pasture.\nCheck that Mesa is installed\nFor the next step, we must verify that Mesa is available. If it is not, use one of the installation commands below to install, then re-start the kernel (Kernel => Restart) and continue.",
"try:\n from mesa import Model\nexcept ModuleNotFoundError:\n print(\n \"\"\"\nMesa needs to be installed in order to run this notebook.\n\nNormally Mesa should be pre-installed alongside the Landlab notebook collection. \nBut it appears that Mesa is not already installed on the system on which you are\nrunning this notebook. You can install Mesa from a command prompt using either:\n\n`conda install -c conda-forge mesa`\n\nor\n\n`pip install mesa`\n \"\"\"\n )\n raise",
"Defining the ABM\nIn Mesa, an ABM is created using a class for each Agent and a class for the Model. Here's the Agent class (a Farmer). Farmers have a grid location and an attribute: whether they are actively pumping their well or not. They also have a well depth: the depth to the bottom of their well. Their action consists of checking whether their well is wet or dry; if wet, they will pump, and if dry, they will not.",
"from mesa import Agent, Model\nfrom mesa.space import MultiGrid\nfrom mesa.time import RandomActivation\n\nclass FarmerAgent(Agent):\n \"\"\"An agent who pumps from a well if it's not dry.\"\"\"\n\n def __init__(self, unique_id, model, well_depth=5.0):\n super().__init__(unique_id, model)\n self.pumping = True\n self.well_depth = well_depth\n\n def step(self):\n x, y = self.pos\n print(f\"Farmer {self.unique_id}, ({x}, {y})\")\n print(f\" Depth to the water table: {self.model.wt_depth_2d[x,y]}\")\n print(f\" Depth to the bottom of the well: {self.well_depth}\")\n if self.model.wt_depth_2d[x, y] >= self.well_depth: # well is dry\n print(\" Well is dry.\")\n self.pumping = False\n else:\n print(\" Well is pumping.\")\n self.pumping = True",
"Next, define the model class. The model will take as a parameter a reference to a 2D array (with the same dimensions as the grid) that contains the depth to water table at each grid location. This allows the Farmer agents to check whether their well has run dry.",
"class FarmerModel(Model):\n \"\"\"A model with several agents on a grid.\"\"\"\n\n def __init__(self, N, width, height, well_depth, depth_to_water_table):\n self.num_agents = N\n self.grid = MultiGrid(width, height, True)\n self.depth_to_water_table = depth_to_water_table\n self.schedule = RandomActivation(self)\n\n # Create agents\n for i in range(self.num_agents):\n a = FarmerAgent(i, self, well_depth)\n self.schedule.add(a)\n # Add the agent to a random grid cell (excluding the perimeter)\n x = self.random.randrange(self.grid.width - 2) + 1\n y = self.random.randrange(self.grid.width - 2) + 1\n self.grid.place_agent(a, (x, y))\n\n def step(self):\n self.wt_depth_2d = self.depth_to_water_table.reshape(\n (self.grid.width, self.grid.height)\n )\n self.schedule.step()",
"Setting up the Landlab grid, fields, and groundwater simulator",
"base_depth = 22.0 # depth of aquifer base below ground level, m\ninitial_water_table_depth = 2.8 # starting depth to water table, m\ndx = 100.0 # cell width, m\npumping_rate = 0.004 # pumping rate, m3/s\nwell_depth = 3 # well depth, m\nbackground_recharge = 0.002 / (365.25 * 24 * 3600) # recharge rate, m/s\nK = 0.001 # hydraulic conductivity, (m/s)\nn = 0.2 # porosity, (-)\ndt = 3600.0 # time-step duration, s\nnum_agents = 12 # number of farmer agents\nrun_duration_yrs = 15 # run duration in years\n\ngrid = RasterModelGrid((41, 41), xy_spacing=dx)\n\nelev = grid.add_zeros(\"topographic__elevation\", at=\"node\")\n\nbase = grid.add_zeros(\"aquifer_base__elevation\", at=\"node\")\nbase[:] = elev - base_depth\n\nwt = grid.add_zeros(\"water_table__elevation\", at=\"node\")\nwt[:] = elev - initial_water_table_depth\n\ndepth_to_wt = grid.add_zeros(\"water_table__depth_below_ground\", at=\"node\")\ndepth_to_wt[:] = elev - wt\n\nrecharge = grid.add_zeros(\"recharge__rate\", at=\"node\")\nrecharge[:] = background_recharge\nrecharge[well_locations] -= pumping_rate / (\n dx * dx\n) # pumping rate, in terms of recharge\n\ngdp = GroundwaterDupuitPercolator(\n grid,\n hydraulic_conductivity=K,\n porosity=n,\n recharge_rate=recharge,\n regularization_f=0.01,\n)",
"Set up the Farmer model",
"nc = grid.number_of_node_columns\nnr = grid.number_of_node_rows\nfarmer_model = FarmerModel(\n num_agents, nc, nr, well_depth, depth_to_wt.reshape((nr, nc))\n)",
"Check the spatial distribution of wells:",
"import numpy as np\n\n\ndef get_well_count(model):\n well_count = np.zeros((nr, nc), dtype=int)\n pumping_well_count = np.zeros((nr, nc), dtype=int)\n for cell in model.grid.coord_iter():\n cell_content, x, y = cell\n well_count[x][y] = len(cell_content)\n for agent in cell_content:\n if agent.pumping:\n pumping_well_count[x][y] += 1\n return well_count, pumping_well_count\n\n\nwell_count, p_well_count = get_well_count(farmer_model)\nimshow_grid(grid, well_count.flatten())",
"Set the initial recharge field",
"recharge[:] = -(pumping_rate / (dx * dx)) * p_well_count.flatten()\nimshow_grid(grid, -recharge * 3600 * 24, colorbar_label=\"Pumping rate (m/day)\")",
"Run the model",
"for i in range(run_duration_yrs):\n\n # Run the groundwater simulator for one year\n run_for_one_year(gdp, dt)\n\n # Update the depth to water table\n depth_to_wt[:] = elev - wt\n\n # Run the farmer model\n farmer_model.step()\n\n # Count the number of pumping wells\n well_count, pumping_well_count = get_well_count(farmer_model)\n total_pumping_wells = np.sum(pumping_well_count)\n print(f\"In year {i + 1} there are {total_pumping_wells} pumping wells\")\n print(f\" and the greatest depth to water table is {np.amax(depth_to_wt)} meters.\")\n\n # Update the recharge field according to current pumping rate\n recharge[:] = (\n background_recharge - (pumping_rate / (dx * dx)) * pumping_well_count.flatten()\n )\n print(f\"Total recharge: {np.sum(recharge)}\")\n print(\"\")\n\n plt.figure()\n imshow_grid(grid, wt)\n\nimshow_grid(grid, wt)\n\n# Display the area of water table that lies below the well depth\ndepth_to_wt[:] = elev - wt\ntoo_deep = depth_to_wt > well_depth\nimshow_grid(grid, too_deep)",
"This foregoing example is very simple, and leaves out many aspects of the complex problem of water extraction as a \"tragedy of the commons\". But it does illustrate how one can build a model that integrates agent-based dynamics with continuum dynamics by combining Landlab grid-based model code with Mesa ABM code."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joshnsolomon/phys202-2015-work | assignments/assignment05/InteractEx04.ipynb | mit | [
"Interact Exercise 4\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display",
"Line with Gaussian noise\nWrite a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\\sigma^2)$:\n$$\ny = m x + b + N(0,\\sigma^2)\n$$\nBe careful about the sigma=0.0 case.",
"def random_line(m, b, sigma, size=10):\n \"\"\"Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]\n \n Parameters\n ----------\n m : float\n The slope of the line.\n b : float\n The y-intercept of the line.\n sigma : float\n The standard deviation of the y direction normal distribution noise.\n size : int\n The number of points to create for the line.\n \n Returns\n -------\n x : array of floats\n The array of x values for the line with `size` points.\n y : array of floats\n The array of y values for the lines with `size` points.\n \"\"\"\n x = np.linspace(-1, 1, size)\n n = np.random.randn(size)\n y = np.zeros(size)\n for a in range(size):\n y[a] = m*x[a] + b + (sigma * n[a]) \n # formula for normal sitribution found on SciPy.org\n return x, y\n \n \n\n\nm = 0.0; b = 1.0; sigma=0.0; size=3\nx, y = random_line(m, b, sigma, size)\nassert len(x)==len(y)==size\nassert list(x)==[-1.0,0.0,1.0]\nassert list(y)==[1.0,1.0,1.0]\nsigma = 1.0\nm = 0.0; b = 0.0\nsize = 500\nx, y = random_line(m, b, sigma, size)\nassert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)\nassert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)",
"Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:\n\nMake the marker color settable through a color keyword argument with a default of red.\nDisplay the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.\nCustomize your plot to make it effective and beautiful.",
"def ticks_out(ax):\n \"\"\"Move the ticks to the outside of the box.\"\"\"\n ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')\n ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')\n\ndef plot_random_line(m, b, sigma, size=10, color='red'):\n \"\"\"Plot a random line with slope m, intercept b and size points.\"\"\"\n x, y = random_line(m, b, sigma, size)\n plt.scatter(x,y,color=color)\n plt.xlim(-1.1,1.1)\n plt.ylim(-10.0,10.0)\n \n \n\nplot_random_line(5.0, -1.0, 2.0, 50)\n\nassert True # use this cell to grade the plot_random_line function",
"Use interact to explore the plot_random_line function using:\n\nm: a float valued slider from -10.0 to 10.0 with steps of 0.1.\nb: a float valued slider from -5.0 to 5.0 with steps of 0.1.\nsigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.\nsize: an int valued slider from 10 to 100 with steps of 10.\ncolor: a dropdown with options for red, green and blue.",
"interact(plot_random_line, m=(-10.0,10.0,0.1),b=(-5.0,5.0,.1),sigma=(0.0,5.0,.01),size=(10,100,10),color = ['red','green','blue']);\n\n#### assert True # use this cell to grade the plot_random_line interact"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
oseledets/fastpde | lecture-7.ipynb | cc0-1.0 | [
"Lecture 7: Fast sparse solvers\nSparse matrix\nDEF: Sparse matrix is a matrix that contains $\\mathcal{O}(n)$ nonzero elements.\nSparse matrices are ubiquitous in PDEs\nConsider for example a 3D Poisson equation:\n$$\\Delta T = \\frac{\\partial^2T}{\\partial x^2}+\\frac{\\partial^2T}{\\partial y^2}+\\frac{\\partial^2T}{\\partial z^2}=f.$$\nAfter discretization we obtain five diagonal matrix A:",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport scipy as sp\nimport matplotlib.cm as cm\n%matplotlib inline\nN = 3\nB = np.diag(2*np.ones(N)) + np.diag((-1)*np.ones(N-1),k=-1)+ np.diag((-1)*np.ones(N-1),k = 1)\nId = np.diag(np.ones(N));\n# Assembling a 3D operator:\nA = np.kron(Id,np.kron(Id,B)) + np.kron(Id,np.kron(B,Id)) +np.kron(B,np.kron(Id,Id))\n\nplt.spy(A,markersize=34/N**2)",
"this matrix has $\\mathcal{O}(1)$ elements in a row, therefore it is sparse.\nFinite elements method is also likely to give you a system with a sparse matrix.\nHow to store a sparse matrix\nCoordinate format (coo)\n(i, j, value)\ni.e. store two integer arrays and one real array.\nEasy to add elements.\nBut how to multiply a matrix by a vector?\nCSR format\nA matrix is stored as 3 different arrays:\nsa, ja, ia\nwhere:\n\nnnz is the total number of non-zeros for the matrix\nsa is an real-value array of non-zeros for the matrix (length nnz)\nja is an integer array of column number of the non-zeros (length nnz)\nia is an integer array of locations of the first non-zero element in each row (length n+1)\n\n(Blackboard figure)\nIdea behind CSR\n\nFor each row i we store the column number of the non-zeros (and their) values\nWe stack this all together into ja and sa arrays\nWe save the location of the first non-zero element in each row\n\nCSR helps for matrix-by-vector product as well\nfor i in xrange(n):\n for k in xrange(ia(i):ia(i+1)-1):\n y(i) += sa(k) * x(ja(k))\nLet us do a short timing test",
"import numpy as np\nimport scipy as sp\nimport scipy.sparse\nimport scipy.sparse.linalg\nfrom scipy.sparse import csc_matrix, csr_matrix, coo_matrix, lil_matrix\n \nA = csr_matrix([10,10])\nB = lil_matrix([10,10])\nA[0,0] = 1\n#print A\nB[0,0] = 1\n#print B\n\nimport numpy as np\nimport scipy as sp\nimport scipy.sparse\nimport scipy.sparse.linalg\nfrom scipy.sparse import csc_matrix, csr_matrix, coo_matrix\nimport matplotlib.pyplot as plt\nimport time\n%matplotlib inline\nn = 1000\nex = np.ones(n);\nlp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); \ne = sp.sparse.eye(n)\nA = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)\nA = csr_matrix(A)\nrhs = np.ones(n * n)\nB = coo_matrix(A)\n#t0 = time.time()\n%timeit A.dot(rhs)\n#print time.time() - t0\n#t0 = time.time()\n%timeit B.dot(rhs)\n#print time.time() - t0",
"As you see, CSR is faster, and for more unstructured patterns the gain will be larger. \nCSR format has difficulties with adding new elements.\nHow to solve linear systems?\nDirect or iterative solvers\nDirect solvers\nThe direct methods use sparse Gaussian elimination, i.e.\nthey eliminate variables while trying to keep the matrix as sparse as possible. \nAnd often, the inverse of a sparse matrix is not \nsparse:\n(it corresponds to some integral operator, so it has block low-rank structure. Details will be later in this course)",
"N = n = 100\nex = np.ones(n);\na = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); \na = a.todense()\nb = np.array(np.linalg.inv(a))\nfig,axes = plt.subplots(1, 2)\naxes[0].spy(a)\naxes[1].spy(b,markersize=2)",
"Looks woefully.",
"N = n = 5\nex = np.ones(n);\nA = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); \nA = A.todense()\nB = np.array(np.linalg.inv(A))\n\nprint B",
"But occasionally L and U factors can be sparse.",
"p, l, u = scipy.linalg.lu(a)\nfig,axes = plt.subplots(1, 2)\naxes[0].spy(l)\naxes[1].spy(u)",
"In 1D factors L and U are bidiagonal.\nIn 2D factors L and U looks less optimistic, but still ok.)",
"n = 3\n\nex = np.ones(n);\nlp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); \ne = sp.sparse.eye(n)\nA = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)\nA = csc_matrix(A)\nT = scipy.sparse.linalg.splu(A)\nfig,axes = plt.subplots(1, 2)\naxes[0].spy(a, markersize=1)\naxes[1].spy(T.L, marker='.', markersize=0.4)",
"Sparse matrices and graph ordering\nThe number of non-zeros in the LU decomposition has a deep connection to the graph theory.\n(I.e., there is an edge between $(i, j)$ if $a_{ij} \\ne 0$.",
"import networkx as nx\nn = 13\nex = np.ones(n);\nlp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); \ne = sp.sparse.eye(n)\nA = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)\nA = csc_matrix(A)\nG = nx.Graph(A)\nnx.draw(G, pos=nx.spring_layout(G), node_size=10)",
"Strategies for elimination\nThe reordering that minimizes the fill-in is important, so we can use graph theory to find one.\n\nMinimum degree ordering - order by the degree of the vertex\nCuthill–McKee algorithm (and reverse Cuthill-McKee) -- order for a small bandwidth\nNested dissection: split the graph into two with minimal number of vertices on the separator",
"import networkx as nx\nfrom networkx.utils import reverse_cuthill_mckee_ordering, cuthill_mckee_ordering\nn = 13\nex = np.ones(n);\nlp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); \ne = sp.sparse.eye(n)\nA = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)\nA = csc_matrix(A)\nG = nx.Graph(A)\n#rcm = list(reverse_cuthill_mckee_ordering(G))\nrcm = list(reverse_cuthill_mckee_ordering(G))\nA1 = A[rcm, :][:, rcm]\nplt.spy(A1, marker='.', markersize=3)\n#p, L, U = scipy.linalg.lu(A1.todense())\n#plt.spy(L, marker='.', markersize=0.8)\n#nx.draw(G, pos=nx.spring_layout(G), node_size=10)",
"Florida sparse matrix collection\nFlorida sparse matrix collection which contains all sorts of matrices for different applications. It also allows for finding test matrices as well! Let's have a look.",
"from IPython.display import HTML\nHTML('<iframe src=http://yifanhu.net/GALLERY/GRAPHS/search.html width=700 height=450></iframe>')",
"Test some\nLet us check some sparse matrix (and its LU).",
"fname = 'crystm02.mat'\n!wget http://www.cise.ufl.edu/research/sparse/mat/Boeing/$fname\n\nfrom scipy.io import loadmat\nimport scipy.sparse\nq = loadmat(fname)\n#print q\nmat = q['Problem']['A'][0, 0]\nT = scipy.sparse.linalg.splu(mat)\n\n#Compute its LU\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.spy(T.L, markersize=0.1)",
"Iterative solvers\nThe main disadvantage of factorization methods is there computational complexity.\nA more efficient solution of linear systems can be obtained by iterative methods.\nThis requires a high convergence rate of the iterative process and low arithmetic cost of each iteration.\nModern iterative methods are mainly based on the idea of iteration on Krylov subspace.\n$$ \\mathcal{K}i = span{b,~Ab,~A^2b,~ ..,~ A^{i-1}b}, ~~ i = 1,2,..$$\n$$ x_i = argmin{ \\|b-Ax\\|{\\text{some norm}}:x\\in \\mathcal{K}_i} $$\nIn fact, to apply iterative solver to a system with matrix $~A$ all you need to know is\n\n\nhow to multiply matrix by vector\n\n\nhow to apply preconditioner\n\n\nPreconditioners\nIf A is ill conditioned then iterative methods give you a lot of iterations.\nYou can reduce number of iterations if you find matrix $~B$ (called preconditioner), such that $~AB$ or $~BA$ matrices has less conditional number.\n$$Ax=y \\Rightarrow BAx= By$$\n$$ABz= y, x= Bz.$$\nTo be a good preconditioner matrix $~B$ must be somehow close to inverse matrix of $~A$\n$$B \\approx A^{-1}.$$\nNote that $B = A^{-1}$ is a perfect preconditioner and gives you 1 iteration to converge.\nBut building this preconditioner requires as much operations as the direct solution requires.\nBuilding a preconditioner requires some compromise between time for building it and iterations time.\nTwo basic strategies for building preconditioner:\n\n\nUse information about elements of matrix $A$\n\n\nUse additional information about problem.\n\n\nThe first strategy, where we use information about elements of matrix $A$\nFor sparse matrices we use only non-zero elements.\nGood example is a method of Incomplete matrix factorization\nThe main idea here is to avoid full factorization by dropping some elements in the factorization.\nDrop rules specify type of incomplete factorization and type of preconditioner.\nStandard ILU preconditioners:\n\nILU($0$)\nILU(k)\nILUt\nILU2\n\nThe second strategy, where we use additional information about a problem\nHere we use additional information about where the matrix came from.\nFor example, Multigrid and Domain Decomposition methods (see next lecture for multigrid)",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nholtz/structural-analysis | Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb | cc0-1.0 | [
"Milestone 2 - this version has all the input completed, individually and each tested.\n2-Dimensional Frame Analysis - Version 04\nThis program performs an elastic analysis of 2-dimensional structural frames. It has the following features:\n1. Input is provided by a set of CSV files (and cell-magics exist so you can specifiy the CSV data\nin a notebook cell). See the example below for an, er, example.\n1. Handles concentrated forces on nodes, and concentrated forces, concentrated moments, and linearly varying distributed loads applied transversely anywhere along the member (i.e., there is as yet no way to handle longitudinal\nload components).\n1. It handles fixed, pinned, roller supports and member end moment releases (internal pins). The former are\nhandled by assigning free or fixed global degrees of freedom, and the latter are handled by adjusting the \nmember stiffness matrix.\n1. It has the ability to handle named sets of loads with factored combinations of these.\n1. The DOF #'s are assigned by the program, with the fixed DOF #'s assigned after the non-fixed. The equilibrium\nequation is then partitioned for solution. Among other advantages, this means that support settlement could be\neasily added (there is no UI for that, yet).\n1. A non-linear analysis can be performed using the P-Delta method (fake shears are computed at column ends due to the vertical load acting through horizontal displacement differences, and these shears are applied as extra loads\nto the nodes).\n1. A full non-linear (2nd order) elastic analysis will soon be available by forming the equilibrium equations \non the deformed structure. This is very easy to add, but it hasn't been done yet. Shouldn't be too long.\n1. There is very little no documentation below, but that will improve, slowly.",
"from __future__ import print_function\n\nimport salib as sl\nsl.import_notebooks()\nfrom Tables import Table\nfrom Nodes import Node\nfrom Members import Member\nfrom LoadSets import LoadSet, LoadCombination\nfrom NodeLoads import makeNodeLoad\nfrom MemberLoads import makeMemberLoad\nfrom collections import OrderedDict, defaultdict\nimport numpy as np\n\nclass Object(object):\n pass\n\nclass Frame2D(object):\n \n def __init__(self,dsname=None):\n self.dsname = dsname\n self.rawdata = Object()\n self.nodes = OrderedDict()\n self.members = OrderedDict()\n self.nodeloads = LoadSet()\n self.memberloads = LoadSet()\n self.loadcombinations = LoadCombination()\n #self.dofdesc = []\n #self.nodeloads = defaultdict(list)\n #self.membloads = defaultdict(list)\n self.ndof = 0\n self.nfree = 0\n self.ncons = 0\n self.R = None\n self.D = None\n self.PDF = None # P-Delta forces\n \n COLUMNS_xxx = [] # list of column names for table 'xxx'\n \n def get_table(self,tablename,extrasok=False,optional=False):\n columns = getattr(self,'COLUMNS_'+tablename)\n t = Table(tablename,columns=columns,optional=optional)\n t.read(optional=optional)\n reqdl= columns\n reqd = set(reqdl)\n prov = set(t.columns)\n if reqd-prov:\n raise Exception('Columns missing {} for table \"{}\". Required columns are: {}'\\\n .format(list(reqd-prov),tablename,reqdl))\n if not extrasok:\n if prov-reqd:\n raise Exception('Extra columns {} for table \"{}\". Required columns are: {}'\\\n .format(list(prov-reqd),tablename,reqdl))\n return t",
"Test Frame\n\nNodes",
"%%Table nodes\nNODEID,X,Y,Z\nA,0,0,5000\nB,0,4000,5000\nC,8000,4000,5000\nD,8000,0,5000\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_nodes = ('NODEID','X','Y')\n \n def install_nodes(self):\n node_table = self.get_table('nodes')\n for ix,r in node_table.data.iterrows():\n if r.NODEID in self.nodes:\n raise Exception('Multiply defined node: {}'.format(r.NODEID))\n n = Node(r.NODEID,r.X,r.Y)\n self.nodes[n.id] = n\n self.rawdata.nodes = node_table\n \n def get_node(self,id):\n try:\n return self.nodes[id]\n except KeyError:\n raise Exception('Node not defined: {}'.format(id))\n\n\n##test:\nf = Frame2D()\n\n##test:\nf.install_nodes()\n\n##test:\nf.nodes\n\n##test:\nf.get_node('C')",
"Supports",
"%%Table supports\nNODEID,C0,C1,C2\nA,FX,FY,MZ\nD,FX,FY\n\ndef isnan(x):\n if x is None:\n return True\n try:\n return np.isnan(x)\n except TypeError:\n return False\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_supports = ('NODEID','C0','C1','C2')\n \n def install_supports(self):\n table = self.get_table('supports')\n for ix,row in table.data.iterrows():\n node = self.get_node(row.NODEID)\n for c in [row.C0,row.C1,row.C2]:\n if not isnan(c):\n node.add_constraint(c)\n self.rawdata.supports = table\n\n##test:\nf.install_supports()\n\nvars(f.get_node('D'))",
"Members",
"%%Table members\nMEMBERID,NODEJ,NODEK\nAB,A,B\nBC,B,C\nDC,D,C\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_members = ('MEMBERID','NODEJ','NODEK')\n \n def install_members(self):\n table = self.get_table('members')\n for ix,m in table.data.iterrows():\n if m.MEMBERID in self.members:\n raise Exception('Multiply defined member: {}'.format(m.MEMBERID))\n memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))\n self.members[memb.id] = memb\n self.rawdata.members = table\n \n def get_member(self,id):\n try:\n return self.members[id]\n except KeyError:\n raise Exception('Member not defined: {}'.format(id))\n\n##test:\nf.install_members()\nf.members\n\n##test:\nm = f.get_member('BC')\nm.id, m.L, m.dcx, m.dcy",
"Releases",
"%%Table releases\nMEMBERID,RELEASE\nAB,MZK\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_releases = ('MEMBERID','RELEASE')\n \n def install_releases(self):\n table = self.get_table('releases',optional=True)\n for ix,r in table.data.iterrows():\n memb = self.get_member(r.MEMBERID)\n memb.add_release(r.RELEASE)\n self.rawdata.releases = table\n\n##test:\nf.install_releases()\n\n##test:\nvars(f.get_member('AB'))",
"Properties\nIf the SST module is loadable, member properties may be specified by giving steel shape designations\n(such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and\n$I_x$ directly (it only tries to lookup the properties if these two are not provided).",
"try:\n from sst import SST\n __SST = SST()\n get_section = __SST.section\nexcept ImportError:\n def get_section(dsg,fields):\n raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))\n ##return [1.] * len(fields.split(',')) # in case you want to do it that way\n\n%%Table properties\nMEMBERID,SIZE,IX,A\nBC,W460x106,,\nAB,W310x97,,\nDC,,\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_properties = ('MEMBERID','SIZE','IX','A')\n \n def install_properties(self):\n table = self.get_table('properties')\n table = self.fill_properties(table)\n for ix,row in table.data.iterrows():\n memb = self.get_member(row.MEMBERID)\n memb.size = row.SIZE\n memb.Ix = row.IX\n memb.A = row.A\n self.rawdata.properties = table\n \n def fill_properties(self,table):\n data = table.data\n for ix,row in data.iterrows():\n if type(row.SIZE) in [type(''),type(u'')]:\n if isnan(row.IX) or isnan(row.A):\n Ix,A = get_section(row.SIZE,'Ix,A')\n if isnan(row.IX):\n data.loc[ix,'IX'] = Ix\n if isnan(row.A):\n data.loc[ix,'A'] = A\n elif isnan(row.SIZE):\n data.loc[ix,'SIZE'] = ''\n table.data = data.fillna(method='ffill')\n return table\n\n##test:\nf.install_properties()\n\n##test:\nvars(f.get_member('DC'))",
"Node Loads",
"%%Table node_loads\nLOAD,NODEID,DIRN,F\nWind,B,FX,-200000.\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')\n \n def install_node_loads(self):\n table = self.get_table('node_loads')\n dirns = ['FX','FY','FZ']\n for ix,row in table.data.iterrows():\n n = self.get_node(row.NODEID)\n if row.DIRN not in dirns:\n raise ValueError(\"Invalid node load direction: {} for load {}, node {}; must be one of '{}'\"\n .format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))\n l = makeNodeLoad({row.DIRN:row.F})\n self.nodeloads.append(row.LOAD,n,l)\n self.rawdata.node_loads = table\n\n##test:\nf.install_node_loads()\n\n##test:\nfor o,l,fact in f.nodeloads.iterloads('Wind'):\n print(o,l,fact,l*fact)",
"Member Loads",
"%%Table member_loads\nLOAD,MEMBERID,TYPE,W1,W2,A,B,C\nLive,BC,UDL,-50,,,,\nLive,BC,PL,-200000,,5000\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')\n \n def install_member_loads(self):\n table = self.get_table('member_loads')\n for ix,row in table.data.iterrows():\n m = self.get_member(row.MEMBERID)\n l = makeMemberLoad(m.L,row)\n self.memberloads.append(row.LOAD,m,l)\n self.rawdata.member_loads = table\n\n##test:\nf.install_member_loads()\n\n##test:\nfor o,l,fact in f.memberloads.iterloads('Live'):\n print(o.id,l,fact,l.fefs()*fact)",
"Load Combinations",
"%%Table load_combinations\nCOMBO,LOAD,FACTOR\nOne,Live,1.5\nOne,Wind,1.75\n\[email protected](Frame2D)\nclass Frame2D:\n \n COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR')\n \n def install_load_combinations(self):\n table = self.get_table('load_combinations')\n for ix,row in table.data.iterrows():\n self.loadcombinations.append(row.COMBO,row.LOAD,row.FACTOR)\n self.rawdata.load_combinations = table\n\n##test:\nf.install_load_combinations()\n\n##test:\nfor o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):\n print(o.id,l,fact)\nfor o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):\n print(o.id,l,fact,l.fefs()*fact)",
"Load Iterators",
"@sl.extend(Frame2D)\nclass Frame2D:\n\n def iter_nodeloads(self,comboname):\n for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads):\n yield o,l,f\n \n def iter_memberloads(self,comboname):\n for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads):\n yield o,l,f\n\n##test:\nfor o,l,fact in f.iter_nodeloads('One'):\n print(o.id,l,fact)\nfor o,l,fact in f.iter_memberloads('One'):\n print(o.id,l,fact)",
"Support Constraints",
"%%Table supports\nNODEID,C0,C1,C2\nA,FX,FY,MZ\nD,FX,FY",
"Accumulated Cell Data",
"##test:\nTable.CELLDATA"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
robblack007/clase-dinamica-robot | Practicas/practica2/numerico.ipynb | mit | [
"Práctica 2 - Cinemática directa y dinámica de manipuladores\nUna vez obtenida la dinámica del manipulador, tenemos la necesidad de construir una función f para poder simular el comportamiento del manipulador, empecemos escribiendo la ecuación:\n$$\n\\tau =\n\\begin{bmatrix}\nJ_1 + J_2 + m_1 l_1^2 + m_2 l_1^2 + \\mu_1 + 2 \\mu_2 c_2 & J_2 + \\mu_1 + \\mu_2 c_2 \\\nJ_2 + \\mu_1 + \\mu_2 c_2 & J_2 + \\mu_1\n\\end{bmatrix}\\ddot{q} - \\mu_2 s_2\n\\begin{bmatrix}\n2 \\dot{q}2 & \\dot{q}_2 \\ -\\dot{q}_1 & 0\n\\end{bmatrix} + g\n\\begin{bmatrix}\nm_1 l_1 c_1 + m_2 l_1 c_1 + m_2 l_2 c{12} \\ m_2 l_2 c_{12}\n\\end{bmatrix}\n$$\nen donde $\\mu_1 = m_2 l_2^2$ y $\\mu_2 = m_2 l_1 l_2$; por lo que de aqui en adelante, podemos caracterizar la dinámica de este manipulador como la siguiente ecuación:\n$$\n\\tau = M(q)\\ddot{q} + C(q, \\dot{q}) + G(q)\n$$\nSi ahora cambiamos nuestra atención al problema de contruir la función\n$$\n\\dot{x} = f(x, t)\n$$\ntenemos que empezar por que representa el estado $x$.\nEn el ejercicio pasado nuestro manipulador tenía un solo grado de libertad, por lo que el estado terminaba siendo:\n$$\nx =\n\\begin{pmatrix}\nq_1 \\ \\dot{q}_1\n\\end{pmatrix}\n$$\nEn este caso, nuestro manipulador tiene dos grados de libertad, por lo que necesitamos que el estado incluya a la posición de ambos grados de libertad, así como su velocidad:\n$$\nx =\n\\begin{pmatrix}\nq_1 \\ q_2 \\ \\dot{q}_1 \\ \\dot{q}_2\n\\end{pmatrix}\n$$\nPor lo que para construir $f(x,t)$, necesitamos calcular los siguientes terminos:\n$$\n\\dot{x} =\n\\begin{pmatrix}\n\\dot{q}_1 \\ \\dot{q}_2 \\ \\ddot{q}_1 \\ \\ddot{q}_2\n\\end{pmatrix}\n$$\nen donde los primeros dos terminos son triviales, ya que son los mismos que obtenemos del estado del sistema ($\\dot{q}_1$, $\\dot{q}_2$), y los segundos dos terminos los podemos obtener de la ecuación de movimiento del manipulador:\n$$\n\\ddot{q} = M^{-1}\\left( \\tau - C(q, \\dot{q})\\dot{q} - G(q) \\right)\n$$",
"def f(t, x):\n # Se importan funciones matematicas necesarias\n from numpy import matrix, sin, cos\n # Se desenvuelven las variables que componen al estado\n q1, q2, q̇1, q̇2 = x\n # Se definen constantes del sistema\n g = 9.81\n m1, m2, J1, J2 = 0.3, 0.2, 0.0005, 0.0002\n l1, l2 = 0.4, 0.3\n τ1, τ2 = 0, 0\n # Se agrupan terminos en vectores\n q̇ = matrix([[q̇1], [q̇2]])\n τ = matrix([[τ1], [τ2]])\n # Se calculan terminos comúnes\n μ1 = m2*l2**2\n μ2 = m2*l1*l2\n c1 = cos(q1)\n c2 = cos(q2)\n s2 = sin(q2)\n c12 = cos(q1 + q2)\n # Se calculan las matrices de la ecuación de movimiento\n # ESCRIBE TU CODIGO AQUI\n raise NotImplementedError\n # Se calculan las variables a devolver por el sistema\n # ESCRIBE TU CODIGO AQUI\n raise NotImplementedError\n q1pp = qpp.item(0)\n q2pp = qpp.item(1)\n # Se devuelve la derivada de las variables de entrada\n return [q1p, q2p, q1pp, q2pp]\n\nfrom numpy.testing import assert_almost_equal\nassert_almost_equal(f(0, [0, 0, 0, 0]), [0,0,-1392.38, 3196.16], 2)\nassert_almost_equal(f(0, [1, 1, 0, 0]), [0,0,-53.07, 104.34], 2)\nprint(\"Sin errores\")",
"Mandamos llamar al simulador",
"from robots.simuladores import simulador\n\n%matplotlib widget\nts, xs = simulador(puerto_zmq=\"5551\", f=f, x0=[0, 0, 0, 0], dt=0.02)",
"El argumento puerto_zmq se refiere al puerto por el cual esta mandando datos, para el visualizador descrito a continuación.\n\nEstos datos se estan actualizando en tiempo real, y si bien es posible ver el comportamiento general con esta gráfica, tambien es posible ver una visualización en tiempo real de este manipulador, para lo cual es necesario mantener este documento abierto, mientras se abre el documento visualizacion.ipynb."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phungkh/phys202-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | [
"Optimization Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt",
"Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:",
"def hat(x,a,b):\n return (-a*x**2 + b*x**4)\n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0",
"Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:",
"a = 5.0\nb = 1.0\n\nx = np.linspace(-3,3,1000)\nplt.plot(x, hat(x,a,b))\n\nassert True # leave this to grade the plot",
"Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.",
"min1 = opt.minimize(hat, x0 =-1.7,args=(a,b))\nmin2=opt.minimize(hat, x0 =1.7, args=(a,b))\nprint(min1,min2)\n\nprint('Our minimas are x=-1.58113883 and x=1.58113882')\n\nplt.figure(figsize=(7,5))\nplt.plot(x,hat(x,a,b), color = 'b',label='hat potential')\nplt.box(False)\nplt.title('Hat Potential')\nplt.scatter(x=-1.58113883,y=hat(x=-1.58113883,a=5,b=1), color='r', label='min1')\nplt.scatter(x=1.58113883,y=hat(x=-1.58113883,a=5,b=1), color='r',label='min2')\nplt.legend()\n\n\nassert True # leave this for grading the plot",
"To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters.\n$$ V(x) = -5 x^2 + 1 x^4 $$\nTake the derivative\n$$ \\frac{dV}{dx} = -10x + 4x^3 $$\nset derivative to 0 and solve for x\n$$ 0 = (-10+4x^2)x $$\ncritical points are $$x = 0 $$ and $$ x=\\sqrt\\frac{10}{4} $$ and $$ x=-\\sqrt\\frac{10}{4} $$\nCheck concavity by taking the second derivative\n$$ \\frac{d^2V}{dx^2} = - 10 + 12 x^2 $$\nAt x = 0, concavity is negative so local maxima is at x=0.\nAt x= $$ \\sqrt\\frac{10}{4} $$ concavity is positive, so they are the local minimas."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub | notebooks/thu/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: THU\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'thu', 'sandbox-3', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fmaschler/networkit | Doc/uploads/docs/SpectralCentrality.ipynb | mit | [
"Centrality\nThis evaluates the Eigenvector Centrality and PageRank implemented in Python against C++-native EVZ and PageRank. The Python implementation uses SciPy (and thus ARPACK) to compute the eigenvectors, while the C++ method implements a power iteration method itself.",
"cd ../../\n\nimport networkit\n\nG = networkit.graphio.readGraph(\"input/celegans_metabolic.graph\", networkit.Format.METIS)",
"First, we just compute the Python EVZ and display a sample. The \"scores()\" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...",
"evzSciPy = networkit.centrality.SciPyEVZ(G, normalized=True)\nevzSciPy.run()\nevzSciPy.scores()[:10]",
"We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality.",
"evzSciPy.ranking()[:10]",
"Compute the EVZ using the C++ backend and also display the 10 most important vertices, just as above. This should hopefully look similar...\nPlease note: The normalization argument may not be passed as a named argument to the C++-backed centrality measures. This is due to some limitation in the C++ wrapping code.",
"evz = networkit.centrality.EigenvectorCentrality(G, True)\nevz.run()\nevz.ranking()[:10]",
"Now, let's take a look at the PageRank. First, compute the PageRank using the C++ backend and display the 10 most important vertices. The second argument to the algorithm is the dampening factor, i.e. the probability that a random walk just stops at a vertex and instead teleports to some other vertex.",
"pageRank = networkit.centrality.PageRank(G, 0.95, True)\npageRank.run()\npageRank.ranking()[:10]",
"Same in Python...",
"SciPyPageRank = networkit.centrality.SciPyPageRank(G, 0.95, normalized=True)\nSciPyPageRank.run()\nSciPyPageRank.ranking()[:10]",
"If everything went well, these should look similar, too.\nFinally, we take a look at the relative differences between the computed centralities for the vertices:",
"differences = [(max(x[0], x[1]) / min(x[0], x[1])) - 1 for x in zip(evz.scores(), evzSciPy.scores())]\nprint(\"Average relative difference: {}\".format(sum(differences) / len(differences)))\nprint(\"Maximum relative difference: {}\".format(max(differences)))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jrg365/gpytorch | examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb | mit | [
"GP Regression with LOVE for Fast Predictive Variances and Sampling\nOverview\nIn this notebook, we demonstrate that LOVE (the method for fast variances and sampling introduced in this paper https://arxiv.org/abs/1803.06058) can significantly reduce the cost of computing predictive distributions. This can be especially useful in settings like small-scale Bayesian optimization, where predictions need to be made at enormous numbers of candidate points.\nIn this notebook, we will train a KISS-GP model on the skillcraftUCI dataset, and then compare the time required to make predictions with each model.\nNOTE: The timing results reported in the paper compare the time required to compute (co)variances only. Because excluding the mean computations from the timing results requires hacking the internals of GPyTorch, the timing results presented in this notebook include the time required to compute predictive means, which are not accelerated by LOVE. Nevertheless, as we will see, LOVE achieves impressive speed-ups.",
"import math\nimport torch\nimport gpytorch\nimport tqdm\nfrom matplotlib import pyplot as plt\n\n# Make plots inline\n%matplotlib inline",
"Loading Data\nFor this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 40% of the data as training and the last 60% as testing.\nNote: Running the next cell will attempt to download a small dataset file to the current directory.",
"import urllib.request\nimport os\nfrom scipy.io import loadmat\nfrom math import floor\n\n\n# this is for running the notebook in our testing framework\nsmoke_test = ('CI' in os.environ)\n\n\nif not smoke_test and not os.path.isfile('../elevators.mat'):\n print('Downloading \\'elevators\\' UCI dataset...')\n urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')\n\n\nif smoke_test: # this is for running the notebook in our testing framework\n X, y = torch.randn(100, 3), torch.randn(100)\nelse:\n data = torch.Tensor(loadmat('../elevators.mat')['data'])\n X = data[:, :-1]\n X = X - X.min(0)[0]\n X = 2 * (X / X.max(0)[0]) - 1\n y = data[:, -1]\n\n\ntrain_n = int(floor(0.8 * len(X)))\ntrain_x = X[:train_n, :].contiguous()\ntrain_y = y[:train_n].contiguous()\n\ntest_x = X[train_n:, :].contiguous()\ntest_y = y[train_n:].contiguous()\n\nif torch.cuda.is_available():\n train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()",
"LOVE can be used with any type of GP model, including exact GPs, multitask models and scalable approximations. Here we demonstrate LOVE in conjunction with KISS-GP, which has the amazing property of producing constant time variances.\nThe KISS-GP + LOVE GP Model\nWe now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an Deep RBF base kernel. The forward method passes the input data x through the neural network feature extractor defined above, scales the resulting features to be between 0 and 1, and then calls the kernel.\nThe Deep RBF kernel (DKL) uses a neural network as an initial feature extractor. In this case, we use a fully connected network with the architecture d -> 1000 -> 500 -> 50 -> 2, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.",
"class LargeFeatureExtractor(torch.nn.Sequential): \n def __init__(self, input_dim): \n super(LargeFeatureExtractor, self).__init__() \n self.add_module('linear1', torch.nn.Linear(input_dim, 1000))\n self.add_module('relu1', torch.nn.ReLU()) \n self.add_module('linear2', torch.nn.Linear(1000, 500)) \n self.add_module('relu2', torch.nn.ReLU()) \n self.add_module('linear3', torch.nn.Linear(500, 50)) \n self.add_module('relu3', torch.nn.ReLU()) \n self.add_module('linear4', torch.nn.Linear(50, 2)) \n\n\nclass GPRegressionModel(gpytorch.models.ExactGP):\n def __init__(self, train_x, train_y, likelihood):\n super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)\n \n self.mean_module = gpytorch.means.ConstantMean()\n self.covar_module = gpytorch.kernels.GridInterpolationKernel(\n gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()),\n grid_size=100, num_dims=2,\n )\n \n # Also add the deep net\n self.feature_extractor = LargeFeatureExtractor(input_dim=train_x.size(-1))\n\n def forward(self, x):\n # We're first putting our data through a deep net (feature extractor)\n # We're also scaling the features so that they're nice values\n projected_x = self.feature_extractor(x)\n projected_x = projected_x - projected_x.min(0)[0]\n projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1\n \n # The rest of this looks like what we've seen\n mean_x = self.mean_module(projected_x)\n covar_x = self.covar_module(projected_x)\n return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\n\n \nlikelihood = gpytorch.likelihoods.GaussianLikelihood()\nmodel = GPRegressionModel(train_x, train_y, likelihood)\n\nif torch.cuda.is_available():\n model = model.cuda()\n likelihood = likelihood.cuda()",
"Training the model\nThe cell below trains the GP model, finding optimal hyperparameters using Type-II MLE. We run 20 iterations of training using the Adam optimizer built in to PyTorch. With a decent GPU, this should only take a few seconds.",
"training_iterations = 1 if smoke_test else 20\n\n\n# Find optimal model hyperparameters\nmodel.train()\nlikelihood.train()\n\n# Use the adam optimizer\noptimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters\n\n# \"Loss\" for GPs - the marginal log likelihood\nmll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\n\n\ndef train():\n iterator = tqdm.notebook.tqdm(range(training_iterations))\n for i in iterator:\n optimizer.zero_grad()\n output = model(train_x)\n loss = -mll(output, train_y)\n loss.backward()\n iterator.set_postfix(loss=loss.item())\n optimizer.step()\n \n%time train()",
"Computing predictive variances (KISS-GP or Exact GPs)\nUsing standard computaitons (without LOVE)\nThe next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in preds.mean) using the standard SKI testing code, with no acceleration or precomputation. \nNote: Full predictive covariance matrices (and the computations needed to get them) can be quite memory intensive. Depending on the memory available on your GPU, you may need to reduce the size of the test set for the code below to run. If you run out of memory, try replacing test_x below with something like test_x[:1000] to use the first 1000 test points only, and then restart the notebook.",
"import time\n\n# Set into eval mode\nmodel.eval()\nlikelihood.eval()\n\nwith torch.no_grad():\n start_time = time.time()\n preds = likelihood(model(test_x))\n exact_covar = preds.covariance_matrix\n exact_covar_time = time.time() - start_time\n \nprint(f\"Time to compute exact mean + covariances: {exact_covar_time:.2f}s\")",
"Using LOVE\nNext we compute predictive covariances (and the predictive means) for LOVE, but starting from scratch. That is, we don't yet have access to the precomputed cache discussed in the paper. This should still be faster than the full covariance computation code above.\nTo use LOVE, use the context manager with gpytorch.settings.fast_pred_var():\nYou can also set some of the LOVE settings with context managers as well. For example, gpytorch.settings.max_root_decomposition_size(100) affects the accuracy of the LOVE solves (larger is more accurate, but slower).\nIn this simple example, we allow a rank 100 root decomposition, although increasing this to rank 20-40 should not affect the timing results substantially.",
"# Clear the cache from the previous computations\nmodel.train()\nlikelihood.train()\n\n# Set into eval mode\nmodel.eval()\nlikelihood.eval()\n\nwith torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(100):\n start_time = time.time()\n preds = model(test_x)\n fast_time_no_cache = time.time() - start_time",
"The above cell additionally computed the caches required to get fast predictions. From this point onwards, unless we put the model back in training mode, predictions should be extremely fast. The cell below re-runs the above code, but takes full advantage of both the mean cache and the LOVE cache for variances.",
"with torch.no_grad(), gpytorch.settings.fast_pred_var():\n start_time = time.time()\n preds = likelihood(model(test_x))\n fast_covar = preds.covariance_matrix\n fast_time_with_cache = time.time() - start_time\n\nprint('Time to compute mean + covariances (no cache) {:.2f}s'.format(fast_time_no_cache))\nprint('Time to compute mean + variances (cache): {:.2f}s'.format(fast_time_with_cache))",
"Compute Error between Exact and Fast Variances\nFinally, we compute the mean absolute error between the fast variances computed by LOVE (stored in fast_covar), and the exact variances computed previously. \nNote that these tests were run with a root decomposition of rank 10, which is about the minimum you would realistically ever run with. Despite this, the fast variance estimates are quite good. If more accuracy was needed, increasing max_root_decomposition_size would provide even better estimates.",
"mae = ((exact_covar - fast_covar).abs() / exact_covar.abs()).mean()\nprint(f\"MAE between exact covar matrix and fast covar matrix: {mae:.6f}\")",
"Computing posterior samples (KISS-GP only)\nWith KISS-GP models, LOVE can also be used to draw fast posterior samples. (The same does not apply to exact GP models.)\nDrawing samples the standard way (without LOVE)\nWe now draw samples from the posterior distribution. Without LOVE, we accomlish this by performing Cholesky on the posterior covariance matrix. This can be slow for large covariance matrices.",
"import time\nnum_samples = 20 if smoke_test else 20000\n\n\n# Set into eval mode\nmodel.eval()\nlikelihood.eval()\n\nwith torch.no_grad():\n start_time = time.time()\n exact_samples = model(test_x).rsample(torch.Size([num_samples]))\n exact_sample_time = time.time() - start_time\n \nprint(f\"Time to compute exact samples: {exact_sample_time:.2f}s\")",
"Using LOVE\nNext we compute posterior samples (and the predictive means) using LOVE.\nThis requires the additional context manager with gpytorch.settings.fast_pred_samples():.\nNote that we also need the with gpytorch.settings.fast_pred_var(): flag turned on. Both context managers respond to the gpytorch.settings.max_root_decomposition_size(100) setting.",
"# Clear the cache from the previous computations\nmodel.train()\nlikelihood.train()\n\n# Set into eval mode\nmodel.eval()\nlikelihood.eval()\n\nwith torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(200):\n # NEW FLAG FOR SAMPLING\n with gpytorch.settings.fast_pred_samples():\n start_time = time.time()\n _ = model(test_x).rsample(torch.Size([num_samples]))\n fast_sample_time_no_cache = time.time() - start_time\n \n# Repeat the timing now that the cache is computed\nwith torch.no_grad(), gpytorch.settings.fast_pred_var():\n with gpytorch.settings.fast_pred_samples():\n start_time = time.time()\n love_samples = model(test_x).rsample(torch.Size([num_samples]))\n fast_sample_time_cache = time.time() - start_time\n \nprint('Time to compute LOVE samples (no cache) {:.2f}s'.format(fast_sample_time_no_cache))\nprint('Time to compute LOVE samples (cache) {:.2f}s'.format(fast_sample_time_cache))",
"Compute the empirical covariance matrices\nLet's see how well LOVE samples and exact samples recover the true covariance matrix.",
"# Compute exact posterior covar\nwith torch.no_grad():\n start_time = time.time()\n posterior = model(test_x)\n mean, covar = posterior.mean, posterior.covariance_matrix\n\nexact_empirical_covar = ((exact_samples - mean).t() @ (exact_samples - mean)) / num_samples\nlove_empirical_covar = ((love_samples - mean).t() @ (love_samples - mean)) / num_samples\n\nexact_empirical_error = ((exact_empirical_covar - covar).abs()).mean()\nlove_empirical_error = ((love_empirical_covar - covar).abs()).mean()\n\nprint(f\"Empirical covariance MAE (Exact samples): {exact_empirical_error}\")\nprint(f\"Empirical covariance MAE (LOVE samples): {love_empirical_error}\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
seniosh/StatisticalMethods | notes/InferenceSandbox.ipynb | gpl-2.0 | [
"Inference Sandbox\nIn this notebook, we'll mock up some data from the linear model, as reviewed here. Then it's your job to implement a Metropolis sampler and constrain the posterior distriubtion. The goal is to play with various strategies for accelerating the convergence and acceptance rate of the chain. Remember to check the convergence and stationarity of your chains, and compare them to the known analytic posterior for this problem!\nGenerate a data set:",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 5.0) \n\n# the model parameters\na = np.pi\nb = 1.6818\n\n# my arbitrary constants\nmu_x = np.exp(1.0) # see definitions above\ntau_x = 1.0\ns = 1.0\nN = 50 # number of data points\n\n# get some x's and y's\nx = mu_x + tau_x*np.random.randn(N)\ny = a + b*x + s*np.random.randn(N)\n\nplt.plot(x, y, 'o');",
"Package up a log-posterior function.",
"def lnPost(params, x, y):\n # This is written for clarity rather than numerical efficiency. Feel free to tweak it.\n a = params[0]\n b = params[1]\n lnp = 0.0\n # Using informative priors to achieve faster convergence is cheating in this exercise!\n # But this is where you would add them.\n lnp += -0.5*np.sum((a+b*x - y)**2)\n return lnp",
"Convenience functions encoding the exact posterior:",
"class ExactPosterior:\n def __init__(self, x, y, a0, b0):\n X = np.matrix(np.vstack([np.ones(len(x)), x]).T)\n Y = np.matrix(y).T\n self.invcov = X.T * X\n self.covariance = np.linalg.inv(self.invcov)\n self.mean = self.covariance * X.T * Y\n self.a_array = np.arange(0.0, 6.0, 0.02)\n self.b_array = np.arange(0.0, 3.25, 0.02)\n self.P_of_a = np.array([self.marg_a(a) for a in self.a_array])\n self.P_of_b = np.array([self.marg_b(b) for b in self.b_array])\n self.P_of_ab = np.array([[self.lnpost(a,b) for a in self.a_array] for b in self.b_array])\n self.P_of_ab = np.exp(self.P_of_ab)\n self.renorm = 1.0/np.sum(self.P_of_ab)\n self.P_of_ab = self.P_of_ab * self.renorm\n self.levels = scipy.stats.chi2.cdf(np.arange(1,4)**2, 1) # confidence levels corresponding to contours below\n self.contourLevels = self.renorm*np.exp(self.lnpost(a0,b0)-0.5*scipy.stats.chi2.ppf(self.levels, 2))\n def lnpost(self, a, b): # the 2D posterior\n z = self.mean - np.matrix([[a],[b]])\n return -0.5 * (z.T * self.invcov * z)[0,0]\n def marg_a(self, a): # marginal posterior of a\n return scipy.stats.norm.pdf(a, self.mean[0,0], np.sqrt(self.covariance[0,0]))\n def marg_b(self, b): # marginal posterior of b\n return scipy.stats.norm.pdf(b, self.mean[1,0], np.sqrt(self.covariance[1,1]))\nexact = ExactPosterior(x, y, a, b)",
"Demo some plots of the exact posterior distribution",
"plt.plot(exact.a_array, exact.P_of_a);\n\nplt.plot(exact.b_array, exact.P_of_b);\n\nplt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels);\nplt.plot(a, b, 'o', color='red');",
"Ok, you're almost ready to go! A decidely minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to\n\nvisually inspect traces of each parameter to see whether they appear converged \ncompare the marginal and joint posterior distributions to the exact solution to check whether they've converged to the correct distribution\nNormally, you should always use quantitative tests of convergence in addition to visual inspection, as you saw on Tuesday. For this class (only), let's save some time by relying only on visual impressions and comparison to the exact posterior.\n\n\n\n(see the snippets farther down)\nIf you think you have a sampler that works well, use it to run some more chains from different starting points and compare them both visually and using the numerical convergence criteria covered in class.\nOnce you have a working sampler, the question is: how can we make it converge faster? Experiment! We'll compare notes in a bit.",
"Nsamples = 501**(2)\nsamples = np.zeros((Nsamples, 2))\n# put any more global definitions here\n\n\n\ndef proposal(a_try, b_try, temperature):\n a = a_try + temperature*np.random.randn(1)\n b = b_try + temperature*np.random.randn(1)\n return a, b\n\ndef we_accept_this_proposal(lnp_try, lnp_current):\n return np.exp(lnp_try - lnp_current) > np.random.uniform()\n\ntemperature = 0.1\na_current, b_current = proposal(0, 0, temperature)\nlnp_current = lnPost([a_current, b_current], x, y)\n\nfor i in range(Nsamples):\n a_try, b_try = proposal(a_current, b_current, temperature) # propose new parameter value(s)\n lnp_try = lnPost([a_try,b_try], x, y) # calculate posterior density for the proposal\n if we_accept_this_proposal(lnp_try, lnp_current):\n lnp_current = lnp_try\n a_current, b_current = (a_try, b_try)\n else:\n pass\n samples[i, 0] = a_current\n samples[i, 1] = b_current\n\n\n\nplt.rcParams['figure.figsize'] = (12.0, 3.0)\nplt.plot(samples[:,0])\nplt.plot(samples[:,1]);\n\nplt.rcParams['figure.figsize'] = (5.0, 5.0)\nplt.plot(samples[:,0], samples[:,1]);\n\nplt.rcParams['figure.figsize'] = (5.0, 5.0)\nplt.hist(samples[:,0], 20, normed=True, color='cyan');\nplt.plot(exact.a_array, exact.P_of_a, color='red');\n\nplt.rcParams['figure.figsize'] = (5.0, 5.0)\nplt.hist(samples[:,1], 20, normed=True, color='cyan');\nplt.plot(exact.b_array, exact.P_of_b, color='red');\n\n# If you know how to easily overlay the 2D sample and theoretical confidence regions, by all means do so."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyDataMallorca/WS_Introduction_to_data_science | ml_miguel/Crackeando el guess who.ipynb | gpl-3.0 | [
"Cual es la mejor estrategia para adivinar?\nPor Miguel Escalona",
"import pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\nfrom IPython.display import Image",
"¡Adivina Quién es!\nEl juego de adivina quién es, consiste en adivinar el personaje que tu oponente ha seleccionado antes de que él/ella adivine el tuyo.\nLa dinámica del juego es:\n* Cada jugador elige un personaje al azar \n* Por turnos, cada jugador realiza preguntas de sí o no, e intenta adivinar el personaje del oponente.\n* Las preguntas válidas están basadas en la apariencia de los personajes y deberían ser fáciles de responder.\n* Ejemplo de pregunta válida: ¿Tiene el cabello negro?\n* Ejemplo de pregunta no válida: ¿Luce como un ex-presidiario?\nA continuación, cargamos el tablero con los personajes.",
"Image('data/guess_who_board.jpg', width=700)",
"Cargando los datos\nPara la carga de datos usaremos la función read_csv de pandas. Pandas cuenta con un amplio listado de funciones para la carga de datos. Mas informacion en la documentación de la API.",
"df = pd.read_csv('data/guess_who.csv', index_col='observacion')\ndf.head()",
"¿Cuántos personajes tenemos con cada caracteristica?",
"#Separamos los tipos de variables\ncategorical_var = 'color de cabello'\nbinary_vars = list(set(df.keys()) - set([categorical_var, 'NOMBRE']))\n\n# Para las variables booleanas calculamos la suma\ndf[binary_vars].sum()\n\n# Para las variables categoricas, observamos la frecuencia de cada categoría\ndf[categorical_var].value_counts()\n\nlabels = df['NOMBRE']\ndel df['NOMBRE'] \ndf.head()\n\nlabels",
"Codificación de variables categóricas",
"from sklearn.feature_extraction import DictVectorizer\nvectorizer = DictVectorizer(sparse=False)\nab=vectorizer.fit_transform(df.to_dict('records'))\ndft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())\ndft.head()",
"Entrenando un arbol de decisión",
"from sklearn.tree import DecisionTreeClassifier\n\nclassifier = DecisionTreeClassifier(criterion='entropy', splitter='random', random_state=42)\nclassifier.fit(dft, labels)",
"Obtención de los pesos de cada feature",
"classifier.feature_importances_\n\nfeat = pd.DataFrame(index=dft.keys(), data=classifier.feature_importances_, columns=['score'])\nfeat = feat.sort_values(by='score', ascending=False)\n\nfeat.plot(kind='bar',rot=85,figsize=(10,4),)",
"Bonus: Visualizando el arbol, requiere graphviz\nconda install graphviz",
"from sklearn.tree import export_graphviz\ndotfile = open('guess_who_tree.dot', 'w')\nexport_graphviz(\n classifier, \n out_file = dotfile, \n filled=True, \n feature_names = dft.columns, \n class_names=list(labels), \n rotate=True, \n max_depth=1, \n rounded=True,\n)\ndotfile.close()\n\n!dot -Tpng guess_who_tree.dot -o guess_who_tree.png \n\nImage('guess_who_tree.png', width=1000)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NathanYee/ThinkBayes2 | code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb | gpl-2.0 | [
"Think Bayes: Chapter 7\nThis notebook presents code and exercises from Think Bayes, second edition.\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from __future__ import print_function, division\n\n% matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport math\nimport numpy as np\n\nfrom thinkbayes2 import Pmf, Cdf, Suite, Joint\nimport thinkplot",
"Warm-up exercises\nExercise: Suppose that goal scoring in hockey is well modeled by a \nPoisson process, and that the long-run goal-scoring rate of the\nBoston Bruins against the Vancouver Canucks is 2.9 goals per game.\nIn their next game, what is the probability\nthat the Bruins score exactly 3 goals? Plot the PMF of k, the number\nof goals they score in a game.",
"# Solution goes here\n\n# Solution goes here\n\n# Solution goes here",
"Exercise: Assuming again that the goal scoring rate is 2.9, what is the probability of scoring a total of 9 goals in three games? Answer this question two ways:\n\n\nCompute the distribution of goals scored in one game and then add it to itself twice to find the distribution of goals scored in 3 games.\n\n\nUse the Poisson PMF with parameter $\\lambda t$, where $\\lambda$ is the rate in goals per game and $t$ is the duration in games.",
"# Solution goes here\n\n# Solution goes here",
"Exercise: Suppose that the long-run goal-scoring rate of the\nCanucks against the Bruins is 2.6 goals per game. Plot the distribution\nof t, the time until the Canucks score their first goal.\nIn their next game, what is the probability that the Canucks score\nduring the first period (that is, the first third of the game)?\nHint: thinkbayes2 provides MakeExponentialPmf and EvalExponentialCdf.",
"# Solution goes here\n\n# Solution goes here\n\n# Solution goes here",
"Exercise: Assuming again that the goal scoring rate is 2.8, what is the probability that the Canucks get shut out (that is, don't score for an entire game)? Answer this question two ways, using the CDF of the exponential distribution and the PMF of the Poisson distribution.",
"# Solution goes here\n\n# Solution goes here",
"The Boston Bruins problem\nThe Hockey suite contains hypotheses about the goal scoring rate for one team against the other. The prior is Gaussian, with mean and variance based on previous games in the league.\nThe Likelihood function takes as data the number of goals scored in a game.",
"from thinkbayes2 import MakeNormalPmf\nfrom thinkbayes2 import EvalPoissonPmf\n\nclass Hockey(Suite):\n \"\"\"Represents hypotheses about the scoring rate for a team.\"\"\"\n\n def __init__(self, label=None):\n \"\"\"Initializes the Hockey object.\n\n label: string\n \"\"\"\n mu = 2.8\n sigma = 0.3\n\n pmf = MakeNormalPmf(mu, sigma, num_sigmas=4, n=101)\n Suite.__init__(self, pmf, label=label)\n \n def Likelihood(self, data, hypo):\n \"\"\"Computes the likelihood of the data under the hypothesis.\n\n Evaluates the Poisson PMF for lambda and k.\n\n hypo: goal scoring rate in goals per game\n data: goals scored in one game\n \"\"\"\n lam = hypo\n k = data\n like = EvalPoissonPmf(k, lam)\n return like",
"Now we can initialize a suite for each team:",
"suite1 = Hockey('bruins')\nsuite2 = Hockey('canucks')",
"Here's what the priors look like:",
"thinkplot.PrePlot(num=2)\nthinkplot.Pdf(suite1)\nthinkplot.Pdf(suite2)\nthinkplot.Config(xlabel='Goals per game',\n ylabel='Probability')",
"And we can update each suite with the scores from the first 4 games.",
"suite1.UpdateSet([0, 2, 8, 4])\nsuite2.UpdateSet([1, 3, 1, 0])\n\nthinkplot.PrePlot(num=2)\nthinkplot.Pdf(suite1)\nthinkplot.Pdf(suite2)\nthinkplot.Config(xlabel='Goals per game',\n ylabel='Probability')\n\nsuite1.Mean(), suite2.Mean()",
"To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons:",
"from thinkbayes2 import MakeMixture\nfrom thinkbayes2 import MakePoissonPmf\n\ndef MakeGoalPmf(suite, high=10):\n \"\"\"Makes the distribution of goals scored, given distribution of lam.\n\n suite: distribution of goal-scoring rate\n high: upper bound\n\n returns: Pmf of goals per game\n \"\"\"\n metapmf = Pmf()\n\n for lam, prob in suite.Items():\n pmf = MakePoissonPmf(lam, high)\n metapmf.Set(pmf, prob)\n\n mix = MakeMixture(metapmf, label=suite.label)\n return mix",
"Here's what the results look like.",
"goal_dist1 = MakeGoalPmf(suite1)\ngoal_dist2 = MakeGoalPmf(suite2)\n\nthinkplot.PrePlot(num=2)\nthinkplot.Pmf(goal_dist1)\nthinkplot.Pmf(goal_dist2)\nthinkplot.Config(xlabel='Goals',\n ylabel='Probability',\n xlim=[-0.7, 11.5])\n\ngoal_dist1.Mean(), goal_dist2.Mean()",
"Now we can compute the probability that the Bruins win, lose, or tie in regulation time.",
"diff = goal_dist1 - goal_dist2\np_win = diff.ProbGreater(0)\np_loss = diff.ProbLess(0)\np_tie = diff.Prob(0)\n\nprint('Prob win, loss, tie:', p_win, p_loss, p_tie)",
"If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials.",
"from thinkbayes2 import MakeExponentialPmf\n\ndef MakeGoalTimePmf(suite):\n \"\"\"Makes the distribution of time til first goal.\n\n suite: distribution of goal-scoring rate\n\n returns: Pmf of goals per game\n \"\"\"\n metapmf = Pmf()\n\n for lam, prob in suite.Items():\n pmf = MakeExponentialPmf(lam, high=2.5, n=1001)\n metapmf.Set(pmf, prob)\n\n mix = MakeMixture(metapmf, label=suite.label)\n return mix",
"Here's what the predictive distributions for t look like.",
"time_dist1 = MakeGoalTimePmf(suite1) \ntime_dist2 = MakeGoalTimePmf(suite2)\n \nthinkplot.PrePlot(num=2)\nthinkplot.Pmf(time_dist1)\nthinkplot.Pmf(time_dist2) \nthinkplot.Config(xlabel='Games until goal',\n ylabel='Probability')\n\ntime_dist1.Mean(), time_dist2.Mean()",
"In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t:",
"p_win_in_overtime = time_dist1.ProbLess(time_dist2)\np_adjust = time_dist1.ProbEqual(time_dist2)\np_win_in_overtime += p_adjust / 2\nprint('p_win_in_overtime', p_win_in_overtime)",
"Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime.",
"p_win_overall = p_win + p_tie * p_win_in_overtime\nprint('p_win_overall', p_win_overall)",
"Exercises\nExercise: To make the model of overtime more correct, we could update both suites with 0 goals in one game, before computing the predictive distribution of t. Make this change and see what effect it has on the results.",
"# Solution goes here",
"Exercise: In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. What is the probability that Germany had the better team? What is the probability that Germany would win a rematch?\nFor a prior distribution on the goal-scoring rate for each team, use a gamma distribution with parameter 1.3.",
"from thinkbayes2 import MakeGammaPmf\n\nxs = np.linspace(0, 8, 101)\npmf = MakeGammaPmf(xs, 1.3)\nthinkplot.Pdf(pmf)\nthinkplot.Config(xlabel='Goals per game')\npmf.Mean()",
"Exercise: In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?\nNote: for this one you will need a new suite that provides a Likelihood function that takes as data the time between goals, rather than the number of goals in a game. \nExercise: Which is a better way to break a tie: overtime or penalty shots?\nExercise: Suppose that you are an ecologist sampling the insect population in a new environment. You deploy 100 traps in a test area and come back the next day to check on them. You find that 37 traps have been triggered, trapping an insect inside. Once a trap triggers, it cannot trap another insect until it has been reset.\nIf you reset the traps and come back in two days, how many traps do you expect to find triggered? Compute a posterior predictive distribution for the number of traps."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GPflow/GPflowOpt | doc/source/notebooks/constrained_bo.ipynb | apache-2.0 | [
"Bayesian Optimization with black-box constraints\nJoachim van der Herten\nIntroduction\nThis notebook demonstrates the optimization of an analytical function using the well known Expected Improvement (EI) function. The problem is constrained by a black-box constraint function. The feasible regions are learnt jointly with the optimal regions by considering a second acquisition function known as the Probability of Feasibility (PoF), following the approach of Gardner et al. (2014)",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport gpflow\nimport gpflowopt\nimport numpy as np",
"Constrained problem\nFirst we set up an objective function (the townsend function) and a constraint function. We further assume both functions are black-box. We also define the optimization domain (2 continuous parameters).",
"# Objective & constraint\ndef townsend(X):\n return -(np.cos((X[:,0]-0.1)*X[:,1])**2 + X[:,0] * np.sin(3*X[:,0]+X[:,1]))[:,None]\n\ndef constraint(X):\n return -(-np.cos(1.5*X[:,0]+np.pi)*np.cos(1.5*X[:,1])+np.sin(1.5*X[:,0]+np.pi)*np.sin(1.5*X[:,1]))[:,None]\n\n# Setup input domain\ndomain = gpflowopt.domain.ContinuousParameter('x1', -2.25, 2.5) + \\\n gpflowopt.domain.ContinuousParameter('x2', -2.5, 1.75)\n\n# Plot\ndef plotfx(): \n X = gpflowopt.design.FactorialDesign(101, domain).generate()\n Zo = townsend(X)\n Zc = constraint(X)\n mask = Zc>=0\n Zc[mask] = np.nan\n Zc[np.logical_not(mask)] = 1\n Z = Zo * Zc\n shape = (101, 101)\n\n f, axes = plt.subplots(1, 1, figsize=(7, 5))\n axes.contourf(X[:,0].reshape(shape), X[:,1].reshape(shape), Z.reshape(shape))\n axes.set_xlabel('x1')\n axes.set_ylabel('x2')\n axes.set_xlim([domain.lower[0], domain.upper[0]])\n axes.set_ylim([domain.lower[1], domain.upper[1]])\n return axes\n\nplotfx();",
"Modeling and joint acquisition function\nWe proceed by assigning the objective and constraint function a GP prior. Both functions are evaluated on a space-filling set of points (here, a Latin Hypercube design). Two GPR models are created.\nThe EI is based on the model of the objective function (townsend), whereas PoF is based on the model of the constraint function. We then define the joint criterioin as the product of the EI and PoF.",
"# Initial evaluations\ndesign = gpflowopt.design.LatinHyperCube(11, domain)\nX = design.generate()\nYo = townsend(X)\nYc = constraint(X)\n\n# Models\nobjective_model = gpflow.gpr.GPR(X, Yo, gpflow.kernels.Matern52(2, ARD=True))\nobjective_model.likelihood.variance = 0.01\nconstraint_model = gpflow.gpr.GPR(np.copy(X), Yc, gpflow.kernels.Matern52(2, ARD=True))\nconstraint_model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)\nconstraint_model.likelihood.variance = 0.01\nconstraint_model.likelihood.variance.prior = gpflow.priors.Gamma(1./4.,1.0)\n\n# Setup\nei = gpflowopt.acquisition.ExpectedImprovement(objective_model)\npof = gpflowopt.acquisition.ProbabilityOfFeasibility(constraint_model)\njoint = ei * pof",
"Initial belief\nWe can now inspect our belief about the optimization problem by plotting the models, the EI, PoF and joint mappings. Both models clearly are not very accurate yet. More specifically, the constraint model does not correctly capture the feasibility yet.",
"def plot():\n Xeval = gpflowopt.design.FactorialDesign(101, domain).generate()\n Yevala,_ = joint.operands[0].models[0].predict_f(Xeval)\n Yevalb,_ = joint.operands[1].models[0].predict_f(Xeval)\n Yevalc = np.maximum(ei.evaluate(Xeval), 0)\n Yevald = pof.evaluate(Xeval)\n Yevale = np.maximum(joint.evaluate(Xeval), 0)\n shape = (101, 101)\n plots = [('Objective model', Yevala), ('Constraint model', Yevalb), \n ('EI', Yevalc), ('PoF', Yevald), \n ('EI * PoF', Yevale)]\n\n plt.figure(figsize=(10,10))\n for i, plot in enumerate(plots):\n if i == 4:\n ax = plt.subplot2grid((3, 4), (2, 1), colspan=2)\n else:\n ax = plt.subplot2grid((3, 2), (int(i/2), i % 2))\n \n ax.contourf(Xeval[:,0].reshape(shape), Xeval[:,1].reshape(shape), plot[1].reshape(shape))\n ax.scatter(joint.data[0][:,0], joint.data[0][:,1], c='w')\n ax.set_title(plot[0])\n ax.set_xlabel('x1')\n ax.set_ylabel('x2')\n ax.set_xlim([domain.lower[0], domain.upper[0]])\n ax.set_ylim([domain.lower[1], domain.upper[1]])\n plt.tight_layout()\n \n# Plot representing the model belief, and the belief mapped to EI and PoF\nplot()\nprint(constraint_model)",
"Running Bayesian Optimizer\nRunning the Bayesian optimization is the next step. For this, we must set up an appropriate strategy to optimize the joint acquisition function. Sometimes this can be a bit challenging as often large non-varying areas may occur. A typical strategy is to apply a Monte Carlo optimization step first, then optimize the point with the best value (several variations exist). This approach is followed here. We then run the Bayesian Optimization and allow it to select up to 50 additional decisions. \nThe joint acquisition function assures the feasibility (w.r.t the constraint) is taken into account while selecting decisions for optimality.",
"# First setup the optimization strategy for the acquisition function\n# Combining MC step followed by L-BFGS-B\nacquisition_opt = gpflowopt.optim.StagedOptimizer([gpflowopt.optim.MCOptimizer(domain, 200), \n gpflowopt.optim.SciPyOptimizer(domain)])\n\n# Then run the BayesianOptimizer for 50 iterations\noptimizer = gpflowopt.BayesianOptimizer(domain, joint, optimizer=acquisition_opt, verbose=True)\nresult = optimizer.optimize([townsend, constraint], n_iter=50)\n \nprint(result)",
"Results\nIf we now plot the belief, we clearly see the constraint model has improved significantly. More specifically, its PoF mapping is an accurate representation of the true constraint function. By multiplying the EI by the PoF, the search is restricted to the feasible regions.",
"# Plotting belief again\nprint(constraint_model)\nplot()",
"If we inspect the sampling distribution, we can see that the amount of samples in the infeasible regions is limited. The optimization has focussed on the feasible areas. In addition, it has been active mostly in two optimal regions.",
"# Plot function, overlayed by the constraint. Also plot the samples\naxes = plotfx()\nvalid = joint.feasible_data_index()\naxes.scatter(joint.data[0][valid,0], joint.data[0][valid,1], label='feasible data', c='w')\naxes.scatter(joint.data[0][np.logical_not(valid),0], joint.data[0][np.logical_not(valid),1], label='data', c='r');\naxes.legend()",
"Finally, the evolution of the best value over the number of iterations clearly shows a very good solution is already found after only a few evaluations.",
"f, axes = plt.subplots(1, 1, figsize=(7, 5))\nf = joint.data[1][:,0]\nf[joint.data[1][:,1] > 0] = np.inf\naxes.plot(np.arange(0, joint.data[0].shape[0]), np.minimum.accumulate(f))\naxes.set_ylabel('fmin')\naxes.set_xlabel('Number of evaluated points');"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mtchem/Twitter-Politics | EDA.ipynb | mit | [
"# imports\nimport pandas as pd\nimport numpy as np\nimport pickle\nimport re\nimport pandas as pd",
"The following data was generated using code that can be found on GitHub\nhttps://github.com/mtchem/Twitter-Politics/blob/master/data_wrangle/Data_Wrangle.ipynb",
"# load federal document data from pickle file\nfed_reg_data = r'data/fed_reg_data.pickle'\nfed_data = pd.read_pickle(fed_reg_data)\n# load twitter data from csv\ntwitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'\ntwitter_data = pd.read_pickle(twitter_file_path)\n\nlen(fed_data)",
"In order to explore the twitter and executive document data I will look at the following:\n\nDetermine the most used hashtags\nDetermine who President Trump tweeted at(@) the most\nCreate a word frequency plot for the most used words in the twitter data and the presidental documents\nFind words that both data sets have in common, and determine those words document frequency (what percentage of documents those words appear in)",
"# imports\nimport nltk\nnltk.download('stopwords')\nfrom nltk.corpus import stopwords\nimport itertools\nfrom collections import Counter\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nplt.style.use('ggplot')",
"Plot the most used hashtags and @ tags",
"# find the most used hashtags\nhashtag_freq = Counter(list(itertools.chain(*(twitter_data.hash_tags))))\nhashtag_top20 = hashtag_freq.most_common(20)\n# find the most used @ tags\nat_tag_freq = Counter(list(itertools.chain(*(twitter_data['@_tags']))))\nat_tags_top20 = at_tag_freq.most_common(20)\n\nprint(hashtag_top20)\n\n# frequency plot for the most used hashtags\ndf = pd.DataFrame(hashtag_top20, columns=['Hashtag', 'frequency'])\ndf.plot(kind='bar', x='Hashtag',legend=None,fontsize = 15, figsize = (15,5))\nplt.ylabel('Frequency',fontsize = 18)\nplt.xlabel('Hashtag', fontsize=18)\nplt.title('Most Common Hashtags', fontsize = 15)\nplt.show()\n\n# frequency plot for the most used @ tags\ndf = pd.DataFrame(at_tags_top20, columns=['@ Tag', 'frequency'])\ndf.plot(kind='bar', x='@ Tag',legend=None, figsize = (15,5))\nplt.ylabel('Frequency',fontsize = 18)\nplt.xlabel('@ Tags', fontsize=18)\nplt.title('Most Common @ Tags', fontsize = 15)\nplt.show()\n",
"Top used words for the twitter data and the federal document data\nDefine a list of words that have no meaning, such as 'a', 'the', and punctuation",
"# use nltk's list of stopwords\nstop_words = set(stopwords.words('english'))\n# add puncuation to stopwords\nstop_words.update(['.', ',','get','going','one', 'amp','like' '\"','...',\"''\", \"'\",\"n't\", '?', '!', ':', ';', '#','@', '(', ')', 'https', '``',\"'s\", 'rt' ]) ",
"Make a list of hashtags and @entites used in the twitter data",
"# combine the hashtags and @ tags, flatten the list of lists, keep the unique items\nstop_twitter = set(list(itertools.chain(*(twitter_data.hash_tags + twitter_data['@_tags']))))",
"The federal document data also has some words that need to be removed. The words Federal Registry and the date are on the top of every page so they should be removed. Also, words like 'shall', 'order', and 'act' are used quite a bit but don't convay much meaning, so I'm going to remove those words as well.",
"stop_fed_docs = ['united', 'states', '1','2','3','4','5','6','7','8','9','10', '11','12',\n '13','14','15','16','17','18','19','20','21','22','23','24','25','26',\n '27','28','29','30','31','2016', '2015','2014','federal','shall', '4790',\n 'national', '2017', 'order','president', 'presidential', 'sep',\n 'register','po','verdate', 'jkt','00000','frm','fmt','sfmt','vol',\n 'section','donald','act','america', 'executive','secretary', 'law', \n 'proclamation','81','day','including', 'code', '4705','authority', 'agencies', \n '241001','americans','238001','year', 'amp','government','agency','hereby',\n 'people','public','person','state','american','two','nation', '82', 'sec',\n 'laws', 'policy','set','fr','appropriate','doc','new','filed','u.s.c',\n 'department','ii','also','office','country','within','memorandum', \n 'director', 'us', 'sunday','monday', 'tuesday','wednesday','thursday', \n 'friday', 'saturday','title','upon','constitution','support', 'vested',\n 'part', 'month', 'subheading', 'foreign','general','january',\n 'february', 'march', 'april','may','june','july','august', 'september',\n 'october', 'november', 'december', 'council','provide','consistent','pursuant',\n 'thereof','00001','documents','11:15', 'area','management',\n 'following','house','white','week','therefore','amended', 'continue',\n 'chapter','must','years', '00002', 'use','make','date','one',\n 'many','12', 'commission','provisions', 'every','u.s.','functions',\n 'made','hand','necessary', 'witness','time','otherwise', 'proclaim',\n 'follows','thousand','efforts','jan', 'trump','j.',\n 'applicable', '4717','whereof','hereunto', 'subject', 'report',\n '3—', '3295–f7–p']",
"Create functions that removes the stop words for each of the datasets",
"def remove_from_fed_data(token_lst):\n # remove stopwords and one letter words\n filtered_lst = [word for word in token_lst if word.lower() not in stop_fed_docs and len(word) > 1 \n and word.lower() not in stop_words]\n return filtered_lst \n\ndef remove_from_twitter_data(token_lst):\n # remove stopwords and one letter words\n filtered_lst = [word for word in token_lst if word.lower() not in stop_words and len(word) > 1 \n and word.lower() not in stop_twitter]\n return filtered_lst ",
"Remove all of the stop words from the tokenized twitter and document data",
"# apply the remove_stopwords function to all of the tokenized twitter text\ntwitter_words = twitter_data.text_tokenized.apply(lambda x: remove_from_twitter_data(x))\n# apply the remove_stopwords function to all of the tokenized document text\ndocument_words = fed_data.token_text.apply(lambda x: remove_from_fed_data(x))\n\n# flatten each the word lists into one list\nall_twitter_words = list(itertools.chain(*twitter_words))\nall_document_words =list(itertools.chain(*document_words))",
"Count how many times each word is used for both datasets",
"# create a dictionary using the Counter method, where the key is a word and the value is the number of time it was used\ntwitter_freq = Counter(all_twitter_words)\ndoc_freq = Counter(all_document_words)\n# determine the top 30 words used in the twitter data\ntop_30_tweet = twitter_freq.most_common(30)\ntop_30_fed = doc_freq.most_common(30)",
"Plot the most used words for the twitter data and the federal document data",
"# frequency plot for the most used Federal Data\ndf = pd.DataFrame(top_30_fed, columns=['Federal Data', 'frequency'])\ndf.plot(kind='bar', x='Federal Data',legend=None, figsize = (15,5))\nplt.ylabel('Frequency',fontsize = 18)\nplt.xlabel('Words', fontsize=18)\nplt.title('Most Used Words that Occured in the Federal Data', fontsize = 15)\nplt.show()\n\n# frequency plot for the most used words in the twitter data\ndf = pd.DataFrame(top_30_tweet, columns=['Twitter Data', 'frequency'])\ndf.plot(kind='bar', x='Twitter Data',legend=None, figsize = (15,5))\nplt.ylabel('Frequency',fontsize = 18)\nplt.xlabel('Words', fontsize=18)\nplt.title('Most Used Words that Occured in the Twitter Data', fontsize = 15)\nplt.show()",
"Determine all of the words that are used in both datasets",
"# find the unique words in each dataset\njoint_words = list((set(all_document_words)).intersection(all_twitter_words))",
"Create a dictionary with the unique joint words as keys",
"# make array of zeros\nvalues = np.zeros(len(joint_words))\n# create dictionary\njoint_words_dict = dict(zip(joint_words, values))",
"Create dictionaries for both datasets with document frequency for each joint word",
"# create a dictionary with a word as key, and a value = number of documents that contain the word for Twitter\ntwitter_document_freq = joint_words_dict.copy()\nfor word in joint_words:\n for lst in twitter_data.text_tokenized:\n if word in lst:\n twitter_document_freq[word]= twitter_document_freq[word] + 1\n \n# create a dictionary with a word as key, and a value = number of documents that contain the word for Fed Data\nfed_document_freq = joint_words_dict.copy()\nfor word in joint_words:\n for lst in fed_data.token_text:\n if word in lst:\n fed_document_freq[word]= fed_document_freq[word] + 1",
"Create dataframe with the word and the document percentage for each data set",
"df = pd.DataFrame([fed_document_freq, twitter_document_freq]).T\n\ndf.columns = ['Fed', 'Tweet']\ndf['% Fed'] = (df.Fed/len(df.Fed))*100\ndf['% Tweet'] = (df.Tweet/len(df.Tweet))*100\n\ntop_joint_fed = df[['% Fed','% Tweet']].sort_values(by='% Fed', ascending=False)[0:50] \ntop_joint_tweet = df[['% Fed','% Tweet']].sort_values(by='% Tweet', ascending=False)[0:50] \n\ntop_joint_fed.plot.bar(figsize=(14,5))\nplt.show()\n\ntop_joint_tweet.plot.bar(figsize=(14,5))\nplt.show()\n\ndf['diff %'] = df['% Fed'] - df['% Tweet']\n\ntop_same = df[df['diff %'] == 0].sort_values(by='% Fed', ascending=False)[0:50]\n\ntop_same[['% Fed', '% Tweet']].plot.bar(figsize=(14,5))\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arasdar/DL | udacity-dl/CNN/cnn_bp-learning-curves.ipynb | unlicense | [
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane 1\n* automobile 2\n* bird 3\n* cat 4\n* deer 5\n* dog 6\n* frog 7\n* horse 8\n* ship 9\n* truck 10\n\nTotal 10 classes (Aras changed above/this section a bit)\n\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n ## image data shape = [t, i,j,k], t= num_img_per_batch (basically the list of images), i,j,k=height,width, and depth/channel\n return x/255\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"# import helper ## I did this because sklearn.preprocessing was defined in there\nfrom sklearn import preprocessing ## from sklearn lib import preprocessing lib/sublib/functionality/class\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n\n ## This was in the helper.py which belongs to the generic helper functions\n # def display_image_predictions(features, labels, predictions):\n # n_classes = 10\n # label_names = _load_label_names()\n # label_binarizer = LabelBinarizer()\n # label_binarizer.fit(range(n_classes))\n # label_ids = label_binarizer.inverse_transform(np.array(labels))\n label_binarizer = preprocessing.LabelBinarizer() ## instantiate and initialized the one-hot encoder from class to one-hot\n n_class = 10 ## total num_classes\n label_binarizer.fit(range(n_class)) ## fit the one-vec to the range of number of classes, 10 in this case (dataset)\n return label_binarizer.transform(x) ## transform the class labels to one-hot vec\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Implementation of CNN with backprop in NumPy",
"def get_im2col_indices(x_shape, field_height, field_width, padding=1, stride=1):\n # First figure out what the size of the output should be\n N, C, H, W = x_shape\n assert (H + 2 * padding - field_height) % stride == 0\n assert (W + 2 * padding - field_height) % stride == 0\n out_height = int((H + 2 * padding - field_height) / stride + 1)\n out_width = int((W + 2 * padding - field_width) / stride + 1)\n\n i0 = np.repeat(np.arange(field_height), field_width)\n i0 = np.tile(i0, C)\n i1 = stride * np.repeat(np.arange(out_height), out_width)\n j0 = np.tile(np.arange(field_width), field_height * C)\n j1 = stride * np.tile(np.arange(out_width), out_height)\n i = i0.reshape(-1, 1) + i1.reshape(1, -1)\n j = j0.reshape(-1, 1) + j1.reshape(1, -1)\n\n k = np.repeat(np.arange(C), field_height * field_width).reshape(-1, 1)\n\n return (k.astype(int), i.astype(int), j.astype(int))\n\ndef im2col_indices(x, field_height, field_width, padding=1, stride=1):\n \"\"\" An implementation of im2col based on some fancy indexing \"\"\"\n # Zero-pad the input\n p = padding\n x_padded = np.pad(x, ((0, 0), (0, 0), (p, p), (p, p)), mode='constant')\n\n k, i, j = get_im2col_indices(x.shape, field_height, field_width, padding, stride)\n\n cols = x_padded[:, k, i, j]\n C = x.shape[1]\n cols = cols.transpose(1, 2, 0).reshape(field_height * field_width * C, -1)\n return cols\n\ndef col2im_indices(cols, x_shape, field_height=3, field_width=3, padding=1,\n stride=1):\n \"\"\" An implementation of col2im based on fancy indexing and np.add.at \"\"\"\n N, C, H, W = x_shape\n H_padded, W_padded = H + 2 * padding, W + 2 * padding\n x_padded = np.zeros((N, C, H_padded, W_padded), dtype=cols.dtype)\n k, i, j = get_im2col_indices(x_shape, field_height, field_width, padding, stride)\n cols_reshaped = cols.reshape(C * field_height * field_width, -1, N)\n cols_reshaped = cols_reshaped.transpose(2, 0, 1)\n np.add.at(x_padded, (slice(None), k, i, j), cols_reshaped)\n if padding == 0:\n return x_padded\n return x_padded[:, :, padding:-padding, padding:-padding]\n\ndef conv_forward(X, W, b, stride=1, padding=1):\n cache = W, b, stride, padding\n n_filters, d_filter, h_filter, w_filter = W.shape\n n_x, d_x, h_x, w_x = X.shape\n h_out = (h_x - h_filter + 2 * padding) / stride + 1\n w_out = (w_x - w_filter + 2 * padding) / stride + 1\n\n if not h_out.is_integer() or not w_out.is_integer():\n raise Exception('Invalid output dimension!')\n\n h_out, w_out = int(h_out), int(w_out)\n\n X_col = im2col_indices(X, h_filter, w_filter, padding=padding, stride=stride)\n W_col = W.reshape(n_filters, -1)\n\n out = W_col @ X_col + b\n out = out.reshape(n_filters, h_out, w_out, n_x)\n out = out.transpose(3, 0, 1, 2)\n\n cache = (X, W, b, stride, padding, X_col)\n\n return out, cache\n\ndef conv_backward(dout, cache):\n X, W, b, stride, padding, X_col = cache\n n_filter, d_filter, h_filter, w_filter = W.shape\n\n db = np.sum(dout, axis=(0, 2, 3))\n db = db.reshape(n_filter, -1)\n\n dout_reshaped = dout.transpose(1, 2, 3, 0).reshape(n_filter, -1)\n dW = dout_reshaped @ X_col.T\n dW = dW.reshape(W.shape)\n\n W_reshape = W.reshape(n_filter, -1)\n dX_col = W_reshape.T @ dout_reshaped\n dX = col2im_indices(dX_col, X.shape, h_filter, w_filter, padding=padding, stride=stride)\n\n return dX, dW, db\n\n# Now it is time to calculate the error using cross entropy\ndef cross_entropy(y_pred, y_train):\n m = y_pred.shape[0]\n\n prob = softmax(y_pred)\n log_like = -np.log(prob[range(m), y_train])\n\n data_loss = np.sum(log_like) / m\n # reg_loss = regularization(model, 
reg_type='l2', lam=lam)\n\n return data_loss # + reg_loss\n\ndef dcross_entropy(y_pred, y_train):\n m = y_pred.shape[0]\n\n grad_y = softmax(y_pred)\n grad_y[range(m), y_train] -= 1.\n grad_y /= m\n\n return grad_y\n\n# Softmax and sidmoid are equally based on Bayesian NBC/ Naiive Bayesian Classifer as a probability-based classifier\ndef softmax(X):\n eX = np.exp((X.T - np.max(X, axis=1)).T)\n return (eX.T / eX.sum(axis=1)).T\n\ndef dsoftmax(X, sX): # derivative of the softmax which is the same as sigmoid as softmax is sigmoid and bayesian function for probabilistic classfication\n # X is the input to the softmax and sX is the sX=softmax(X)\n grad = np.zeros(shape=(len(sX[0]), len(X[0])))\n \n # Start filling up the gradient\n for i in range(len(sX[0])): # mat_1xn, n=num_claess, 10 in this case\n for j in range(len(X[0])):\n if i==j: \n grad[i, j] = (sX[0, i] * (1-sX[0, i]))\n else: \n grad[i, j] = (-sX[0, i]* sX[0, j])\n # return the gradient as the derivative of softmax/bwd softmax layer\n return grad\n\ndef sigmoid(X):\n return 1. / (1 + np.exp(-X))\n\ndef dsigmoid(X):\n return sigmoid(X) * (1-sigmoid(X))\n\ndef squared_loss(y_pred, y_train):\n m = y_pred.shape[0]\n data_loss = (0.5/m) * np.sum(y_pred - y_train)**2 # This is now convex error surface x^2 \n return data_loss #+ reg_loss\n\ndef dsquared_loss(y_pred, y_train):\n m = y_pred.shape[0]\n grad_y = (y_pred - y_train)/m # f(x)-y is the convex surface for descending/minimizing\n return grad_y\n\nfrom sklearn.utils import shuffle as sklearn_shuffle\n\ndef get_minibatch(X, y, minibatch_size, shuffle=True):\n minibatches = []\n\n if shuffle:\n X, y = sklearn_shuffle(X, y)\n\n for i in range(0, X.shape[0], minibatch_size):\n X_mini = X[i:i + minibatch_size]\n y_mini = y[i:i + minibatch_size]\n\n minibatches.append((X_mini, y_mini))\n\n return minibatches",
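These analytic gradients can be sanity-checked numerically before they are used for training. The following is not part of the original notebook, just a minimal finite-difference check on a tiny random input, using only the `conv_forward`/`conv_backward` functions defined above:

```python
import numpy as np

np.random.seed(0)
X_check = np.random.randn(2, 3, 5, 5)        # a tiny NCHW batch
W_check = np.random.randn(4, 3, 3, 3) * 0.1  # four 3x3 filters
b_check = np.zeros((4, 1))

out, cache = conv_forward(X_check, W_check, b_check, stride=1, padding=1)
dout = np.random.randn(*out.shape)           # stand-in for an upstream gradient
dX, dW, db = conv_backward(dout, cache)

# Numerical gradient of sum(out * dout) with respect to a single filter weight
eps = 1e-5
idx = (0, 0, 0, 0)
W_plus, W_minus = W_check.copy(), W_check.copy()
W_plus[idx] += eps
W_minus[idx] -= eps
out_plus, _ = conv_forward(X_check, W_plus, b_check, stride=1, padding=1)
out_minus, _ = conv_forward(X_check, W_minus, b_check, stride=1, padding=1)
num_grad = np.sum((out_plus - out_minus) * dout) / (2 * eps)

print('analytic:', dW[idx], ' numerical:', num_grad)  # the two should agree closely
```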
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"This is where the CNN imllementation in NumPy starts!",
"# Displaying an image using matplotlib\n# importing the library/package\nimport matplotlib.pyplot as plot\n\n# Using plot with imshow to show the image (N=5000, H=32, W=32, C=3)\nplot.imshow(valid_features[0, :, :, :])\n\n# # Training cycle\n# for epoch in range(num_):\n# # Loop over all batches\n# n_batches = 5\n# for batch_i in range(1, n_batches + 1):\n# for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n# train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n# print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n# print_stats(sess, batch_features, batch_labels, cost, accuracy)\n\n\n# # input and output dataset\nX=valid_features.transpose(0, 3, 1, 2) # NCHW == mat_txn\nY=valid_labels #NH= num_classes=10 = mat_txn\n#for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n# train_features, train_labels = helper.load_preprocess_training_batch(batch_id=, batch_size=)\n\n\n\n# Initilizting the parameters\n# Convolutional layer\n# Suppose we have 20 of 3x3 filter: 20x1x3x3. W_col will be 20x9 matrix\n# Let this be 3x3 convolution with stride = 1 and padding = 1\nh_filter=3 \nw_filter=3 \nc_filter=3\npadding=1 \nstride=1\nnum_filters = 20\nw1 = np.random.normal(loc=0.0, scale=1.0, size=(num_filters, c_filter, h_filter, w_filter))# NCHW 20x9 x 9x500 = 20x500\nw1 = w1/(c_filter* h_filter* w_filter) # taking average from them or average running for initialization.\nb1 = np.zeros(shape=(num_filters, 1), dtype=float)\n\n# FC layer to the output layer -- This is really hard to have a final size for the FC to the output layer\n# num_classes = y[0, 1] # txn\nw2 = np.random.normal(loc=0.0, scale=1.0, size=Y[0:1].shape) # This will be resized though\nb2 = np.zeros(shape=Y[0:1].shape) # number of output nodes/units/neurons are equal to the number of classes\n\n# Initializing hyper parameters\nnum_epochs = 200\n## minibatch_size = 512 # This will eventually used for stochstic or random minibatch from the whole batch\nbatch_size = X.shape[0]//1 #NCHW, N= number of samples or t\nerror_list = [] # to display the plot or plot the error curve/ learning rate\n\n# Training loops for epochs and updating params\nfor epoch in range(num_epochs): # start=0, stop=num_epochs, step=1\n\n # Initializing/reseting the gradients\n dw1 = np.zeros(shape=w1.shape)\n db1 = np.zeros(shape=b1.shape)\n dw2 = np.zeros(shape=w2.shape)\n db2 = np.zeros(shape=b2.shape)\n err = 0\n \n # # Shuffling the entire batch for a minibatch\n # # Stochastic part for randomizing/shuffling through the dataset in every single epoch\n # minibatches = get_minibatch(X=X, y=Y, minibatch_size=batch_size, shuffle=True)\n # X_mini, Y_mini = minibatches[0]\n \n \n # The loop for learning the gradients\n for t in range(batch_size): # start=0, stop=mini_batch_size/batch_size, step=1\n \n # input and output each sample in the batch/minibatch for updating the gradients/d_params/delta_params\n x= X[t:t+1] # mat_nxcxhxw\n y= Y[t:t+1] # mat_txm\n # print(\"inputs:\", x.shape, y.shape)\n \n # Forward pass\n # start with the convolution layer forward\n h1_in, h1_cache = conv_forward(X=x, W=w1, b=b1, stride=1, padding=1)\n h1_out = h1_in * 1 # activation func. 
= LU\n #h1_out = np.maximum(h1_in, 0) # ReLU for avoiding the very high ERROR in classification\n # print(\"Convolution layer:\", h1_out)\n\n # Connect the flattened layer to the output layer/visible layer FC layer\n h1_fc = h1_out.reshape(1, -1)\n # initializing w2 knowing the size/given the size of fc layer\n if t==0: w2 = (1/h1_fc.shape[1]) * np.resize(a=w2, new_shape=(h1_fc.shape[1], y.shape[1])) # mat_hxm # initialization\n out = h1_fc @ w2 + b2\n y_prob = softmax(X=out) # can also be sigmoid/logistic function/Bayesina/ NBC\n # print(\"Output layer: \", out, y_prob, y)\n\n # Mean Square Error: Calculate the error one by one sample from the batch -- Euclidean distance\n err += 0.5 * (1/ batch_size) * np.sum((y_prob - y)**2) # convex surface ax2+b\n dy = (1/ batch_size) * (y_prob - y) # convex surface this way # ignoring the constant coefficies\n # print(\"error:\", dy, err)\n \n # # Mean Cross Entropy Error: np.log is np.log(exp(x))=x equals to ln in math\n # err += (1/batch_size) * -(np.sum(y* np.log(y_prob))) \n # dy = (1/batch_size) * -(y/ y_prob) # y_prop= 0-1, log(y_prob)==-inf-0\n # # print(\"Error:\", dy, err)\n\n # Backward pass\n # output layer gradient\n dout = dy @ dsoftmax(X=out, sX=y_prob).T\n if t==0: dw2 = np.resize(a=dw2, new_shape=w2.shape)\n dw2 += h1_fc.T @ dout # mat_hx1 @ mat_1xm = mat_hxm\n db2 += dout # mat_1xm\n dh1_fc = dout @ w2.T # mat_1xm @ mat_mxh\n\n # convolution layer back\n dh1_out = dh1_fc.reshape(h1_out.shape)\n # dh1[h1_out<=0] = 0 #drelu\n dh1 = dh1_out * 1 # derivative of the LU in bwd pass/prop\n dX_conv, dW_conv, db_conv = conv_backward(cache=h1_cache, dout=dh1)\n dw1 += dW_conv\n db1 += db_conv\n\n # Updating the params in the model/cnn in ech epoch \n w1 -= dw1\n b1 -= db1\n w2 -= dw2\n b2 -= db2\n\n # displaying the total error and accuracy\n print(\"Epoch:\", epoch, \"Error:\", err)\n error_list.append(err)\n\n# Ploting the error list for the learning rate\n\nplot.plot(error_list)\n\nerror_list_MCE = error_list\n\nplot.plot(error_list_MCE)\n\nerror_list_MSE = error_list\n\nplot.plot(error_list_MSE)"
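The loop above only tracks the training error. As a rough optional check (a sketch that reuses the learned parameters `w1`, `b1`, `w2`, `b2` and the arrays `X`, `Y` defined in the cell above), classification accuracy on the same samples can be computed by repeating the forward pass and comparing argmaxes:

```python
# Forward pass over the processed samples with the learned parameters (sketch only)
correct = 0
for t in range(X.shape[0]):
    x = X[t:t+1]
    h1_in, _ = conv_forward(X=x, W=w1, b=b1, stride=1, padding=1)
    h1_fc = (h1_in * 1).reshape(1, -1)      # same linear activation as in the training loop
    y_prob = softmax(X=h1_fc @ w2 + b2)
    if np.argmax(y_prob) == np.argmax(Y[t]):
        correct += 1

print('accuracy on the processed samples:', correct / X.shape[0])
```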
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hdesmond/StatisticalMethods | examples/SDSScatalog/CorrFunc.ipynb | gpl-2.0 | [
"\"Spatial Clustering\" - the Galaxy Correlation Function\n\n\nThe degree to which objects positions are correlated with each other - \"clustered\" - is of great interest in astronomy. \n\n\nWe expect galaxies to appear in groups and clusters, as they fall together under gravity: the statistics of galaxy clustering should contain information about galaxy evolution during hierarchical structure formation.\n\n\nLet's try and measure a clustering signal in our SDSS photometric object catalog.",
"%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport SDSS\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport copy\n\n# We want to select galaxies, and then are only interested in their positions on the sky.\n\ndata = pd.read_csv(\"downloads/SDSSobjects.csv\",usecols=['ra','dec','u','g',\\\n 'r','i','size'])\n\n# Filter out objects with bad magnitude or size measurements:\ndata = data[(data['u'] > 0) & (data['g'] > 0) & (data['r'] > 0) & (data['i'] > 0) & (data['size'] > 0)]\n\n# Make size cuts, to exclude stars and nearby galaxies, and magnitude cuts, to get good galaxy detections:\ndata = data[(data['size'] > 0.8) & (data['size'] < 10.0) & (data['i'] > 17) & (data['i'] < 22)]\n\n# Drop the things we're not so interested in:\ndel data['u'], data['g'], data['r'], data['i'],data['size']\n\ndata.head()\n\nNgals = len(data)\nramin,ramax = np.min(data['ra']),np.max(data['ra'])\ndecmin,decmax = np.min(data['dec']),np.max(data['dec'])\nprint Ngals,\"galaxy-like objects in (ra,dec) range (\",ramin,\":\",ramax,\",\",decmin,\":\",decmax,\")\"",
"The Correlation Function\n\n\nThe 2-point correlation function $\\xi(\\theta)$ is defined as \"the probability of finding two galaxies separated by an angular distance $\\theta$ with respect to that expected for a random distribution\" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of galaxies.\n\n\nThe simplest possible estimator for this excess probability is just \n$\\hat{\\xi}(\\theta) = \\frac{DD - RR}{RR}$, \nwhere $DD(\\theta) = N_{\\rm pairs}(\\theta) / N_D(N_D-1)/2$. Here, $N_D$ is the total number of galaxies in the dataset, and $N_{\\rm pairs}(\\theta)$ is the number of galaxy pairs with separation lying in a bin centered on $\\theta$. $RR(\\theta)$ is the same quantity computed in a \"random catalog,\" covering the same field of view but with uniformly randomly distributed positions.\n\n\nWe'll use Mike Jarvis' TreeCorr code (Jarvis et al 2004) to compute this correlation function estimator efficiently. You can read more about better estimators starting from the TreeCorr wiki.",
"# !pip install --upgrade TreeCorr",
"Random Catalogs\nFirst we'll need a random catalog. Let's make it the same size as the data one.",
"random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals), 'dec' : decmin + (decmax-decmin)*np.random.rand(Ngals)})\n\nprint len(random), type(random)",
"Now let's plot both catalogs, and compare.",
"fig, ax = plt.subplots(nrows=1, ncols=2)\nfig.set_size_inches(15, 6)\nplt.subplots_adjust(wspace=0.2)\n \nrandom.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random')\nax[0].set_xlabel('RA / deg')\nax[0].set_ylabel('Dec. / deg')\n\ndata.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data')\nax[1].set_xlabel('RA / deg')\nax[1].set_ylabel('Dec. / deg')",
"Estimating $\\xi(\\theta)$",
"import treecorr\n\nrandom_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg')\ndata_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg')\n\n# Set up some correlation function estimator objects:\n\nsep_units='arcmin'\nmin_sep=0.5\nmax_sep=10.0\nN = 7\nbin_size = np.log10(1.0*max_sep/min_sep)/(1.0*N)\n\ndd = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)\nrr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size)\n\n# Process the data:\ndd.process(data_cat)\nrr.process(random_cat)\n\n# Combine into a correlation function and its variance:\nxi, varxi = dd.calculateXi(rr)\n\nplt.figure(figsize=(15,8))\nplt.rc('xtick', labelsize=16) \nplt.rc('ytick', labelsize=16)\nplt.errorbar(np.exp(dd.logr),xi,np.sqrt(varxi),c='blue',linewidth=2)\n# plt.xscale('log')\nplt.xlabel('$\\\\theta / {\\\\rm arcmin}$',fontsize=20)\nplt.ylabel('$\\\\xi(\\\\theta)$',fontsize=20)\nplt.ylim([-0.1,0.2])\nplt.grid(True)",
"Q: Are galaxies uniformly randomly distributed?\nDiscuss the clustering signal (or lack thereof) in the above plot with your neighbor. What would you want to do better, in a second pass at this?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n | site/zh-cn/tutorials/distribute/custom_training.ipynb | apache-2.0 | [
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"使用 tf.distribute.Strategy 进行自定义训练\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/distribute/custom_training\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a> </td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/custom_training.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 上运行</a> </td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/custom_training.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 上查看源代码</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/custom_training.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载该 notebook</a> </td>\n</table>\n\n本教程演示了如何使用 tf.distribute.Strategy 进行自定义训练循环。我们将在 Fashion-MNIST 数据集上训练一个简单的 CNN 模型。Fashion-MNIST 数据集包含了 60000 个大小为 28 x 28 的训练图像和 10000 个大小为 28 x 28 的测试图像。\n我们用自定义训练循环来训练我们的模型是因为它们在训练的过程中为我们提供了灵活性和在训练过程中更好的控制。而且,使它们调试模型和训练循环的时候更容易。",
"# Import TensorFlow\nimport tensorflow as tf\n\n# Helper libraries\nimport numpy as np\nimport os\n\nprint(tf.__version__)",
"下载流行的 MNIST 数据集",
"fashion_mnist = tf.keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()\n\n# Adding a dimension to the array -> new shape == (28, 28, 1)\n# We are doing this because the first layer in our model is a convolutional\n# layer and it requires a 4D input (batch_size, height, width, channels).\n# batch_size dimension will be added later on.\ntrain_images = train_images[..., None]\ntest_images = test_images[..., None]\n\n# Getting the images in [0, 1] range.\ntrain_images = train_images / np.float32(255)\ntest_images = test_images / np.float32(255)",
"创建一个分发变量和图形的策略\ntf.distribute.MirroredStrategy 策略是如何运作的?\n\n所有变量和模型图都复制在副本上。\n输入都均匀分布在副本中。\n每个副本在收到输入后计算输入的损失和梯度。\n通过求和,每一个副本上的梯度都能同步。\n同步后,每个副本上的复制的变量都可以同样更新。\n\n注意:您可以将下面的所有代码放在一个单独单元内。 我们将它分成几个代码单元用于说明目的。",
"# If the list of devices is not specified in the\n# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.\nstrategy = tf.distribute.MirroredStrategy()\n\nprint ('Number of devices: {}'.format(strategy.num_replicas_in_sync))",
"设置输入流水线\n将图形和变量导出成平台不可识别的 SavedModel 格式。在你的模型保存后,你可以在有或没有范围的情况下载入它。",
"BUFFER_SIZE = len(train_images)\n\nBATCH_SIZE_PER_REPLICA = 64\nGLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync\n\nEPOCHS = 10",
"创建数据集并分发它们:",
"train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE) \ntest_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE) \n\ntrain_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)\ntest_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)",
"创建模型\n使用 tf.keras.Sequential 创建一个模型。你也可以使用模型子类化 API 来完成这个。",
"def create_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(64, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10)\n ])\n\n return model\n\n# Create a checkpoint directory to store the checkpoints.\ncheckpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")",
"定义损失函数\n通常,在具有 1 个 GPU/CPU 的单台机器上,损失会除以输入批次中的样本数量。\n因此,使用 tf.distribute.Strategy 时应如何计算损失?\n\n\n例如,假设有 4 个 GPU,批次大小为 64。一个批次的输入会分布在各个副本(4 个 GPU)上,每个副本获得一个大小为 16 的输入。\n\n\n每个副本上的模型都会使用其各自的输入进行前向传递,并计算损失。现在,不将损失除以其相应输入中的样本数 (BATCH_SIZE_PER_REPLICA = 16),而应将损失除以 GLOBAL_BATCH_SIZE (64)。\n\n\n为什么这样做?\n\n之所以需要这样做,是因为在每个副本上计算完梯度后,会通过对梯度求和在副本之间同步梯度。\n\n如何在 TensorFlow 中执行此操作?\n\n\n如果您正在编写自定义训练循环(如本教程中所述),则应将每个样本的损失相加,然后将总和除以 GLOBAL_BATCH_SIZE: scale_loss = tf.reduce_sum(loss) * (1. / GLOBAL_BATCH_SIZE),或者您可以使用 tf.nn.compute_average_loss,它会将每个样本的损失、可选样本权重和 GLOBAL_BATCH_SIZE 作为参数,并返回经过缩放的损失。\n\n\n如果在模型中使用正则化损失,则需要按副本数缩放损失值。您可以使用 tf.nn.scale_regularization_loss 函数进行此操作。\n\n\n不建议使用 tf.reduce_mean。这样做会将损失除以实际的每个副本批次大小,该大小可能会随着步骤的不同而发生变化。\n\n\n这种缩减和缩放会在 Keras model.compile 和 <br> model.fit 中自动完成。\n\n\n如果使用 tf.keras.losses 类(如下面的示例所示),则需要将损失缩减显式地指定为 NONE 或 SUM。与 tf.distribute.Strategy 一起使用时,不允许使用 AUTO 和 SUM_OVER_BATCH_SIZE。不允许使用 AUTO,因为用户应明确考虑他们想要的缩减量,以确保在分布式情况下缩减量正确。不允许使用 SUM_OVER_BATCH_SIZE,因为当前它只能按副本批次大小进行划分,而将按副本数量划分划留给用户,这可能很容易遗漏。因此,我们转而要求用户自己显式地执行缩减操作。\n\n\n如果 labels 为多维,则对每个样本中的元素数量的 per_example_loss 求平均值。例如,如果 predictions 的形状为 (batch_size, H, W, n_classes),而 labels 为 (batch_size, H, W),则需要更新 per_example_loss,例如:per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)\n小心:验证损失的形状。tf.losses/tf.keras.losses 中的损失函数通常会返回输入最后一个维度的平均值。损失类封装这些函数。在创建损失类的实例时传递 reduction=Reduction.NONE,表示“无额外缩减”。对于样本输入形状为 [batch, W, H, n_classes] 的类别损失,会缩减 n_classes 维度。对于类似 losses.mean_squared_error 或 losses.binary_crossentropy 的逐点损失,应包含一个虚拟轴,使 [batch, W, H, 1] 缩减为 [batch, W, H]。如果没有虚拟轴,则 [batch, W, H] 将被错误地缩减为 [batch, W]。",
"with strategy.scope():\n # Set reduction to `none` so we can do the reduction afterwards and divide by\n # global batch size.\n loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True,\n reduction=tf.keras.losses.Reduction.NONE)\n def compute_loss(labels, predictions):\n per_example_loss = loss_object(labels, predictions)\n return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)",
"定义衡量指标以跟踪损失和准确性\n这些指标可以跟踪测试的损失,训练和测试的准确性。 您可以使用.result()随时获取累积的统计信息。",
"with strategy.scope():\n test_loss = tf.keras.metrics.Mean(name='test_loss')\n\n train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n name='train_accuracy')\n test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n name='test_accuracy')",
"训练循环",
"# model, optimizer, and checkpoint must be created under `strategy.scope`.\nwith strategy.scope():\n model = create_model()\n\n optimizer = tf.keras.optimizers.Adam()\n\n checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)\n\ndef train_step(inputs):\n images, labels = inputs\n\n with tf.GradientTape() as tape:\n predictions = model(images, training=True)\n loss = compute_loss(labels, predictions)\n\n gradients = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n\n train_accuracy.update_state(labels, predictions)\n return loss \n\ndef test_step(inputs):\n images, labels = inputs\n\n predictions = model(images, training=False)\n t_loss = loss_object(labels, predictions)\n\n test_loss.update_state(t_loss)\n test_accuracy.update_state(labels, predictions)\n\n# `run` replicates the provided computation and runs it\n# with the distributed input.\[email protected]\ndef distributed_train_step(dataset_inputs):\n per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))\n return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,\n axis=None)\n\[email protected]\ndef distributed_test_step(dataset_inputs):\n return strategy.run(test_step, args=(dataset_inputs,))\n\nfor epoch in range(EPOCHS):\n # TRAIN LOOP\n total_loss = 0.0\n num_batches = 0\n for x in train_dist_dataset:\n total_loss += distributed_train_step(x)\n num_batches += 1\n train_loss = total_loss / num_batches\n\n # TEST LOOP\n for x in test_dist_dataset:\n distributed_test_step(x)\n\n if epoch % 2 == 0:\n checkpoint.save(checkpoint_prefix)\n\n template = (\"Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, \"\n \"Test Accuracy: {}\")\n print (template.format(epoch+1, train_loss,\n train_accuracy.result()*100, test_loss.result(),\n test_accuracy.result()*100))\n\n test_loss.reset_states()\n train_accuracy.reset_states()\n test_accuracy.reset_states()",
"以上示例中需要注意的事项:\n\n我们使用for x in ...迭代构造train_dist_dataset和test_dist_dataset。\n缩放损失是distributed_train_step的返回值。 这个值会在各个副本使用tf.distribute.Strategy.reduce的时候合并,然后通过tf.distribute.Strategy.reduce叠加各个返回值来跨批次。\n在执行tf.distribute.Strategy.experimental_run_v2时,tf.keras.Metrics应在train_step和test_step中更新。\n\n恢复最新的检查点并进行测试\n使用 tf.distribute.Strategy 设置了检查点的模型可以使用或不使用策略进行恢复。",
"eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n name='eval_accuracy')\n\nnew_model = create_model()\nnew_optimizer = tf.keras.optimizers.Adam()\n\ntest_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)\n\[email protected]\ndef eval_step(images, labels):\n predictions = new_model(images, training=False)\n eval_accuracy(labels, predictions)\n\ncheckpoint = tf.train.Checkpoint(optimizer=new_optimizer, model=new_model)\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))\n\nfor images, labels in test_dataset:\n eval_step(images, labels)\n\nprint ('Accuracy after restoring the saved model without strategy: {}'.format(\n eval_accuracy.result()*100))",
"迭代一个数据集的替代方法\n使用迭代器\n如果你想要迭代一个已经给定步骤数量而不需要整个遍历的数据集,你可以创建一个迭代器并在迭代器上调用iter和显式调用next。 您可以选择在 tf.function 内部和外部迭代数据集。 这是一个小片段,演示了使用迭代器在 tf.function 外部迭代数据集。",
"for _ in range(EPOCHS):\n total_loss = 0.0\n num_batches = 0\n train_iter = iter(train_dist_dataset)\n\n for _ in range(10):\n total_loss += distributed_train_step(next(train_iter))\n num_batches += 1\n average_train_loss = total_loss / num_batches\n\n template = (\"Epoch {}, Loss: {}, Accuracy: {}\")\n print (template.format(epoch+1, average_train_loss, train_accuracy.result()*100))\n train_accuracy.reset_states()",
"在 tf.function 中迭代\n您还可以使用for x in ...构造在 tf.function 内部迭代整个输入train_dist_dataset,或者像上面那样创建迭代器。下面的例子演示了在 tf.function 中包装一个 epoch 并在功能内迭代train_dist_dataset。",
"@tf.function\ndef distributed_train_epoch(dataset):\n total_loss = 0.0\n num_batches = 0\n for x in dataset:\n per_replica_losses = strategy.run(train_step, args=(x,))\n total_loss += strategy.reduce(\n tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)\n num_batches += 1\n return total_loss / tf.cast(num_batches, dtype=tf.float32)\n\nfor epoch in range(EPOCHS):\n train_loss = distributed_train_epoch(train_dist_dataset)\n\n template = (\"Epoch {}, Loss: {}, Accuracy: {}\")\n print (template.format(epoch+1, train_loss, train_accuracy.result()*100))\n\n train_accuracy.reset_states()",
"跟踪副本中的训练的损失\n注意:作为通用的规则,您应该使用tf.keras.Metrics来跟踪每个样本的值以避免它们在副本中合并。\n我们 不 建议使用tf.metrics.Mean 来跟踪不同副本的训练损失,因为在执行过程中会进行损失缩放计算。\n例如,如果您运行具有以下特点的训练作业:\n\n两个副本\n在每个副本上处理两个例子\n产生的损失值:每个副本为[2,3]和[4,5]\n全局批次大小 = 4\n\n通过损失缩放,您可以通过添加损失值来计算每个副本上的每个样本的损失值,然后除以全局批量大小。 在这种情况下:(2 + 3)/ 4 = 1.25和(4 + 5)/ 4 = 2.25。\n如果您使用 tf.metrics.Mean 来跟踪两个副本的损失,结果会有所不同。 在这个例子中,你最终得到一个total为 3.50 和count为 2 的结果,当调用result()时,你将得到total /count = 1.75。 使用tf.keras.Metrics计算损失时会通过一个等于同步副本数量的额外因子来缩放。\n例子和教程\n以下是一些使用自定义训练循环来分发策略的示例:\n\n分布式训练指南\nDenseNet 使用 MirroredStrategy的例子。\nBERT 使用 MirroredStrategy 和TPUStrategy来训练的例子。 此示例对于了解如何在分发训练过程中如何载入一个检测点和定期生成检查点特别有帮助。\nNCF 使用 MirroredStrategy 来启用 keras_use_ctl 标记。\nNMT 使用 MirroredStrategy来训练的例子。\n\n更多的例子列在 分发策略指南。\n下一步\n\n在您的模型上尝试新的 tf.distribute.Strategy API。\n访问指南中的性能部分,了解有关其他策略和工具的更多信息,您可以使用它们来优化 TensorFlow 模型的性能。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
to266/hyperspy | hyperspy/tests/drawing/test_plot_image.ipynb | gpl-3.0 | [
"Testing (and demonstrating) plot_images()",
"# %hyperspy -r inline\nimport numpy as np\nimport hyperspy.api as hs\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n",
"plot_images() is used to plot several images in the same figure. It supports many configurations and has many options available to customize the resulting output. The function returns a list of matplotlib axes, which can be used to further customize the figure. Some examples are given below.\nDefault usage\nA common usage for plot_images() is to view the different slices of a multidimensional image (a hyperimage):",
"import scipy.ndimage\nimage = hs.signals.Image(np.random.random((2, 3, 512, 512)))\nfor i in range(2):\n for j in range(3):\n image.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)\n \naxes = image.axes_manager\naxes[2].name = \"x\"\naxes[3].name = \"y\"\naxes[2].units = \"nm\"\naxes[3].units = \"nm\"\n \nimage.metadata.General.title = 'multi-dimensional Lena'\nhs.plot.plot_images(image, tight_layout=True)",
"Specified labels\nBy default, plot_images() will attempt to auto-label the images based on the Signal titles. The labels (and title) can be customized with the label and suptitle arguments. In this example, the axes labels and ticks are also disabled with axes_decor:",
"import scipy.ndimage\nimage = hs.signals.Image(np.random.random((2, 3, 512, 512)))\nfor i in range(2):\n for j in range(3):\n image.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)\n \naxes = image.axes_manager\naxes[2].name = \"x\"\naxes[3].name = \"y\"\naxes[2].units = \"nm\"\naxes[3].units = \"nm\"\n \nimage.metadata.General.title = 'multi-dimensional Lena'\nhs.plot.plot_images(image, suptitle='Custom figure title', \n label=['Image 1', 'Image 2', 'Image 3', 'Image 4', 'Image 5', 'Image 6'],\n axes_decor=None, tight_layout=True)",
"List of images\nplot_images() can also be used to easily plot a list of Images, comparing different Signals, including RGB images. This example also demonstrates how to wrap labels using labelwrap (for preventing overlap) and using a single colorbar for all the Images, as opposed to multiple individual ones:",
"import scipy.ndimage\n\n# load red channel of raccoon as an image\nimage0 = hs.signals.Image(scipy.misc.ascent()[:,:,0])\nimage0.metadata.General.title = 'Rocky Raccoon - R'\naxes0 = image0.axes_manager\naxes0[0].name = \"x\"\naxes0[1].name = \"y\"\naxes0[0].units = \"mm\"\naxes0[1].units = \"mm\"\n\n# load lena into 2x3 hyperimage\nimage1 = hs.signals.Image(np.random.random((2, 3, 512, 512)))\nimage1.metadata.General.title = 'multi-dimensional Lena'\nfor i in range(2):\n for j in range(3):\n image1.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)\naxes1 = image1.axes_manager\naxes1[2].name = \"x\"\naxes1[3].name = \"y\"\naxes1[2].units = \"nm\"\naxes1[3].units = \"nm\"\n\n# load green channel of raccoon as an image\nimage2 = hs.signals.Image(scipy.misc.ascent()[:,:,1])\nimage2.metadata.General.title = 'Rocky Raccoon - G'\naxes2 = image2.axes_manager\naxes2[0].name = \"x\"\naxes2[1].name = \"y\"\naxes2[0].units = \"mm\"\naxes2[1].units = \"mm\"\n\n# load rgb image\nrgb = hs.signals.Spectrum(scipy.misc.ascent())\nrgb.change_dtype(\"rgb8\")\nrgb.metadata.General.title = 'RGB'\naxesRGB = rgb.axes_manager\naxesRGB[0].name = \"x\"\naxesRGB[1].name = \"y\"\naxesRGB[0].units = \"nm\"\naxesRGB[1].units = \"nm\"\n\n\nhs.plot.plot_images([image0, image1, image2, rgb], tight_layout=True,\n #colorbar='single', \n labelwrap=20)",
"Real-world use\nAnother example for this function is plotting EDS line intensities. Using a spectrum image with EDS data, one can use the following commands to get a representative figure of the line intensities. This example also demonstrates changing the colormap (with cmap), adding scalebars to the plots (with scalebar), and changing the padding between the images. The padding is specified as a dictionary, which is used to call matplotlib.figure.Figure.subplots_adjust() (see documentation).\nNote, this padding can also be changed interactively by clicking on the subplots_adjust button (<img src=\"plot_images_subplots.png\" style=\"display:inline-block;vertical-align:bottom\">) in the GUI (button may be different when using different graphical backends).\nThe sample and the data used are described in \nP. Burdet, et al., Acta Materialia, 61, p. 3090-3098 (2013) (see http://infoscience.epfl.ch/record/185861/).\nFurther information is available in the Hyperspy EDS tutorial: \n\nhttp://nbviewer.ipython.org/github/hyperspy/hyperspy-demos/blob/master/electron_microscopy/EDS/Hyperpsy_EDS_TEM_tutorial_CAM_2015.ipynb",
"from urllib import urlretrieve\nurl = 'http://cook.msm.cam.ac.uk//~hyperspy//EDS_tutorial//'\nurlretrieve(url + 'core_shell.hdf5', 'core_shell.hdf5')\n\nsi_EDS = hs.load(\"core_shell.hdf5\")\nim = si_EDS.get_lines_intensity()\nhs.plot.plot_images(\n im, tight_layout=True, cmap='RdYlBu_r', axes_decor='off',\n colorbar='single', scalebar='all', \n scalebar_color='black', suptitle_fontsize=16,\n padding={'top':0.8, 'bottom':0.10, 'left':0.05,\n 'right':0.85, 'wspace':0.20, 'hspace':0.10}) "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/ko/tutorials/keras/regression.ipynb | apache-2.0 | [
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"자동차 연비 예측하기: 회귀\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/regression\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />TensorFlow.org에서 보기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/regression.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />구글 코랩(Colab)에서 실행하기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/regression.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />깃허브(GitHub) 소스 보기</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/regression.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNote: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도\n불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.\n이 번역에 개선할 부분이 있다면\ntensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.\n문서 번역이나 리뷰에 참여하려면\[email protected]로\n메일을 보내주시기 바랍니다.\n회귀(regression)는 가격이나 확률 같이 연속된 출력 값을 예측하는 것이 목적입니다. 이와는 달리 분류(classification)는 여러개의 클래스 중 하나의 클래스를 선택하는 것이 목적입니다(예를 들어, 사진에 사과 또는 오렌지가 포함되어 있을 때 어떤 과일인지 인식하는 것).\n이 노트북은 Auto MPG 데이터셋을 사용하여 1970년대 후반과 1980년대 초반의 자동차 연비를 예측하는 모델을 만듭니다. 이 기간에 출시된 자동차 정보를 모델에 제공하겠습니다. 이 정보에는 실린더 수, 배기량, 마력(horsepower), 공차 중량 같은 속성이 포함됩니다.\n이 예제는 tf.keras API를 사용합니다. 자세한 내용은 케라스 가이드를 참고하세요.",
"# 산점도 행렬을 그리기 위해 seaborn 패키지를 설치합니다\n!pip install seaborn\n\nimport pathlib\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nprint(tf.__version__)",
"Auto MPG 데이터셋\n이 데이터셋은 UCI 머신 러닝 저장소에서 다운로드할 수 있습니다.\n데이터 구하기\n먼저 데이터셋을 다운로드합니다.",
"dataset_path = keras.utils.get_file(\"auto-mpg.data\", \"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data\")\ndataset_path",
"판다스를 사용하여 데이터를 읽습니다.",
"column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n 'Acceleration', 'Model Year', 'Origin']\nraw_dataset = pd.read_csv(dataset_path, names=column_names,\n na_values = \"?\", comment='\\t',\n sep=\" \", skipinitialspace=True)\n\ndataset = raw_dataset.copy()\ndataset.tail()",
"데이터 정제하기\n이 데이터셋은 일부 데이터가 누락되어 있습니다.",
"dataset.isna().sum()",
"문제를 간단하게 만들기 위해서 누락된 행을 삭제하겠습니다.",
"dataset = dataset.dropna()",
"\"Origin\" 열은 수치형이 아니고 범주형이므로 원-핫 인코딩(one-hot encoding)으로 변환하겠습니다:",
"origin = dataset.pop('Origin')\n\ndataset['USA'] = (origin == 1)*1.0\ndataset['Europe'] = (origin == 2)*1.0\ndataset['Japan'] = (origin == 3)*1.0\ndataset.tail()",
"데이터셋을 훈련 세트와 테스트 세트로 분할하기\n이제 데이터를 훈련 세트와 테스트 세트로 분할합니다.\n테스트 세트는 모델을 최종적으로 평가할 때 사용합니다.",
"train_dataset = dataset.sample(frac=0.8,random_state=0)\ntest_dataset = dataset.drop(train_dataset.index)",
"데이터 조사하기\n훈련 세트에서 몇 개의 열을 선택해 산점도 행렬을 만들어 살펴 보겠습니다.",
"sns.pairplot(train_dataset[[\"MPG\", \"Cylinders\", \"Displacement\", \"Weight\"]], diag_kind=\"kde\")",
"전반적인 통계도 확인해 보죠:",
"train_stats = train_dataset.describe()\ntrain_stats.pop(\"MPG\")\ntrain_stats = train_stats.transpose()\ntrain_stats",
"특성과 레이블 분리하기\n특성에서 타깃 값 또는 \"레이블\"을 분리합니다. 이 레이블을 예측하기 위해 모델을 훈련시킬 것입니다.",
"train_labels = train_dataset.pop('MPG')\ntest_labels = test_dataset.pop('MPG')",
"데이터 정규화\n위 train_stats 통계를 다시 살펴보고 각 특성의 범위가 얼마나 다른지 확인해 보죠.\n특성의 스케일과 범위가 다르면 정규화(normalization)하는 것이 권장됩니다. 특성을 정규화하지 않아도 모델이 수렴할 수 있지만, 훈련시키기 어렵고 입력 단위에 의존적인 모델이 만들어집니다.\n노트: 의도적으로 훈련 세트만 사용하여 통계치를 생성했습니다. 이 통계는 테스트 세트를 정규화할 때에도 사용됩니다. 이는 테스트 세트를 모델이 훈련에 사용했던 것과 동일한 분포로 투영하기 위해서입니다.",
"def norm(x):\n return (x - train_stats['mean']) / train_stats['std']\nnormed_train_data = norm(train_dataset)\nnormed_test_data = norm(test_dataset)",
"정규화된 데이터를 사용하여 모델을 훈련합니다.\n주의: 여기에서 입력 데이터를 정규화하기 위해 사용한 통계치(평균과 표준편차)는 원-핫 인코딩과 마찬가지로 모델에 주입되는 모든 데이터에 적용되어야 합니다. 여기에는 테스트 세트는 물론 모델이 실전에 투입되어 얻은 라이브 데이터도 포함됩니다.\n모델\n모델 만들기\n모델을 구성해 보죠. 여기에서는 두 개의 완전 연결(densely connected) 은닉층으로 Sequential 모델을 만들겠습니다. 출력 층은 하나의 연속적인 값을 반환합니다. 나중에 두 번째 모델을 만들기 쉽도록 build_model 함수로 모델 구성 단계를 감싸겠습니다.",
"def build_model():\n model = keras.Sequential([\n layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),\n layers.Dense(64, activation='relu'),\n layers.Dense(1)\n ])\n\n optimizer = tf.keras.optimizers.RMSprop(0.001)\n\n model.compile(loss='mse',\n optimizer=optimizer,\n metrics=['mae', 'mse'])\n return model\n\nmodel = build_model()",
"모델 확인\n.summary 메서드를 사용해 모델에 대한 간단한 정보를 출력합니다.",
"model.summary()",
"모델을 한번 실행해 보죠. 훈련 세트에서 10 샘플을 하나의 배치로 만들어 model.predict 메서드를 호출해 보겠습니다.",
"example_batch = normed_train_data[:10]\nexample_result = model.predict(example_batch)\nexample_result",
"제대로 작동하는 것 같네요. 결괏값의 크기와 타입이 기대했던 대로입니다.\n모델 훈련\n이 모델을 1,000번의 에포크(epoch) 동안 훈련합니다. 훈련 정확도와 검증 정확도는 history 객체에 기록됩니다.",
"# 에포크가 끝날 때마다 점(.)을 출력해 훈련 진행 과정을 표시합니다\nclass PrintDot(keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs):\n if epoch % 100 == 0: print('')\n print('.', end='')\n\nEPOCHS = 1000\n\nhistory = model.fit(\n normed_train_data, train_labels,\n epochs=EPOCHS, validation_split = 0.2, verbose=0,\n callbacks=[PrintDot()])",
"history 객체에 저장된 통계치를 사용해 모델의 훈련 과정을 시각화해 보죠.",
"hist = pd.DataFrame(history.history)\nhist['epoch'] = history.epoch\nhist.tail()\n\nimport matplotlib.pyplot as plt\n\ndef plot_history(history):\n hist = pd.DataFrame(history.history)\n hist['epoch'] = history.epoch\n\n plt.figure(figsize=(8,12))\n\n plt.subplot(2,1,1)\n plt.xlabel('Epoch')\n plt.ylabel('Mean Abs Error [MPG]')\n plt.plot(hist['epoch'], hist['mae'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mae'],\n label = 'Val Error')\n plt.ylim([0,5])\n plt.legend()\n\n plt.subplot(2,1,2)\n plt.xlabel('Epoch')\n plt.ylabel('Mean Square Error [$MPG^2$]')\n plt.plot(hist['epoch'], hist['mse'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mse'],\n label = 'Val Error')\n plt.ylim([0,20])\n plt.legend()\n plt.show()\n\nplot_history(history)",
"이 그래프를 보면 수 백번 에포크를 진행한 이후에는 모델이 거의 향상되지 않는 것 같습니다. model.fit 메서드를 수정하여 검증 점수가 향상되지 않으면 자동으로 훈련을 멈추도록 만들어 보죠. 에포크마다 훈련 상태를 점검하기 위해 EarlyStopping 콜백(callback)을 사용하겠습니다. 지정된 에포크 횟수 동안 성능 향상이 없으면 자동으로 훈련이 멈춥니다.\n이 콜백에 대해 더 자세한 내용은 여기를 참고하세요.",
"model = build_model()\n\n# patience 매개변수는 성능 향상을 체크할 에포크 횟수입니다\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nhistory = model.fit(normed_train_data, train_labels, epochs=EPOCHS,\n validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])\n\nplot_history(history)",
"이 그래프를 보면 검증 세트의 평균 오차가 약 +/- 2 MPG입니다. 좋은 결과인가요? 이에 대한 평가는 여러분에게 맡기겠습니다.\n모델을 훈련할 때 사용하지 않았던 테스트 세트에서 모델의 성능을 확인해 보죠. 이를 통해 모델이 실전에 투입되었을 때 모델의 성능을 짐작할 수 있습니다:",
"loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)\n\nprint(\"테스트 세트의 평균 절대 오차: {:5.2f} MPG\".format(mae))",
"예측\n마지막으로 테스트 세트에 있는 샘플을 사용해 MPG 값을 예측해 보겠습니다:",
"test_predictions = model.predict(normed_test_data).flatten()\n\nplt.scatter(test_labels, test_predictions)\nplt.xlabel('True Values [MPG]')\nplt.ylabel('Predictions [MPG]')\nplt.axis('equal')\nplt.axis('square')\nplt.xlim([0,plt.xlim()[1]])\nplt.ylim([0,plt.ylim()[1]])\n_ = plt.plot([-100, 100], [-100, 100])\n",
"모델이 꽤 잘 예측한 것 같습니다. 오차의 분포를 살펴 보죠.",
"error = test_predictions - test_labels\nplt.hist(error, bins = 25)\nplt.xlabel(\"Prediction Error [MPG]\")\n_ = plt.ylabel(\"Count\")",
"가우시안 분포가 아니지만 아마도 훈련 샘플의 수가 매우 작기 때문일 것입니다.\n결론\n이 노트북은 회귀 문제를 위한 기법을 소개합니다.\n\n평균 제곱 오차(MSE)는 회귀 문제에서 자주 사용하는 손실 함수입니다(분류 문제에서 사용하는 손실 함수와 다릅니다).\n비슷하게 회귀에서 사용되는 평가 지표도 분류와 다릅니다. 많이 사용하는 회귀 지표는 평균 절댓값 오차(MAE)입니다.\n수치 입력 데이터의 특성이 여러 가지 범위를 가질 때 동일한 범위가 되도록 각 특성의 스케일을 독립적으로 조정해야 합니다.\n훈련 데이터가 많지 않다면 과대적합을 피하기 위해 은닉층의 개수가 적은 소규모 네트워크를 선택하는 방법이 좋습니다.\n조기 종료(Early stopping)은 과대적합을 방지하기 위한 좋은 방법입니다."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phoebe-project/phoebe2-docs | 2.3/tutorials/ltte.ipynb | gpl-3.0 | [
"Rømer and Light Travel Time Effects (ltte)\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"",
"As always, let's do imports and initialize a logger and a new Bundle.",
"import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger('error')\n\nb = phoebe.default_binary()",
"Now let's add a light curve dataset to see how ltte affects the timings of eclipses.",
"b.add_dataset('lc', times=phoebe.linspace(-0.05, 0.05, 51), dataset='lc01')",
"Relevant Parameters\nThe 'ltte' parameter in context='compute' defines whether light travel time effects are taken into account or not.",
"print(b['ltte@compute'])",
"Comparing with and without ltte\nIn order to have a binary system with any noticeable ltte effects, we'll set a somewhat extreme mass-ratio and semi-major axis.",
"b['sma@binary'] = 100\n\nb['q'] = 0.1",
"We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody.",
"b.set_value_all('atm', 'blackbody')\nb.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'logarithmic')\n\nb.run_compute(irrad_method='none', ltte=False, model='ltte_off')\n\nb.run_compute(irrad_method='none', ltte=True, model='ltte_on')\n\nafig, mplfig = b.plot(show=True)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mrcinv/matpy | oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb | gpl-2.0 | [
"import math\nimport sympy\nfrom sympy import latex, solve, Eq\nfrom IPython.display import HTML, display\nfrom sympy.abc import x, a, b\n\n%matplotlib notebook\n%install_ext https://raw.githubusercontent.com/meduz/ipython_magics/master/tikzmagic.py\n%load_ext tikzmagic\n \nsympy.init_printing()",
"OMA 2. kolokvij 2012/2013\n1. naloga\nV polkrog z radijem 1 vcrtamo pravokotnik ABCD tako, da oglisci A in B lezita na premeru, oglisci C in D pa na loku polkroga. Koliksni naj bosta dolzini stranic pravokotnika, da bo ploscina pravokotnika maksimalna?",
"%%tikz s 400,400 -sc 1.2 -f png\n\\draw [domain=0:180] plot ({cos(\\x)}, {sin(\\x)});\n\\draw (-1,0) -- (1, 0);\n\\draw [color=red] (-0.5, 0) -- node[below, color=black] {2a} ++ (1, 0);\n\\draw [color=red] (-0.5, 0.8660254037844386) -- (0.5, 0.8660254037844386);\n\\draw [color=red] (-0.5, 0) -- node[left, color=black] {b} ++ (0, 0.8660254037844386);\n\\draw [color=red] (0.5, 0.8660254037844386) -- (0.5, 0);\n",
"Maksimiziramo funkcijo $P(x)=2ab$. Velja tudi $a^2 + b^2 = 1$. Namesto ploscine bomo maksimizirali njen kvadrat (ki ima maksimum v isti tocki kot prvotna funkcija.",
"P = sympy.symbols('P', cls=sympy.Function)\neq1 = Eq(P(b), (2*a*b)**2)\neq2 = Eq(a**2+b**2, 1)\nequation = Eq(P(b), solve([eq1, eq2], P(b), a**2)[P(b)])\nequation\n\nP = sympy.lambdify(b, equation.rhs)\nx = sympy.symbols('x', positive=True)\nsolve(Eq(P(x).diff(x), 0))[0]",
"2. naloga\nNaj bo \n$$f(x,y)=3x^2-3y^2+8xy-6x-8y+3.$$\nIzracunaj gradient funkcije $f(x,y)$.",
"x, y = sympy.symbols('x y')\nf = lambda x, y: 3*x**2 - 3*y**2 + 8*x*y-6*x-8*y+3\nf(x,y).diff(x), f(x,y).diff(y)",
"Izracunaj stacionarne tocke funkcije $f(x,y)$.",
"sympy.solve([f(x,y).diff(x), f(x,y).diff(y)])",
"3. naloga\nIzracunaj odvod funkcije\n$$\\frac{\\cos(x)}{\\sin(x)}.$$",
"x = sympy.symbols('x')\nf = lambda x: sympy.cos(x)/sympy.sin(x)\nsympy.simplify(f(x).diff())",
"*S pomocjo substitucije izracunaj nedoloceni integral\n$$\\int \\frac{\\cos(x)}{\\sin(x)}.$$\n*",
"sympy.simplify(f(x).integrate())",
"V zgorjem racunu poleg konstante znotraj funkcije $\\log$ manjka se absolutna vrednost (sympy racuna v kompleksnih stevilih), tako da je pravi rezultat\n$$ \\frac{1}{2}\\log(\\sin^2(x)) + C = \\log(\\sin^2(x)) + C.$$\nS pomocjo pravila za integriranje po delih izracunaj\n$$\\int\\frac{x}{\\sin^2(x)}.$$",
"x = sympy.symbols('x')\nf = lambda x: x/sympy.sin(x)**2\nsympy.simplify(f(x).integrate())",
"Tudi to resitev se da poenostaviti v \n$$ \\int\\frac{x}{\\sin^2(x)} = \\log(|\\sin(x)|) - x\\cot(x) + C.$$\n4. naloga\nNarisite lik, ki ga omejujeta krivulji $y=e^{2x}$ in $y=-e^{2x}+4$. Izracunajte ploscino lika.",
"from matplotlib import pyplot as plt\nimport numpy as np\nx = sympy.symbols('x')\nf = lambda x: np.exp(2*x)\ng = lambda x: -np.exp(2*x)+4\nfig, ax = plt.subplots()\nxs = np.linspace(0,0.6)\nax.fill_between(xs, f(xs),g(xs),where = f(xs)>=g(xs), facecolor='green',interpolate=True)\nax.fill_between(xs, f(xs), g(xs), where = f(xs)<= g(xs),facecolor='red',interpolate=True)\nplt.title(\"Liki med dvema krivuljama.\")",
"Izracunati moramo ploscino rdecega lika.",
"x = sympy.symbols('x', real=True)\nf = lambda x: sympy.E**(2*x)\ng = lambda x: -sympy.E**(2*x)+4\nintersection = sympy.solve(sympy.Eq(f(x), g(x)))[0]\nresult = sympy.integrate(g(x)-f(x), (x, 0, intersection))\nresult\n\nresult.evalf()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io | v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb | bsd-3-clause | [
"Markov switching autoregression models\nThis notebook provides an example of the use of Markov switching models in statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter the Kim (1994) smoother.\nThis is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport requests\nfrom io import BytesIO\n\n# NBER recessions\nfrom pandas_datareader.data import DataReader\nfrom datetime import datetime\nusrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))",
"Hamilton (1989) switching model of GNP\nThis replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:\n$$\ny_t = \\mu_{S_t} + \\phi_1 (y_{t-1} - \\mu_{S_{t-1}}) + \\phi_2 (y_{t-2} - \\mu_{S_{t-2}}) + \\phi_3 (y_{t-3} - \\mu_{S_{t-3}}) + \\phi_4 (y_{t-4} - \\mu_{S_{t-4}}) + \\varepsilon_t\n$$\nEach period, the regime transitions according to the following matrix of transition probabilities:\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00} & p_{10} \\\np_{01} & p_{11}\n\\end{bmatrix}\n$$\nwhere $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$.\nThe model class is MarkovAutoregression in the time-series part of statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.\nAfter creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.",
"# Get the RGNP data to replicate Hamilton\ndta = pd.read_stata('https://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]\ndta.index = pd.DatetimeIndex(dta.date, freq='QS')\ndta_hamilton = dta.rgnp\n\n# Plot the data\ndta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))\n\n# Fit the model\nmod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)\nres_hamilton = mod_hamilton.fit()\n\nres_hamilton.summary()",
"We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.\nFor reference, the shaded periods represent the NBER recessions.",
"fig, axes = plt.subplots(2, figsize=(7,7))\nax = axes[0]\nax.plot(res_hamilton.filtered_marginal_probabilities[0])\nax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)\nax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])\nax.set(title='Filtered probability of recession')\n\nax = axes[1]\nax.plot(res_hamilton.smoothed_marginal_probabilities[0])\nax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)\nax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])\nax.set(title='Smoothed probability of recession')\n\nfig.tight_layout()",
"From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.",
"print(res_hamilton.expected_durations)",
"In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.\nKim, Nelson, and Startz (1998) Three-state Variance Switching\nThis model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.\nThe model in question is:\n$$\n\\begin{align}\ny_t & = \\varepsilon_t \\\n\\varepsilon_t & \\sim N(0, \\sigma_{S_t}^2)\n\\end{align}\n$$\nSince there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).",
"# Get the dataset\new_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content\nraw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')\nraw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')\n\ndta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()\n\n# Plot the dataset\ndta_kns[0].plot(title='Excess returns', figsize=(12, 3))\n\n# Fit the model\nmod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)\nres_kns = mod_kns.fit()\n\nres_kns.summary()",
"Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.",
"fig, axes = plt.subplots(3, figsize=(10,7))\n\nax = axes[0]\nax.plot(res_kns.smoothed_marginal_probabilities[0])\nax.set(title='Smoothed probability of a low-variance regime for stock returns')\n\nax = axes[1]\nax.plot(res_kns.smoothed_marginal_probabilities[1])\nax.set(title='Smoothed probability of a medium-variance regime for stock returns')\n\nax = axes[2]\nax.plot(res_kns.smoothed_marginal_probabilities[2])\nax.set(title='Smoothed probability of a high-variance regime for stock returns')\n\nfig.tight_layout()",
"Filardo (1994) Time-Varying Transition Probabilities\nThis model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.\nIn the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression of Hamilton (1989).\nEach period, the regime now transitions according to the following matrix of time-varying transition probabilities:\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00,t} & p_{10,t} \\\np_{01,t} & p_{11,t}\n\\end{bmatrix}\n$$\nwhere $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:\n$$\np_{ij,t} = \\frac{\\exp{ x_{t-1}' \\beta_{ij} }}{1 + \\exp{ x_{t-1}' \\beta_{ij} }}\n$$\nInstead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.",
"# Get the dataset\nfilardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content\ndta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')\ndta_filardo.columns = ['month', 'ip', 'leading']\ndta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')\n\ndta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100\n# Deflated pre-1960 observations by ratio of std. devs.\n# See hmt_tvp.opt or Filardo (1994) p. 302\nstd_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()\ndta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio\n\ndta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100\ndta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()\n\n# Plot the data\ndta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))\nplt.figure()\ndta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));",
"The time-varying transition probabilities are specified by the exog_tvtp parameter.\nHere we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.\nBelow, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.",
"mod_filardo = sm.tsa.MarkovAutoregression(\n dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,\n exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))\n\nnp.random.seed(12345)\nres_filardo = mod_filardo.fit(search_reps=20)\n\nres_filardo.summary()",
"Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.",
"fig, ax = plt.subplots(figsize=(12,3))\n\nax.plot(res_filardo.smoothed_marginal_probabilities[0])\nax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)\nax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])\nax.set(title='Smoothed probability of a low-production state');",
"Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:",
"res_filardo.expected_durations[0].plot(\n title='Expected duration of a low-production state', figsize=(12,3));",
"During recessions, the expected duration of a low-production state is much higher than in an expansion."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bioinformatica-corso/lezioni | laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb | cc0-1.0 | [
"Biopython - Esercizio4\nMAFFT è un tool di allineamento multiplo sviluppato da EMBL-EBI (European Bioinformatics Institute - European Molecular Biology Laboratory) per sequenze di DNA.\nUsare MAFFT (scegliendo ClustalW come formato di output) per allineare i 14 genomi completi di SARS-CoV-2 presenti nel file covid-sequences.fasta sequenziati nel novembre 2021 e scaricati dal sito di NCBI. Il primo, con identificatore NC_045512.2, è il genoma di riferimento.\nTrovare tutte le variazioni rispetto al genoma di riferimento.\n\nVariazione: posizione della colonna di allineamento in cui esiste almeno un genoma che ha mismatch con quello di riferimento.\nEsempio di allineamento con variazioni in posizione 8 e 13:\nREF AAGCTGATTGCACGC-T\nG1 --GCAGAGTGCAGGCCT\nG2 --GCCGAGTGCACGCCT\n\nVariazione 5: T nel reference e A in G1 e C in G2.\nVariazione 8: T nel reference e G sia in G1 e G2.\nVariazione 13: C nel reference e G in G1.\nVariazione 16: - nel reference e C sia in G1 che in G2.\n\nSi richiede di:\n- costruire il data frame delle variazioni in cui le colonne sono tutte le posizioni 1-based delle variazioni e le righe sono indicizzate con l'identificatore del genoma.\n- estrarre il genoma con più variazioni e quello con meno variazioni\n- ottenere il data frame delle variazioni \"complete\", cioè in cui tutti i genomi variano rispetto al riferimento.\n- produrre il data frame delle variazioni \"stabili\" in cui tutti i genomi variano allo stesso modo rispetto al riferimento. \n- ottenere la lista delle posizioni in cui c'è un gap nel genoma di riferimento.\n- ottenere la lista delle posizioni in cui c'è un gap in almeno uno dei genomi (diversi dal riferimento)\nInstallare il package Bio di Biopython.\nImportare il package Bio.",
"import Bio",
"Importare il package AlignIO che è il package per manipolare file contenenti allineamenti multipli in diversi formati (tra cui clustal che è quello del file di input).",
"from Bio import AlignIO",
"Leggere l'allineamento in input\nIl package AlignIO mette a disposizione la funzione read per leggete un allineamento:\n AligIO.read(input_file_name, format)\n\ne restituisce un oggetto di tipo MultipleSeqAlignment che è un oggetto iterabile contenente oggetti SeqRecord, uno per ognuna delle righe dell'allineamento letto.",
"alignment = AlignIO.read(\"mafft-alignments.clustalw\", \"clustal\")",
"La lunghezza dell'allineamento in input (numero di colonne della matrice di allineamento) è:",
"alignment.get_alignment_length()",
"Trasformare l'oggetto in una lista di oggetti SeqRecord.",
"alignment = list(alignment)\nalignment",
"Eliminare i gap iniziali.\nTrovare il più lungo prefisso di soli simboli - delle righe dell'allineamento. Supponendo che tale prefisso sia lungo g, eliminare da ogni riga dell'allinemento il prefisso di lunghezza g.\nAd esempio il seguente allineamento composto da tre righe:\nGTATGTGTCATGTTTTTGCTA\n--ATGTGTCATG-TTT-----\n----GTGTCATGTTTTTG---\n\npresenta un più lungo prefisso di soli simboli - di lunghezza g=4 (terza riga). Eliminando da tutte le righe un prefisso di lunghezza 4 si ottiene:\n GTGTCATGTTTTTGCTA\n GTGTCATG-TTT-----\n GTGTCATGTTTTTG---",
"import re\n\ngap_list = [re.findall('^-+', str(row.seq)) for row in alignment]\ngap_size_list = [len(gap[0]) for gap in gap_list if gap]\ngap_size_list[:0] = [0]\nleading_gaps = max(gap_size_list)\nalignment = [row[leading_gaps:] for row in alignment]\n\nalignment",
"Eliminare i gap finali.\nTrovare il più lungo suffisso di soli simboli - delle righe dell'allineamento. Supponendo che tale suffisso sia lungo g, eliminare da ogni riga il suffisso di lunghezza g.\nAd esempio il seguente allineamento composto da tre righe:\n GTGTCATGTTTTTGCTA\n GTGTCATG-TTT-----\n GTGTCATGTTTTTG---\n\npresenta un più lungo suffisso di soli simboli - di lunghezza g=5 (seconda riga). Eliminando da tutte le righe un suffisso di lunghezza 5 si ottiene:\n GTGTCATGTTTT\n GTGTCATG-TTT\n GTGTCATGTTTT",
"gap_list = [re.findall('-+$', str(row.seq)) for row in alignment]\ngap_size_list = [len(gap[0]) for gap in gap_list if gap]\ngap_size_list[:0] = [0]\ntrailing_gaps = max(gap_size_list)\nalignment = [row[:len(row)-trailing_gaps] for row in alignment]\n\nalignment",
"Creare la lista degli identificatori dei genomi",
"index_list = [row.id for row in alignment]\n\nindex_list",
"Creare il dizionario contenente i dati per costruire il data frame\n\n\nkey: posizione 1-based della variazione (posizione della colonna nell'allineamento in input)\n\n\nvalue: lista dei simboli allineati coinvolti nella variazione (il primo simbolo deve essere quello del reference, mentre se un genoma non presenta una differenza con il reference si deve inserire la stringa vuota)",
"df_data = {}\n\nreference = alignment.pop(0)\n\nfor (i,c) in enumerate(reference):\n variant_list = []\n is_variant = False\n for row in alignment:\n variant = ''\n if row[i] != c and row[i] in {'A', 'C', 'G', 'T'}:\n is_variant = True\n variant = row[i]\n \n variant_list.append(variant)\n \n if is_variant:\n variant_list[:0] = [c]\n df_data[str(i+leading_gaps+1)] = variant_list\n\ndf_data",
"Creare il data frame\ndf = pd.DataFrame(df_data, index = index_list)",
"import pandas as pd\n\ndf = pd.DataFrame(df_data, index = index_list)\n\ndf",
"Estrarre il genoma con più variazioni e quello con meno variazioni\nDeterminare la lista del numero di variazioni per genoma (per tutti i genomi tranne quello di riferimento).",
"variants_per_genome = [len(list(filter(lambda x: x!='', list(row)))) for row in df.values]\n\nvariants_per_genome.pop(0)\nvariants_per_genome",
"In alternativa:",
"variants_per_genome = [df.shape[1]-list(df.loc[index]).count('') for index in index_list[1:]]\n\nvariants_per_genome",
"Estrarre il genoma con più variazioni.",
"index_list[variants_per_genome.index(max(variants_per_genome))+1]",
"Estrarre il genoma con meno variazioni.",
"index_list[variants_per_genome.index(min(variants_per_genome))+1]",
"In alternativa, per estrarre il genoma con meno variazioni:",
"null_df = pd.DataFrame((df == '').sum(axis=1), columns=['difference'])\nnull_df[1:][null_df[1:]['difference'] == null_df[1:]['difference'].max()]",
"In alternativa, per estrarre il genoma con più variazioni:",
"null_df[1:][null_df[1:]['difference'] == null_df[1:]['difference'].min()]",
"Determinare il data frame delle variazioni \"complete\"\nSelezionare dal data frame precedente le sole colonne relative a variazioni \"complete\".",
"df_complete = df[[col for col in df.columns if all(df[col] != '')]]\n\ndf_complete",
"Determinare il data frame delle variazioni \"stabili\"\nSelezionare dal data frame precedente le sole colonne relative a variazioni \"stabili\".",
"df_stable = df_complete[[col for col in df_complete.columns if len(df_complete[col][1:].unique()) == 1]]\n\ndf_stable",
"Ottenere la lista delle posizioni in cui c'è un gap nel genoma di riferimento.",
"ref_gaps = [col for col in df.columns if df[col][0] == '-']\n\nref_gaps",
"Ottenere la lista delle posizioni in cui c'è un gap in almeno uno dei genomi (diversi dal riferimento).",
"other_gaps = [col for col in df.columns if any(df[col][1:] == '-')]\n\nother_gaps"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
erikdrysdale/erikdrysdale.github.io | _rmd/extra_unequalvar/unequalvar.ipynb | mit | [
"Vectorizing t-test and F-tests for unequal variances\nAlmost all modern data science tasks begin with exploratory data analysis (EDA) phase. Visualizing summary statistics and testing for associations forms the basis of hypothesis generation and subsequent exploration and modeling. Applied statisticians need to be careful not to over-interpret the results of EDA since the p-values generated during this phase do not correspond to their formal definition when used in a highly proscribed scenario. Speed is often an asset during EDA. I recently encountered the problem of needing to assess thousands of AUROC statistics (discussed in the past here and here), and found the bootstrapping procedure to be too slow for high-throughput assessment. By relying on the asymptotic normality of the AUROC statistic (which is an instance of the Mann-Whitney U-test), rapid inference could be performed because an analytic solution was available. However, I needed to develop code that could properly address the bottlenecks of my analysis:\n\nUse only the moments of the data (mean, variance, and sample size)\n(Possibly) accounting for unequal variances\nVectorizing all functions\n\nIn the rest of the post, I'll provide simple functions in python that will vectorize the Student's t-test and the F-test for the multiple comparisons problem. Each of these functions will rely on only the first two moments of the data distribution (plus the sample size). Using only the sufficient statistics of the data helps to reduce the memory overhead that other functions normally have. Means and variances can be computed quickly using methods that are already part of pandas and numpy classes. The functions in this post will also be able to account for unequal variances, which to best of my knowledge, is not available in existing python packages for the F-test.\n(1) Student's t-test for equal means\nSuppose there are two normally distributed samples: $x = (x_1, \\dots, x_n) \\sim N(\\mu_x, \\sigma^2_x)$ and $y=(y_1,\\dots,y_m)\\sim N(\\mu_y,\\sigma^2_y)$, and we would like to test the null hypothesis that $H_0: \\mu_x = \\mu_y$. If the variances of the distributions were known in practice than the average difference of the two means would have a normal distribution,\n$$\n\\begin{align}\n\\frac{\\bar{x} - \\bar{y}}{\\sqrt{\\sigma^2_x/n + \\sigma^2_y/m}} \\sim N(0,1) \\hspace{2mm} | \\hspace{2mm} H_0 \\text{ is true},\n\\end{align}\n$$\nSo that a test statistic with a known distribution could be easily constructed. 
However, since the variance of the two distributions needs to be estimated in practice, the statistic seen above would actually be the ratio of a normal to a chi-squared distribution, in other words a Student's t-distribution:\n$$\n\\begin{align}\nd &= \\frac{\\bar{x} - \\bar{y}}{\\sqrt{\\hat\\sigma^2_x/n + \\hat\\sigma^2_y/m}} \\label{eq:dstat} \\\nd &\\sim t(\\nu).\n\\end{align}\n$$\nWhen the variances are not equivalent, some modifications need to be made to the degrees of freedom parameter ($\\nu$), using now classic derivations from Welch in 1947:\n$$\n\\begin{align}\n\\nu &= \\begin{cases} \nn + m - 2 & \\text{ if } \\sigma^2_x = \\sigma^2_y \\\n\\frac{(\\hat\\sigma^2_x/n + \\hat\\sigma^2_y/m)^2}{(\\hat\\sigma^2_x/n)^2/(n-1)+(\\hat\\sigma^2_y/m)^2/(m-1)} & \\text{ if } \\sigma^2_x \\neq \\sigma^2_y\n\\end{cases}.\n\\end{align}\n$$\nBecause the test statistic $d$ is only a function of the first two moments,\n$$\n\\begin{align}\nd = f(\\bar x, \\bar y, \\hat\\sigma^2_x, \\hat\\sigma^2_y, n, m),\n\\end{align}\n$$\na function can be written that uses only these sufficient statistics from the data. The code block below will provide the first function tdist_2dist to carry out the testing and return the test statistic and associated p-values from a two-sided hypothesis test.",
"# Import modules needed to reproduce results\nimport os\nimport plotnine\nfrom plotnine import *\nimport pandas as pd\nfrom scipy import stats\nimport numpy as np\nfrom statsmodels.stats.proportion import proportion_confint as prop_CI\n\ndef tdist_2dist(mu1, mu2, se1, se2, n1, n2, var_eq=False):\n var1, var2 = se1**2, se2**2\n num = mu1 - mu2\n if var_eq:\n nu = n1 + n2 - 2\n sp2 = ((n1-1)*var1 + (n2-1)*var2) / nu\n den = np.sqrt(sp2*(1/n1 + 1/n2))\n else:\n nu = (var1/n1 + var2/n2)**2 / ( (var1/n1)**2/(n1-1) + (var2/n2)**2/(n2-1) )\n den = np.sqrt(var1/n1 + var2/n2)\n dist_null = stats.t(df=nu)\n tstat = num / den\n pvals = 2*np.minimum(dist_null.sf(tstat), dist_null.cdf(tstat))\n return tstat, pvals\n\n# Useful short wrappers for making row or columns vectors\ndef rvec(x):\n return np.atleast_2d(x)\n\ndef cvec(x):\n return rvec(x).T",
"As a rule, I always conduct statistical simulations to make sure the functions I have written actually perform the way I expect them to when the null is known. If you can't get your method to work on a data generating procedure of your choosing, it should not leave the statistical laboratory! In the simulations below, $\\mu_y = 0$, and $\\mu_x$ will vary from zero to 0.2. At the same time, both variance homoskedasticity ($\\sigma_y = \\sigma_x$) and heteroskedasticity ($\\sigma_y \\neq \\sigma_x$) will be assessed. To further ensure the approach works, the respective sample sizes, $n$ and $m$, for each of the nsim=100K experiments will be a random integer between 25 and 75. In order to avoid an inner loop and rely of pure numpy vectorization, a data matrix of dimension 75 x 100000 will be generated. To account for the different sample sizes, if $n$ or $m$ is less than 75, the corresponding difference in rows will be set as a missing value np.NaN. The np.nanmean and np.nanstd functions will be used to handle missing values.\nNote that in all of the subsequent simulations, the type-I error rate target will be fixed to 5% ($\\alpha=0.05$), and 100K simulations will be run.",
"# Parameters of simulations\nnsim = 100000\nalpha = 0.05\nnlow, nhigh = 25, 75\nn1, n2 = np.random.randint(nlow, nhigh+1, nsim), np.random.randint(nlow, nhigh+1, nsim)\nse1, se2 = np.exp(np.random.randn(nsim)), np.exp(np.random.randn(nsim))\nmu_seq = np.arange(0,0.21,0.01)\ntt_seq, method_seq = np.repeat(['eq','neq'],2), np.tile(['neq','eq'],2)\nholder = []\nnp.random.seed(1234)\nfor mu in mu_seq:\n # Generate random data\n x1 = mu + se1*np.random.randn(nhigh, nsim)\n x2a = se1 * np.random.randn(nhigh, nsim)\n x2b = se2 * np.random.randn(nhigh, nsim)\n idx = np.tile(np.arange(nhigh),[nsim,1]).T\n # Find which rows to set to missing\n idx1, idx2 = idx < rvec(n1), idx < rvec(n2)\n x1, x2a, x2b = np.where(idx1, x1, np.nan), np.where(idx2, x2a, np.nan), np.where(idx2, x2b, np.nan)\n mu_hat1, mu_hat2a, mu_hat2b = np.nanmean(x1, 0), np.nanmean(x2a, 0), np.nanmean(x2b, 0)\n se_hat1, se_hat2a, se_hat2b = np.nanstd(x1, 0, ddof=1), np.nanstd(x2a, 0, ddof=1), np.nanstd(x2b, 0, ddof=1)\n # Calculate statistics and p-values\n tstat_neq_a, pval_neq_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, False)\n tstat_eq_a, pval_eq_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, True)\n tstat_neq_b, pval_neq_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, False)\n tstat_eq_b, pval_eq_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, True)\n # Find hypothesis rejection probability\n power_neq_a, power_eq_a = np.mean(pval_neq_a < alpha), np.mean(pval_eq_a < alpha)\n power_neq_b, power_eq_b = np.mean(pval_neq_b < alpha), np.mean(pval_eq_b < alpha)\n power_seq = np.array([power_neq_a, power_eq_a, power_neq_b, power_eq_b])\n holder.append(pd.DataFrame({'mu':mu,'tt':tt_seq,'method':method_seq, 'power':power_seq}))\n# Power comparison\ndi_method = {'eq':'Equal','neq':'Not Equal'}\nres_power = pd.concat(holder).assign(nsim=nsim)\nres_power[['tt','method']] = res_power[['tt','method']].apply(lambda x: x.map(di_method))\nres_power = res_power.rename(columns={'tt':'Variance'}).assign(nreject=lambda x: (x.power*x.nsim).astype(int))\nres_power = pd.concat([res_power.drop(columns=['nsim','nreject']),\n pd.concat(prop_CI(count=res_power.nreject,nobs=nsim,method='beta'),1)],1)\nres_power.rename(columns={0:'lb',1:'ub'}, inplace=True)\n\nplotnine.options.figure_size = (8, 3.5)\ngg_power_ttest = (ggplot(res_power,aes(x='mu',y='power',color='method')) +\n theme_bw() + geom_line() +\n geom_hline(yintercept=0.05,linetype='--') +\n scale_color_discrete(name='Variance assumption') +\n geom_linerange(aes(ymin='lb',ymax='ub')) +\n ggtitle('Vertical lines show 95% CI') +\n labs(y='Prob. of rejecting null',x='Mean difference') +\n facet_wrap('~Variance',labeller=label_both) +\n theme(legend_position=(0.5,-0.1),legend_direction='horizontal'))\ngg_power_ttest",
"Figure 1 above shows that the tdist_2dist function is working as expected. When the variances of $x$ and $y$ are equivalent, there is no difference in performance between approaches. When the mean difference is zero, the probability of rejecting the null is exactly equivalent to the level of the test (5%). However, when the variances differ, using the degrees of freedom calculation assuming they are equal leads to an inflated type-I error rate. Whereas using the adjustment from Welch's t-test gets to the right nominal level.\n(2) Checking power calculations\nAfter checking that function's test-statistic has the right nominal coverage on simulated data, I find is useful to check whether the power of the test can be predicted for different values of the alternative hypothesis. For some test statistics, this is not possible to do analytically, since the distribution of the test statistic under the alternative may not be known. However, for the student-t distribution, a difference in true means amounts to a noncentral t-distribution.\n$$\n\\begin{align}\nT &= \\frac{Z + c}{\\sqrt{V/\\nu}} \\ \nT &\\sim \\text{nct}(\\nu,c) \\\nZ&\\sim N(0,1), \\hspace{3mm} V\\sim \\chi^2(\\nu), \\hspace{3mm} \\mu \\neq 0\n\\end{align}\n$$\nThe statistic $d$ from \\eqref{eq:dstat} can be modified to match the noncentral t-distribution:\n$$\n\\begin{align}\nd + \\underbrace{\\frac{\\mu_x - \\mu_y}{\\sqrt{\\sigma^2_x/n + \\sigma^2_y/m}}}_{c}.\n\\end{align}\n$$\nThe power simulations below will fix $n=25$, $m=75$, and unit variances when $\\sigma_x=\\sigma_y$ and $\\sigma_x=1$ and $\\sigma_y=2$ in the heteroskedastic case.",
"n1, n2 = 25, 75\nse1 = 1\nse2a, se2b = se1, se1 + 1\nvar1, var2a, var2b = se1**2, se2a**2, se2b**2\n# ddof under different assumptions\nnu_a = n1 + n2 - 2\nnu_b = (var1/n1 + var2b/n2)**2 / ( (var1/n1)**2/(n1-1) + (var2b/n2)**2/(n2-1) )\nmu_seq = np.round(np.arange(0, 1.1, 0.1),2)\n\n# Pre-calculate power\ncrit_ub_a, crit_lb_a = stats.t(df=nu_a).ppf(1-alpha/2), stats.t(df=nu_a).ppf(alpha/2)\ncrit_ub_b, crit_lb_b = stats.t(df=nu_b).ppf(1-alpha/2), stats.t(df=nu_b).ppf(alpha/2)\nlam_a = np.array([mu/np.sqrt(var1*(1/n1 + 1/n2)) for mu in mu_seq])\nlam_b = np.array([mu/np.sqrt((var1/n1 + var2b/n2)) for mu in mu_seq])\ndist_alt_a, dist_alt_b = stats.nct(df=nu_a, nc=lam_a), stats.nct(df=nu_b, nc=lam_b)\npower_a = (1-dist_alt_a.cdf(crit_ub_a)) + dist_alt_a.cdf(crit_lb_a)\npower_b = (1-dist_alt_b.cdf(crit_ub_b)) + dist_alt_b.cdf(crit_lb_b)\ndat_theory = pd.concat([pd.DataFrame({'mu':mu_seq,'theory':power_a,'method':'eq'}),\n pd.DataFrame({'mu':mu_seq,'theory':power_b,'method':'neq'})])\n\n# Run simulations to confirm\nnp.random.seed(1234)\nholder = []\nfor mu in mu_seq:\n x1 = mu + se1 * np.random.randn(n1, nsim)\n x2a = se2a * np.random.randn(n2, nsim)\n x2b = se2b * np.random.randn(n2, nsim)\n mu_hat1, mu_hat2a, mu_hat2b = x1.mean(0), x2a.mean(0), x2b.mean(0)\n se_hat1, se_hat2a, se_hat2b = x1.std(0,ddof=1), x2a.std(0, ddof=1), x2b.std(0, ddof=1)\n stat_a, pval_a = tdist_2dist(mu_hat1, mu_hat2a, se_hat1, se_hat2a, n1, n2, var_eq=True)\n stat_b, pval_b = tdist_2dist(mu_hat1, mu_hat2b, se_hat1, se_hat2b, n1, n2, var_eq=False)\n reject_a, reject_b = np.mean(pval_a < 0.05), np.mean(pval_b < 0.05)\n holder.append(pd.DataFrame({'mu': mu,'method':['eq','neq'], 'power': [reject_a, reject_b]}))\nres_theory = pd.concat(holder).merge(dat_theory).sort_values(['method','mu']).reset_index(None, True)\nres_theory = res_theory.assign(nreject=lambda x: (x.power*nsim).astype(int))\nres_theory = pd.concat([res_theory.drop(columns='nreject'),\n pd.concat(prop_CI(count=res_theory.nreject,nobs=nsim,method='beta'),1)],1)\nres_theory.rename(columns={0:'lb',1:'ub','method':'Variance'}, inplace=True)\nres_theory = res_theory.assign(Variance=lambda x: x.Variance.map(di_method))\n\nplotnine.options.figure_size = (8, 3.5)\ngg_power_theory = (ggplot(res_theory,aes(x='theory',y='power')) +\n theme_bw() + geom_point() +\n geom_linerange(aes(ymin='lb',ymax='ub')) +\n facet_wrap('~Variance', labeller=label_both) +\n theme(legend_position=(0.5, -0.1), legend_direction='horizontal') +\n labs(x='Expected power',y='Actual power') +\n scale_y_continuous(limits=[0,1]) + scale_x_continuous(limits=[0,1]) +\n geom_abline(slope=1,intercept=0,color='blue',linetype='--'))\ngg_power_theory",
"Figure 2 shows that the power calculations line up exactly with the analytical expectations for both equal and unequal variances. Having thoroughly validated the type-I and type-II errors of this function we can now move onto testing whether the means from multiple normal distributions are equal. \n(3) F-test for equality of means\nSuppose there are $K$ normal data vectors: $x_1=(x_{1,1},\\dots,x_{1,n_1})$ to $x_k=(x_{k,1},\\dots,x_{1,n_k})$, and we want to test the null hypothesis of $\\mu_1 = \\mu_2 = \\dots = \\mu_K$ against an alternative hypothesis that there is at least 1 inequality in the means, where $x_{k,i} \\sim N(\\mu_k,\\sigma^2_k)$. As before, the variances of each vector may or may not be equal. When the variances are equal, the sum of squared differences between the total mean and any one group mean will be chi-square. Similarly, the sum of the sample variances will also have a chi-square distribution. Hence, the F-test for equality of means is the ratio of the variation \"between\" versus \"within\" the groups,\n$$\n\\begin{align}\nR &= \\frac{\\frac{1}{K-1}\\sum_{k=1}^K n_k (\\bar x_k - \\bar x)^2 }{\\frac{1}{N-K}\\sum_{k=1}^K (n_k - 1)\\hat\\sigma^2_k}, \\\nR &\\sim F(K-1, N-K) \\hspace{3mm} \\text{ if } \\sigma^2_k = \\sigma^2 \\hspace{3mm} \\forall k \\in {1,\\dots,K}\n\\end{align}\n$$\nWhere $N = \\sum_k n_k$. To account for heteroskedasticity in the data (i.e. non-equal variances), both the test and degrees of freedom need to be modified using an approach Welch proposed in 1951.\n$$\n\\begin{align}\nR_W &= \\frac{\\frac{1}{K-1}\\sum_{k=1}^K w_k (\\bar x_k - \\bar x_w)^2 }{1 + \\frac{2}{3}((K-2)\\nu)}, \\\nw_k &= n_k / \\hat\\sigma^2_k \\\n\\bar x_w &= \\frac{\\sum_{k=1}^K w_k \\bar x_k}{\\sum_{k=1}^K w_k}\\\n\\nu &= \\frac{3\\cdot \\sum_{k=1}^K \\Bigg[ \\frac{1}{n_k - 1} \\Big( 1 - \\frac{w_k}{\\sum_{k=1}^K w_k} \\Big)^2 \\Bigg]^2}{K^2-1} \\\nR_W &\\sim F(K-1, 1/\\nu) \\hspace{3mm} \\text{ if } \\sigma^2_k \\neq \\sigma^2_{-k} \\hspace{3mm} \\text{for at least one }k\n\\end{align}\n$$\nThe fdist_anova function below carries out an F-test for the equality of means using only the empirical means, standard deviations, and sample sizes for either variance assumption. In R this would be equivalent to using aov for equal variances or oneway.test for unequal variances. In python, it will replicate the scipy.stats.f_oneway function (for equal variances). I am unaware of a python function that does a Welch-adjustment (if you know please message me and I will provide an update with this information). As before, because the function only relies on the moments of the data, it can be fully vectorized to handle matrices of means, variances, and sample sizes. \nThe simulation below assesses how well the two F-test approaches (homoskedasticity vs heteroskedasticity) do when the ground truth variances are either all equal or vary. To vary the signal in the data, I generate the $K$ different means from $(-\\mu,\\dots,0,\\dots,\\mu)$, where $\\mu$ is referred to as \"mean dispersion\" in the subsequent figures.",
"def fdist_anova(mus, ses, ns, var_eq=False):\n lshape = len(mus.shape)\n assert lshape <= 2\n assert mus.shape == ses.shape\n if len(ns.shape) == 1:\n ns = cvec(ns.copy())\n else:\n assert ns.shape == mus.shape\n if lshape == 1:\n mus = cvec(mus.copy())\n ses = cvec(ses.copy())\n vars = ses ** 2 # variance\n n, k = ns.sum(0), len(ns) # Total samples and groups\n df1, df2 = (k - 1), (n - k)\n if var_eq: # classical anova\n xbar = np.atleast_2d(np.sum(mus * ns, 0) / n)\n vb = np.sum(ns*(xbar - mus)**2,0) / df1 # numerator is variance between\n vw = np.sum((vars * (ns - 1)), 0) / df2 # den is variance within\n fstat = vb / vw\n pval = stats.f(dfn=df1,dfd=df2).sf(fstat)\n else:\n w = ns / vars\n xbar = np.sum(w * mus, 0) / np.sum(w,0)\n num = np.sum(w * (xbar - mus) ** 2,0) / df1\n v = 3*np.sum((1-w/w.sum(0))**2 / (ns-1),0) / (k**2 - 1)\n den = 1 + 2*((k-2)*v)/3\n fstat = num / den\n pval = stats.f(dfn=df1, dfd=1/v).sf(fstat)\n return fstat, pval\n\nnlow, niter = 25, 5\nk_seq = [5, 7, 9]\ndisp_seq = np.round(np.arange(0, 0.51, 0.1),2)\ndgp_seq = np.repeat(['eq', 'neq'], 2)\nmethod_seq = np.tile(['eq', 'neq'], 2)\n\nholder = []\nnp.random.seed(1)\nfor k in k_seq:\n n_seq = np.arange(nlow, nlow+k * niter, niter)\n n_seq = np.tile(n_seq, [nsim, 1]).T\n nhigh = np.max(n_seq)\n dim_3d = [1, 1, k]\n for disp in disp_seq:\n mu_k = np.linspace(-disp, disp, num=k)\n se_k1 = np.repeat(1,k).reshape(dim_3d)\n se_k2 = np.exp(np.random.randn(k)).reshape(dim_3d)\n X1 = mu_k + se_k1 * np.random.randn(nhigh,nsim,k)\n X2 = mu_k + se_k2 * np.random.randn(nhigh, nsim, k)\n idx = np.tile(np.arange(nhigh),[k,nsim,1]).T <= np.atleast_3d(n_seq).T\n X1, X2 = np.where(idx, X1, np.nan), np.where(idx, X2, np.nan)\n # Calculate means and variance : (k x nsim)\n mu_X1, mu_X2 = np.nanmean(X1, 0).T, np.nanmean(X2, 0).T\n se_X1, se_X2 = np.nanstd(X1, 0, ddof=1).T, np.nanstd(X2, 0, ddof=1).T\n assert n_seq.shape == mu_X1.shape == se_X1.shape\n # Calculate significance\n fstat_eq1, pval_eq1 = fdist_anova(mus=mu_X1, ses=se_X1, ns=n_seq, var_eq=True)\n fstat_neq1, pval_neq1 = fdist_anova(mus=mu_X1, ses=se_X1, ns=n_seq, var_eq=False)\n fstat_eq2, pval_eq2 = fdist_anova(mus=mu_X2, ses=se_X2, ns=n_seq, var_eq=True)\n fstat_neq2, pval_neq2 = fdist_anova(mus=mu_X2, ses=se_X2, ns=n_seq, var_eq=False)\n reject_eq1, reject_neq1 = np.mean(pval_eq1 < alpha), np.mean(pval_neq1 < alpha)\n reject_eq2, reject_neq2 = np.mean(pval_eq2 < alpha), np.mean(pval_neq2 < alpha)\n reject_seq = [reject_eq1, reject_neq1, reject_eq2, reject_neq2]\n tmp = pd.DataFrame({'k':k,'disp':disp,'dgp':dgp_seq,'method':method_seq,'reject':reject_seq})\n # print(tmp)\n holder.append(tmp)\nres_f = pd.concat(holder).reset_index(None,True)\nres_f[['dgp','method']] = res_f[['dgp','method']].apply(lambda x: x.map(di_method),0)\nres_f.rename(columns={'dgp':'Variance'}, inplace=True)\n\nplotnine.options.figure_size = (8, 6)\ngg_fdist = (ggplot(res_f, aes(x='disp',y='reject',color='method.astype(str)')) +\n theme_bw() + geom_line() + geom_point() +\n facet_grid('k~Variance',labeller=label_both) +\n labs(x='Mean dispersion',y='Prob. of rejecting null') +\n geom_hline(yintercept=0.05,linetype='--') +\n scale_y_continuous(limits=[0,1]) +\n scale_color_discrete(name='Variance assumption'))\ngg_fdist",
"The simulations in Figure 3 show a similar finding to the that of t-test: when the ground truth variances are equal, there is almost no differences between the tests, and an expected 5% false positive rate occurs when the means are equal. However, for the unequal variance situation, the assumption of homoskedasticity leads to an inflated type-I error rate (as was the case for the t-test), but also lower power when the null is false (which was not the case for the t-test). Using the Welch adjustment is better in both cases. The one surprising finding is that the power of the test is not monotonically increasing in the heteroskedastic case. I am not completely sure why this is the case. One theory could be that since a higher mean dispersion leads to a higher variance of $\\bar{x}_w$, the ratio of the degrees of freedom may be more stable for lower values of $\\mu$, leading to a more consistent rejection rate.\n(4) Quick sanity checks\nAfter confirming the frequentist properties of a test statistic, it is worthwhile checking the results of any custom function to similar functions from other libraries. The tdist_2dist function will be compared to it's scipy counterpart on the Iris dataset.",
"from sklearn import datasets\nix, iy = datasets.load_iris(return_X_y=True)\nv1, v2 = ix[:,0], ix[:,1]\nk = 1\nall_stats = [stats.ttest_ind(v1, v2, equal_var=True)[k],\n tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=True)[k],\n stats.ttest_ind(v1, v2, equal_var=False)[k],\n tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=False)[k]]\npd.DataFrame({'test':'t-test',\n 'method':np.tile(['scipy','custom'],2),\n 'pval':all_stats})",
"So far so good. Next, we'll use rpy2 to get the results in R which supports equal and unequal variances with two different functions.",
"import rpy2.robjects as robjects\n\nmoments_x = pd.DataFrame({'x':ix[:,0],'y':iy}).groupby('y').x.describe()[['mean','std','count']]\n\nall_stats = [np.array(robjects.r('summary(aov(Sepal.Length~Species,iris))[[1]][1, 5]'))[0],\n fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=True)[1][0],\n np.array(robjects.r('oneway.test(Sepal.Length~Species,iris)$p.value'))[0],\n fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=False)[1][0]]\npd.DataFrame({'test':'F-test',\n 'method':np.tile(['R','custom'],2),\n 'pval':all_stats})",
"Once again the results are identical to the benchmark functions.\n(5) Application to AUROC inference\nThe empirical AUROC has an asymptotically normal distribution. Consequently, the difference between two AUROCs will also have an asymptotically normal distribution. For small sample sizes, the Hanley and McNeil adjustment to the AUROC standard error will obtain slightly better coverage. For a review of the notation and meaning of the AUROC, see a previous post here.\n$$\n\\begin{align}\nAUC &= \\frac{1}{n_1 n_0} \\sum_{i: y_i = 1} \\sum_{j: y_j=0} I(s_i > s_j) \\\n\\sigma_{N} &= \\sqrt{\\frac{n_1 + n_0 + 1}{12\\cdot n_1 n_0}} \\ \n\\sigma_{HM} &= \\sqrt{\\frac{AUC\\cdot (1-AUC) + q_1 + q_0}{n_1 n_0}} \\\nq_1 &= (n_1 - 1)\\cdot ( AUC / (2-AUC) - AUC^2) \\\nq_0&= (n_0- 1)\\cdot ( AUC^2 / (1+AUC) - AUC^2)\n\\end{align}\n$$\nThe standard error from the normal approximation ($\\sigma_N$) is only a function of the positive ($n_1$) and negative ($n_0$) class sample sizes whereas the Hanley and McNeil adjustment ($\\sigma_{HM}$) uses the empirical AUROC as well. The previous t- and F-tests relied on the fact that the sample mean had a variance that $O(1/n)$ so that $\\bar x \\sim N(\\mu, \\sigma^2/n)$. As can be seen from either formula, the sample variance for the AUROC can not be nearly re-written as a function of the sample size. We can still appeal to the t-test, the only difference being that the sample size is built into the variance estimate:\n$$\n\\begin{align}\n\\frac{AUC_A - AUC_B}{\\sqrt{\\sigma^2_{HM_A} + \\sigma^2_{HM_B}}} &\\sim N(0,1) \\hspace{3mm} \\text{ if $H_0$ is true} \n\\end{align}\n$$\nIn the simulation below, scores will come from one of two distributions. The negative class will have 200 samples drawn from a standard normal ($n_0$). The positive class scores will have 100 samples ($n_1$) drawn from either a standard normal (for the null distribution) and a normal with a mean at or above zero. The difference in AUROCs between these two distributions will be evaluated. Since the null distribution will have an (average) AUROC of 50%, the difference in these distribution will be above zero when the mean from the alternative is greater than zero.",
"n1, n0 = 100, 200\nn = n1 + n0\nn1n0 = n1 * n0\nmu_seq = np.round(np.arange(0, 1.01, 0.1),2)\n\ndef se_auroc_hanley(auroc, n1, n0):\n q1 = (n1 - 1) * ((auroc / (2 - auroc)) - auroc ** 2)\n q0 = (n0 - 1) * ((2 * auroc ** 2) / (1 + auroc) - auroc ** 2)\n se_auroc = np.sqrt((auroc * (1 - auroc) + q1 + q0) / (n1 * n0))\n return se_auroc\n\ndef se_auroc_normal(n1, n0):\n return np.sqrt( (n1 + n0 + 1) / (12 * n1 * n0) )\n\nnp.random.seed(1)\nholder = []\nfor mu in mu_seq:\n x1_null, x0 = np.random.randn(n1, nsim), np.random.randn(n0, nsim)\n x1 = mu + np.random.randn(n1, nsim)\n x, x_null = np.concatenate((x1, x0)), np.concatenate((x1_null, x0))\n auc = (np.sum(stats.rankdata(x, axis=0)[:n1],0) - n1*(n1+1)/2) / n1n0\n auc_null = (np.sum(stats.rankdata(x_null, axis=0)[:n1], 0) - n1 * (n1 + 1) / 2) / n1n0\n se_HM, se_null_HM = se_auroc_hanley(auc, n1, n0), se_auroc_hanley(auc_null, n1, n0)\n se_N = se_auroc_normal(n1, n0)\n # Do pairwise t-test\n dauc = auc - auc_null\n t_score_HM = dauc / np.sqrt(se_HM**2 + se_null_HM**2)\n t_score_N = dauc / np.sqrt(2 * se_N**2)\n dist_null = stats.t(df=2*n - 2)\n pval_HM = 2 * np.minimum(dist_null.sf(t_score_HM), dist_null.cdf(t_score_HM))\n pval_N = 2 * np.minimum(dist_null.sf(t_score_N), dist_null.cdf(t_score_N))\n reject_HM, reject_N = np.mean(pval_HM < alpha), np.mean(pval_N < alpha)\n tmp = pd.DataFrame({'method':['HM','N'],'mu':mu, 'reject':[reject_HM, reject_N]})\n holder.append(tmp)\n# Merge and analyse\nres_auc = pd.concat(holder).reset_index(None, True)\nres_auc = res_auc.assign(auc=lambda x: stats.norm.cdf(x.mu/np.sqrt(2)),\n nreject=lambda x: (x.reject*nsim).astype(int))\nres_auc = pd.concat([res_auc.drop(columns='nreject'),\n pd.concat(prop_CI(count=res_auc.nreject,nobs=nsim,method='beta'),1)],1)\nres_auc.rename(columns={0:'lb',1:'ub'},inplace=True)\n \n# plot\nplotnine.options.figure_size = (5, 4)\ngg_auc = (ggplot(res_auc,aes(x='auc',y='reject',color='method')) + theme_bw() +\n geom_line() +\n labs(x='Alternative hypothesis AUROC',y='Prob. of rejecting null') +\n geom_hline(yintercept=0.05,linetype='--') +\n geom_linerange(aes(ymin='lb',ymax='ub')) + \n scale_color_discrete(name='Method',labels=['Hanley-McNeil','Normal']))\ngg_auc",
"Figure 4 shows that the standard errors from both methods yield almost identical results. Furthermore, the standard errors are conservative (too large), leading to an under-rejection of the null hypothesis when the null is true (i.e. the alternative hypothesis AUROC is 50%). The alternative hypothesis AUROC needs to reach around 53% before the rejection rate reaches the expected normal level. However, between 53%-70%, the power of the test approaches 100% for this sample size combination."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tuanavu/python-cookbook-3rd | notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb | mit | [
"Unpacking a Sequence into Separate Variables\nProblem\n\nYou have an N-element tuple or sequence that you would like to unpack into a collection of N variables.\n\nSolution\nAny sequence (or iterable) can be unpacked into variables using a simple assignment operation. The only requirement is that the number of variables and structure match the sequence.\nExample 1",
"# Example 1\np = (4, 5)\nx, y = p\nprint x\nprint y",
"Example 2",
"# Example 2\ndata = ['ACME', 50, 91.1, (2012, 12, 21)]\nname, shares, price, date = data\nprint name\nprint date\n\nname, shares, price, (year, mon, day) = data\nprint name\nprint year\nprint mon\nprint day",
"Example 3\n\nIf there is a mismatch in the number of elements, you’ll get an error",
"# Example 3\n# error with mismatch in number of elements\np = (4, 5)\nx, y, z = p",
"Example 4\n\nUnpacking actually works with any object that happens to be iterable, not just tuples or lists. This includes strings, files, iterators, and generators.",
"# Example 4: string\ns = 'Hello'\na, b, c, d, e = s\nprint a\nprint b\nprint e",
"Example 5\n\nDiscard certain values",
"# Example 5\n# discard certain values\ndata = [ 'ACME', 50, 91.1, (2012, 12, 21) ]\n_, shares, price, _ = data\nprint shares\nprint price"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
proinsias/gilbert-shannon-reeds | Gilbert-Shannon-Reeds.ipynb | mit | [
"import multiprocessing as mp\nimport typing\n\nimport matplotlib\nmatplotlib.use('nbagg')\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker\nimport numpy as np\nimport scipy as sp\nimport sklearn.utils\n\nfrom IPython import get_ipython # For automatically-generated python file.\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2",
"From Wikipedia:\n\nIn the mathematics of shuffling playing cards, the Gilbert–Shannon–Reeds model is a probability distribution on riffle shuffle permutations that has been reported to be a good match for experimentally observed outcomes of human shuffling, and that forms the basis for a recommendation that a deck of cards should be riffled seven times in order to thoroughly randomize it. ... The deck of cards is cut into two packets... [t]hen, one card at a time is repeatedly moved from the bottom of one of the packets to the top of the shuffled deck.\n\nHere we implement the Gilbert–Shannon–Reeds model, and verify this recommendation of seven shuffles.\nNote that the functions below have doctest examples.\nTo test the functions, just run pytest in the top level of the repository.\nFirst, define a function to determine how many cards to split into our right hand.",
"def get_random_number_for_right_deck(n: int, seed: int=None, ) -> int:\n \"\"\"\n Return the number of cards to split into the right sub-deck.\n\n :param n: one above the highest number that could be returned by this\n function.\n :param seed: optional seed for the random number generator to enable\n deterministic behavior.\n :return: a random integer (between 1 and n-1) that represents the\n desired number of cards.\n\n Examples:\n\n >>> get_random_number_for_right_deck(n=5, seed=0, )\n 1\n \"\"\"\n random = sklearn.utils.check_random_state(seed=seed, )\n \n return random.randint(low=1, high=n, )",
"Next, define a function to determine which hand to drop a card from.",
"def should_drop_from_right_deck(n_left: int, n_right:int, seed: int=None, ) -> bool:\n \"\"\"\n Determine whether we drop a card from the right or left sub-deck.\n \n Either `n_left` or `n_right` (or both) must be greater than zero.\n \n :param n_left: the number of cards in the left sub-deck.\n :param n_right: the number of cards in the right sub-deck.\n :param seed: optional seed for the random number generator to\n enable deterministic behavior.\n :return: True if we should drop a card from the right sub-deck,\n False otherwise.\n \n Examples:\n\n >>> should_drop_from_right_deck(n_left=32, n_right=5, seed=0, )\n True\n\n >>> should_drop_from_right_deck(n_left=0, n_right=5, )\n True\n\n >>> should_drop_from_right_deck(n_left=7, n_right=0, )\n False\n\n >>> should_drop_from_right_deck(n_left=0, n_right=0, )\n Traceback (most recent call last):\n ...\n ValueError: Either `n_left` or `n_right` (or both) must be greater than zero.\n \"\"\"\n if n_left > 0 and n_right > 0:\n # There are cards left in both sub-decks, so pick a\n # sub-deck at random.\n random = sklearn.utils.check_random_state(seed=seed, )\n num = random.randint(low=0, high=2, )\n boolean = (num == 0)\n return boolean\n elif n_left == 0 and n_right > 0:\n # There are no more cards in the left sub-deck, only\n # the right sub-deck, so we drop from the right sub-deck.\n return True\n elif n_left > 0 and n_right == 0:\n # There are no more cards in the right sub-deck, only\n # the left sub-deck, so we drop from the left sub-deck.\n return False\n else:\n # There are no more cards in either sub-deck.\n raise ValueError ('Either `n_left` or `n_right` '\\\n '(or both) must be greater than zero.')",
"Now we can implement the 'Gilbert–Shannon–Reeds' shuffle.",
"def shuffle(deck: np.array, seed: int=None, ) -> np.array:\n \"\"\"\n Shuffle the input 'deck' using the Gilbert–Shannon–Reeds method.\n\n :param seq: the input sequence of integers.\n :param seed: optional seed for the random number generator\n to enable deterministic behavior.\n :return: A new deck containing shuffled integers from the\n input deck.\n\n Examples:\n\n >>> shuffle(deck=np.array([0, 7, 3, 8, 4, 9, ]), seed=0, )\n array([4, 8, 3, 7, 0, 9])\n \"\"\"\n \n # First randomly divide the 'deck' into 'left' and 'right'\n # 'sub-decks'.\n num_cards_in_deck = len(deck)\n orig_num_cards_right_deck = get_random_number_for_right_deck(\n n=num_cards_in_deck,\n seed=seed,\n )\n\n # By definition of get_random_number_for_right_deck():\n n_right = orig_num_cards_right_deck\n \n n_left = num_cards_in_deck - orig_num_cards_right_deck\n \n shuffled_deck = np.empty(num_cards_in_deck, dtype=int)\n \n # We will drop a card n times.\n for index in range(num_cards_in_deck):\n drop_from_right_deck = should_drop_from_right_deck(\n n_left=n_left,\n n_right=n_right,\n seed=seed,\n )\n \n if drop_from_right_deck is True:\n # Drop from the bottom of right sub-deck\n # onto the shuffled pile.\n shuffled_deck[index] = deck[n_right - 1]\n n_right = n_right - 1\n else:\n # Drop from the bottom of left sub-deck\n # onto the shuffled pile.\n shuffled_deck[index] = deck[\n orig_num_cards_right_deck + n_left - 1\n ]\n n_left = n_left - 1\n \n return shuffled_deck",
"Finally, we run some experiments to confirm the recommendation of seven shuffles for a deck of 52 cards.",
"num_cards = 52\nmax_num_shuffles = 20\nnum_decks = 10000\n\n# Shuffling the cards using a uniform probability\n# distribution results in the same expected frequency\n# for each card in each deck position.\nuniform_rel_freqs = np.full(\n shape=[num_cards, num_cards],\n fill_value=1./num_cards,\n)\n\ndef calculate_differences(\n num_shuffles: int\n ) -> typing.Tuple[np.float64, np.float64, np.float64,]:\n \"\"\"\n Calculate differences between observed and uniform distributions.\n \n :param The number of times to shuffle the deck each time.\n :return Three metrics for differences between the\n observed and uniform relative frequencies.\n \"\"\"\n shuffled_decks = np.empty(shape=[num_decks, num_cards], )\n\n # First create a random deck.\n orig_deck = np.array(range(num_cards))\n np.random.shuffle(orig_deck)\n\n for i in range(num_decks):\n # Now shuffle this deck using the Gilbert–Shannon–Reeds method.\n new_deck = orig_deck\n for j in range(num_shuffles):\n new_deck = shuffle(new_deck)\n \n shuffled_decks[i] = new_deck\n\n # Calculate the relative frequencies of each card in each position.\n rel_freqs = np.empty(shape=[num_cards, num_cards], )\n\n for i in range(num_cards):\n col = shuffled_decks[:, i]\n \n # Make sure that each card appears at least once in this\n # position, by first adding the entire deck, and then\n # subtracting 1 from the total counts of each card in\n # this position.\n col = np.append(col, orig_deck)\n col_freqs = sp.stats.itemfreq(col)[:, 1]\n col_freqs = col_freqs - 1\n rel_freqs[i] = col_freqs / num_decks\n \n # Here I use three metrics for differences between the\n # observed and uniform relative frequencies:\n # * The sum of the squared element-wise differences,\n # * The relative information entropy, and\n # * The Kolmogorov-Smirnov statistic.\n sum_squared = np.sum(np.square(np.subtract(uniform_rel_freqs, rel_freqs)))\n entropy = sp.stats.entropy(rel_freqs.flatten(), uniform_rel_freqs.flatten())\n kstest = sp.stats.kstest(rel_freqs.flatten(), 'uniform').statistic\n \n return sum_squared, entropy, kstest\n\n# Now run the experiment using all our CPUs!\n\nnum_cpus = max(mp.cpu_count() - 2, 1)\n\nwith mp.Pool(num_cpus) as p:\n results = p.map(calculate_differences, range(1, max_num_shuffles+1))\n results = np.array(results)\n\nsums_squared = results[:, 0]\nentropies = results[:, 1]\nkstests = results[:, 2]",
"The KS statistics are of most use here. You can see how the statistic approaches its maximum value around num_shuffles = 7.",
"fs = 14\n\nfig, ax = plt.subplots(figsize=(8, 6), dpi=300)\nax.scatter(range(1, max_num_shuffles + 1), kstests, );\nax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True))\nax.set_xlabel('Number of Shuffles', fontsize=fs, )\nax.set_ylabel('Kolmogorov-Smirnov Statistic', fontsize=fs, )\nax.set_xlim([0, max_num_shuffles + 1])\nplt.show();\n\nfig, ax = plt.subplots(figsize=(8, 6), dpi=300)\nax.scatter(range(1, max_num_shuffles + 1), sums_squared, );\nax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True))\nax.set_xlabel('Number of Shuffles', fontsize=fs, )\nax.set_ylabel('Sum of the Squared Differences', fontsize=fs, )\nax.set_xlim([0, max_num_shuffles + 1])\nplt.show();\n\nfig, ax = plt.subplots(figsize=(8, 6), dpi=300)\nax.scatter(range(1, max_num_shuffles + 1), entropies, );\nax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True))\nax.set_xlabel('Number of Shuffles', fontsize=fs, )\nax.set_ylabel('Relative Information Entropy', fontsize=fs, )\nax.set_xlim([0, max_num_shuffles + 1])\nplt.show();"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs | 2.3/examples/eccentric_ellipsoidal.ipynb | gpl-3.0 | [
"Eccentric Ellipsoidal (Heartbeat)\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"",
"As always, let's do imports and initialize a logger and a new bundle.",
"import phoebe\nimport numpy as np\n\nb = phoebe.default_binary()",
"Now we need a highly eccentric system that nearly overflows at periastron and is slightly eclipsing.",
"b.set_value('q', value=0.7)\nb.set_value('period', component='binary', value=10)\nb.set_value('sma', component='binary', value=25)\nb.set_value('incl', component='binary', value=0)\nb.set_value('ecc', component='binary', value=0.9)\n\nprint(b.filter(qualifier='requiv*', context='component'))\n\nb.set_value('requiv', component='primary', value=1.1)\nb.set_value('requiv', component='secondary', value=0.9)",
"Adding Datasets\nWe'll add light curve, orbit, and mesh datasets.",
"b.add_dataset('lc', \n compute_times=phoebe.linspace(-2, 2, 201),\n dataset='lc01')\n\nb.add_dataset('orb', compute_times=phoebe.linspace(-2, 2, 201))\n\nanim_times = phoebe.linspace(-2, 2, 101)\n\nb.add_dataset('mesh', \n compute_times=anim_times,\n coordinates='uvw',\n dataset='mesh01')",
"Running Compute",
"b.run_compute(irrad_method='none')",
"Plotting",
"afig, mplfig = b.plot(kind='lc', x='phases', t0='t0_perpass', show=True)",
"Now let's make a nice figure.\nLet's go through these options:\n* time: make the plot at this single time\n* z: by default, orbits plot in 2d, but since we're overplotting with a mesh, we want the z-ordering to be correct, so we'll have them plot with w-coordinates in the z-direction.\n* c: (will be ignored by the mesh): set the color to blue for the primary and red for the secondary (will only affect the orbits as the light curve is not tagged with any component).\n* fc: (will be ignored by everything but the mesh): set the facecolor to be blue for the primary and red for the secondary.\n* ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to \"see-through\" the triangle edges.\n* uncover: for the orbit, uncover based on the current time.\n* trail: for the orbit, let's show a \"trail\" behind the current position.\n* highlight: disable highlighting for the orbit, since the mesh will be in the same position.\n* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.",
"afig, mplfig = b.plot(time=0.0, \n z={'orb': 'ws'},\n c={'primary': 'blue', 'secondary': 'red'},\n fc={'primary': 'blue', 'secondary': 'red'}, \n ec='face', \n uncover={'orb': True},\n trail={'orb': 0.1},\n highlight={'orb': False},\n tight_layout=True,\n show=True)",
"Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:\n\ntimes: pass our array of times that we want the animation to loop over.\npad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.\nanimate: self-explanatory.\nsave: we could use show=True, but that doesn't always play nice with jupyter notebooks\nsave_kwargs: may need to change these for your setup, to create a gif, passing {'writer': 'imagemagick'} is often useful.",
"afig, mplfig = b.plot(times=anim_times, \n z={'orb': 'ws'},\n c={'primary': 'blue', 'secondary': 'red'},\n fc={'primary': 'blue', 'secondary': 'red'}, \n ec='face', \n uncover={'orb': True},\n trail={'orb': 0.1},\n highlight={'orb': False},\n tight_layout=True, pad_aspect=False,\n animate=True, \n save='eccentric_ellipsoidal.gif',\n save_kwargs={'writer': 'imagemagick'})",
""
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
shengshuyang/PCLCombinedObjectDetection | TheanoLearning/TheanoLearning/theano_demo.ipynb | gpl-2.0 | [
"Basics about Theano\nFirst let's do the standard import",
"import time\nimport numpy as np\n#import matplotlib.pyplot as plt\nimport theano\n# By convention, the tensor submodule is loaded as T\nimport theano.tensor as T",
"The following are all Theano defined types:",
"A = T.matrix('A')\nb = T.scalar('b')\nv = T.vector('v')\n\nprint A.type\nprint b.type\nprint v.type",
"All those types are symbolic, meaning they don't have values at all. Theano variables can be defined with simple relations, such as",
"a = T.scalar('a')\nc = a**2\n\nprint a.type\nprint c.type",
"note that c is also a symbolic scalar here.\nWe can also define a function:",
"f = theano.function([a],a**2)\nprint f",
"Again, Theano functions are symbolic as well. We must evaluate the function with some input to check its output. For example:",
"print f(2)",
"Shared variable is also a Theano type",
"shared_var = theano.shared(np.array([[1, 2], [3, 4]], \n dtype=theano.config.floatX))\nprint 'variable type:'\nprint shared_var.type\nprint '\\nvariable value:'\nprint shared_var.get_value()\n\nshared_var.set_value(np.array([[4, 5], [6, 7]], \n dtype=theano.config.floatX))\nprint '\\nvalues changed:'\nprint shared_var.get_value()",
"They have a fixed value, but are still treated as symbolic (can be input to functions etc.).\nShared variables are perfect to use as state variables or parameters. Fore example, in CNN each layer has a parameter matrix 'W', we need to store its value so we can perform testing against thousands of images, yet we also need to update their values during training.\nAs a side note, since they have fixed value, they don't need to be explicitly specified as input to a function:",
"bias = T.matrix('bias')\nshared_squared = shared_var**2 + bias\n\nbias_value = np.array([[1,1],[1,1]], \n dtype=theano.config.floatX)\n\nf1 = theano.function([bias],shared_squared)\nprint f1(bias_value)\nprint '\\n'\nprint shared_squared.eval({bias:bias_value})",
"The example above defines a function that takes square of a shared_var and add by a bias. When evaluating the function we only provide value for bias because we know that the shared variable is fixed value.\nGradients\nTo calculate gradient we can use a T.grad() function to return a tensor variable. We first define some variable:",
"def square(a):\n return a**2\n\na = T.scalar('a')\nb = square(a)\nc = square(b)",
"Then we define two ways to evaluate gradient:",
"grad = T.grad(c,a)\nf_grad = theano.function([a],grad)",
"The TensorVariable grad calculates gradient of b w.r.t. a. \nThe function f_grad takes a as input and grad as output, so it should be equivalent. However, evaluating them have different formats:",
"print grad.eval({a:10})\nprint f_grad(10)",
"MLP Demo with Theano",
"class layer(object):\n def __init__(self, W_init, b_init, activation):\n\n [n_output, n_input] = W_init.shape\n assert b_init.shape == (n_output,1) or b_init.shape == (n_output,)\n self.W = theano.shared(value = W_init.astype(theano.config.floatX),\n name = 'W',\n borrow = True)\n self.b = theano.shared(value = b_init.reshape(n_output,1).astype(theano.config.floatX),\n name = 'b',\n borrow = True,\n broadcastable=(False, True))\n self.activation = activation\n self.params = [self.W, self.b]\n #return super(layer, self).__init__(*args, **kwargs)\n def output(self, x):\n lin_output = T.dot(self.W, x) + self.b\n if self.activation is not None:\n non_lin_output = self.activation(lin_output)\n return ( lin_output if self.activation is None else non_lin_output )\n\nt1 = time.time()\nW_init = np.ones([3,3])\nb_init = np.array([1,3,2.5]).transpose()\nactivation = None #T.nnet.sigmoid\nL = layer(W_init,b_init,activation)\nx = T.vector('x')\nout = L.output(x)\nprint out.eval({x:np.array([1.0,2,3.5]).astype(theano.config.floatX)})\nt2 = time.time()\nprint 'time:', t2-t1",
"Plotting Flowchart (or Theano Graph)\nThe code snippet below can plot a flowchart that shows what happens inside our mlp layer. Note that the input out is the output of layer, as defined above.",
"from IPython.display import SVG\nSVG(theano.printing.pydotprint(out, return_image=True,\n compact = True, \n var_with_name_simple = True,\n format='svg'))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tylere/earthengine-api | python/examples/ipynb/EarthEngineColabInstall.ipynb | apache-2.0 | [
"Copyright 2018 Google LLC.\nSPDX-License-Identifier: Apache-2.0\nEarth Engine Colab installation\nThis notebook demonstrates a simple installation of Earth Engine to a Colab notebook.\nColab setup\nThis notebook section installs the Earth Engine Python API on your Colab virtual machine (VM) and will need to be executed each time a new Colab notebook is created. Colab VMs are recycled after they are idle for a while.\nInstall Earth Engine\nThe Earth Engine Python API and command line tools can be installed using Python's pip package installation tool. The following notebook cell line is starts with ! to indicate that a shell command should be invoked.",
"!pip install earthengine-api",
"Authenticate to Earth Engine\nIn order to access Earth Engine, signup at signup.earthengine.google.com.\nOnce you have signed up and the Earth Engine package is installed, use the earthengine authenticate shell command to create and store authentication credentials on the Colab VM. These credentials are used by the Earth Engine Python API and command line tools to access Earth Engine servers.\nYou will need to follow the link to the permissions page and give this notebook access to your Earth Engine account. Once you have authorized access, paste the authorization code into the input box displayed in the cell output.",
"import ee\n\n# Check if the server is authenticated. If not, display instructions that\n# explain how to complete the process.\ntry:\n ee.Initialize()\nexcept ee.EEException:\n !earthengine authenticate",
"Test the installation\nImport the Earth Engine library and initialize it with the authorization token stored on the notebook VM. Also import a display widget and display a thumbnail image of an Earth Engine dataset.",
"import ee\nfrom IPython.display import Image\n\n# Initialize the Earth Engine module.\nee.Initialize()\n\n# Display a thumbnail of a sample image asset.\nImage(url=ee.Image('CGIAR/SRTM90_V4').getThumbUrl({'min': 0, 'max': 3000}))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ForestClaw/forestclaw | applications/clawpack/advection/2d/disk/swirl.ipynb | bsd-2-clause | [
"Swirl\n\nScalar advection problem with swirling velocity field.\n\nRun code in serial mode (will work, even if code is compiled with MPI)",
"!swirl ",
"Or, run code in parallel mode (command may need to be customized, depending your on MPI installation.)",
"!mpirun -n 4 swirl",
"Create PNG files for web-browser viewing, or animation.",
"%run make_plots.py",
"View PNG files in browser, using URL above, or create an animation of all PNG files, using code below.",
"%pylab inline\n\nimport glob\nfrom matplotlib import image\nfrom clawpack.visclaw.JSAnimation import IPython_display\nfrom matplotlib import animation\n\nfigno = 0\nfname = '_plots/*fig' + str(figno) + '.png'\nfilenames=sorted(glob.glob(fname))\n\nfig = plt.figure()\nim = plt.imshow(image.imread(filenames[0]))\ndef init():\n im.set_data(image.imread(filenames[0]))\n return im,\n\ndef animate(i):\n image_i=image.imread(filenames[i])\n im.set_data(image_i)\n return im,\n\nanimation.FuncAnimation(fig, animate, init_func=init,\n frames=len(filenames), interval=500, blit=True)",
""
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
leoferres/prograUDD | labs/ejercicios_for.ipynb | mit | [
"Una contraseña es valida si cumple con lo siguiente:",
"passwd = input(\"Ingrese su contraseña: \")\n\nif len(passwd) < 8:\n print(\"Contraseña no valida, faltan caracteres\")\nelse:\n cantnum = 0\n cantsimb = 0\n for i in passwd:\n if i.isdigit():\n cantnum += 1\n elif not i.isalpha():\n cantsimb += 1\n if cantsimb != 1:\n print(\"Contraseña no valida, debe tener solo un caracter no letra ni numero\")\n elif cantnum == 0:\n print(\"Contraseña no valida, no tiene numeros\")\n else:\n print(\"Contraseña valida!!!\")\n\n# aqui probamos primero para conocer las funciones con is.\nfor i in passwd:\n print(i, i.isdigit(), i.isalpha())",
"Imprima todos los numeros primos positivos menores que n, solicite el n.",
"n = int(input(\"Ingrese el n: \"))\n\n#ojo... podriamos saltar de a 2, desde el 3.\nfor i in range(2,n):\n cantdiv = 0\n for j in range(2,int(i**0.5)+1):\n if i%j == 0:\n cantdiv += 1\n if cantdiv == 0:\n print(i, \"es primo\")",
"Solicite n y k y calcule $\\binom{n}{k}$",
"n = int(input(\"Ingrese el n: \"))\n\nk = int(input(\"Ingrese el k: \"))\n\nif n < 0 or k < 0:\n print(\"No se puede calcular, hay un elemento negativo\")\nelif n < k:\n print(\"No se puede calcular, n debe ser mayor o igual a k\")\nelse:\n menor = k\n if (n-k) < menor:\n menor = (n-k)\n resultado = 1\n for i in range(menor):\n resultado *= (n-i)/(i+1)\n print(\"El resultado es =\", resultado)",
"Se define como numero perfecto, aquél natural positivo en que la suma de sus divisores, no incluido si mismo, es el mismo número.",
"# obviaremos chequeo que sea positivo\nn = int(input(\"Ingrese su n: \"))\n\nfor i in range(1, n+1):\n suma = 0\n for j in range(1,i):\n if i%j == 0:\n suma += j\n if suma == i:\n print(i, \"es perfecto\")\n\nfor i in range(1, 496):\n if 496%i == 0:\n print(i)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xMyrst/BigData | python/howto/005_Estructuras de control.ipynb | gpl-3.0 | [
"ESTRUCTURAS DE CONTROL\n<br />\n INSTRUCCIONES IF, ELIF, ELSE\nEl intérprete de Python ejecuta un programa ejecutando una instrucción cada vez.\nif <condition>:\n <do something>\nelif <condition2>:\n <do other thing>\nelse:\n <do other thing>\n\nRecordar que en Python los bloques se delimitan por sangrado.\n\n\nCuando ponemos los dos puntos al final de la primera línea del condicional, todo lo que vaya a continuación con un nivel de sangrado superior se considera dentro del condicional.\n\n\nEn cuanto escribimos la primera línea con un nivel de sangrado inferior, hemos cerrado el condicional.",
"x, y = 2, 0\nif x > y:\n print(\"x es mayor que y\")\n print(\"x sigue siendo mayor que y\")\n\nif 1 < 0:\n print(\"1 es mayor que 0\") # esto está dentro de bucle, pero no se escribe por que no cumple que 1 sea menor que 0\nprint(\"Esto se ejecuta siempre\") # esto no está dentro del bloque y SE ejecuta siempre; ya que no pertenece al IF\n\nif 1 < 0:\n print(\"1 es menor que 0\")\n print(\"1 sigue siendo menor que 0\") # error de sangrado, porque el sangrado es superior",
"Si queremos añadir ramas adicionales al condicional, podemos emplear la instrucción elif (abreviatura de else if). Para la parte final, que debe ejecutarse si ninguna de las condiciones anteriores se ha cumplido, usamos la instrucción else.",
"x, y = 2, 0\nif x > y:\n print(\"x es mayor que y\")\nelse:\n print(\"x es menor que y\")\n\n# Uso de ELIF (contracción de else if)\nx, y = 2 , 0\nif x < y:\n print(\"x es menor que y\")\nelif x == y:\n print(\"x es igual a y\")\nelse:\n print(\"x es mayor que y\")",
"<br />\n EXPRESIONES TERNARIAS\nLas expresiones ternarias en Python tienen la siguiente forma:\n\ne = valorSiTrue if <condicion> else valorSiFalse\n\nPermite definir la instrucción if-else en una sola línea. La expresión anterior es equivalente a:\n\nif <condicion>:\n e = valorSiTrue\nelse: \n e = valorSiFalse",
"# Una expresión ternaria es una contracción de un bucle FOR\n# Se define la variable para poder comparar\nx = 8\n# Se puede escrbir la expresión de esta manera o almacenando el resultado dentro de una variable \n# para trabajar posteriormente con ella\n\"Hola CICE\" if x == 8 else \"Adios CICE\"\na = 'x es igual a 8' if x == 8 else 'x es distinto de 8'\na",
"<br />\n BUCLES FOR Y WHILE\nEl bucle FOR\nPermite realizar una tarea un número fijo de veces. Se utiliza para recorrer una colección completa de elementos (una tupla, una lista, un diccionario, etc ):\n for <element> in <iterable_object>:\n <hacer algo...>\n\n\nAquí el objeto <iterable_object> puede ser una lista, tupla, array, diccionario, etc.\nEl bucle se repite un número fijo de veces, que es la longitud de la colección de elementos.",
"# itera soble los elementos de la tupla\nfor elemento in (1, 2, 3, 4, 5):\n print(elemento)\n\n# Suma todos los elementos de la tupla\nsuma = 0\nfor elemento in (1, 2, 3, 4, 5):\n suma = suma + elemento\nprint(suma)\n\n# Muestra todos los elemntos de una lista\ndias = [\"Lunes\", \"Martes\", \"Miércoles\", \"Jueves\", 'Viernes', 'Sábado', 'Domingo']\nfor nombre in dias:\n print(nombre)\n\n# La instrucción CONTINUE permite saltar de una iteración a otra\n# Crea la lista de enteros en el intervalo [0,10)\n# No se ejecuta si j es un número par, gracias a la sentencia continue\nfor j in range(10):\n if j % 2 == 0:\n continue \n print(j)\n\n# También es posible recorrer un diccionario mediante el bucle FOR\n# dic.items\n# dic.keys\n# dic.values\ndic = {1:'Lunes', 2:'Martes', 3:'Miércoles' }\n# Python 3.5\nfor i in dic.items():\n print(i)\n\ndic = {1:'Lunes', 2:'Martes', 3:'Miércoles' }\n# Python 3.5\nfor (clave, valor) in dic.items():\n print(\"La clave es: \" + str(clave) + \" y el valor es: \" + valor)\n\ndic.items(), dic.keys(), dic.values()",
"<br />\nEl bucle WHILE\nLos bucles while repetirán las instrucciones anidadas en él mientras se cumpla una condición:\n while <condition>:\n <things to do>\n\n\nEl número de iteraciones es variable, depende de la condición.\n+Como en el caso de los condicionales, los bloques se separan por sangrado sin necesidad de instrucciones del tipo end.",
"i = -2\nwhile i < 5:\n print(i)\n i = i + 1\nprint(\"Estoy fuera del while\")\n\n# Para interrumpir un bucle WHILE se utiliza la instrucción BREAK\n# Se pueden usar expresiones de tipo condicional (AND, OR, NOT)\ni = 0\nj = 1\nwhile i < 5 and j == 1:\n print(i)\n i = i + 1\n if i == 3:\n break",
"<BR />\nLA FUNCIÓN ENUMERATE\nCuando trabajamos con secuencias de elementos puede resultar útil conocer el índice de cada elemento. La función enumerate devuelve una secuencia de tuplas de la forma (i, valor).\nMediante un bucle es posible recorrerse dicha secuencia:",
"# Creamos una lista llamada ciudades\n# Recorremos dicha lista mediante la instrucción FOR asignando una secuencia de números (i) a cada valor de la lista\nciudades = [\"Madrid\", \"Sevilla\", \"Segovia\", \"Valencia\" ]\nfor (i, valor) in enumerate(ciudades):\n print('%d: %s' % (i, valor))\n\n# Uso de la función reversed \n# la función reversed devuelve un iterador inverso de una lista\nfor (i, valor) in enumerate(reversed(ciudades)):\n print('%d: %s' % (i, valor))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Upward-Spiral-Science/team1 | code/Assignment11_Group.ipynb | apache-2.0 | [
"Group",
"from mpl_toolkits.mplot3d import axes3d\nimport matplotlib.pyplot as plt\n%matplotlib inline \nimport numpy as np\nimport urllib2\nimport scipy.stats as stats\n\nnp.set_printoptions(precision=3, suppress=True)\nurl = ('https://raw.githubusercontent.com/Upward-Spiral-Science'\n '/data/master/syn-density/output.csv')\ndata = urllib2.urlopen(url)\ncsv = np.genfromtxt(data, delimiter=\",\")[1:] # don't want first row (labels)\n\n# chopping data based on thresholds on x and y coordinates\nx_bounds = (409, 3529)\ny_bounds = (1564, 3124)\n\ndef check_in_bounds(row, x_bounds, y_bounds):\n if row[0] < x_bounds[0] or row[0] > x_bounds[1]:\n return False\n if row[1] < y_bounds[0] or row[1] > y_bounds[1]:\n return False\n if row[3] == 0:\n return False\n \n return True\n\nindices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv,\n x_bounds, y_bounds))\ndata_thresholded = csv[indices_in_bound]\nn = data_thresholded.shape[0]\n\n\ndef synapses_over_unmasked(row):\n s = (row[4]/row[3])*(64**3)\n return [row[0], row[1], row[2], s]\n\nsyn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)\nsyn_normalized = syn_unmasked\nprint 'end setup'",
"1) Boxplot of General Density",
"# syn_unmasked_T = syn_unmasked.values.T.tolist()\n# columns = [syn_unmasked[i] for i in [4]]\n\nplt.boxplot(syn_unmasked[:,3], 0, 'gD')\nplt.xticks([1], ['Set'])\nplt.ylabel('Density Distribution')\nplt.title('Density Distrubution Boxplot')\nplt.show()",
"2) Is the spike noise? More evidence.\nWe saw from Emily's analysis that there is strong evidence against the spike being noise. If we see that the spike is noticeable in the histogram of synapses as well as the histogram of synapse density, we will gain even more evidence that the spike is noise.",
"figure = plt.figure()\nplt.hist(data_thresholded[:,4],5000)\nplt.title('Histogram of Synapses in Brain Sample')\nplt.xlabel('Synapses')\nplt.ylabel('frequency')",
"Since we don't see the spike in the histogram of synapses, the spike may be some artifact of the unmasked value. Let's take a look!\n3) What is the spike? We still don't know.",
"plt.hist(data_thresholded[:,3],5000)\nplt.title('Histogram of Unmasked Values')\nplt.xlabel('unmasked')\nplt.ylabel('frequency')",
"4) Synapses and unmasked: Spike vs Whole Data Set",
"# Spike\na = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)\nspike = a[np.logical_and(a <= 0.0015, a >= 0.0012)]\nprint \"Average Density: \", np.mean(spike)\nprint \"Std Deviation: \", np.std(spike)\n\n# Histogram\nn, bins, _ = plt.hist(spike, 2000)\nplt.title('Histogram of Synaptic Density')\nplt.xlabel('Synaptic Density (syn/voxel)')\nplt.ylabel('frequency')\n\nbin_max = np.where(n == n.max())\n\nprint 'maxbin', bins[bin_max][0]\n\nbin_width = bins[1]-bins[0]\nsyn_normalized[:,3] = syn_normalized[:,3]/(64**3)\nspike = syn_normalized[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]\nprint \"There are \", len(spike), \" points in the 'spike'\"\n\nspike_thres = data_thresholded[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]\nprint spike_thres\n\nimport math\nfig, ax = plt.subplots(1,2,sharey = True, figsize=(20,5))\nweights = np.ones_like(spike_thres[:,3])/len(spike_thres[:,3])\nweights2 = np.ones_like(data_thresholded[:,3])/len(data_thresholded[:,3])\n\nax[0].hist(data_thresholded[:,3], bins = 100, alpha = 0.5, weights = weights2, label = 'all data')\nax[0].hist(spike_thres[:,3], bins = 100, alpha = 0.5, weights = weights, label = 'spike')\nax[0].legend(loc='upper right')\nax[0].set_title('Histogram of Unmasked values in the Spike vs All Data')\n\nweights = np.ones_like(spike_thres[:,4])/len(spike_thres[:,4])\nweights2 = np.ones_like(data_thresholded[:,4])/len(data_thresholded[:,4])\n\nax[1].hist(data_thresholded[:,4], bins = 100, alpha = 0.5, weights = weights2, label = 'all data')\nax[1].hist(spike_thres[:,4], bins = 100, alpha = 0.5, weights = weights, label = 'spike')\nax[1].legend(loc='upper right')\nax[1].set_title('Histogram of Synapses in the Spike vs All Data')\n\nplt.show()",
"5) Boxplot of different clusters by coordinates and densities\nCluster 4 has relatively high density",
"import sklearn.mixture as mixture\n\nn_clusters = 4\ngmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag')\nlabels = gmm.fit_predict(syn_unmasked)\nclusters = []\nfor l in range(n_clusters):\n a = np.where(labels == l)\n clusters.append(syn_unmasked[a,:])\n\nprint len(clusters)\nprint clusters[0].shape\n\ncounter = 0\nindx = 0\nindy = 0\nfor cluster in clusters:\n s = cluster.shape\n cluster = cluster.reshape((s[1], s[2]))\n counter += 1\n print \n print'Working on cluster: ' + str(counter)\n plt.boxplot(cluster[:,-1], 0, 'gD', showmeans=True)\n plt.xticks([1])\n plt.ylabel('Density')\n plt.title('Boxplot of density \\n at cluster = ' + str(int(counter)))\n plt.show()\n \n \n print \"Done with cluster\"\nplt.show()\n",
"5 OLD ) Boxplot distrubutions of each Z layer",
"\ndata_uniques, UIndex, UCounts = np.unique(syn_unmasked[:,2], return_index = True, return_counts = True)\n'''\nprint 'uniques'\nprint 'index: ' + str(UIndex)\nprint 'counts: ' + str(UCounts)\nprint 'values: ' + str(data_uniques)\n'''\nfig, ax = plt.subplots(3,4,figsize=(10,20))\ncounter = 0\n\nfor i in np.unique(syn_unmasked[:,2]):\n # print 'calcuating for z: ' + str(int(i))\n \n def check_z(row):\n if row[2] == i:\n return True\n return False\n \n counter += 1\n xind = (counter%3) - 1\n yind = (counter%4) - 1\n \n index_true = np.where(np.apply_along_axis(check_z, 1, syn_unmasked))\n syn_uniqueZ = syn_unmasked[index_true]\n \n ax[xind,yind].boxplot(syn_uniqueZ[:,3], 0, 'gD')\n ax[xind,yind].set_xticks([1], i)\n ax[xind,yind].set_ylabel('Density')\n ax[xind,yind].set_title('Boxplot at \\n z = ' + str(int(i)))\n\n#print 'yind = %d, xind = %d' %(yind,xind)\n#print i\n\nax[xind+1,yind+1].boxplot(syn_uniqueZ[:,3], 0, 'gD',showmeans=True)\nax[xind+1,yind+1].set_xticks([1], 'set')\nax[xind+1,yind+1].set_ylabel('Density')\nax[xind+1,yind+1].set_title('Boxplot for \\n All Densities')\n\nprint \"Density Distrubtion Boxplots:\"\nplt.tight_layout()\n\nplt.show()\n \n\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
leriomaggio/python-in-a-notebook | 08 Classes and OOP.ipynb | mit | [
"Classes\nSo far you have learned about Python's core data types: strings, numbers, lists, tuples, and dictionaries. In this section you will learn about the last major data structure, classes. Classes are quite unlike the other data types, in that they are much more flexible. Classes allow you to define the information and behavior that characterize anything you want to model in your program. Classes are a rich topic, so you will learn just enough here to dive into the projects you'd like to get started on.\nThere is a lot of new language that comes into play when you start learning about classes. If you are familiar with object-oriented programming from your work in another language, this will be a quick read about how Python approaches OOP. If you are new to programming in general, there will be a lot of new ideas here. Just start reading, try out the examples on your own machine, and trust that it will start to make sense as you work your way through the examples and exercises.\n<a name=\"top\"></a>Contents\n\nWhat are classes?\nObject-Oriented Terminology\nGeneral terminology\nA closer look at the Rocket class\nThe __init()__ method\nA simple method\nMaking multiple objects from a class\nA quick check-in\nExercises\n\n\n\n\nRefining the Rocket class\nAccepting parameters for the __init__() method\nAccepting parameters in a method\nAdding a new method\nExercises\n\n\nInheritance\nThe Shuttle class\nExercises\n\n\nModules and classes\nStoring a single class in a module\nStoring multiple classes in a module\nA number of ways to import modules and classes\nA module of functions\nExercises\n\n\nMethod Resolution Order (example)\n\ntop\n<a name=\"what\"></a>What are classes?\nClasses are a way of combining information and behavior. For example, let's consider what you'd need to do if you were creating a rocket ship in a game, or in a physics simulation. One of the first things you'd want to track are the x and y coordinates of the rocket. Here is what a simple rocket ship class looks like in code:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0",
"One of the first things you do with a class is to define the _init_() method. The __init__() method sets the values for any parameters that need to be defined when an object is first created. The self part will be explained later; basically, it's a syntax that allows you to access a variable from anywhere else in the class.\nThe Rocket class stores two pieces of information so far, but it can't do anything. The first behavior to define is a core behavior of a rocket: moving up. Here is what that might look like in code:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1",
"The Rocket class can now store some information, and it can do something. But this code has not actually created a rocket yet. Here is how you actually make a rocket:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n\n# Create a Rocket object.\nmy_rocket = Rocket()\nprint(my_rocket)",
"To actually use a class, you create a variable such as my_rocket. Then you set that equal to the name of the class, with an empty set of parentheses. Python creates an object from the class. An object is a single instance of the Rocket class; it has a copy of each of the class's variables, and it can do any action that is defined for the class. In this case, you can see that the variable my_rocket is a Rocket object from the __main__ program file, which is stored at a particular location in memory.\nOnce you have a class, you can define an object and use its methods. Here is how you might define a rocket and have it start to move up:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n\n# Create a Rocket object, and have it start to move up.\nmy_rocket = Rocket()\nprint(\"Rocket altitude:\", my_rocket.y)\n\nmy_rocket.move_up()\nprint(\"Rocket altitude:\", my_rocket.y)\n\nmy_rocket.move_up()\nprint(\"Rocket altitude:\", my_rocket.y)",
"To access an object's variables or methods, you give the name of the object and then use dot notation to access the variables and methods. So to get the y-value of my_rocket, you use my_rocket.y. To use the move_up() method on my_rocket, you write my_rocket.move_up().\nOnce you have a class defined, you can create as many objects from that class as you want. Each object is its own instance of that class, with its own separate variables. All of the objects are capable of the same behavior, but each object's particular actions do not affect any of the other objects.\nOnce you have a class, you can define an object and use its methods. Here is how you might define a rocket and have it start to move up:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n \n# Create a fleet of 5 rockets, and store them in a list.\nmy_rockets = []\nfor x in range(0,5):\n new_rocket = Rocket()\n my_rockets.append(new_rocket)\n\n# Show that each rocket is a separate object.\nfor rocket in my_rockets:\n print(rocket)",
"You can see that each rocket is at a separate place in memory. By the way, if you understand list comprehensions, you can make the fleet of rockets in one line:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n \n# Create a fleet of 5 rockets, and store them in a list.\nmy_rockets = [Rocket() for x in range(0,5)]\n\n# Show that each rocket is a separate object.\nfor rocket in my_rockets:\n print(rocket)",
"You can prove that each rocket has its own x and y values by moving just one of the rockets:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n \n# Create a fleet of 5 rockets, and store them in a list.\nmy_rockets = [Rocket() for x in range(0,5)]\n\n# Move the first rocket up.\nmy_rockets[0].move_up()\n\n# Show that only the first rocket has moved.\nfor rocket in my_rockets:\n print(\"Rocket altitude:\", rocket.y)",
"The syntax for classes may not be very clear at this point, but consider for a moment how you might create a rocket without using classes. You might store the x and y values in a dictionary, but you would have to write a lot of ugly, hard-to-maintain code to manage even a small set of rockets. As more features become incorporated into the Rocket class, you will see how much more efficiently real-world objects can be modeled with classes than they could be using just lists and dictionaries.\ntop\n<a name='exercises_what'></a>Exercises\nRocket With No Class\n\nUsing just what you already know, try to write a program that simulates the above example about rockets.\nStore an x and y value for a rocket.\nStore an x and y value for each rocket in a set of 5 rockets. Store these 5 rockets in a list.\n\n\nDon't take this exercise too far; it's really just a quick exercise to help you understand how useful the class structure is, especially as you start to see more capability added to the Rocket class.",
"# Ex 9.0 : Rocket with no Class\n\n# put your code here",
"top\n<a name='oop_terminology'></a>Object-Oriented terminology\nClasses are part of a programming paradigm called object-oriented programming. Object-oriented programming, or OOP for short, focuses on building reusable blocks of code called classes. When you want to use a class in one of your programs, you make an object from that class, which is where the phrase \"object-oriented\" comes from. Python itself is not tied to object-oriented programming, but you will be using objects in most or all of your Python projects. In order to understand classes, you have to understand some of the language that is used in OOP.\n<a name='general_terminology'></a>General terminology\nA class is a body of code that defines the attributes and behaviors required to accurately model something you need for your program. You can model something from the real world, such as a rocket ship or a guitar string, or you can model something from a virtual world such as a rocket in a game, or a set of physical laws for a game engine.\nAn attribute is a piece of information. In code, an attribute is just a variable that is part of a class.\nA behavior is an action that is defined within a class. These are made up of methods, which are just functions that are defined for the class.\nAn object is a particular instance of a class. An object has a certain set of values for all of the attributes (variables) in the class. You can have as many objects as you want for any one class.\nThere is much more to know, but these words will help you get started. They will make more sense as you see more examples, and start to use classes on your own.\ntop\n<a name='closer_look'></a>A closer look at the Rocket class\nNow that you have seen a simple example of a class, and have learned some basic OOP terminology, it will be helpful to take a closer look at the Rocket class.\n<a name=\"init_method\"></a>The __init()__ method\nHere is the initial code block that defined the Rocket class:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0",
"The first line shows how a class is created in Python. The keyword class tells Python that you are about to define a class. The rules for naming a class are the same rules you learned about naming variables, but there is a strong convention among Python programmers that classes should be named using CamelCase. If you are unfamiliar with CamelCase, it is a convention where each letter that starts a word is capitalized, with no underscores in the name. The name of the class is followed by a set of parentheses. These parentheses will be empty for now, but later they may contain a class upon which the new class is based.\nIt is good practice to write a comment at the beginning of your class, describing the class. There is a more formal syntax for documenting your classes, but you can wait a little bit to get that formal. For now, just write a comment at the beginning of your class summarizing what you intend the class to do. Writing more formal documentation for your classes will be easy later if you start by writing simple comments now.\nFunction names that start and end with two underscores are special built-in functions that Python uses in certain ways. The __init()__ method is one of these special functions. It is called automatically when you create an object from your class. The __init()__ method lets you make sure that all relevant attributes are set to their proper values when an object is created from the class, before the object is used. In this case, The __init__() method initializes the x and y values of the Rocket to 0.\nThe self keyword often takes people a little while to understand. The word \"self\" refers to the current object that you are working with. When you are writing a class, it lets you refer to certain attributes from any other part of the class. Basically, all methods in a class need the self object as their first argument, so they can access any attribute that is part of the class.\nNow let's take a closer look at a method.\n<a name='simple_method'></a>A simple method\nHere is the method that was defined for the Rocket class:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1",
"A method is just a function that is part of a class. Since it is just a function, you can do anything with a method that you learned about with functions. You can accept positional arguments, keyword arguments, an arbitrary list of argument values, an arbitrary dictionary of arguments, or any combination of these. Your arguments can return a value or a set of values if you want, or they can just do some work without returning any values.\nEach method has to accept one argument by default, the value self. This is a reference to the particular object that is calling the method. This self argument gives you access to the calling object's attributes. In this example, the self argument is used to access a Rocket object's y-value. That value is increased by 1, every time the method move_up() is called by a particular Rocket object. This is probably still somewhat confusing, but it should start to make sense as you work through your own examples.\nIf you take a second look at what happens when a method is called, things might make a little more sense:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n\n# Create a Rocket object, and have it start to move up.\nmy_rocket = Rocket()\nprint(\"Rocket altitude:\", my_rocket.y)\n\nmy_rocket.move_up()\nprint(\"Rocket altitude:\", my_rocket.y)\n\nmy_rocket.move_up()\nprint(\"Rocket altitude:\", my_rocket.y)",
"In this example, a Rocket object is created and stored in the variable my_rocket. After this object is created, its y value is printed. The value of the attribute y is accessed using dot notation. The phrase my_rocket.y asks Python to return \"the value of the variable y attached to the object my_rocket\".\nAfter the object my_rocket is created and its initial y-value is printed, the method move_up() is called. This tells Python to apply the method move_up() to the object my_rocket. Python finds the y-value associated with my_rocket and adds 1 to that value. This process is repeated several times, and you can see from the output that the y-value is in fact increasing.\ntop\n<a name='multiple_objects'></a>Making multiple objects from a class\nOne of the goals of object-oriented programming is to create reusable code. Once you have written the code for a class, you can create as many objects from that class as you need. It is worth mentioning at this point that classes are usually saved in a separate file, and then imported into the program you are working on. So you can build a library of classes, and use those classes over and over again in different programs. Once you know a class works well, you can leave it alone and know that the objects you create in a new program are going to work as they always have.\nYou can see this \"code reusability\" already when the Rocket class is used to make more than one Rocket object. Here is the code that made a fleet of Rocket objects:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n \n# Create a fleet of 5 rockets, and store them in a list.\nmy_rockets = []\nfor x in range(0,5):\n new_rocket = Rocket()\n my_rockets.append(new_rocket)\n\n# Show that each rocket is a separate object.\nfor rocket in my_rockets:\n print(rocket)",
"If you are comfortable using list comprehensions, go ahead and use those as much as you can. I'd rather not assume at this point that everyone is comfortable with comprehensions, so I will use the slightly longer approach of declaring an empty list, and then using a for loop to fill that list. That can be done slightly more efficiently than the previous example, by eliminating the temporary variable new_rocket:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n \n# Create a fleet of 5 rockets, and store them in a list.\nmy_rockets = []\nfor x in range(0,5):\n my_rockets.append(Rocket())\n\n# Show that each rocket is a separate object.\nfor rocket in my_rockets:\n print(rocket)",
"What exactly happens in this for loop? The line my_rockets.append(Rocket()) is executed 5 times. Each time, a new Rocket object is created and then added to the list my_rockets. The __init__() method is executed once for each of these objects, so each object gets its own x and y value. When a method is called on one of these objects, the self variable allows access to just that object's attributes, and ensures that modifying one object does not affect any of the other objecs that have been created from the class.\nEach of these objects can be worked with individually. At this point we are ready to move on and see how to add more functionality to the Rocket class. We will work slowly, and give you the chance to start writing your own simple classes.\n<a name='check_in'></a>A quick check-in\nIf all of this makes sense, then the rest of your work with classes will involve learning a lot of details about how classes can be used in more flexible and powerful ways. If this does not make any sense, you could try a few different things:\n\nReread the previous sections, and see if things start to make any more sense.\nType out these examples in your own editor, and run them. Try making some changes, and see what happens.\nTry the next exercise, and see if it helps solidify some of the concepts you have been reading about.\nRead on. The next sections are going to add more functionality to the Rocket class. These steps will involve rehashing some of what has already been covered, in a slightly different way.\n\nClasses are a huge topic, and once you understand them you will probably use them for the rest of your life as a programmer. If you are brand new to this, be patient and trust that things will start to sink in.\ntop\n<a name='exercises_closer_look'></a>Exercises\nYour Own Rocket\n\nWithout looking back at the previous examples, try to recreate the Rocket class as it has been shown so far.\nDefine the Rocket() class.\nDefine the __init__() method, which sets an x and a y value for each Rocket object.\nDefine the move_up() method.\nCreate a Rocket object.\nPrint the object.\nPrint the object's y-value.\nMove the rocket up, and print its y-value again.\nCreate a fleet of rockets, and prove that they are indeed separate Rocket objects.",
"# Ex 9.1 : Your Own Rocket\n\n# put your code here",
"top\n<a name='refining_rocket'></a>Refining the Rocket class\nThe Rocket class so far is very simple. It can be made a little more interesting with some refinements to the __init__() method, and by the addition of some methods.\n<a name='init_parameters'></a>Accepting paremeters for the __init__() method\nThe __init__() method is run automatically one time when you create a new object from a class. The __init__() method for the Rocket class so far is pretty simple:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self):\n # Each rocket has an (x,y) position.\n self.x = 0\n self.y = 0\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1",
"All the __init__() method does so far is set the x and y values for the rocket to 0. We can easily add a couple keyword arguments so that new rockets can be initialized at any position:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1",
"Now when you create a new Rocket object you have the choice of passing in arbitrary initial values for x and y:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_up(self):\n # Increment the y-position of the rocket.\n self.y += 1\n \n# Make a series of rockets at different starting places.\nrockets = []\nrockets.append(Rocket())\nrockets.append(Rocket(0,10))\nrockets.append(Rocket(100,0))\n\n# Show where each rocket is.\nfor index, rocket in enumerate(rockets):\n print(\"Rocket %d is at (%d, %d).\" % (index, rocket.x, rocket.y))",
"top\n<a name='method_parameters'></a>Accepting paremeters in a method\nThe __init__ method is just a special method that serves a particular purpose, which is to help create new objects from a class. Any method in a class can accept parameters of any kind. With this in mind, the move_up() method can be made much more flexible. By accepting keyword arguments, the move_up() method can be rewritten as a more general move_rocket() method.\nThis new method will allow the rocket to be moved any amount, in any direction:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment",
"The paremeters for the move() method are named x_increment and y_increment rather than x and y. It's good to emphasize that these are changes in the x and y position, not new values for the actual position of the rocket. By carefully choosing the right default values, we can define a meaningful default behavior. If someone calls the method move_rocket() with no parameters, the rocket will simply move up one unit in the y-direciton. Note that this method can be given negative values to move the rocket left or right:",
"class Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment\n \n# Create three rockets.\nrockets = [Rocket() for x in range(0,3)]\n\n# Move each rocket a different amount.\nrockets[0].move_rocket()\nrockets[1].move_rocket(10,10)\nrockets[2].move_rocket(-10,0)\n \n# Show where each rocket is.\nfor index, rocket in enumerate(rockets):\n print(\"Rocket %d is at (%d, %d).\" % (index, rocket.x, rocket.y))",
"top\n<a name='adding_method'></a>Adding a new method\nOne of the strengths of object-oriented programming is the ability to closely model real-world phenomena by adding appropriate attributes and behaviors to classes. One of the jobs of a team piloting a rocket is to make sure the rocket does not get too close to any other rockets. Let's add a method that will report the distance from one rocket to any other rocket.\nIf you are not familiar with distance calculations, there is a fairly simple formula to tell the distance between two points if you know the x and y values of each point.\nThis new method performs that calculation, and then returns the resulting distance.",
"from math import sqrt\n\nclass Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment\n \n def get_distance(self, other_rocket):\n # Calculates the distance from this rocket to another rocket,\n # and returns that value.\n distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)\n return distance\n \n# Make two rockets, at different places.\nrocket_0 = Rocket()\nrocket_1 = Rocket(10,5)\n\n# Show the distance between them.\ndistance = rocket_0.get_distance(rocket_1)\nprint(\"The rockets are %f units apart.\" % distance)",
"Hopefully these short refinements show that you can extend a class' attributes and behavior to model the phenomena you are interested in as closely as you want. The rocket could have a name, a crew capacity, a payload, a certain amount of fuel, and any number of other attributes. You could define any behavior you want for the rocket, including interactions with other rockets and launch facilities, gravitational fields, and whatever you need it to! There are techniques for managing these more complex interactions, but what you have just seen is the core of object-oriented programming.\nAt this point you should try your hand at writing some classes of your own. After trying some exercises, we will look at object inheritance, and then you will be ready to move on for now.\ntop\n<a name='exercises_refining_rocket'></a>Exercises\nYour Own Rocket 2\n\nThere are enough new concepts here that you might want to try re-creating the Rocket class as it has been developed so far, looking at the examples as little as possible. Once you have your own version, regardless of how much you needed to look at the example, you can modify the class and explore the possibilities of what you have already learned.\nRe-create the Rocket class as it has been developed so far:\nDefine the Rocket() class.\nDefine the __init__() method. Let your __init__() method accept x and y values for the initial position of the rocket. Make sure the default behavior is to position the rocket at (0,0).\nDefine the move_rocket() method. The method should accept an amount to move left or right, and an amount to move up or down.\nCreate a Rocket object. Move the rocket around, printing its position after each move.\nCreate a small fleet of rockets. Move several of them around, and print their final positions to prove that each rocket can move independently of the other rockets.\nDefine the get_distance() method. The method should accept a Rocket object, and calculate the distance between the current rocket and the rocket that is passed into the method.\nUse the get_distance() method to print the distances between several of the rockets in your fleet.\n\n\n\n\n\nRocket Attributes\n\nStart with a copy of the Rocket class, either one you made from a previous exercise or the latest version from the last section.\nAdd several of your own attributes to the __init__() function. The values of your attributes can be set automatically by the __init__ function, or they can be set by paremeters passed into __init__().\nCreate a rocket and print the values for the attributes you have created, to show they have been set correctly.\nCreate a small fleet of rockets, and set different values for one of the attributes you have created. Print the values of these attributes for each rocket in your fleet, to show that they have been set properly for each rocket.\nIf you are not sure what kind of attributes to add, you could consider storing the height of the rocket, the crew size, the name of the rocket, the speed of the rocket, or many other possible characteristics of a rocket.\n\nRocket Methods\n\nStart with a copy of the Rocket class, either one you made from a previous exercise or the latest version from the last section.\nAdd a new method to the class. This is probably a little more challenging than adding attributes, but give it a try.\nThink of what rockets do, and make a very simple version of that behavior using print statements. For example, rockets lift off when they are launched. 
You could make a method called launch(), and all it would do is print a statement such as \"The rocket has lifted off!\" If your rocket has a name, this sentence could be more descriptive.\nYou could make a very simple land_rocket() method that simply sets the x and y values of the rocket back to 0. Print the position before and after calling the land_rocket() method to make sure your method is doing what it's supposed to.\nIf you enjoy working with math, you could implement a safety_check() method. This method would take in another rocket object, and call the get_distance() method on that rocket. Then it would check if that rocket is too close, and print a warning message if the rocket is too close. If there is zero distance between the two rockets, your method could print a message such as, \"The rockets have crashed!\" (Be careful; getting a zero distance could mean that you accidentally found the distance between a rocket and itself, rather than a second rocket.)\n\n\n\n<a name='exercise_person_class'></a>Person Class\n\nModeling a person is a classic exercise for people who are trying to learn how to write classes. We are all familiar with characteristics and behaviors of people, so it is a good exercise to try.\nDefine a Person() class.\nIn the __init()__ function, define several attributes of a person. Good attributes to consider are name, age, place of birth, and anything else you like to know about the people in your life.\nWrite one method. This could be as simple as introduce_yourself(). This method would print out a statement such as, \"Hello, my name is Eric.\"\nYou could also make a method such as age_person(). A simple version of this method would just add 1 to the person's age.\nA more complicated version of this method would involve storing the person's birthdate rather than their age, and then calculating the age whenever the age is requested. But dealing with dates and times is not particularly easy if you've never done it in any other programming language before.\n\n\nCreate a person, set the attribute values appropriately, and print out information about the person.\nCall your method on the person you created. Make sure your method executed properly; if the method does not print anything out directly, print something before and after calling the method to make sure it did what it was supposed to.\n\n\n\n<a name='exercise_car_class'></a>Car Class\n\nModeling a car is another classic exercise.\nDefine a Car() class.\nIn the __init__() function, define several attributes of a car. Some good attributes to consider are make (Subaru, Audi, Volvo...), model (Outback, allroad, C30), year, num_doors, owner, or any other aspect of a car you care to include in your class.\nWrite one method. This could be something such as describe_car(). This method could print a series of statements that describe the car, using the information that is stored in the attributes. You could also write a method that adjusts the mileage of the car or tracks its position.\nCreate a car object, and use your method.\nCreate several car objects with different values for the attributes. Use your method on several of your cars.",
"# Ex 9.2 : Your Own Rocket 2\n\n# put your code here\n\n# Ex 9.3 : Rocket Attributes\n\n# put your code here\n\n# Ex 9.4 : Rocket Methods\n\n# put your code here\n\n# Ex 9.5 : Person Class\n\n# put your code here\n\n# Ex 9.6 : Car Class\n\n# put your code here",
"top\n<a name='inheritance'></a>Inheritance\nOne of the most important goals of the object-oriented approach to programming is the creation of stable, reliable, reusable code. If you had to create a new class for every kind of object you wanted to model, you would hardly have any reusable code. \nIn Python and any other language that supports OOP, one class can inherit from another class. This means you can base a new class on an existing class; the new class inherits all of the attributes and behavior of the class it is based on. \nA new class can override any undesirable attributes or behavior of the class it inherits from, and it can add any new attributes or behavior that are appropriate. The original class is called the parent class, and the new class is a child of the parent class. The parent class is also called a superclass, and the child class is also called a subclass.\nThe child class inherits all attributes and behavior from the parent class, but any attributes that are defined in the child class are not available to the parent class. This may be obvious to many people, but it is worth stating. \nThis also means a child class can override behavior of the parent class. If a child class defines a method that also appears in the parent class, objects of the child class will use the new method rather than the parent class method.\nTo better understand inheritance, let's look at an example of a class that can be based on the Rocket class.\n<a name='shuttle'></a>The SpaceShuttle class\nIf you wanted to model a space shuttle, you could write an entirely new class. But a space shuttle is just a special kind of rocket. Instead of writing an entirely new class, you can inherit all of the attributes and behavior of a Rocket, and then add a few appropriate attributes and behavior for a Shuttle.\nOne of the most significant characteristics of a space shuttle is that it can be reused. So the only difference we will add at this point is to record the number of flights the shutttle has completed. Everything else you need to know about a shuttle has already been coded into the Rocket class.\nHere is what the Shuttle class looks like:",
"from math import sqrt\n\nclass Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment\n \n def get_distance(self, other_rocket):\n # Calculates the distance from this rocket to another rocket,\n # and returns that value.\n distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)\n return distance\n \nclass Shuttle(Rocket):\n # Shuttle simulates a space shuttle, which is really\n # just a reusable rocket.\n \n def __init__(self, x=0, y=0, flights_completed=0):\n super().__init__(x, y)\n self.flights_completed = flights_completed\n \nshuttle = Shuttle(10,0,3)\nprint(shuttle)",
"When a new class is based on an existing class, you write the name of the parent class in parentheses when you define the new class:",
"class NewClass(ParentClass):",
"The __init__() function of the new class needs to call the __init__() function of the parent class. The __init__() function of the new class needs to accept all of the parameters required to build an object from the parent class, and these parameters need to be passed to the __init__() function of the parent class. The super().__init__() function takes care of this:",
"class NewClass(ParentClass):\n \n def __init__(self, arguments_new_class, arguments_parent_class):\n super().__init__(arguments_parent_class)\n # Code for initializing an object of the new class.",
"The super() function passes the self argument to the parent class automatically. You could also do this by explicitly naming the parent class when you call the __init__() function, but you then have to include the self argument manually:",
"class Shuttle(Rocket):\n # Shuttle simulates a space shuttle, which is really\n # just a reusable rocket.\n \n def __init__(self, x=0, y=0, flights_completed=0):\n Rocket.__init__(self, x, y)\n self.flights_completed = flights_completed",
"This might seem a little easier to read, but it is preferable to use the super() syntax. When you use super(), you don't need to explicitly name the parent class, so your code is more resilient to later changes. As you learn more about classes, you will be able to write child classes that inherit from multiple parent classes, and the super() function will call the parent classes' __init__() functions for you, in one line. This explicit approach to calling the parent class' __init__() function is included so that you will be less confused if you see it in someone else's code.\nThe output above shows that a new Shuttle object was created. This new Shuttle object can store the number of flights completed, but it also has all of the functionality of the Rocket class: it has a position that can be changed, and it can calculate the distance between itself and other rockets or shuttles. This can be demonstrated by creating several rockets and shuttles, and then finding the distance between one shuttle and all the other shuttles and rockets.\nThis example uses a simple function called randint, which generates a random integer between a lower and upper bound, to determine the position of each rocket and shuttle:",
"from math import sqrt\nfrom random import randint\n\nclass Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment\n \n def get_distance(self, other_rocket):\n # Calculates the distance from this rocket to another rocket,\n # and returns that value.\n distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)\n return distance\n\nclass Shuttle(Rocket):\n # Shuttle simulates a space shuttle, which is really\n # just a reusable rocket.\n \n def __init__(self, x=0, y=0, flights_completed=0):\n super().__init__(x, y)\n self.flights_completed = flights_completed\n\n# Create several shuttles and rockets, with random positions.\n# Shuttles have a random number of flights completed.\nshuttles = []\nfor x in range(0,3):\n x = randint(0,100)\n y = randint(1,100)\n flights_completed = randint(0,10)\n shuttles.append(Shuttle(x, y, flights_completed))\n\nrockets = []\nfor x in range(0,3):\n x = randint(0,100)\n y = randint(1,100)\n rockets.append(Rocket(x, y))\n\n# Show the number of flights completed for each shuttle.\nfor index, shuttle in enumerate(shuttles):\n print(\"Shuttle %d has completed %d flights.\" % (index, shuttle.flights_completed))\n \nprint(\"\\n\") \n# Show the distance from the first shuttle to all other shuttles.\nfirst_shuttle = shuttles[0]\nfor index, shuttle in enumerate(shuttles):\n distance = first_shuttle.get_distance(shuttle)\n print(\"The first shuttle is %f units away from shuttle %d.\" % (distance, index))\n\n\nprint(\"\\n\")\n# Show the distance from the first shuttle to all other rockets.\nfor index, rocket in enumerate(rockets):\n distance = first_shuttle.get_distance(rocket)\n print(\"The first shuttle is %f units away from rocket %d.\" % (distance, index))",
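"The text above also mentions that a child class can override a method of its parent class, although the notebook does not show this directly. The following is a minimal hedged sketch of that idea; the reporting behavior added to move_rocket() is an invented illustration, not part of the original tutorial.",
"# Hedged sketch of method overriding; the reporting override is an invented example.\nclass Rocket():\n    \n    def __init__(self, x=0, y=0):\n        self.x = x\n        self.y = y\n    \n    def move_rocket(self, x_increment=0, y_increment=1):\n        self.x += x_increment\n        self.y += y_increment\n\nclass Shuttle(Rocket):\n    \n    def __init__(self, x=0, y=0, flights_completed=0):\n        super().__init__(x, y)\n        self.flights_completed = flights_completed\n    \n    def move_rocket(self, x_increment=0, y_increment=1):\n        # Override: a shuttle moves like a rocket, but also reports the move.\n        super().move_rocket(x_increment, y_increment)\n        print('Shuttle moved to (%d, %d).' % (self.x, self.y))\n\nshuttle = Shuttle()\n# This call uses the Shuttle version of move_rocket(), not the Rocket version.\nshuttle.move_rocket(3, 7)",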
"Inheritance is a powerful feature of object-oriented programming. Using just what you have seen so far about classes, you can model an incredible variety of real-world and virtual phenomena with a high degree of accuracy. The code you write has the potential to be stable and reusable in a variety of applications.\ntop\n<a name='exercises_inheritance'></a>Exercises\n<a name='exercise_student_class'></a>Student Class\n\nStart with your program from Person Class.\nMake a new class called Student that inherits from Person.\nDefine some attributes that a student has, which other people don't have.\nA student has a school they are associated with, a graduation year, a gpa, and other particular attributes.\n\n\nCreate a Student object, and prove that you have used inheritance correctly.\nSet some attribute values for the student that are only coded in the Person class.\nSet some attribute values for the student that are only coded in the Student class.\nPrint the values for all of these attributes.\nOne possible shape for this class is sketched after this list of exercises.\n\n\n\nRefining Shuttle\n\nTake the latest version of the Shuttle class. Extend it.\nAdd more attributes that are particular to shuttles, such as maximum number of flights, capability of supporting spacewalks, and capability of docking with the ISS.\nAdd one more method to the class that relates to shuttle behavior. This method could simply print a statement, such as \"Docking with the ISS,\" for a dock_ISS() method.\nProve that your refinements work by creating a Shuttle object with these attributes, and then call your new method.",
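"One possible shape for the Student class described above, shown only to clarify how the pieces fit together; the attribute names are assumptions, and your own solution may look different.",
"# Hedged sketch for the Student Class exercise; attribute names are assumptions.\nclass Person():\n    \n    def __init__(self, name, age):\n        self.name = name\n        self.age = age\n    \n    def introduce_yourself(self):\n        print('Hello, my name is %s.' % self.name)\n\nclass Student(Person):\n    \n    def __init__(self, name, age, school, graduation_year, gpa):\n        super().__init__(name, age)\n        self.school = school\n        self.graduation_year = graduation_year\n        self.gpa = gpa\n\nstudent = Student('Eric', 23, 'Example University', 2016, 3.5)\n# introduce_yourself() and name are coded only in Person.\nstudent.introduce_yourself()\n# school and gpa are coded only in Student.\nprint('%s attends %s and has a gpa of %.1f.' % (student.name, student.school, student.gpa))",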
"# Ex 9.7 : Student Class\n\n# put your code here\n\n# Ex 9.8 : Refining Shuttle\n\n# put your code here",
"top\n<a name='modules_classes'></a>Modules and classes\nNow that you are starting to work with classes, your files are going to grow longer. This is good, because it means your programs are probably doing more interesting things. But it is bad, because longer files can be more difficult to work with. Python allows you to save your classes in another file and then import them into the program you are working on. This has the added advantage of isolating your classes into files that can be used in any number of different programs. As you use your classes repeatedly, the classes become more reliable and complete overall.\n<a name='single_class_module'></a>Storing a single class in a module\nWhen you save a class into a separate file, that file is called a module. You can have any number of classes in a single module. There are a number of ways you can then import the class you are interested in.\nStart out by saving just the Rocket class into a file called rocket.py. Notice the naming convention being used here: the module is saved with a lowercase name, and the class starts with an uppercase letter.\nThis convention is pretty important for a number of reasons, and it is a really good idea to follow the convention.",
"# Save as rocket.py\nfrom math import sqrt\n\nclass Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment\n \n def get_distance(self, other_rocket):\n # Calculates the distance from this rocket to another rocket,\n # and returns that value.\n distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)\n return distance",
"Make a separate file called rocket_game.py. If you are more interested in science than games, feel free to call this file something like rocket_simulation.py. Again, to use standard naming conventions, make sure you are using a lowercase_underscore name for this file.",
"# Save as rocket_game.py\nfrom rocket import Rocket\n\nrocket = Rocket()\nprint(\"The rocket is at (%d, %d).\" % (rocket.x, rocket.y))",
"This is a really clean and uncluttered file. A rocket is now something you can define in your programs, without the details of the rocket's implementation cluttering up your file. You don't have to include all the class code for a rocket in each of your files that deals with rockets; the code defining rocket attributes and behavior lives in one file, and can be used anywhere.\nThe first line tells Python to look for a file called rocket.py. It looks for that file in the same directory as your current program. You can put your classes in other directories, but we will get to that convention a bit later. Notice that you do not include the .py file extension in the import statement.\nWhen Python finds the file rocket.py, it looks for a class called Rocket. When it finds that class, it imports that code into the current file, without you ever seeing that code. You are then free to use the class Rocket as you have seen it used in previous examples.\ntop\n<a name='multiple_classes_module'></a>Storing multiple classes in a module\nA module is simply a file that contains one or more classes or functions, so the Shuttle class actually belongs in the rocket module as well:",
"# Save as rocket.py\nfrom math import sqrt\n\nclass Rocket():\n # Rocket simulates a rocket ship for a game,\n # or a physics simulation.\n \n def __init__(self, x=0, y=0):\n # Each rocket has an (x,y) position.\n self.x = x\n self.y = y\n \n def move_rocket(self, x_increment=0, y_increment=1):\n # Move the rocket according to the paremeters given.\n # Default behavior is to move the rocket up one unit.\n self.x += x_increment\n self.y += y_increment\n \n def get_distance(self, other_rocket):\n # Calculates the distance from this rocket to another rocket,\n # and returns that value.\n distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)\n return distance\n \n\nclass Shuttle(Rocket):\n # Shuttle simulates a space shuttle, which is really\n # just a reusable rocket.\n \n def __init__(self, x=0, y=0, flights_completed=0):\n super().__init__(x, y)\n self.flights_completed = flights_completed",
"Now you can import the Rocket and the Shuttle class, and use them both in a clean uncluttered program file:",
"# Save as rocket_game.py\nfrom rocket import Rocket, Shuttle\n\nrocket = Rocket()\nprint(\"The rocket is at (%d, %d).\" % (rocket.x, rocket.y))\n\nshuttle = Shuttle()\nprint(\"\\nThe shuttle is at (%d, %d).\" % (shuttle.x, shuttle.y))\nprint(\"The shuttle has completed %d flights.\" % shuttle.flights_completed)",
"The first line tells Python to import both the Rocket and the Shuttle classes from the rocket module. You don't have to import every class in a module; you can pick and choose the classes you care to use, and Python will only spend time processing those particular classes.\n<a name='multiple_ways_import'></a>A number of ways to import modules and classes\nThere are several ways to import modules and classes, and each has its own merits.\nimport module_name\nThe syntax for importing classes that was just shown:",
"from module_name import ClassName",
"is straightforward, and is used quite commonly. It allows you to use the class names directly in your program, so you have very clean and readable code. This can be a problem, however, if the names of the classes you are importing conflict with names that have already been used in the program you are working on. This is unlikely to happen in the short programs you have been seeing here, but if you were working on a larger program it is quite possible that the class you want to import from someone else's work would happen to have a name you have already used in your program.\nIn this case, you can simply import the module itself:",
"# Save as rocket_game.py\nimport rocket\n\nrocket_0 = rocket.Rocket()\nprint(\"The rocket is at (%d, %d).\" % (rocket_0.x, rocket_0.y))\n\nshuttle_0 = rocket.Shuttle()\nprint(\"\\nThe shuttle is at (%d, %d).\" % (shuttle_0.x, shuttle_0.y))\nprint(\"The shuttle has completed %d flights.\" % shuttle_0.flights_completed)",
"The general syntax for this kind of import is:",
"import module_name",
"After this, classes are accessed using dot notation:",
"module_name.ClassName",
"This prevents some name conflicts. If you were reading carefully, however, you might have noticed that the variable name rocket in the previous example had to be changed because it has the same name as the module itself. This is not good, because in a longer program that could mean a lot of renaming.\nimport module_name as local_module_name\nThere is another syntax for imports that is quite useful:",
"import module_name as local_module_name",
"When you are importing a module into one of your projects, you are free to choose any name you want for the module in your project. So the last example could be rewritten in a way that the variable name rocket would not need to be changed:",
"# Save as rocket_game.py\nimport rocket as rocket_module\n\nrocket = rocket_module.Rocket()\nprint(\"The rocket is at (%d, %d).\" % (rocket.x, rocket.y))\n\nshuttle = rocket_module.Shuttle()\nprint(\"\\nThe shuttle is at (%d, %d).\" % (shuttle.x, shuttle.y))\nprint(\"The shuttle has completed %d flights.\" % shuttle.flights_completed)",
"This approach is often used to shorten the name of the module, so you don't have to type a long module name before each class name that you want to use. But it is easy to shorten a name so much that you force people reading your code to scroll to the top of your file and see what the shortened name stands for. In this example,",
"import rocket as rocket_module",
"leads to much more readable code than something like:",
"import rocket as r",
"from module_name import *\nThere is one more import syntax that you should be aware of, but you should probably avoid using. This syntax imports all of the available classes and functions in a module:",
"from module_name import *",
"This is not recommended, for a couple of reasons. First of all, you may have no idea what all the names of the classes and functions in a module are. If you accidentally give one of your variables the same name as a name from the module, you will have naming conflicts; a small example of this appears after the import demonstrations below. Also, you may be importing way more code into your program than you need.\nIf you really need all the functions and classes from a module, just import the module and use the module_name.ClassName syntax in your program.\nYou will get a sense of how to write your imports as you read more Python code, and as you write and share some of your own code.\ntop\n<a name='module_functions'></a>A module of functions\nYou can use modules to store a set of functions you want available in different programs as well, even if those functions are not attached to any one class. To do this, you save the functions into a file, and then import that file just as you saw in the last section. Here is a really simple example; save this as multiplying.py:",
"# Save as multiplying.py\ndef double(x):\n return 2*x\n\ndef triple(x):\n return 3*x\n\ndef quadruple(x):\n return 4*x",
"Now you can import the file multiplying.py, and use these functions. Using the from module_name import function_name syntax:",
"from multiplying import double, triple, quadruple\n\nprint(double(5))\nprint(triple(5))\nprint(quadruple(5))",
"Using the import module_name syntax:",
"import multiplying\n\nprint(multiplying.double(5))\nprint(multiplying.triple(5))\nprint(multiplying.quadruple(5))",
"Using the import module_name as local_module_name syntax:",
"import multiplying as m\n\nprint(m.double(5))\nprint(m.triple(5))\nprint(m.quadruple(5))",
"Using the from module_name import * syntax:",
"from multiplying import *\n\nprint(double(5))\nprint(triple(5))\nprint(quadruple(5))",
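"To make the naming-conflict warning from the earlier section concrete, here is a small hedged example of how from module_name import * can silently shadow one of your own names. It assumes multiplying.py has been saved as shown above; the local double() function is an invented illustration.",
"# Hedged example of a naming conflict caused by 'import *'.\n# Assumes multiplying.py exists as saved earlier in this section.\n\ndef double(x):\n    # A local helper that happens to share a name with multiplying.double().\n    return [x, x]\n\nprint(double(5))    # [5, 5], the local version\n\nfrom multiplying import *    # silently replaces the local double()\n\nprint(double(5))    # 10, the version from multiplying.py",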
"top\n<a name='exercises_importing'></a>Exercises\nImporting Student\n\nTake your program from Student Class\nSave your Person and Student classes in a separate file called person.py.\nSave the code that uses these classes in four separate files.\nIn the first file, use the from module_name import ClassName syntax to make your program run.\nIn the second file, use the import module_name syntax.\nIn the third file, use the import module_name as different_local_module_name syntax.\nIn the fourth file, use the import * syntax.\n\n\n\n\n\nImporting Car\n\nTake your program from Car Class\nSave your Car class in a separate file called car.py.\nSave the code that uses the car class into four separate files.\nIn the first file, use the from module_name import ClassName syntax to make your program run.\nIn the second file, use the import module_name syntax.\nIn the third file, use the import module_name as different_local_module_name syntax.\nIn the fourth file, use the import * syntax.",
"# Ex 9.9 : Importing Student\n\n# put your code here\n\n# Ex 9.10 : Importing Car\n\n# put your code here",
"top\n<a name=\"mro\"></a>Method Resolution Order (mro)",
"class A:\n    def __init__(self, a):\n        self.a = a\n    \n\nclass GreatB:\n    \n    def greetings(self):\n        print('Greetings from Type: ', self.__class__)\n    \nclass B(GreatB):\n    def __init__(self, b):\n        self.b = b\n    \n    \n# C inherits from both A and B; here each parent's __init__() is called explicitly.\nclass C(A,B):\n    def __init__(self, a, b):\n        A.__init__(self, a)\n        B.__init__(self, b)\n    \n# The MRO of C is [C, A, B, GreatB, object].\nprint('MRO: ', C.mro()) \nc = C('A', 'B')\nprint('c.a: ', c.a)\nprint('c.b: ', c.b)\n\n# greetings() is defined only in GreatB; super(X, c) starts the lookup after X in C's MRO.\nc.greetings()\nsuper(C, c).greetings()\nsuper(B, c).greetings()\n\n",
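"The example above calls each parent's __init__() explicitly. The earlier discussion of super() mentioned that a single super() call can drive the __init__() methods of multiple parent classes; the following is a hedged sketch of that cooperative style (the **kwargs pattern and class bodies are an illustration, not part of the original example).",
"# Hedged sketch of cooperative multiple inheritance using super().\nclass A:\n    def __init__(self, a, **kwargs):\n        super().__init__(**kwargs)\n        self.a = a\n\nclass B:\n    def __init__(self, b, **kwargs):\n        super().__init__(**kwargs)\n        self.b = b\n\nclass C(A, B):\n    def __init__(self, a, b):\n        # A single super() call walks the whole MRO: C -> A -> B -> object.\n        super().__init__(a=a, b=b)\n\nprint('MRO: ', C.mro())\nc = C('A', 'B')\nprint('c.a: ', c.a)\nprint('c.b: ', c.b)"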
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ekostat/ekostat_calculator | notebooks/lv_notebook_kustzon.ipynb | mit | [
"# coding: utf-8\n\n# In[1]:\n\n\nimport os \nimport sys\npath = \"../\"\npath = \"D:/github/w_vattenstatus/ekostat_calculator\"\nsys.path.append(path)\n#os.path.abspath(\"../\")\nprint(os.path.abspath(path))\n\nimport pandas as pd\nimport numpy as np\nimport json\nimport timeit\nimport time\nimport core\nimport importlib\nimportlib.reload(core)\nimport logging\nimportlib.reload(core) \ntry:\n logging.shutdown()\n importlib.reload(logging)\nexcept:\n pass\nfrom event_handler import EventHandler\nprint(core.__file__)\npd.__version__",
"Load directories",
"root_directory = 'D:/github/w_vattenstatus/ekostat_calculator'#\"../\" #os.getcwd()\nworkspace_directory = root_directory + '/workspaces' \nresource_directory = root_directory + '/resources'\n#alias = 'lena'\nuser_id = 'test_user' #kanske ska vara off_line user?\n# workspace_alias = 'lena_indicator' # kustzonsmodellen_3daydata\nworkspace_alias = 'kustzonsmodellen_3daydata'\n\n# ## Initiate EventHandler\nprint(root_directory)\npaths = {'user_id': user_id, \n 'workspace_directory': root_directory + '/workspaces', \n 'resource_directory': root_directory + '/resources', \n 'log_directory': 'D:/github' + '/log', \n 'test_data_directory': 'D:/github' + '/test_data',\n 'cache_directory': 'D:/github/w_vattenstatus/cache'}\n\nt0 = time.time()\nekos = EventHandler(**paths)\n#request = ekos.test_requests['request_workspace_list']\n#response = ekos.request_workspace_list(request) \n#ekos.write_test_response('request_workspace_list', response)\nprint('-'*50)\nprint('Time for request: {}'.format(time.time()-t0))\n\n###############################################################################################################################\n# ### Make a new workspace\n\n# ekos.copy_workspace(source_uuid='default_workspace', target_alias='kustzonsmodellen_3daydata')\n\n# ### See existing workspaces and choose workspace name to load\nekos.print_workspaces()\nworkspace_uuid = ekos.get_unique_id_for_alias(workspace_alias = workspace_alias) #'kuszonsmodellen' lena_indicator \nprint(workspace_uuid)\n\nworkspace_alias = ekos.get_alias_for_unique_id(workspace_uuid = workspace_uuid)\n\n###############################################################################################################################\n# ### Load existing workspace\nekos.load_workspace(unique_id = workspace_uuid)\n\n###############################################################################################################################\n# ### import data\n# ekos.import_default_data(workspace_alias = workspace_alias)\n\n###############################################################################################################################\n# ### Load all data in workspace\n# #### if there is old data that you want to remove\nekos.get_workspace(workspace_uuid = workspace_uuid).delete_alldata_export()\nekos.get_workspace(workspace_uuid = workspace_uuid).delete_all_export_data()\n\n###############################################################################################################################\n# #### to just load existing data in workspace\nekos.load_data(workspace_uuid = workspace_uuid)\n\n############################################################################################################################### \n# ### check workspace data length\nw = ekos.get_workspace(workspace_uuid = workspace_uuid)\nlen(w.data_handler.get_all_column_data_df())\n\n############################################################################################################################### \n# ### see subsets in data \nfor subset_uuid in w.get_subset_list():\n print('uuid {} alias {}'.format(subset_uuid, w.uuid_mapping.get_alias(unique_id=subset_uuid)))\n\n############################################################################################################################### \n# # Step 0 \nprint(w.data_handler.all_data.columns)\n\n############################################################################################################################### \n# ### Apply first data filter 
\nw.apply_data_filter(step = 0) # This sets the first level of data filter in the IndexHandler \n\n############################################################################################################################### \n# # Step 1 \n# ### make new subset\n# w.copy_subset(source_uuid='default_subset', target_alias='test_kustzon') \n\n###############################################################################################################################\n# ### Choose subset name to load\nsubset_alias = 'test_kustzon'\n# subset_alias = 'period_2007-2012_refvalues_2013'\n# subset_alias = 'test_subset'\nsubset_uuid = ekos.get_unique_id_for_alias(workspace_alias = workspace_alias, subset_alias = subset_alias)\nprint('subset_alias', subset_alias, 'subset_uuid', subset_uuid)",
"Set subset filters",
"# #### year filter\nw.set_data_filter(subset = subset_uuid, step=1, \n filter_type='include_list', \n filter_name='MYEAR', \n data=[2007,2008,2009,2010,2011,2012])#['2011', '2012', '2013']) #, 2014, 2015, 2016\n\n###############################################################################################################################\n# #### waterbody filter\nw.set_data_filter(subset = subset_uuid, step=1, \n filter_type='include_list', \n filter_name='viss_eu_cd', data = []) #'SE584340-174401', 'SE581700-113000', 'SE654470-222700', 'SE633000-195000', 'SE625180-181655'\n# data=['SE584340-174401', 'SE581700-113000', 'SE654470-222700', 'SE633000-195000', 'SE625180-181655']) \n# wb with no data for din 'SE591400-182320'\n \nf1 = w.get_data_filter_object(subset = subset_uuid, step=1) \nprint(f1.include_list_filter)\n\nprint('subset_alias:', subset_alias, '\\nsubset uuid:', subset_uuid)\n\nf1 = w.get_data_filter_object(subset = subset_uuid, step=1) \nprint(f1.include_list_filter)\n\n############################################################################################################################### \n# ## Apply step 1 datafilter to subset\nw.apply_data_filter(subset = subset_uuid, step = 1)\nfiltered_data = w.get_filtered_data(step = 1, subset = subset_uuid)\nprint(filtered_data['VISS_EU_CD'].unique())\n\nfiltered_data[['AMON','NTRA','DIN','CPHL_INTEG_CALC','DEPH']].head()",
"#########################################################################################################################\nStep 2",
"### Load indicator settings filter \nw.get_step_object(step = 2, subset = subset_uuid).load_indicator_settings_filters()\n\n############################################################################################################################### \n### set available indicators \nw.get_available_indicators(subset= subset_uuid, step=2)\n \n\n###############################################################################################################################\n# ### choose indicators\n#list(zip(typeA_list, df_step1.WATER_TYPE_AREA.unique()))\n# indicator_list = ['oxygen','din_winter','ntot_summer', 'ntot_winter', 'dip_winter', 'ptot_summer', 'ptot_winter','bqi', 'biov', 'chl', 'secchi']\n# indicator_list = ['din_winter','ntot_summer', 'ntot_winter', 'dip_winter', 'ptot_summer', 'ptot_winter']\n#indicator_list = ['biov', 'chl']\n# indicator_list = ['bqi', 'biov', 'chl', 'secchi']\n#indicator_list = ['bqi', 'secchi'] + ['biov', 'chl'] + ['din_winter']\n# indicator_list = ['din_winter','ntot_summer']\n# indicator_list = ['indicator_' + indicator for indicator in indicator_list]\nindicator_list = w.available_indicators\n\n############################################################################################################################### \n# ### Apply indicator data filter\nprint('apply indicator data filter to {}'.format(indicator_list))\nfor indicator in indicator_list:\n w.apply_indicator_data_filter(step = 2, \n subset = subset_uuid, \n indicator = indicator)#,\n# water_body_list = test_wb)\n #print(w.mapping_objects['water_body'][wb])\n #print('*************************************')\n\n#df = w.get_filtered_data(subset = subset_uuid, step = 'step_2', water_body = 'SE625180-181655', indicator = 'indicator_din_winter').dropna(subset = ['DIN'])",
"#########################################################################################################################\nStep 3",
"# ### Set up indicator objects\nprint('indicator set up to {}'.format(indicator_list))\nw.get_step_object(step = 3, subset = subset_uuid).indicator_setup(indicator_list = indicator_list) \n\n###############################################################################################################################\n# ### CALCULATE STATUS\nprint('CALCULATE STATUS to {}'.format(indicator_list))\nw.get_step_object(step = 3, subset = subset_uuid).calculate_status(indicator_list = indicator_list)\n\n############################################################################################################################### \n# ### CALCULATE QUALITY ELEMENTS\nw.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'nutrients')\n# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'phytoplankton')\n# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'bottomfauna')\n# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'oxygen')\n# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'secchi')\n \n# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(subset_unique_id = subset_uuid, quality_element = 'Phytoplankton')\n \n "
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cuttlefishh/emp | methods/figure-data/fig-1/Fig1_data_files.ipynb | bsd-3-clause | [
"import pandas as pd",
"Figure 1 csv data generation\nFigure data consolidation for Figure 1, which maps samples and shows distribution across EMPO categories\nFigure 1a and 1b\nFor these figures, we just need the samples, EMPO level categories, and lat/lon coordinates",
"# Load up metadata map\n\nmetadata_fp = '../../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv'\n\nmetadata = pd.read_csv(metadata_fp, header=0, sep='\\t')\n\nmetadata.head()\n\nmetadata.columns\n\n# take just the columns we need for this figure panel\n\nfig1ab = metadata.loc[:,['#SampleID','empo_0','empo_1','empo_2','empo_3','latitude_deg','longitude_deg']]\nfig1ab.head()",
"Write to Excel notebook",
"fig1 = pd.ExcelWriter('Figure1_data.xlsx')\n\nfig1ab.to_excel(fig1,'Fig-1ab')\n\nfig1.save()"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
tiegz/ThreatExchange | ipynb/ThreatExchange Data Dashboard.ipynb | bsd-3-clause | [
"ThreatExchange Data Dashboard\nPurpose\nThe ThreatExchange APIs are designed to make consuming threat intelligence from multiple sources easy. This notebook will walk you through:\n\nbuilding an initial dashboard for assessing the data visible to your appID;\nfiltering down to a subset you consider high value; and\nexporting the high value data to a file.\n\nWhat you need\nBefore getting started, you'll need a few Python packages installed:\n\nPandas for data manipulation and analysis\nPytx for ThreatExchange access\nSeaborn for making charts pretty\n\nAll of the python packages mentioned can be installed via \npip install <package_name>\nSetup a ThreatExchange access_token\nIf you don't already have an access_token for your app, use the Facebook Access Token Tool to get one.",
"from pytx.access_token import access_token\nfrom pytx.logger import setup_logger\nfrom pytx.vocabulary import PrivacyType as pt\n\n# Specify the location of your token via one of several ways:\n# https://pytx.readthedocs.org/en/latest/pytx.access_token.html\naccess_token()",
"Optionally, enable debug level logging",
"# Uncomment this if you want debug logging enabled\n#setup_logger(log_file=\"pytx.log\")",
"Search for data in ThreatExchange\nStart by running a query against the ThreatExchange APIs to pull down any/all data relevant to you over a specified period of days.",
"# Our basic search parameters, we default to querying over the past 14 days\ndays_back = 14\nsearch_terms = ['abuse', 'phishing', 'malware', 'exploit', 'apt', 'ddos', 'brute', 'scan', 'cve']",
"Next, we execute the query using our search parameters and put the results in a Pandas DataFrame",
"from datetime import datetime, timedelta\nfrom time import strftime\nimport pandas as pd\nimport re\n\nfrom pytx import ThreatDescriptor\nfrom pytx.vocabulary import ThreatExchange as te\n\n# Define your search string and other params, see \n# https://pytx.readthedocs.org/en/latest/pytx.common.html#pytx.common.Common.objects\n# for the full list of options\nsearch_params = {\n te.FIELDS: ThreatDescriptor._default_fields,\n te.LIMIT: 1000,\n te.SINCE: strftime('%Y-%m-%d %H:%m:%S +0000', (datetime.utcnow() + timedelta(days=(-1*days_back))).timetuple()),\n te.TEXT: search_terms,\n te.UNTIL: strftime('%Y-%m-%d %H:%m:%S +0000', datetime.utcnow().timetuple()),\n te.STRICT_TEXT: False\n}\n\ndata_frame = None\nfor search_term in search_terms:\n print \"Searching for '%s' over -%d days\" % (search_term, days_back)\n results = ThreatDescriptor.objects(\n fields=search_params[te.FIELDS],\n limit=search_params[te.LIMIT],\n text=search_term, \n since=search_params[te.SINCE], \n until=search_params[te.UNTIL],\n strict_text=search_params[te.STRICT_TEXT]\n )\n tmp = pd.DataFrame([result.to_dict() for result in results])\n tmp['search_term'] = search_term\n print \"\\t... found %d descriptors\" % tmp.size\n if data_frame is None:\n data_frame = tmp\n else:\n data_frame = data_frame.append(tmp)\n \nprint \"\\nFound %d descriptors in total.\" % data_frame.size",
"Do some data munging for easier analysis and then preview as a sanity check",
"from time import mktime\n\n# Extract a datetime and timestamp, for easier analysis\ndata_frame['ds'] = pd.to_datetime(data_frame.added_on.str[0:10], format='%Y-%m-%d')\ndata_frame['ts'] = pd.to_datetime(data_frame.added_on)\n\n# Extract the owner data\nowner = data_frame.pop('owner')\nowner = owner.apply(pd.Series)\ndata_frame = pd.concat([data_frame, owner.email, owner.name], axis=1)\n\n# Extract freeform 'tags' in the description\ndef extract_tags(text):\n return re.findall(r'\\[([a-zA-Z0-9\\:\\-\\_]+)\\]', text)\ndata_frame['tags'] = data_frame.description.map(lambda x: [] if x is None else extract_tags(x))\n\ndata_frame.head(n=5)",
"Create a Dashboard to Get a High-level View\nThe raw data is great, but it would be much better if we could take a higher-level view of the data. This dashboard will provide more insight into:\n\nwhat data is available\nwho's sharing it\nhow it is labeled\nhow much of it is likely to be directly applicable for alerting",
"import math\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom pytx.vocabulary import ThreatDescriptor as td\n\n%matplotlib inline\n\n# Setup subplots for our dashboard\nfig, axes = plt.subplots(nrows=4, ncols=2, figsize=(16,32))\naxes[0,0].set_color_cycle(sns.color_palette(\"coolwarm_r\", 15))\n\n# Plot by Type over time\ntype_over_time = data_frame.groupby(\n [pd.Grouper(freq='d', key='ds'), te.TYPE]\n ).count().unstack(te.TYPE)\ntype_over_time.added_on.plot(\n kind='line', \n stacked=True, \n title=\"Indicator Types Per Day (-\" + str(days_back) + \"d)\",\n ax=axes[0,0]\n)\n\n# Plot by threat_type over time\ntt_over_time = data_frame.groupby(\n [pd.Grouper(freq='w', key='ds'), 'threat_type']\n ).count().unstack('threat_type')\ntt_over_time.added_on.plot(\n kind='bar', \n stacked=True, \n title=\"Threat Types Per Week (-\" + str(days_back) + \"d)\",\n ax=axes[0,1]\n)\n\n# Plot the top 10 tags\ntags = pd.DataFrame([item for sublist in data_frame.tags for item in sublist])\ntags[0].value_counts().head(10).plot(\n kind='bar', \n stacked=True,\n title=\"Top 10 Tags (-\" + str(days_back) + \"d)\",\n ax=axes[1,0]\n)\n\n# Plot by who is sharing\nowner_over_time = data_frame.groupby(\n [pd.Grouper(freq='w', key='ds'), 'name']\n ).count().unstack('name')\nowner_over_time.added_on.plot(\n kind='bar', \n stacked=True, \n title=\"Who's Sharing Each Week? (-\" + str(days_back) + \"d)\",\n ax=axes[1,1]\n)\n\n# Plot the data as a timeseries of when it was published\ndata_over_time = data_frame.groupby(pd.Grouper(freq='6H', key='ts')).count()\ndata_over_time.added_on.plot(\n kind='line',\n title=\"Data shared over time (-\" + str(days_back) + \"d)\",\n ax=axes[2,0]\n)\n\n# Plot by status label\ndata_frame.status.value_counts().plot(\n kind='pie', \n title=\"Threat Statuses (-\" + str(days_back) + \"d)\",\n ax=axes[2,1]\n)\n\n# Heatmap by type / source\nowner_and_type = pd.DataFrame(data_frame[['name', 'type']])\nowner_and_type['n'] = 1\ngrouped = owner_and_type.groupby(['name', 'type']).count().unstack('type').fillna(0)\nax = sns.heatmap(\n data=grouped['n'], \n robust=True,\n cmap=\"YlGnBu\",\n ax=axes[3,0]\n)\n\n# These require a little data munging\n# translate a severity enum to a value\n# TODO Add this translation to Pytx\ndef severity_value(severity):\n if severity == 'UNKNOWN': return 0\n elif severity == 'INFO': return 1\n elif severity == 'WARNING': return 3\n elif severity == 'SUSPICIOUS': return 5\n elif severity == 'SEVERE': return 7\n elif severity == 'APOCALYPSE': return 10\n return 0\n# translate a severity \ndef value_severity(severity):\n if severity >= 9: return 'APOCALYPSE'\n elif severity >= 6: return 'SEVERE'\n elif severity >= 4: return 'SUSPICIOUS'\n elif severity >= 2: return 'WARNING'\n elif severity >= 1: return 'INFO'\n elif severity >= 0: return 'UNKNOWN'\n\n# Plot by how actionable the data is \n# Build a special dataframe and chart it\ndata_frame['severity_value'] = data_frame.severity.apply(severity_value)\ndf2 = pd.DataFrame({'count' : data_frame.groupby(['name', 'confidence', 'severity_value']).size()}).reset_index()\nax = df2.plot(\n kind='scatter', \n x='severity_value', y='confidence', \n xlim=(-1,11), ylim=(-10,110), \n title='Data by Conf / Sev With Threshold Line',\n ax=axes[3,1],\n s=df2['count'].apply(lambda x: 1000 * math.log10(x)),\n use_index=td.SEVERITY\n)\n# Draw a threshhold for data we consider likely using for alerts (aka 'high value')\nax.plot([2,10], [100,0], c='red')",
"Dive A Little Deeper\nTake a subset of the data and understand it a little more. \nIn this example, we presume that we'd like to take phishing-related data and study it, to see if we can use it to better defend a corporate network or fight abuse in a product. \nAs a simple example, we'll filter down to data labeled MALICIOUS that has the word phish in the description, to see if we can draw a more detailed conclusion on how to apply the data to our existing internal workflows.",
"from pytx.vocabulary import Status as s\n\n\nphish_data = data_frame[(data_frame.status == s.MALICIOUS) \n & data_frame.description.apply(lambda x: x.find('phish') if x != None else False)]\n# TODO: also filter for attack_type == PHISHING, when Pytx supports it\n\n%matplotlib inline\n\n# Setup subplots for our deeper dive plots\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16,8))\n\n# Heatmap of type / source\nowner_and_type = pd.DataFrame(phish_data[['name', 'type']])\nowner_and_type['n'] = 1\ngrouped = owner_and_type.groupby(['name', 'type']).count().unstack('type').fillna(0)\nax = sns.heatmap(\n data=grouped['n'], \n robust=True,\n cmap=\"YlGnBu\",\n ax=axes[0]\n)\n\n# Tag breakdown of the top 10 tags\ntags = pd.DataFrame([item for sublist in phish_data.tags for item in sublist])\ntags[0].value_counts().head(10).plot(\n kind='pie',\n title=\"Top 10 Tags (-\" + str(days_back) + \"d)\",\n ax=axes[1]\n)\n",
"Extract The High Confidence / Severity Data For Use\nWith a better understanding of the data, let's filter the MALICIOUS, REVIEWED_MANUALLY labeled data down to a pre-determined threshold for confidence + severity. \nYou can add more filters, or change the threshold, as you see fit.",
"from pytx.vocabulary import ReviewStatus as rs\n\n# define our threshold line, which is the same as the red, threshold line in the chart above\nsev_min = 2\nsev_max = 10\nconf_min= 0\nconf_max = 100\n\n# build a new series, to indicate if a row passes our confidence + severity threshold\ndef is_high_value(conf, sev):\n return (((sev_max - sev_min) * (conf - conf_max)) - ((conf_min - conf_max) * (sev - sev_min))) > 0\ndata_frame['is_high_value']= data_frame.apply(lambda x: is_high_value(x.confidence, x.severity_value), axis=1)\n\n# filter down to just the data passing our criteria, you can add more here to filter by type, source, etc.\nhigh_value_data = data_frame[data_frame.is_high_value \n & (data_frame.status == s.MALICIOUS)\n & (data_frame.review_status == rs.REVIEWED_MANUALLY)].reset_index(drop=True)\n\n# get a count of how much we kept\nprint \"Kept %d of %d data as high value\" % (high_value_data.size, data_frame.size)\n\n# ... and preview it\nhigh_value_data.head()",
"Now, output all of the high value data to a file as CSV or JSON, for consumption in our other systems and workflows.",
"use_csv = False\n\nif use_csv:\n file_name = 'threat_exchange_high_value.csv'\n high_value_data.to_csv(path_or_buf=file_name)\n print \"CSV data written to %s\" % file_name\nelse:\n file_name = 'threat_exchange_high_value.json'\n high_value_data.to_json(path_or_buf=file_name, orient='index')\n print \"JSON data written to %s\" % file_name"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Ykharo/notebooks | C elemental, querido Cython..ipynb | bsd-2-clause | [
"Cython, not CPython\nNo, the title is not a typo: today we are going to talk about Cython.\nWhat is Cython?\nCython is two things:\n\nOn one hand, Cython is a programming language (a superset of Python) that joins Python with the static type system of C and C++.\nOn the other hand, cython is a compiler that translates source code written in Cython into efficient C or C++ code. The resulting code can be used as a Python extension or as an executable.\n\nWow! How does that sound?\nThe goal is, basically, to take advantage of the strengths of Python and C, combining a simple syntax with power and speed.\nWith a few exceptions, Python code (both Python 2 and Python 3) is valid Cython code. In addition, Cython adds a series of keywords so that the C type system can be used with Python and so that the cython compiler can generate efficient C code.\nBut who uses Cython?\nYou may not know it, but you are probably using Cython every day. Sage has almost half a million lines of Cython (which is no small amount), Scipy and Pandas more than 20000, scikit-learn about 15000,...\nShall we get down to business?\nThe main idea of this first look at Cython is to start from a piece of Python code that is our bottleneck and to create successive versions that get faster and faster, or at least we will try.\nFor example, imagine that we have to detect local minima within a grid. The minima will simply be values lower than those in the 8 nodes immediately surrounding them. In the following graphic, the node in green is a node holding a minimum, and all the values around it are higher:\n<table>\n <tr>\n <td style=\"background:red\">(2, 0)</td>\n <td style=\"background:red\">(2, 1)</td>\n <td style=\"background:red\">(2, 2)</td>\n </tr>\n <tr>\n <td style=\"background:red\">(1, 0)</td>\n <td style=\"background:green\">(1, 1)</td>\n <td style=\"background:red\">(1, 2)</td>\n </tr>\n <tr>\n <td style=\"background:red\">(0, 0)</td>\n <td style=\"background:red\">(0, 1)</td>\n <td style=\"background:red\">(0, 2)</td>\n </tr>\n</table>\n\n[ASIDE] The numbers and percentages you see below may vary slightly depending on the machine where the code is run. Take the values as approximate.\nSetup\nAs always, we import a few libraries before starting to write code:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"We create a relatively large square matrix (4 million elements).",
"np.random.seed(0)\ndata = np.random.randn(2000, 2000)",
"The data is now ready, so we can start working. \nLet's create a Python function that looks for the minima as we have defined them.",
"def busca_min(malla):\n minimosx = []\n minimosy = []\n for i in range(1, malla.shape[1]-1):\n for j in range(1, malla.shape[0]-1):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)",
"Let's see how long this function takes on my machine:",
"%timeit busca_min(data)",
"Ouch, three seconds and a bit on an i7... If I have to look for the minima in 500 cases like this one, it will take me almost half an hour.\nJust in case, let's try numba and see whether it can solve the problem without much effort; this is very simple Python code in which we do not use any 'exotic' features of the language.",
"from numba import jit\n\n@jit\ndef busca_min_numba(malla):\n minimosx = []\n minimosy = []\n for i in range(1, malla.shape[1]-1):\n for j in range(1, malla.shape[0]-1):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\n%timeit busca_min_numba(data)",
"Ooooops! It looks like numba's magic does not work here.\nLet's specify the input and output types (and modify the output) to see whether anything improves:",
"from numba import jit\nfrom numba import int32, float64\n\n@jit(int32[:,:](float64[:,:]))\ndef busca_min_numba(malla):\n minimosx = []\n minimosy = []\n for i in range(1, malla.shape[1]-1):\n for j in range(1, malla.shape[0]-1):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array([minimosx, minimosy], dtype = np.int32)\n\n%timeit busca_min_numba(data)",
"Well, apparently not, the result is of the same order. Using the nopython option it throws a rather ugly error at me,... \nWe will have to keep waiting for numba to mature a little more. In my limited experience I have not yet achieved the effect I was looking for, and in most cases I get very cryptic errors. It is not that I do not trust the people behind it, I am only saying that it is not ready for 'production' yet. This is not meant to be a Cython/numba war; I only used numba to see whether, out of the box, it could improve things a bit. Since it did not, we forget about numba for now.\nCythonizing (take 1).\nThe simplest and most obvious thing is to use the cython compiler directly and see whether the Python code, used as-is, becomes a bit faster. To do this, we will use the magic functions that Cython makes available in the notebook. For now we will only talk about the %%cython magic function, although there are others.",
"# antes cythonmagic\n%load_ext Cython",
"The %%cython command lets us write Cython code in a cell. Once we run the cell, IPython takes care of grabbing the code, creating a Cython source file with a .pyx extension, compiling it to C and, if everything is correct, importing that file so that everything is available inside the notebook.\n[ASIDE] we can pass a series of arguments to the %%cython magic function. We will see a few of them in this analysis, but for now let's define one that lets us name the function that is created and compiled on the fly, -n or --name.",
"%%cython --name probandocython1\nimport numpy as np\n\ndef busca_min_cython1(malla):\n minimosx = []\n minimosy = []\n for i in range(1, malla.shape[1]-1):\n for j in range(1, malla.shape[0]-1):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)",
"The file will be created inside the cython folder available within the directory returned by the get_ipython_cache_dir function. Let's see the location of the file on my machine:",
"from IPython.utils.path import get_ipython_cache_dir\n\nprint(get_ipython_cache_dir() + '/cython/probandocython1.c')",
"I am not showing it here because the result is more than 2400 (!!) lines of C code.\nNow let's see how long it takes.",
"%timeit busca_min_cython1(data)",
"Well, it looks like without much effort we have managed to gain around 5% - 25% in performance (it will depend on the case). It is not a big deal, but Cython is capable of much more...\nCythonizing (take 2).\nIn this part we are going to introduce one of the keywords Cython adds to extend Python, cdef. The cdef keyword is used to statically 'type' variables in Cython (later we will see that it is also used to define functions). For example:\nPython\ncdef int var1, var2\ncdef float var3\nIn the code block above I have created two variables of integer type, var1 and var2, and one variable of float type, var3. The types above use the C nomenclature.\nLet's try to use cdef with some of the data types we have inside our function. To begin with, it is clear that I have several lists (minimosx and minimosy), we have the loop indices (i and j), and I am going to convert the range parameters into statically typed variables (ii and jj):",
"%%cython --name probandocython2\nimport numpy as np\n\ndef busca_min_cython2(malla):\n cdef list minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n minimosx = []\n minimosy = []\n for i in range(1, ii):\n for j in range(1, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\n%timeit busca_min_cython2(data)",
"What a disappointment... We have not gained much, we have slightly longer code and we are worse off than in take 1.\nIn reality, we are using Python objects such as lists (not a pure C/C++ type, although Cython declares it as a pointer to some Python struct type) or numpy arrays, and we have not typed the input and output variables.\n[ASIDE] When a Python type and a C type share the same name (for example, int), the C one takes precedence (because that is what we want, right?).\nCythonizing (take 3).\nIn Cython there are three kinds of functions: those defined in the Python space with def, those defined in the C space with cdef (yes, the same keyword we use to declare types) and those defined in both spaces with cpdef.\n\ndef: we have already seen it and it works as expected. Accessible from Python.\ncdef: It is not accessible from Python, so we will have to wrap it with a Python function in order to call it.\ncpdef: It is accessible from both Python and C, and Cython will take care of creating the 'wrapper' for us. This adds a bit more code and slightly degrades performance.\n\nIf we define a function with cdef, it should be a function that is used internally within the Cython module we are going to create and that does not need to be called from Python.\nLet's see an example of the above, defining the function's output as a tuple:",
"%%cython --name probandocython3\nimport numpy as np\n\ncdef tuple cbusca_min_cython3(malla):\n cdef list minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = []\n minimosy = []\n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\ndef busca_min_cython3(malla):\n return cbusca_min_cython3(malla)\n\n%timeit busca_min_cython3(data)",
"Well, we are still not very happy with these results.\nWe still have not typed the input value.\nThe %%cython magic function provides a series of options, among them -a or --annotate (in addition to the -n or --name we have already seen). If we pass this parameter we get a representation of the code with colors marking the slowest parts (darker yellow) and the most optimized ones (lighter), or the ones running at C speed (white). Let's use it to find out where our bottlenecks are (applied to the latest version of our code):",
"%%cython --annotate\nimport numpy as np\n\ncdef tuple cbusca_min_cython3(malla):\n cdef list minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = []\n minimosy = []\n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\ndef busca_min_cython3(malla):\n return cbusca_min_cython3(malla)",
"The if looks like the slowest part. We are using the input value, which has no Cython type defined.\nThe loops seem to be optimized (we declared the variables involved in the loops as unsigned int).\nBut every part the numpy array passes through does not look very optimized...\nCythonizing (take 4).\nRight now, by doing import numpy as np we have access to numpy's Python functionality. To access numpy's C functionality we have to cimport numpy.\ncimport is used to import special information from the numpy module at compile time. This information lives in the numpy.pxd file, which is part of the Cython distribution. cimport is also used to import from the C stdlib.\nLet's use this to declare the type of the numpy array.",
"%%cython --name probandocython4\nimport numpy as np\ncimport numpy as np\n\ncpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla):\n cdef list minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = []\n minimosy = []\n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\n%timeit busca_min_cython4(data)",
"Wow!!! We have just obtained a speed-up of roughly 25x to 30x.\nLet's check that the result is the same as that of the original function:",
"a, b = busca_min(data)\nprint(a)\nprint(b)\n\naa, bb = busca_min_cython4(data)\nprint(aa)\nprint(bb)\n\nprint(np.array_equal(a, aa))\n\nprint(np.array_equal(b, bb))",
"Well, it looks like it is :-)\nLet's see whether most of the previous code is now white, or at least lighter, using --annotate.",
"%%cython --annotate\nimport numpy as np\ncimport numpy as np\n\ncpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla):\n cdef list minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = []\n minimosy = []\n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)",
"Vemos que muchas de las partes oscuras ahora son más claras!!! Pero parece que sigue quedando espacio para la mejora.\nCythonizando, que es gerundio (toma 5).\nVamos a ver si definiendo el tipo del resultado de la función como un numpy array en lugar de como una tupla nos introduce alguna mejora:",
"%%cython --name probandocython5\nimport numpy as np\ncimport numpy as np\n\ncpdef np.ndarray[int, ndim = 2] busca_min_cython5(np.ndarray[double, ndim = 2] malla):\n cdef list minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = []\n minimosy = []\n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array([minimosx, minimosy])\n\n%timeit busca_min_cython5(data)",
"Vaya, parece que con respecto a la versión anterior solo obtenemos una ganancia de un 2% - 4%.\nCythonizando, que es gerundio (toma 6).\nVamos a dejar de usar listas y vamos a usar numpy arrays vacios que iremos 'rellenando' con numpy.append. A ver si usando todo numpy arrays conseguimos algún tipo de mejora:",
"%%cython --name probandocython6\nimport numpy as np\ncimport numpy as np\n\ncpdef tuple busca_min_cython6(np.ndarray[double, ndim = 2] malla):\n cdef np.ndarray[long, ndim = 1] minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = np.array([], dtype = np.int)\n minimosy = np.array([], dtype = np.int)\n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n np.append(minimosx, i)\n np.append(minimosy, j)\n\n return minimosx, minimosy\n\n%timeit busca_min_cython6(data)\n\nnp.append?",
"En realidad, en la anterior porción de código estoy usando algo muy ineficiente. La función numpy.append no funciona como una lista a la que vas anexando elementos. Lo que estamos haciendo en realidad es crear copias del array existente para convertirlo a un nuevo array con un elemento nuevo. Esto no es lo que pretendiamos!!!!\nCythonizando, que es gerundio (toma 7).\nEn Python existen arrays eficientes para valores numéricos (según reza la documentación) que también pueden ser usados de la forma en que estoy usando las listas en mi función (arrays vacios a los que les vamos añadiendo elementos). Vamos a usarlos con Cython.",
"%%cython --name probandocython7\nimport numpy as np\ncimport numpy as np\nfrom cpython cimport array as c_array\nfrom array import array\n\ncpdef tuple busca_min_cython7(np.ndarray[double, ndim = 2] malla):\n cdef c_array.array minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = array('L', [])\n minimosy = array('L', []) \n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\n%timeit busca_min_cython7(data)",
"Parece que hemos ganado otro 25% - 30% con respecto a lo anterior más eficiente que habíamos conseguido. Con respecto a la implementación inicial en Python puro tenemos una mejora de 30x - 35x veces la velocidad inicial.\nVamos a comprobar si seguimos teniendo los mismos resultados.",
"a, b = busca_min(data)\nprint(a)\nprint(b)\naa, bb = busca_min_cython7(data)\nprint(aa)\nprint(bb)\nprint(np.array_equal(a, aa))\nprint(np.array_equal(b, bb))",
"¿Qué pasa si el tamaño del array se incrementa?",
"data2 = np.random.randn(5000, 5000)\n%timeit busca_min(data2)\n%timeit busca_min_cython7(data2)\n\na, b = busca_min(data2)\nprint(a)\nprint(b)\naa, bb = busca_min_cython7(data2)\nprint(aa)\nprint(bb)\nprint(np.array_equal(a, aa))\nprint(np.array_equal(b, bb))",
"Parece que al ir aumentando el tamaño de los datos de entrada a la función los números son consistentes y el rendimiento se mantiene. En este caso concreto parece que ya hemos llegado a rendimientos de más de ¡¡35x!! con respecto a la implementación inicial.\nCythonizando, que es gerundio (toma 8).\nPodemos usar directivas de compilación que ayuden al compilador a decidir mejor qué es lo que tiene que hacer. Entre ellas se encuentra una opción que es boundscheck que evita mirar la posibilidad de obtener IndexError asumiendo que el código está libre de estos errores de indexación. Lo vamos a usar conjuntamente con wraparound. Esta última opción se encarga de evitar mirar indexaciones relativas al final del iterable (por ejemplo, mi_iterable[-1]). En este caso concreto, la segunda opción no aporta nada de mejora de rendimiento pero la dijamos ya que la hemos probado.",
"%%cython --name probandocython8\nimport numpy as np\ncimport numpy as np\nfrom cpython cimport array as c_array\nfrom array import array\ncimport cython\n\[email protected](False) \[email protected](False)\ncpdef tuple busca_min_cython8(np.ndarray[double, ndim = 2] malla):\n cdef c_array.array minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n minimosx = array('L', [])\n minimosy = array('L', []) \n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\n%timeit busca_min_cython8(data)",
"Parece que hemos conseguido arañar otro poquito de rendimiento.\nCythonizando, que es gerundio (toma 9).\nEn lugar de usar numpy arrays vamos a usar memoryviews. Los memoryviews son arrays de acceso rápido. Si solo queremos almacenar cosas y no necesitamos ninguna de las características de un numpy array pueden ser una buena solución. Si necesitamos alguna funcionalidad extra siempre lo podemos convertir en un numpy array usando numpy.asarray.",
"%%cython --name probandocython9\nimport numpy as np\ncimport numpy as np\nfrom cpython cimport array as c_array\nfrom array import array\ncimport cython\n\[email protected](False) \[email protected](False)\n#cpdef tuple busca_min_cython9(np.ndarray[double, ndim = 2] malla):\ncpdef tuple busca_min_cython9(double [:,:] malla):\n cdef c_array.array minimosx, minimosy\n cdef unsigned int i, j\n cdef unsigned int ii = malla.shape[1]-1\n cdef unsigned int jj = malla.shape[0]-1\n cdef unsigned int start = 1\n #cdef float [:, :] malla_view = malla\n minimosx = array('L', [])\n minimosy = array('L', []) \n for i in range(start, ii):\n for j in range(start, jj):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\n%timeit busca_min_cython9(data)",
"Parece que, virtualmente, el rendimiento es parecido a lo que ya teniamos por lo que parece que nos hemos quedado igual.\nBonus track\nVoy a intentar usar pypy (2.4 (CPython 2.7)) conjuntamente con numpypy para ver lo que conseguimos.",
"%%pypy\nimport numpy as np\nimport time\n\nnp.random.seed(0)\ndata = np.random.randn(2000,2000)\n\ndef busca_min(malla):\n minimosx = []\n minimosy = []\n for i in range(1, malla.shape[1]-1):\n for j in range(1, malla.shape[0]-1):\n if (malla[j, i] < malla[j-1, i-1] and\n malla[j, i] < malla[j-1, i] and\n malla[j, i] < malla[j-1, i+1] and\n malla[j, i] < malla[j, i-1] and\n malla[j, i] < malla[j, i+1] and\n malla[j, i] < malla[j+1, i-1] and\n malla[j, i] < malla[j+1, i] and\n malla[j, i] < malla[j+1, i+1]):\n minimosx.append(i)\n minimosy.append(j)\n\n return np.array(minimosx), np.array(minimosy)\n\nresx, resy = busca_min(data)\nprint(data)\nprint(len(resx), len(resy))\nprint(resx)\nprint(resy)\n\nt = []\nfor i in range(100):\n t0 = time.time()\n busca_min(data)\n t1 = time.time() - t0\n t.append(t1)\nprint(sum(t) / 100.)",
"El último valor del output anterior es el tiempo promedio después de repetir el cálculo 100 veces.\nWow!! Parece que sin hacer modificaciones tenemos que el resultado es 10x - 15x veces más rápido que el obtenido usando la función inicial. Y llega a ser solo 3.5x veces más lento que lo que hemos conseguido con Cython.\nResumen de resultados.\nVamos a ver los resultados completos en un breve resumen. Primero vamos a ver los tiempos de las diferentes versiones de la función busca_min_xxx:",
"funcs = [busca_min, busca_min_numba, busca_min_cython1,\n busca_min_cython2, busca_min_cython3,\n busca_min_cython4, busca_min_cython5,\n busca_min_cython6, busca_min_cython7,\n busca_min_cython8, busca_min_cython9]\nt = []\nfor func in funcs:\n res = %timeit -o func(data)\n t.append(res.best)\n\nindex = np.arange(len(t))\nplt.figure(figsize = (12, 6))\nplt.bar(index, t)\nplt.xticks(index + 0.4, [func.__name__[9:] for func in funcs])\nplt.tight_layout()",
"En el gráfico anterior, la primera barra corresponde a la función de partida (busca_min). Recordemos que la versión de pypy ha tardado unos 0.38 segundos.\nY ahora vamos a ver los tiempos entre busca_min (la versión original) y la última versión de cython que hemos creado, busca_min_cython9 usando diferentes tamaños de la matriz de entrada:",
"tamanyos = [10, 100, 500, 1000, 2000, 5000]\nt_p = []\nt_c = []\nfor i in tamanyos:\n data = np.random.randn(i, i)\n res = %timeit -o busca_min(data)\n t_p.append(res.best)\n res = %timeit -o busca_min_cython9(data)\n t_c.append(res.best)\n\nplt.figure(figsize = (10,6))\nplt.plot(tamanyos, t_p, 'bo-')\nplt.plot(tamanyos, t_c, 'ro-')\n\nratio = np.array(t_p) / np.array(t_c)\nplt.figure(figsize = (10,6))\nplt.plot(tamanyos, ratio, 'bo-')",
"Parece que conseguimos rendimientos que son 40 veces más rápidos que con Python puro que usa un numpy array de por medio (excepto para tamaños de arrays muy pequeños en los que el rendimiento no sería una gran problema).\nApuntes finales\nDespués de haber probado Python, Cython, Numba y Pypy:\nNumba:\n\n\nNumba no parece fácilmente generalizable a día de hoy (experiencia personal) y no soporta ni parece que soportará todas las características del lenguaje. La idea me parece increible pero creo que le falta todavía un poco de madurez.\n\n\nMe ha costado instalar numba y llvmlite en linux sin usar conda (con conda no lo he probado por lo que no puedo opinar).\n\n\n(Creo que JuanLu estaba preparando un post sobre Numba. Habrá que esperar a ver sus conclusiones).\nPypy:\n\n\nPypy ha funcionado como un titán sin necesidad de hacer modificaciones. \n\n\nDestacar que no tengo excesivas experiencias con el mismo \n\n\nInstalarlo no es tarea fácil (he intentado usar PyPy3 con numpypy y he fallado vilmente). Quería usar numpypy y al final he optado por descargar una versión portable con numpy de serie que quizá afecte al rendimiento ¿?.\n\n\nCython:\n\n\nMe ha parecido el más generalizable de todos. Se pueden crear paquetes para CPython, para Pypy,...\n\n\nNo lo he probado en Windows por lo que no sé lo doloroso que puede llegar a ser. Mañana lo probaré en el trabajo y ya dejaré un comentario por ahí.\n\n\nEl manejo no es tan evidente como con Numba y Pypy. Requiere entender como funcionan los tipos de C y requiere conocer una serie de interioridades de C. Sin duda es el que más esfuerzo requiere de las alternativas aquí expuestas pra este caso concreto y no generalizable.\n\n\nCreo que, una vez hecho el esfuerzo inicial de intentar entender un poco como funciona, se puede sacar un gran rendimiento del mismo en muchas situaciones.\n\n\nY después de haber leído todo esto pensad que, en la mayoría de situaciones, CPython no es tan lento como lo pintan (sobretodo con numpy) y que ¡¡¡LA OPTIMIZACIÓN PREMATURA ES LA RAÍZ DE TODOS LOS MALES!!!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kaushikpavani/neural_networks_in_python | src/linear_regression/linear_regression.ipynb | mit | [
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline ",
"Given a 2D set of points spanned by axes $x$ and $y$ axes, we will try to fit a line that best approximates the data. The equation of the line, in slope-intercept form, is defined by: $y = mx + b$.",
"def generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise):\n # randomly select x\n x = np.random.uniform(-abs_value, abs_value, num_points)\n # y = mx + b + noise\n y = slope*x + intercept + np.random.uniform(-abs_noise, abs_noise, num_points)\n return x, y\n\ndef plot_points(x,y):\n plt.scatter(x, y)\n plt.title('Scatter plot of x and y')\n plt.xlabel('x')\n plt.ylabel('y')\n\nslope = 4\nintercept = -3\nnum_points = 20\nabs_value = 4\nabs_noise = 2\nx, y = generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise)\nplot_points(x, y)",
"If $N$ = num_points, then the error in fitting a line to the points (also defined as Cost, $C$) can be defined as:\n$C = \\sum_{i=0}^{N} (y-(mx+b))^2$\nTo perform gradient descent, we need the partial derivatives of Cost $C$ with respect to slope $m$ and intercept $b$.\n$\\frac{\\partial C}{\\partial m} = \\sum_{i=0}^{N} -2(y-(mx+b)).x$\n$\\frac{\\partial C}{\\partial b} = \\sum_{i=0}^{N} -2(y-(mx+b))$",
"# this function computes gradient with respect to slope m\ndef grad_m (x, y, m, b):\n return np.sum(np.multiply(-2*(y - (m*x + b)), x))\n\n# this function computes gradient with respect to intercept b\ndef grad_b (x, y, m, b):\n return np.sum(-2*(y - (m*x + b)))\n\n# Performs gradient descent\ndef gradient_descent (x, y, num_iterations, learning_rate):\n # Initialize m and b\n m = np.random.uniform(-1, 1, 1)\n b = np.random.uniform(-1, 1, 1)\n # Update m and b in direction opposite to that of the gradient to minimize loss \n for i in range(num_iterations):\n m = m - learning_rate * grad_m (x, y, m, b)\n b = b - learning_rate * grad_b (x, y, m, b)\n # Return final slope and intercept\n return m, b\n\n# Plot point along with the best fit line\ndef plot_line (m, b, x, y):\n plot_points(x,y)\n plt.plot(x, x*m + b, 'r')\n plt.show()\n\n# In general, keep num_iterations high and learning_rate low.\nnum_iterations = 1000\nlearning_rate = 0.0001\n\nm, b = gradient_descent (x, y, num_iterations, learning_rate)\nplot_line (m, b, x, y)\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
AtmaMani/pyChakras | udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb | mit | [
"Introduction to Spark and Python\nLet's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code.\nThis notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing.\nCreating a SparkContext\nFirst we need to create a SparkContext. We will import this from pyspark:",
"from pyspark import SparkContext",
"Now create the SparkContext,A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster.\nNote! You can only have one SparkContext at a time the way we are running things here.",
"sc = SparkContext()",
"Basic Operations\nWe're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.\n\nLet's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file:",
"%%writefile example.txt\nfirst line\nsecond line\nthird line\nfourth line",
"Creating the RDD\nNow we can take in the textfile using the textFile method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all\nnodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.",
"textFile = sc.textFile('example.txt')",
"Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. \nActions\nWe have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows.\nRDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions:",
"textFile.count()\n\ntextFile.first()",
"Transformations\nNow we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In which case, there should only be one line that has that.",
"secfind = textFile.filter(lambda line: 'second' in line)\n\n# RDD\nsecfind\n\n# Perform action on transformation\nsecfind.collect()\n\n# Perform action on transformation\nsecfind.count()",
"Notice how the transformations won't display an output and won't be run until an action is called. In the next lecture: Advanced Spark and Python we will begin to see many more examples of this transformation and action relationship!\nGreat Job!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mlund/kirkwood-buff | nacl-water/nacl.ipynb | mit | [
"Kirkwood-Buff example: NaCl in water\nIn this example we calculate Kirkwood-Buff integrals in a solute (c) and solvent (w) system and correct for finite size effects as described at http://dx.doi.org/10.1073/pnas.0902904106 (see Supporting Information).",
"%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport mdtraj as md\nfrom math import pi\nfrom scipy import integrate\nplt.rcParams.update({'font.size': 16})",
"Load gromacs trajectory/topology\nGromacs was used to sample a dilute solution of sodium chloride in SPC/E water for 100 ns.\nThe trajectory and .gro loaded below have been stripped from hydrogens to reduce disk space.",
"traj = md.load('gmx/traj_noh.xtc', top='gmx/conf_noh.gro')\ntraj",
"Calculate average number densities for solute and solvent",
"volume=0\nfor vec in traj.unitcell_lengths: \n volume = volume + vec[0]*vec[1]*vec[2] / traj.n_frames\nN_c = len(traj.topology.select('name NA or name CL'))\nN_w = len(traj.topology.select('name O'))\nrho_c = N_c / volume\nrho_w = N_w / volume\nprint \"Simulation time = \", traj.time[-1]*1e-3, 'ns'\nprint \"Average volume = \", volume, 'nm-3'\nprint \"Average side-length = \", volume**(1/3.), 'nm'\nprint \"Number of solute molecules = \", N_c\nprint \"Number of water molecules = \", N_w\nprint \"Solute density = \", rho_c, 'nm-3'\nprint \"Water density = \", rho_w, 'nm-3'\n\nsteps=range(traj.n_frames)\nplt.xlabel('steps')\nplt.ylabel('box sidelength, x (nm)')\nplt.plot(traj.unitcell_lengths[:,0])",
"Compute and plot RDFs\nNote: The radial distribution function in mdtraj differs from i.e. Gromacs g_rdf in\nthe way data is normalized and the $g(r)$ may need rescaling. It seems that densities\nare calculated by the number of selected pairs which for the cc case exclude all the\nself terms. This can be easily corrected and is obviously not needed for the wc case.",
"rmax = (volume)**(1/3.)/2\nselect_cc = traj.topology.select_pairs('name NA or name CL', 'name NA or name CL')\nselect_wc = traj.topology.select_pairs('name NA or name CL', 'name O')\nr, g_cc = md.compute_rdf(traj, select_cc, r_range=[0.0,rmax], bin_width=0.01, periodic=True)\nr, g_wc = md.compute_rdf(traj, select_wc, r_range=[0.0,rmax], bin_width=0.01, periodic=True)\ng_cc = g_cc * len(select_cc) / (0.5*N_c**2) # re-scale to account for diagonal in pair matrix\n\nnp.savetxt('g_cc.dat', np.column_stack( (r,g_cc) ))\nnp.savetxt('g_wc.dat', np.column_stack( (r,g_wc) ))\n\nplt.xlabel('$r$/nm')\nplt.ylabel('$g(r)$')\nplt.plot(r, g_cc, 'r-')\nplt.plot(r, g_wc, 'b-')",
"Calculate KB integrals\nHere we calculate the number of solute molecules around other solute molecules (cc) and around water (wc).\nFor example,\n$$ N_{cc} = 4\\pi\\rho_c\\int_0^{\\infty} \\left ( g(r)_{cc} -1 \\right ) r^2 dr$$\nThe preferential binding parameter is subsequently calculated as $\\Gamma = N_{cc}-N_{wc}$.",
"dr = r[1]-r[0]\nN_cc = rho_c * 4*pi*np.cumsum( ( g_cc - 1 )*r**2*dr )\nN_wc = rho_c * 4*pi*np.cumsum( ( g_wc - 1 )*r**2*dr )\nGamma = N_cc - N_wc\nplt.xlabel('$r$/nm')\nplt.ylabel('$\\\\Gamma = N_{cc}-N_{wc}$')\nplt.plot(r, Gamma, 'r-')",
"Finite system size corrected KB integrals\nAs can be seen in the above figure, the KB integrals do not converge since in a finite sized $NVT$ simulation,\n$g(r)$ can never exactly go to unity at large separations.\nTo correct for this, a simple scaling factor can be applied, as describe in the link on top of the page,\n$$ g_{gc}^{\\prime} (r) = g_{jc}(r) \\cdot\n \\frac{N_j\\left (1-V(r)/V\\right )}{N_j\\left (1-V(r)/V\\right )-\\Delta N_{jc}(r)-\\delta_{jc}} $$\nLastly, we take a little extra care in producing a refined PDF file for the uncorrected and\ncorrected integrals.",
"Vn = 4*pi/3*r**3 / volume\ng_ccc = g_cc * N_c * (1-Vn) / ( N_c*(1-Vn)-N_cc-1)\ng_wcc = g_wc * N_w * (1-Vn) / ( N_w*(1-Vn)-N_wc-0)\nN_ccc = rho_c * 4*pi*dr*np.cumsum( ( g_ccc - 1 )*r**2 )\nN_wcc = rho_c * 4*pi*dr*np.cumsum( ( g_wcc - 1 )*r**2 )\nGammac = N_ccc - N_wcc\nplt.xlabel('$r$/nm')\nplt.ylabel('$\\\\Gamma = N_{cc}-N_{wc}$')\nplt.plot(r, Gamma, color='red', ls='-', lw=2, label='uncorrected')\nplt.plot(r, Gammac, color='green', lw=2, label='corrected')\nplt.legend(loc=0,frameon=False, fontsize=16)\nplt.yticks( np.arange(-0.4, 0.5, 0.1))\nplt.ylim((-0.45,0.45))\nplt.savefig('gamma.pdf', bbox_inches='tight')",
"Exercises\n\nPlot the average solute concentration as a function of simulation time.\nPlot $\\Gamma$ for $g(r)$'s calculated with only $1/_2$ and $1/_4$ of the frames in the trajectory. Discuss how long one needs to simulate to get a good estimate.\nExplain the finite size correction factor for the KB integrals.\nThe preferential binding parameter is related to the activity coefficient derivative with respect to the molar salt concentration (see article link at the top). Collect experimental data of the activity coefficient vs. concentration; load it into this Notebook and judge if the current NaCl model is sound. Good sources are\nRobinson and Stokes \"Electrolyte Solutions\" and the CRC Press Handbook of Chemistry and Physics.\n\nCredits\nVidar Aspelin & Mikael Lund. http://www.teokem.lu.se"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kimkipyo/dss_git_kkp | 통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/14.Pandas 고급 인덱싱.ipynb | mit | [
"Pandas 고급 인덱싱\npandas는 numpy 행렬과 같이 comma를 사용한 복수 인덱싱을 지원하기 위해 다음과 같은 특별한 인덱서 속성을 제공한다.\n\nix : 라벨과 숫자를 동시에 지원하는 복수 인덱싱\nloc : 라벨 기반의 복수 인덱싱\niloc : 숫자 기반의 복수 인덱싱\n\nix 인덱서\n\n행(Row)/열(Column) 양쪽에서 라벨 인덱싱, 숫자 인덱싱, 불리언 인덱싱(행만) 동시 가능\n단일 숫자 인덱싱 가능\n열(column)도 라벨이 아닌 숫자 인덱싱 가능\n열(column)도 라벨 슬라이싱(label slicing) 가능",
"data = {\n 'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],\n 'year': [2000, 2001, 2002, 2001, 2002],\n 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]\n}\n\ndf = pd.DataFrame(data)\ndf\n\n# 순차적 indexing 과 동일\ndf.ix[1:3, [\"state\", \"pop\"]]\n\ndf2 = pd.DataFrame(data,\n columns=['year', 'state', 'pop'],\n index=['one', 'two', 'three', 'four', 'five'])\ndf2\n\n# , 이용\ndf2.ix[[\"two\", \"three\"], [\"state\", \"pop\"]]\n\n# column에도 integer 기반 indexing 가능\ndf2.ix[[\"two\", \"three\"], :2]\n\n# column에도 Label Slicing 가능\ndf2.ix[[\"two\", \"three\"], \"state\":\"pop\"]\n\n# `:` 사용\ndf2.ix[:, [\"state\", \"pop\"]]\n\n# `:` 사용\ndf2.ix[[\"two\", \"five\"], :]",
"Index Label이 없는 경우의 주의점\n\nLabel이 지정되지 않는 경우에는 integer slicing을 label slicing으로 간주하여 마지막 값을 포함한다",
"df = pd.DataFrame(np.random.randn(5, 3))\ndf\n\ndf.columns = [\"c1\", \"c2\", \"c3\"]\ndf.ix[0:2, 1:2]",
"loc 인덱서\n\n\n라벨 기준 인덱싱\n\n\n숫자가 오더라도 라벨로 인식한다.\n\n라벨 리스트 가능\n라벨 슬라이싱 가능\n불리언 배열 가능\n\niloc 인덱서\n\n\n숫자 기준 인덱싱\n\n\n문자열 라벨은 불가\n\n숫자 리스트 가능\n숫자 슬라이싱 가능\n불리언 배열 가능",
"np.random.seed(1)\ndf = pd.DataFrame(np.random.randint(1, 11, size=(4,3)), \n columns=[\"A\", \"B\", \"C\"], index=[\"a\", \"b\", \"c\", \"d\"])\ndf\n\ndf.ix[[\"a\", \"c\"], \"B\":\"C\"]\n\ndf.ix[[0, 2], 1:3]\n\ndf.loc[[\"a\", \"c\"], \"B\":\"C\"]\n\ndf.ix[2:4, 1:3]\n\ndf.loc[2:4, 1:3]\n\ndf.iloc[2:4, 1:3]\n\ndf.iloc[[\"a\", \"c\"], \"B\":\"C\"]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |