Topics covered at the February study session

I studied machine learning (Scikit-learn, TensorFlow) through MOOCs (Udacity, Coursera).

MOOCs

Andrew Ng's course is the famous one, but Google's Deep Learning course on Udacity is nice because it has you work hands-on with Scikit-learn and TensorFlow.

Neural Networks for Machine Learning

Deep Learning - Udacity

Setup

Scikit-learn

Python: Extracting common elements

v = [1,2,3]
w = [2,3,4]
[x for x in v if x in w] # [2,3]
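
As a side note (my own addition, not from the session): `x in w` scans the whole list each time, so for longer lists the same result is cheaper to compute with a set:

ws = set(w)                    # O(1) membership tests instead of a list scan
[x for x in v if x in ws]      # [2, 3], order of v preserved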

TensorFlow

This part was about performing the same task in TensorFlow.

I was satisfied with the result: a 3-layer NN built like this reached an accuracy of 94.2%. An excerpt of the code looks like this.

import tensorflow as tf  # the code below uses the TensorFlow 1.x API

def weight_variable(shape):
    # Truncated-normal initialization; the stddev is scaled down by the
    # combined layer sizes (a rough Xavier-style heuristic)
    initial = tf.truncated_normal(shape, stddev=1.732/sum(shape))
    return tf.Variable(initial)

def bias_variable(shape):
    # Small positive constant so the ReLU units start active
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
  
batch_size = 128
hidden_layer_size = 1024
input_layer_size = 28*28
output_layer_size = 10

graph = tf.Graph()
with graph.as_default():
    # Input data. The training data is fed one minibatch at a time
    # through placeholders; valid_dataset and test_dataset come from
    # the notebook's earlier preprocessing step.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, input_layer_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, output_layer_size))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)
    
    # Layer 1 variables (input -> hidden)
    weight1 = weight_variable( (input_layer_size, hidden_layer_size) )
    bias1 = bias_variable( [hidden_layer_size] )

    # Hidden layer with ReLU activation
    hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, weight1) + bias1)
    
    # Layer 2 variables (hidden -> output)
    weight2 = weight_variable( (hidden_layer_size, output_layer_size) )
    bias2 = bias_variable( [output_layer_size] )
    
    logits = tf.matmul(hidden_layer, weight2) + bias2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
    
    # Optimizer: Adam (a plain gradient-descent alternative is left commented out)
    optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
    # optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    
    # Predictions; the validation/test paths reuse the same trained weights
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weight1) + bias1), weight2) + bias2)
    test_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weight1) + bias1), weight2) + bias2)
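
The excerpt above only defines the graph; it still has to be run in a session. Here is a minimal sketch of the training loop in the same TF1 style as the Udacity notebook (`train_dataset`, `train_labels`, and `test_labels` are assumed to come from the notebook's preprocessing step, so this is an illustration, not the exact code from the session):

import numpy as np

def accuracy(predictions, labels):
    # Percentage of rows whose argmax matches the one-hot label
    return 100.0 * np.mean(np.argmax(predictions, 1) == np.argmax(labels, 1))

num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    for step in range(num_steps):
        # Cycle through the training data one minibatch at a time
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset: batch_data,
                     tf_train_labels: batch_labels}
        _, l, _ = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if step % 500 == 0:
            print("Minibatch loss at step %d: %f" % (step, l))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))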
    
