Google has released its machine learning library "TensorFlow", so I immediately tried one of the tutorials.
Install as follows.
$ pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
So far it appears to support only Python 2; when I also tried installing it under Python 3, it failed. Dependencies such as numpy are installed automatically.
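As a quick sanity check that the install worked (a minimal one-liner of my own, not from the tutorial; it should print 3 if everything is in place):
$ python -c "import tensorflow as tf; print tf.Session().run(tf.add(1, 2))"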
Below, I worked through "MNIST For ML Beginners" from the official site, adding notes as I went. (Note that input_data is not a TensorFlow function; it is the separate input_data script linked from the original post, which must be saved as input_data.py alongside this code.)
# -*- coding: utf-8 -*-
import tensorflow as tf
import input_data
# Acquire the MNIST data.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Define a placeholder to hold the input.
# Specifying None for a dimension in shape (the 2nd argument) means that dimension can have any length (here, any number of inputs).
x = tf.placeholder("float", [None, 784])
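# (784 = 28 x 28: each MNIST image is flattened into a 784-pixel vector.)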
# Define Variables to hold the weights and biases.
# A Variable is a value that operations can modify during the computation.
W = tf.Variable(tf.zeros([784, 10])) # Weights: receive a 784-dimensional input and return a 10-dimensional output
b = tf.Variable(tf.zeros([10])) # Bias added to the 10-dimensional output
# Define the neural net model.
# Take the matrix product of input x and weights W (tf.matmul), add the bias b, and let softmax determine the final output.
y = tf.nn.softmax(tf.matmul(x, W) + b)
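# (softmax turns the 10 raw scores into a probability distribution:
#  softmax(z)_i = exp(z_i) / sum_j exp(z_j), so the outputs are positive and sum to 1.)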
# That completes the definition of the neural network model.
# What follows implements training / evaluation.
# (In this sample, learning proceeds by evaluating the cross entropy.)
# Define a placeholder to hold the teacher data (the correct answers).
y_ = tf.placeholder("float", [None, 10])
# Define the formula for computing the cross entropy.
# Multiply the teacher data y_ by the logarithm of the model output y, and take the sum over everything.
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
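# (Note: if the model ever outputs an exact 0, tf.log(y) becomes -inf and the
#  loss blows up; this sample follows the tutorial and ignores that edge case.)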
# Define how the model is updated at each step.
# Each step minimizes the cross entropy with a learning rate of 0.01.
# (Backpropagation is the usual way to train a neural net, and TensorFlow seems to work this out from what the model is.)
# (Since y = tf.nn.softmax(...) here, it seems backpropagation is what updates the network.)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
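# (minimize() both computes the gradients and applies the parameter update as a single op.)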
#Get ready to initialize all variables.
init = tf.initialize_all_variables()
#Define Session and initialize all variables.
sess = tf.Session()
sess.run(init)
# Run the training.
# Feed the MNIST data to the Session and update the model according to the definition of train_step.
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
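# (Each step trains on a random batch of 100 examples instead of the whole
#  training set: stochastic training, which is much cheaper and works nearly as well.)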
# Define the evaluation formula for measuring performance.
# Check whether the model output y matches the teacher data y_ for each input.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
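# (tf.argmax(y, 1) is the label the model thinks most likely for each input;
#  tf.argmax(y_, 1) is the true label, since y_ is one-hot.)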
# Define the final performance figure as the mean of those matches (the booleans are cast to 0./1. floats).
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
#Evaluate performance using test data and its labels.
print sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
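For reference, the official tutorial reports that this model reaches an accuracy of about 91% on the test data.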
Having implemented this, my impression is that TensorFlow is great as a machine learning tool, but also quite excellent as a numerical analysis tool: matrix computations are easy to define and run fast. Implementing the learner is fairly simple compared to other libraries, but with this tutorial alone a lot of it stays a black box, which bothered me a little. Some of the official tutorials cover deep learning, so I'd love to try that too.
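As a postscript, here is one of those black boxes opened up: what the softmax and cross-entropy lines compute, sketched in plain numpy with hypothetical toy values (my own illustration, not part of the tutorial).
# -*- coding: utf-8 -*-
import numpy as np

# A toy vector of 10 raw class scores (hypothetical values).
z = np.array([1.0, 2.0, 0.5, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# softmax: exponentiate and normalize so the outputs sum to 1.
y = np.exp(z) / np.sum(np.exp(z))

# One-hot teacher vector saying the correct class is index 1.
y_ = np.zeros(10)
y_[1] = 1.0

# Cross entropy: -sum(y_ * log(y)); small when y puts its weight on the true class.
print -np.sum(y_ * np.log(y))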