[PYTHON] I tried the TensorFlow tutorial 2nd

I tried the official TensorFlow tutorial. This is a continuation of "I tried the TensorFlow tutorial 1st".

Getting Started with TensorFlow

tf.train API

TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function.

The optimizer settings look like this.

>>>optimizer = tf.train.GradientDescentOptimizer(0.01)
>>>train = optimizer.minimize(loss)
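For reference, minimize() is shorthand for computing the gradients and then applying them; with the same optimizer the step above could also be written like this (an equivalent sketch, not taken from the tutorial):

>>>grads_and_vars = optimizer.compute_gradients(loss)  # list of (gradient, variable) pairs for W and b
>>>train = optimizer.apply_gradients(grads_and_vars)   # applies one gradient-descent update per run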

Let's train for 1000 iterations for now.

>>>sess.run(init) # reset values to incorrect defaults.
>>>for i in range(1000):
>>>  sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
>>>print(sess.run([W, b]))

Then the final model parameters are obtained.

print output


[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
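These values make sense: every training pair used above lies exactly on the line y = -1 * x + 1, so W = -1, b = 1 is the exact minimizer of the loss. A quick sanity check in plain Python (not part of the tutorial):

# Each (x, y) training pair satisfies y = -1 * x + 1 exactly.
for x_i, y_i in zip([1, 2, 3, 4], [0, -1, -2, -3]):
    assert y_i == -1 * x_i + 1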

Complete program

The following source code puts together what was done in the interpreter above. What it solves is a simple linear regression problem.
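For reference (not part of the tutorial), here is the same sum-of-squares loss computed by hand with NumPy for the initial parameters W = 0.3 and b = -0.3 used below:

import numpy as np

# Sum-of-squares loss for the initial parameters, computed by hand.
W, b = 0.3, -0.3
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
print(np.sum((W * x + b - y) ** 2))  # 23.66, the loss before any training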

import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to incorrect defaults
for i in range(1000):
  sess.run(train, {x: x_train, y: y_train})
  
  #I added this so I can watch the learning process
  if i%100 == 0:
      curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
      print("%d times W: %s b: %s loss: %s"%(i,curr_W, curr_b, curr_loss))

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))

When you run it, the output is:

0 times W: [-0.21999997] b: [-0.456] loss: 4.01814
100 times W: [-0.84270465] b: [ 0.53753263] loss: 0.14288
200 times W: [-0.95284992] b: [ 0.86137295] loss: 0.0128382
300 times W: [-0.98586655] b: [ 0.95844597] loss: 0.00115355
400 times W: [-0.99576342] b: [ 0.98754394] loss: 0.000103651
500 times W: [-0.99873012] b: [ 0.99626648] loss: 9.3124e-06
600 times W: [-0.99961936] b: [ 0.99888098] loss: 8.36456e-07
700 times W: [-0.99988592] b: [ 0.9996646] loss: 7.51492e-08
800 times W: [-0.99996579] b: [ 0.99989945] loss: 6.75391e-09
900 times W: [-0.99998969] b: [ 0.99996972] loss: 6.12733e-10
W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11

You can see that the model is learning. After about 300 iterations the error is almost zero.
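With the session still open, the trained graph can also be used to predict new inputs by feeding the x placeholder; a minimal sketch (the printed values are approximate and not from the tutorial):

# Predict with the trained model by feeding new x values into the same graph.
print(sess.run(linear_model, {x: [5., 6., 7.]}))  # roughly [-4, -5, -6], since y ~ -x + 1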

tf.estimator

tf.estimator is a high-level TensorFlow library that simplifies the following parts of machine learning:

- Execution of the training loop
- Execution of the evaluation loop
- Dataset management

Basic usage

With tf.estimator, the linear regression program above becomes much simpler.

import tensorflow as tf
#NumPy is often used for loading, manipulating, and preprocessing data.
import numpy as np

#Declare a list of features. There is only one numeric feature here. There are many other kinds of columns that are more complicated and useful.
feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]

#The estimator is the front end for invoking training (fitting) and evaluation (inference).
#There are many predefined types like linear regression, linear classification,
#and many neural network classifiers and regressors.
#The following code provides an estimator that does linear regression.
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

#TensorFlow provides many helper methods for loading and configuring datasets.
#We will use two datasets, one for training and one for evaluation.
#We have to tell the function how many batches of data (num_epochs) we want and how big each batch should be.
x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])
x_eval = np.array([2., 5., 8., 1.])
y_eval = np.array([-1.01, -4.1, -7, 0.])
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)

#We can invoke 1000 training steps by calling this method and passing the training data set.
estimator.train(input_fn=input_fn, steps=1000)

#Model evaluation
train_metrics = estimator.evaluate(input_fn=train_input_fn)
eval_metrics = estimator.evaluate(input_fn=eval_input_fn)
print("train metrics: %r"% train_metrics)
print("eval metrics: %r"% eval_metrics)

output

train metrics: {'loss': 1.2712867e-09, 'global_step': 1000}
eval metrics: {'loss': 0.0025279333, 'global_step': 1000}
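The trained estimator can also produce predictions for new inputs via estimator.predict; a minimal sketch continuing from the code above (the 'predictions' key is what canned regressors typically return; this output is not shown in the tutorial):

#Hypothetical follow-up: predictions for new x values from the trained LinearRegressor.
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": np.array([5., 6., 7.])}, num_epochs=1, shuffle=False)
for pred in estimator.predict(input_fn=predict_input_fn):
    print(pred)  # each dict should hold a 'predictions' value close to -x + 1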

Custom model

This section shows how to use an Estimator when you build the model yourself.

import numpy as np
import tensorflow as tf

#Declare the list of features; here we only have one real-valued feature.
def model_fn(features, labels, mode):
  #Build a linear model and predict values
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W * features['x'] + b
  # Loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # Training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  #EstimatorSpec connects the subgraphs we built to the appropriate functionality.
  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=y,
      loss=loss,
      train_op=train)

estimator = tf.estimator.Estimator(model_fn=model_fn)
# define our data sets
x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])
x_eval = np.array([2., 5., 8., 1.])
y_eval = np.array([-1.01, -4.1, -7, 0.])
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)

# train
estimator.train(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_metrics = estimator.evaluate(input_fn=train_input_fn)
eval_metrics = estimator.evaluate(input_fn=eval_input_fn)
print("train metrics: %r"% train_metrics)
print("eval metrics: %r"% eval_metrics)

output

train metrics: {'loss': 1.227995e-11, 'global_step': 1000}
eval metrics: {'loss': 0.01010036, 'global_step': 1000}
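As a side note, the explicit tf.group of the minimize op and the global_step increment in model_fn can be written more compactly, since minimize() can increment global_step itself; an equivalent sketch of that training sub-graph (inside model_fn, same names as above):

# Equivalent training sub-graph: minimize() increments global_step on each step.
global_step = tf.train.get_global_step()
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss, global_step=global_step)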
