Using TensorFlow in the Cloud9 Integrated Development Environment ~ Get Started ~

Introduction

I work in the cloud-based integrated development environment Cloud9. Since I am studying machine learning, I set up an environment for studying TensorFlow on Cloud9. This post is my note on that setup.

Environment

Cloud9
Python 2.7.6
Sample code: GitHub

Procedure

The steps below get you as far as running the Get Started example of TensorFlow on Cloud9.

  1. Create a new workspace in Cloud9. Select python as the template.
  2. In Terminal, do the following:
    export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
    sudo pip install --upgrade $TF_BINARY_URL
  3. Save and run the Get Started sample program. It is the same code as posted in the GET STARTED section of the TensorFlow site.

getstarted.py


import tensorflow as tf
import numpy as np

# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but TensorFlow will
# figure that out for us.)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Before starting, initialize the variables.  We will 'run' this first.
init = tf.initialize_all_variables()

# Launch the graph.
sess = tf.Session()
sess.run(init)

# Fit the line.
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))

# Learns best fit is W: [0.1], b: [0.3]

If the execution result converges close to 0.1 and 0.3, the coefficients of the line have been successfully learned. My run is shown below, and the values do approach 0.1 and 0.3.
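As a sanity check independent of TensorFlow, the same coefficients can be recovered in closed form with NumPy's least-squares solver. This is just a sketch on freshly generated data of the same shape as in getstarted.py, not part of the original script:

```python
import numpy as np

# Generate the same kind of phony data as getstarted.py: y = x * 0.1 + 0.3.
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Solve y = W * x + b directly: design matrix with a column of ones for the bias b.
A = np.stack([x_data, np.ones_like(x_data)], axis=1)
(W, b), _, _, _ = np.linalg.lstsq(A, y_data, rcond=None)

print(W, b)  # both very close to 0.1 and 0.3, since the data is noise-free
```

Because the data has no noise, the closed-form answer is essentially exact; gradient descent in the TensorFlow script approaches the same values step by step instead.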

(0, array([ 0.45364389], dtype=float32), array([ 0.13226086], dtype=float32))
(20, array([ 0.18247673], dtype=float32), array([ 0.25206894], dtype=float32))
(40, array([ 0.12017135], dtype=float32), array([ 0.28827751], dtype=float32))
(60, array([ 0.10493329], dtype=float32), array([ 0.29713303], dtype=float32))
(80, array([ 0.10120656], dtype=float32), array([ 0.29929882], dtype=float32))
(100, array([ 0.10029508], dtype=float32), array([ 0.29982853], dtype=float32))
(120, array([ 0.10007217], dtype=float32), array([ 0.29995808], dtype=float32))
(140, array([ 0.10001764], dtype=float32), array([ 0.29998976], dtype=float32))
(160, array([ 0.10000434], dtype=float32), array([ 0.29999751], dtype=float32))
(180, array([ 0.10000106], dtype=float32), array([ 0.29999939], dtype=float32))
(200, array([ 0.10000025], dtype=float32), array([ 0.29999986], dtype=float32))

Run MNIST

The program that recognizes digits from images of handwritten numbers also ran successfully. Just download convolutional.py from GitHub and run it (it is the same as the source on the official website). On Cloud9 it took about 2 minutes per 100 steps and roughly 2 hours in total, so the machine specs are clearly not enough: the Cloud9 workspace has 1 CPU and 512 MB of RAM. The execution result is below (the middle is omitted because it is long).

python convolutional.py 

Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
Initialized!
Step 0 (epoch 0.00), 7.7 ms
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Step 100 (epoch 0.12), 706.3 ms
Minibatch loss: 3.287, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.0%
Step 200 (epoch 0.23), 713.8 ms
...
...
Step 5300 (epoch 6.17), 1937.9 ms
Minibatch loss: 1.980, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 5400 (epoch 6.28), 2089.9 ms

I am not yet sure how to read all of this output, but since the Minibatch error and Validation error percentages are small, the model does seem able to identify the digits. This time I just ran it as-is; I would like to understand the contents in the future.
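The "Minibatch error" and "Validation error" lines are simply the percentage of examples the model misclassifies. A minimal sketch of that calculation, using small hypothetical prediction and label arrays (not the script's actual tensors):

```python
import numpy as np

# Hypothetical model outputs: one score per digit class, per example (4 examples, 3 classes shown).
predictions = np.array([
    [0.1, 0.8, 0.1],   # highest score -> predicted class 1
    [0.7, 0.1, 0.2],   # predicted class 0
    [0.2, 0.2, 0.6],   # predicted class 2
    [0.9, 0.0, 0.1],   # predicted class 0
])
labels = np.array([1, 0, 2, 1])  # true digit for each example

# Error rate = percentage of examples whose highest-scoring class is wrong.
error_rate = 100.0 * np.mean(np.argmax(predictions, axis=1) != labels)
print(error_rate)  # 25.0 here: 3 of 4 predictions match the labels
```

A validation error of 0.9% therefore means the model mislabels fewer than 1 in 100 held-out digits.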

In conclusion

Now that I have installed TensorFlow on Cloud9, I would like to dig into how it works. By the way, TensorFlow comes in CPU and GPU builds, but the GPU build cannot be used on Cloud9: I tried installing it and got an error at runtime. On Cloud9, stick with the CPU build.

Change log

2016/10/19
- Added MNIST sample code execution result
- Added environment description
- Changed the sample code name to match GitHub
