I tried to learn logical operations with TF Learn

Following my notes on using TensorFlow on Bash on Ubuntu on Windows, I can now run TensorFlow, but I didn't really know how to use it yet.

There is a library called TFLearn that makes TensorFlow easier to use, so I installed that as well.

$ pip install tflearn

Looking through the TFLearn examples, I found a program called logical.py that learns logical operations, so I tried that this time.

Learning OR

logical.py trains several logical operations in one script, so I extracted just the part relevant to OR.

import tensorflow as tf
import tflearn

# Logical OR operator
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y = [[0.], [1.], [1.], [1.]]

# Graph definition
with tf.Graph().as_default():
    g = tflearn.input_data(shape=[None, 2])
    g = tflearn.fully_connected(g, 128, activation='linear')
    g = tflearn.fully_connected(g, 128, activation='linear')
    g = tflearn.fully_connected(g, 1, activation='sigmoid')
    g = tflearn.regression(g, optimizer='sgd', learning_rate=2.,
                           loss='mean_square')

    # Model training
    m = tflearn.DNN(g)
    m.fit(X, Y, n_epoch=100, snapshot_epoch=False)

    # Test model
    print("Testing OR operator")
    print("0 or 0:", m.predict([[0., 0.]]))
    print("0 or 1:", m.predict([[0., 1.]]))
    print("1 or 0:", m.predict([[1., 0.]]))
    print("1 or 1:", m.predict([[1., 1.]]))

Even on a first look, the meaning of the code is easy to follow. Running it produced the following result.

--
Training Step: 100  | total loss: 0.00227
| SGD | epoch: 100 | loss: 0.00227 -- iter: 4/4

--
Testing OR operator
0 or 0: [[0.031054211780428886]]
0 or 1: [[0.9823662638664246]]
1 or 0: [[0.9786670207977295]]
1 or 1: [[0.9999874830245972]]

If you round the outputs to 0 or 1, the network has clearly learned OR.
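
For instance, here is a minimal sketch of that rounding (binarize is a hypothetical helper I made up for illustration, not part of TFLearn; m and X are the model and inputs from the code above):

# Hypothetical helper: threshold each sigmoid output at 0.5
# to get a hard 0/1 decision.
def binarize(predictions, threshold=0.5):
    return [[1 if p >= threshold else 0 for p in row] for row in predictions]

print(binarize(m.predict(X)))  # expected: [[0], [1], [1], [1]]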

Incidentally, the original code has two hidden layers of 128 units each. Since a linearly separable function like OR shouldn't need hidden layers at all, I deleted them, and instead increased the number of training epochs to 2000.

import tensorflow as tf
import tflearn

# Logical OR operator
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y = [[0.], [1.], [1.], [1.]]

# Graph definition
with tf.Graph().as_default():
    g = tflearn.input_data(shape=[None, 2])
    g = tflearn.fully_connected(g, 1, activation='sigmoid')
    g = tflearn.regression(g, optimizer='sgd', learning_rate=2., loss='mean_square')

    # Model training
    m = tflearn.DNN(g)
    m.fit(X, Y, n_epoch=2000, snapshot_epoch=False)

    # Test model
    print("Testing OR operator")
    print("0 or 0:", m.predict([[0., 0.]]))
    print("0 or 1:", m.predict([[0., 1.]]))
    print("1 or 0:", m.predict([[1., 0.]]))
    print("1 or 1:", m.predict([[1., 1.]]))

The result of running this:

--
Training Step: 2000  | total loss: 0.00098
| SGD | epoch: 2000 | loss: 0.00098 -- iter: 4/4
--
Testing OR operator
0 or 0: [[0.041201911866664886]]
0 or 1: [[0.9756871461868286]]
1 or 0: [[0.9764388799667358]]
1 or 1: [[0.9999741315841675]]

OR was learned properly even without a hidden layer.
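
As a sketch of why no hidden layer is needed: a single unit with hand-picked weights already separates OR linearly. (These weight values are mine for illustration, not the learned ones.)

# Hand-picked weights showing OR is linearly separable:
# step(1*a + 1*b - 0.5) reproduces OR exactly.
def or_perceptron(a, b, w1=1.0, w2=1.0, bias=-0.5):
    return 1 if w1 * a + w2 * b + bias > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, "or", b, "=", or_perceptron(a, b))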

Learning AND

Next I tried learning AND. The code is identical except that the teacher signal is changed to AND.

import tensorflow as tf
import tflearn

# Logical AND operator
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y = [[0.], [0.], [0.], [1.]]

# Graph definition
with tf.Graph().as_default():
    g = tflearn.input_data(shape=[None, 2])
    g = tflearn.fully_connected(g, 1, activation='sigmoid')
    g = tflearn.regression(g, optimizer='sgd', learning_rate=2., loss='mean_square')

    # Model training
    m = tflearn.DNN(g)
    m.fit(X, Y, n_epoch=2000, snapshot_epoch=False)

    # Test model
    print("Testing AND operator")
    print("0 and 0:", m.predict([[0., 0.]]))
    print("0 and 1:", m.predict([[0., 1.]]))
    print("1 and 0:", m.predict([[1., 0.]]))
    print("1 and 1:", m.predict([[1., 1.]]))

The result of running this:

--
Training Step: 2000  | total loss: 0.00137
| SGD | epoch: 2000 | loss: 0.00137 -- iter: 4/4
--
Testing AND operator
0 and 0: [[8.591794176027179e-05]]
0 and 1: [[0.04014528915286064]]
1 and 0: [[0.03964542970061302]]
1 and 1: [[0.9525935053825378]]

It has indeed learned AND.

Learning XOR

Since OR and AND are linearly separable, no hidden layer is needed, but XOR is not linearly separable, so a hidden layer is required. Interestingly, the sample code does not train XOR directly; instead it trains NAND and OR and combines them.
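
The idea behind that composition, as a plain-Python sketch (my illustration of the logic, not the sample's actual TFLearn code):

# XOR can be composed from linearly separable pieces:
# XOR(a, b) == AND(NAND(a, b), OR(a, b))
def xor_from_nand_or(a, b):
    nand = 1 - (a & b)  # NAND: 0 only when both inputs are 1
    orr = a | b         # OR
    return nand & orr   # AND of the two intermediate results

for a in (0, 1):
    for b in (0, 1):
        print(a, "xor", b, "=", xor_from_nand_or(a, b))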

I didn't see why it couldn't be trained directly, so I wrote code to train XOR directly.

import tensorflow as tf
import tflearn

# Logical XOR operator
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y = [[0.], [1.], [1.], [0.]]

# Graph definition
with tf.Graph().as_default():
    g = tflearn.input_data(shape=[None, 2])
    g = tflearn.fully_connected(g, 2, activation='sigmoid')
    g = tflearn.fully_connected(g, 1, activation='sigmoid')
    g = tflearn.regression(g, optimizer='sgd', learning_rate=2., loss='mean_square')

    # Model training
    m = tflearn.DNN(g)
    m.fit(X, Y, n_epoch=2000, snapshot_epoch=False)

    # Test model
    print("Testing XOR operator")
    print("0 xor 0:", m.predict([[0., 0.]]))
    print("0 xor 1:", m.predict([[0., 1.]]))
    print("1 xor 0:", m.predict([[1., 0.]]))
    print("1 xor 1:", m.predict([[1., 1.]]))

This just changes the teacher signal to XOR and adds a hidden layer of two units. Running it turned out like this.

--
Training Step: 2000  | total loss: 0.25000
| SGD | epoch: 2000 | loss: 0.25000 -- iter: 4/4
--
Testing XOR operator
0 xor 0: [[0.5000224709510803]]
0 xor 1: [[0.5000009536743164]]
1 xor 0: [[0.49999910593032837]]
1 xor 1: [[0.4999775290489197]]

It hasn't learned anything: a total loss of 0.25 with every output near 0.5 means the network is just predicting the mean of the 0/1 targets. Googling for an explanation turned up the following page, on the usual Stack Overflow:

tflearn / tensorflow does not learn xor

According to this, the default weight initialization is quite narrow, drawing values with a standard deviation of 0.02. The fix is to widen the initial weights to a uniform range from -1 to 1.

import tensorflow as tf
import tflearn

# Logical XOR operator
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y = [[0.], [1.], [1.], [0.]]

# Graph definition
with tf.Graph().as_default():
    tnorm = tflearn.initializations.uniform(minval=-1.0, maxval=1.0)
    g = tflearn.input_data(shape=[None, 2])
    g = tflearn.fully_connected(g, 2, activation='sigmoid', weights_init=tnorm)
    g = tflearn.fully_connected(g, 1, activation='sigmoid', weights_init=tnorm)
    g = tflearn.regression(g, optimizer='sgd', learning_rate=2., loss='mean_square')

    # Model training
    m = tflearn.DNN(g)
    m.fit(X, Y, n_epoch=2000, snapshot_epoch=False)

    # Test model
    print("Testing XOR operator")
    print("0 xor 0:", m.predict([[0., 0.]]))
    print("0 xor 1:", m.predict([[0., 1.]]))
    print("1 xor 0:", m.predict([[1., 0.]]))
    print("1 xor 1:", m.predict([[1., 1.]]))

The result of running with the weight initialization changed in this way:

--
Training Step: 2000  | total loss: 0.00131
| SGD | epoch: 2000 | loss: 0.00131 -- iter: 4/4
--
Testing XOR operator
0 xor 0: [[0.03527239337563515]]
0 xor 1: [[0.9663047790527344]]
1 xor 0: [[0.9607295393943787]]
1 xor 1: [[0.03082425333559513]]

This time, XOR was learned successfully.

Impressions after playing with TF Learn

The code maps directly onto the concepts of neural networks, which makes it easy to understand. You give up some of TensorFlow's fine-grained control, but if you don't yet know what TensorFlow can do in the first place, why not start with TF Learn?
