[PYTHON] Recipe collection comparing versions 1 and 2 of TensorFlow (Part 1)

Introduction

TensorFlow is one of the standard frameworks for deep learning. With the release of version 2.0 in October 2019, the way the code is written has also changed significantly.

However, the majority of articles out there are still written for the 1.x versions, and I suspect many people get stuck wondering **how do you write this in 2.0 and later?**

I was one of them, so I wrote this article as a memorandum to go back to the basics. I put it together myself while referring to various sources, so if you find any mistakes, please leave a comment.

Note that the samples introduced here are not explained in detail. They are kept very simple and focus only on the differences between ver 1 and ver 2.

Environment

Recipe collection

Data flow graph

Addition

For ver 1.15.0

in


import tensorflow as tf

a = tf.constant(1, name='a')
b = tf.constant(2, name='b')
c = a + b

with tf.Session() as sess:
    print(sess.run(c))
    print(c)
    print(type(c))

out


3
Tensor("add:0", shape=(), dtype=int32)
<class 'tensorflow.python.framework.ops.Tensor'>

For ver 2.1.0

in


import tensorflow as tf

a = tf.constant(1, name='a')
b = tf.constant(2, name='b')
c = a + b

tf.print(c)
print(c)
print(type(c))

out


3
tf.Tensor(3, shape=(), dtype=int32)
<class 'tensorflow.python.framework.ops.EagerTensor'>

【reference】 tf.print
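
By the way, since eager execution is the default in ver 2.x, you can also pull the value straight out of the tensor without tf.print. A minimal sketch, assuming ver 2.x:

import tensorflow as tf

a = tf.constant(1, name='a')
b = tf.constant(2, name='b')
c = a + b

# In eager mode the result is available immediately;
# .numpy() converts the EagerTensor into a NumPy value
print(c.numpy())  # 3
print(int(c))     # 3 (a scalar tensor can also be cast with int() / float())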

Outputting the graph definition

For ver 1.15.0

in


import tensorflow as tf

a = tf.constant(1, name='a')
b = tf.constant(2, name='b')
c = a + b

with tf.Session() as sess:
    print(sess.run(c))
    print(c)

graph = tf.get_default_graph()
print(graph.as_graph_def())

out


node {
  name: "a"
  op: "Const"
  ...(omitted)...
node {
  name: "add"
  op: "AddV2"
  input: "a"
  input: "b"
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
}
versions {
  producer: 134
}

For ver 2.1.0

in


import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    a = tf.constant(1, name='a')
    b = tf.constant(2, name='b')
    c = a + b
    print(graph.as_graph_def())

out


# Omitted because it is the same as ver 1.15.0
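
As a supplementary sketch: if you want a GraphDef while writing in the 2.x style, you can also trace a tf.function and print the graph of the resulting ConcreteFunction:

import tensorflow as tf

@tf.function
def add(x, y):
    return x + y

# Tracing produces a ConcreteFunction that owns a graph,
# and its GraphDef can be printed just like in ver 1.x
concrete = add.get_concrete_function(
    tf.TensorSpec(shape=(), dtype=tf.int32),
    tf.TensorSpec(shape=(), dtype=tf.int32))
print(concrete.graph.as_graph_def())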

Assign constants to variables

For ver 1.15.0

in


import tensorflow as tf

a = tf.Variable(10, name='a')
b = tf.constant(2, name='b')
c = tf.assign(a, a + b)

with tf.Session() as sess:
    # global_variables_initializer(): initialize all variables
    sess.run(tf.global_variables_initializer())
    print(sess.run(c))
    print(sess.run(c))

out


12
14

For ver 2.1.0

in


import tensorflow as tf

a = tf.Variable(10, name='a')
b = tf.constant(2, name='b')
tf.print(a.assign_add(b))
tf.print(a.assign_add(b))

out


12
14
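
In ver 2.x the update methods live on the Variable itself. A small supplementary sketch using assign() and assign_sub() as well:

import tensorflow as tf

a = tf.Variable(10, name='a')
b = tf.constant(2, name='b')

# assign() overwrites the variable; assign_add()/assign_sub() update it in place
a.assign(a + b)   # a is now 12
tf.print(a)       # 12
a.assign_sub(b)   # back to 10
tf.print(a)       # 10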

The placeholder that disappeared

For ver 1.15.0

in


import tensorflow as tf

a = tf.placeholder(dtype=tf.int32, name='a')
b = tf.constant(2, name='b')
c = a + b

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 10}))
    print(a, b, c)

out


12
Tensor("a:0", dtype=int32) Tensor("b:0", shape=(), dtype=int32) Tensor("add:0", dtype=int32)

For ver 2.1.0

in


import tensorflow as tf

a = tf.Variable(10, name='a')
b = tf.constant(2, name='b')

# AutoGraph: the @tf.function decorator builds a graph from the Python function
@tf.function
def add(x, y):
    return x + y

c = add(a, b)
tf.print(c)
print(type(c))
print(a, b, c)

out


12
<class 'tensorflow.python.framework.ops.EagerTensor'>
<tf.Variable 'a:0' shape=() dtype=int32, numpy=10> tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(12, shape=(), dtype=int32)

【reference】 Migrate your TensorFlow 1 code to TensorFlow 2
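
If you want to pin down the dtype or shape the way a placeholder declaration did, input_signature on tf.function can play a similar role. A minimal sketch (the function name add_two is just for illustration):

import tensorflow as tf

# input_signature fixes the dtype/shape the function accepts,
# similar in spirit to the old placeholder declaration
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.int32)])
def add_two(x):
    return x + tf.constant(2, name='b')

tf.print(add_two(tf.constant(10)))  # 12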

Four arithmetic operations

For ver 1.15.0

in


import tensorflow as tf

a = tf.constant(5, name='a')
b = tf.constant(2, name='b')
add = tf.add(a, b)            # addition
subtract = tf.subtract(a, b)  # subtraction
multiply = tf.multiply(a, b)  # multiplication
truediv = tf.truediv(a, b)    # division

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(add))
    print(sess.run(subtract))
    print(sess.run(multiply))
    print(sess.run(truediv))
    print(type(add))

out


7
3
10
2.5
<class 'tensorflow.python.framework.ops.Tensor'>

For ver 2.1.0

in


import tensorflow as tf

a = tf.constant(5, name='a')
b = tf.constant(2, name='b')
add = tf.math.add(a, b)            # addition
subtract = tf.math.subtract(a, b)  # subtraction
multiply = tf.math.multiply(a, b)  # multiplication
truediv = tf.math.truediv(a, b)    # division

tf.print(add)
tf.print(subtract)
tf.print(multiply)
tf.print(truediv)
print(type(add))

out


7
3
10
2.5
<class 'tensorflow.python.framework.ops.EagerTensor'>

【reference】 tf.math
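
Note that the overloaded Python operators dispatch to these same ops, so the following sketch gives identical results in ver 2.x:

import tensorflow as tf

a = tf.constant(5, name='a')
b = tf.constant(2, name='b')

# +, -, *, / on tensors map to tf.math.add / subtract / multiply / truediv
tf.print(a + b)  # 7
tf.print(a - b)  # 3
tf.print(a * b)  # 10
tf.print(a / b)  # 2.5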

Matrix operation

For ver 1.15.0

in


import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]], name='a')
b = tf.constant([[1], [2]], name='b')
c = tf.matmul(a, b)  # multiply matrices a and b

with tf.Session() as sess:
    print(a.shape)
    print(b.shape)
    print(c.shape)
    print('a', sess.run(a))
    print('b', sess.run(b))
    print('c', sess.run(c))

out


(2, 2)
(2, 1)
(2, 1)
a [[1 2]
 [3 4]]
b [[1]
 [2]]
c [[ 5]
 [11]]

For ver 2.1.0

in


import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]], name='a')
b = tf.constant([[1], [2]], name='b')
c = tf.linalg.matmul(a, b)  # multiply matrices a and b

print(a.shape)
print(b.shape)
print(c.shape)
tf.print('a', a)
tf.print('b', b)
tf.print('c', c)

out


# Omitted because it is the same as ver 1.15.0
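
As a supplementary sketch, the @ operator is also overloaded to matrix multiplication for tensors, so the same result can be written more compactly in ver 2.x:

import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]], name='a')
b = tf.constant([[1], [2]], name='b')

# The @ operator maps to tf.linalg.matmul for tensors
c = a @ b
tf.print('c', c)
print(c.numpy())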

In conclusion

This time I summarized the very basics. Next time, I would like to write about the gradient method.
