TensorFlow API memo (Python)

I don't have a good grasp of matrix math or statistics, and I had never used NumPy, so the TensorFlow API was hard for me to follow. This is a memo in which I run the APIs I don't understand one at a time and check what they actually do. I'm no expert, so comments and corrections are very welcome. Reference: Official API Reference

How to try

Try the following one by one in Python's interactive mode.

>>> import tensorflow as tf
>>> sess = tf.Session()
>>> sess.run(<API to execute>)

tf.size(input, name=None)

>>> sess.run(tf.size([0,1,2]))
3
>>> sess.run(tf.size([[0,1,2]]))
3
>>> sess.run(tf.size([[0,1,2],[4,5,6]]))
6

It seems to return the number of elements.
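
For comparison, NumPy's np.size also counts all the elements regardless of shape (my own check, assuming the two behave the same way):

>>> import numpy as np
>>> np.size([[0,1,2],[4,5,6]])
6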

tf.expand_dims(input, dim, name=None)

>>> sess.run(tf.expand_dims([0,1,2,3], 1))
array([[0],
       [1],
       [2],
       [3]], dtype=int32)
>>> sess.run(tf.expand_dims([[0,1,2,3],[4,5,6,7]], 1))
array([[[0, 1, 2, 3]],

       [[4, 5, 6, 7]]], dtype=int32)
>>> sess.run(tf.expand_dims([[0,1,2,3],[4,5,6,7]], 2))
array([[[0],
        [1],
        [2],
        [3]],

       [[4],
        [5],
        [6],
        [7]]], dtype=int32)

It seems to insert a new dimension of size 1 into input at the position specified by dim, raising the Rank by one. If input is Rank 1 and dim is 1, the return value is Rank 2.
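
Looking only at the shapes makes it easier to see where the new size-1 dimension goes (my own check of the same inputs as above):

>>> sess.run(tf.expand_dims([[0,1,2,3],[4,5,6,7]], 0)).shape
(1, 2, 4)
>>> sess.run(tf.expand_dims([[0,1,2,3],[4,5,6,7]], 1)).shape
(2, 1, 4)
>>> sess.run(tf.expand_dims([[0,1,2,3],[4,5,6,7]], 2)).shape
(2, 4, 1)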

tf.range(start, limit=None, delta=1, name='range')

>>> sess.run(tf.range(5))
array([0, 1, 2, 3, 4], dtype=int32)
>>> sess.run(tf.range(1,5))
array([1, 2, 3, 4], dtype=int32)

Returns the specified one-dimensional sequence (strictly speaking, a Tensor), running from start up to but not including limit.
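
The delta argument, which I didn't try above, appears to set the step width (my own check):

>>> sess.run(tf.range(0, 10, 2))
array([0, 2, 4, 6, 8], dtype=int32)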

tf.concat(concat_dim, values, name='concat')

>>> sess.run(tf.concat(0, [[1,2,3], [10,20,30]]))
array([ 1,  2,  3, 10, 20, 30], dtype=int32)
>>> sess.run(tf.concat(1, [[[1,2,3],[4,5,6]], [[10,20,30],[40,50,60]]]))
array([[ 1,  2,  3, 10, 20, 30],
       [ 4,  5,  6, 40, 50, 60]], dtype=int32)

It seems to concatenate the Tensors given in values along the dimension specified by concat_dim. The source Tensors are passed as a list of the form [<Tensor1>, <Tensor2>].
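
With the same Rank-2 inputs, giving 0 as concat_dim seems to stack the rows instead of extending them (my own check):

>>> sess.run(tf.concat(0, [[[1,2,3],[4,5,6]], [[10,20,30],[40,50,60]]]))
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [10, 20, 30],
       [40, 50, 60]], dtype=int32)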

tf.pack(values, name='pack')

>>> sess.run(tf.pack([1,2]))
array([1, 2], dtype=int32)
>>> sess.run(tf.pack([1,5]))
array([1, 5], dtype=int32)
>>> sess.run(tf.pack([1,5,10]))
array([ 1,  5, 10], dtype=int32)

I'm not sure what this does: the output looks identical to the Tensor given in values. The official reference says:

tf.pack([x, y, z]) = np.asarray([x, y, z])

np.asarray() is apparently a function that returns the array given as an argument without copying it, i.e. something like a reference, so I checked whether that holds here ...

>>> a = [0,1,2]
>>> b = sess.run(tf.pack(a))
>>> b
array([0, 1, 2], dtype=int32)
>>> b[0] = 5
>>> b
array([5, 1, 2], dtype=int32)
>>> a
[0, 1, 2]

It doesn't seem to be a reference after all. In hindsight that makes sense: a is a plain Python list, and sess.run() returns a newly created ndarray, so modifying b can't affect a.
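
My current reading of the reference (so take it with a grain of salt) is that pack stacks a list of Rank-R Tensors into one Rank-(R+1) Tensor; with scalars the result just looks like the input list, but with vectors the stacking is visible:

>>> sess.run(tf.pack([[1,2,3],[4,5,6]]))
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)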

tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)

>>> sess.run(tf.sparse_to_dense([0,1],[5], 1.0, 0.0))
array([ 1.,  1.,  0.,  0.,  0.], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0,1],[3], 1.0, 0.0))
array([ 1.,  1.,  0.], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0,2],[5], 1.0, 0.0))
array([ 1.,  0.,  1.,  0.,  0.], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0,2],[5], 2.0, 0.0))
array([ 2.,  0.,  2.,  0.,  0.], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0,2],[5], 2.0, 5.0))
array([ 2.,  5.,  2.,  5.,  5.], dtype=float32)

Below is the description from the official reference.

# If sparse_indices is scalar: dense[i] = (i == sparse_indices ? sparse_values : default_value)

# If sparse_indices is a vector, then for each i: dense[sparse_indices[i]] = sparse_values[i]

# If sparse_indices is an n by d matrix, then for each i in [0, n): dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]

>>> sess.run(tf.sparse_to_dense([3],[5], 0.1, 0.0))
array([ 0. ,  0. ,  0. ,  0.1,  0. ], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0],[5], 0.1, 0.0))
array([ 0.1,  0. ,  0. ,  0. ,  0. ], dtype=float32)

When a single index such as [3] is given as sparse_indices, only that position takes the value of sparse_values and everything else is default_value.

>>> sess.run(tf.sparse_to_dense([0,1],[5], [0.1,0.2], 0.0))
array([ 0.1,  0.2,  0. ,  0. ,  0. ], dtype=float32)
>>> sess.run(tf.sparse_to_dense([1,2],[5], [0.1,0.2], 0.0))
array([ 0. ,  0.1,  0.2,  0. ,  0. ], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0,2],[5], [0.1,0.2], 0.0))
array([ 0.1,  0. ,  0.2,  0. ,  0. ], dtype=float32)
>>> sess.run(tf.sparse_to_dense([0,2],[5],0.1, 0.0))
array([ 0.1,  0. ,  0.1,  0. ,  0. ], dtype=float32)

If sparse_indices is [0,2] and sparse_values is [0.1,0.2], then output[0] = 0.1 and output[2] = 0.2. A little hard to follow. If sparse_values is a single scalar, that same value is placed at every index. It also seems to raise an error if sparse_indices is not in ascending order (e.g. [1,0]) or contains duplicate values (e.g. [1,1]).

>>> sess.run(tf.sparse_to_dense([[0,1],[0,2]],[5,5],0.1, 0.0))
array([[ 0. ,  0.1,  0.1,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ]], dtype=float32)

... So with matrix indices, [[0,1],[0,2]] sets dense[0][1] and dense[0][2]. I still don't quite see what this API is for, though.
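
My guess (not something verified here, just a pattern I have seen in tutorial-style code) is that a typical use is turning class labels into one-hot vectors, combining several of the APIs from this memo:

>>> labels = tf.constant([2, 0, 1])                            # class label of each of 3 samples
>>> rows = tf.expand_dims(tf.range(0, 3), 1)                   # [[0],[1],[2]]
>>> indices = tf.concat(1, [rows, tf.expand_dims(labels, 1)])  # [[0,2],[1,0],[2,1]]
>>> sess.run(tf.sparse_to_dense(indices, [3, 4], 1.0, 0.0))
array([[ 0.,  0.,  1.,  0.],
       [ 1.,  0.,  0.,  0.],
       [ 0.,  1.,  0.,  0.]], dtype=float32)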

tf.nn.softmax(logits, name=None)

>>> sess.run(tf.nn.softmax([[0.0,1.0]]))
array([[ 0.26894143,  0.7310586 ]], dtype=float32)
>>> sess.run(tf.nn.softmax([[0.0,1.0],[0.0,1.0]]))
array([[ 0.26894143,  0.7310586 ],
       [ 0.26894143,  0.7310586 ]], dtype=float32)
>>> sess.run(tf.nn.softmax([[1.0],[1.0]]))
array([[ 1.],
       [ 1.]], dtype=float32)
>>> sess.run(tf.nn.softmax([[0.0,1.0,2.0]]))
array([[ 0.09003057,  0.24472848,  0.66524094]], dtype=float32)
>>> sess.run(tf.nn.softmax([[0.0,1.0,2.0,10.0]]))
array([[  4.53770481e-05,   1.23347607e-04,   3.35293560e-04,
          9.99495983e-01]], dtype=float32)
>>> sess.run(tf.nn.softmax([[0.0,1.0,2.0],[1.0,1.0,1.0]]))
array([[ 0.09003057,  0.24472848,  0.66524094],
       [ 0.33333334,  0.33333334,  0.33333334]], dtype=float32)

logits must be Rank 2 and float. It applies softmax along each row, returning values that sum to 1 within each row.
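
To convince myself, I recomputed one row by hand with NumPy (exponentiate each element and divide by the sum of the exponentials); it matches the TensorFlow output up to float32 precision:

>>> import numpy as np
>>> x = np.array([0.0, 1.0, 2.0])
>>> np.exp(x) / np.sum(np.exp(x))
array([ 0.09003057,  0.24472847,  0.66524096])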

tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)

>>> sess.run(tf.nn.softmax_cross_entropy_with_logits([[1.0]],[[1.0]]))
array([ 0.], dtype=float32)
>>> sess.run(tf.nn.softmax_cross_entropy_with_logits([[1.0,0.0]],[[1.0,1.0]]))
array([ 1.62652326], dtype=float32)
>>> sess.run(tf.nn.softmax_cross_entropy_with_logits([[1.0]],[[2.0]]))
array([ 0.], dtype=float32)
>>> sess.run(tf.nn.softmax_cross_entropy_with_logits([[1.0,0.0],[0.0,2.0]],[[1.0,1.0],[0.0,1.0]]))
array([ 1.62652326,  0.12692805], dtype=float32)
>>> sess.run(tf.nn.softmax_cross_entropy_with_logits([[1.0,2.0]],[[3.0,4.0]]))
array([ 5.19283152], dtype=float32)
>>> sess.run(tf.nn.softmax_cross_entropy_with_logits([[1.0,2.0]],[[4.0,3.0]]))
array([ 6.19283152], dtype=float32)

logits and labels must be float and Rank 2, and one cross-entropy value is returned per row. As far as I can tell, only logits go through softmax; labels are used as-is, and the result is -Σ labels * log(softmax(logits)) for each row, so the closer softmax(logits) is to labels, the smaller the value.
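
A rough check of that formula by hand (my own verification, not from the official docs):

>>> import math
>>> p = sess.run(tf.nn.softmax([[1.0, 0.0]]))
>>> -(1.0 * math.log(p[0][0]) + 1.0 * math.log(p[0][1]))   # roughly 1.6265233, matching the 1.62652326 above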

tf.nn.in_top_k(predictions, targets, k, name=None)

>>> sess.run(tf.nn.in_top_k([[0,2,1]],[1],1))
array([ True], dtype=bool)
>>> sess.run(tf.nn.in_top_k([[3,2,1]],[1],1))
array([False], dtype=bool)
>>> sess.run(tf.nn.in_top_k([[3,2,1]],[0],1))
array([ True], dtype=bool)
>>> sess.run(tf.nn.in_top_k([[3,2,1]],[0],2))
array([ True], dtype=bool)
>>> sess.run(tf.nn.in_top_k([[3,2,1]],[1],2))
array([ True], dtype=bool)
>>> sess.run(tf.nn.in_top_k([[3,2,1]],[2],2))
array([False], dtype=bool)
>>> sess.run(tf.nn.in_top_k([[3,2,1],[1,3,5]],[1,0],2))
array([ True, False], dtype=bool)

The point is "which element is the largest?". If predictions is [[3,2,1]] and targets is [0], the largest value is the 0th element "3", so the prediction counts as correct and the return value is True. With targets [1] or [2] it is False. k means "count it as correct if the target is within the k largest values". With k=2 it returns True even if the target is only the second largest: for predictions [[3,2,1]] and k=2, targets [0] (the 0th value "3") and [1] (the 1st value "2") both give True.
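
My guess at how this is meant to be used (a sketch I put together, not something from the reference): cast the boolean results and average them to get an accuracy figure.

>>> correct = tf.nn.in_top_k([[3.0, 2.0, 1.0], [1.0, 3.0, 5.0]], [0, 2], 1)
>>> sess.run(tf.reduce_mean(tf.cast(correct, tf.float32)))
1.0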
