[PYTHON] Does TensorFlow change the image of deep learning? What I thought after trying it a little

The other day, Google released its deep learning library TensorFlow. Many people have already tried it and written up their impressions and how to use it. I tried it myself as well, and here is what I thought.

When I first read an explanation of deep learning

Common explanations of deep learning use a figure like the following.

[Figure: pic1.png]

Deep learning is introduced as a mechanism reminiscent of how nerves work: data is passed from nerve cell to nerve cell through synapses, and the cell that receives the data looks at it and sends out new data of its own.

After I understood the mechanism of deep learning a little

However, if you try to turn this figure directly into a program, you have to reproduce a huge number of cells. It is easier to understand if you instead group the nerve cells that play the same role into a single block and simplify the figure using a high-dimensional vector space.

[Figure: pic2.png]

Think of the synaptic connections between nerve cells as some kind of mapping between vector spaces (if "mapping" is unfamiliar, think of it as a function). This mapping is generally non-linear, but I believe many models use one represented as the composition of a linear map (represented by a matrix) and a simple non-linear function. In machine learning, the performance of this mapping is improved through learning. If the mapping is a composition of a linear map and a non-linear function, the linear part can be represented by a matrix, and that matrix is updated each time the model learns.
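To make this concrete, here is a minimal NumPy sketch (my own illustration, not from the TensorFlow documentation) of one such block: a linear map given by a matrix, composed with a simple non-linear function. The shapes and names are arbitrary; only W and b would be updated by learning, while the non-linear part stays fixed.

```python
import numpy as np

def block(x, W, b):
    # One "mass" of nerve cells: a linear map (W, b) followed by
    # a simple non-linear function (here ReLU).
    return np.maximum(0.0, W @ x + b)

W = np.random.randn(3, 5)   # linear part: maps a 5-dim vector to a 3-dim one
b = np.zeros(3)
x = np.random.randn(5)      # one input vector

y = block(x, W, b)          # output handed on to the next block
# Learning would repeatedly adjust W (and b); the ReLU never changes.
```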

Now, here is the thing

Reading the explanation of TensorFlow with this understanding, I quickly accepted that it does deep learning using graphs, but something made me go "Huh?": there are plenty of explanations about vertices, yet almost nothing that corresponds to edges. In the picture above, the mapping, that is, the edges of the graph, is what gets strengthened by learning, but there is no explanation of the edges. Instead there is Variable, and apparently you are supposed to learn by updating that.

Apparently, I need to rewrite my own mental image of deep learning.

Note that the mapping part is usually divided into a variable part that is updated by learning (the linear map) and an invariant part (the non-linear function). The variable part is represented by some array of numbers. TensorFlow's style can be understood as treating this array as a Variable and placing it on a vertex instead of embedding it in an edge. By moving it from the edges to the vertices, data and functions are separated, and the data coming in from the input and the data being learned (the parameters) are treated on an equal footing. The picture then becomes something like the following figure.

[Figure: pic3.png]
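As a purely illustrative sketch of that picture, assuming the TensorFlow 1.x-style graph API (available as tf.compat.v1 in current releases; the shapes are arbitrary): the learnable arrays sit on Variable vertices, the input arrives through its own data vertex, and the mapping itself is built from fixed operation vertices.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()      # build an explicit graph, as in the original API

# Input data enters the graph through its own vertex.
x = tf.placeholder(tf.float32, shape=[None, 5])

# The variable, learning-updated part of the mapping: arrays of numbers
# held on Variable vertices rather than embedded in the edges.
W = tf.Variable(tf.zeros([5, 3]))
b = tf.Variable(tf.zeros([3]))

# The invariant part: a linear map (matmul) composed with a simple
# non-linear function (relu), both fixed operation vertices.
y = tf.nn.relu(tf.matmul(x, W) + b)
```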

At learning time, the flow runs backward along the edges and the learned-data part (the parameters) is updated. With this image in mind, deep learning can be thought of as something quite similar to the mechanisms of other applications.
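Here is an equally minimal sketch of that learning step, under the same tf.compat.v1 assumption and with made-up numbers: the gradient flows backward through the graph, and only the Variable vertex is updated, never the data fed in or the fixed operations.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

w = tf.Variable(1.0)              # the learnable part, held on a Variable vertex
x = tf.placeholder(tf.float32)    # input data vertex
t = tf.placeholder(tf.float32)    # target data vertex, fed in just like the input

loss = tf.square(w * x - t)       # forward flow along the edges

# minimize() builds the backward flow: gradients run back along the
# same edges and update only the Variable vertex w.
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        sess.run(train_step, feed_dict={x: 2.0, t: 4.0})
    print(sess.run(w))  # close to 2.0, where w * x matches t
```

Seen this way, the Variables are just arrays of numbers that the program keeps updating, which is exactly the kind of state that other applications maintain as well.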
