[PYTHON] Simple neural network implementation using Chainer: optimization algorithm settings

Continuing from last time, this is a series of articles on actually building a neural network using Chainer, a framework for deep learning. The series covers:

  1. Data preparation

  2. Model description

  3. Optimization algorithm settings

  4. Learning

  5. Result output

Of these, this time I will write about 3, the optimization algorithm settings. Since it goes hand in hand with it, I will also write about 4, learning.

Calculation of optimal parameters

The Iris model I wrote last time was the following code.

import chainer.functions as F
import chainer.links as L
from chainer import Chain

class IrisChain(Chain):
    def __init__(self):
        super(IrisChain, self).__init__(
            # Linear links: input layer (4) -> hidden layer (6) -> output layer (3)
            l1=L.Linear(4, 6),
            l2=L.Linear(6, 3),
        )

    def __call__(self, x, y):
        # Loss: mean squared error between the network output and the teacher data
        return F.mean_squared_error(self.fwd(x), y)

    def fwd(self, x):
        # Input layer -> hidden layer, through a sigmoid activation
        h1 = F.sigmoid(self.l1(x))
        # Hidden layer -> output layer (no activation)
        h2 = self.l2(h1)
        return h2

The flow of what we are doing is (see the shape-check sketch after this list):

  1. Define the Linear links in the constructor
  2. Use fwd to propagate activations from node to node, deciding whether each node fires
  3. Compute the error between the output and the teacher data in __call__
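
As a minimal sketch of this flow, we can push a dummy batch through fwd and check the shapes. The zero-valued input here is purely an assumption for illustration:

import numpy as np
from chainer import Variable

# Hypothetical smoke test: two 4-feature samples flow 4 -> 6 -> 3
model = IrisChain()
x = Variable(np.zeros((2, 4), dtype=np.float32))
print(model.fwd(x).data.shape)  # (2, 3): one 3-class output row per sample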

(Figure: Irisニューラルネット_4.png — diagram of the Iris neural network)

Conversion from the input layer to the intermediate layer:

v = w_1 x + b_1 ... (1)

Conversion from the intermediate layer to the output layer:

y = w_2 v + b_2 ... (2)

What we ultimately want to find are these parameters, w and b.
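
These parameters live inside the Linear links. As a small sketch (assuming the IrisChain defined above), we can inspect their shapes directly:

model = IrisChain()
print(model.l1.W.data.shape)  # (6, 4) -- w_1 from equation (1)
print(model.l1.b.data.shape)  # (6,)   -- b_1 from equation (1)
print(model.l2.W.data.shape)  # (3, 6) -- w_2 from equation (2)
print(model.l2.b.data.shape)  # (3,)   -- b_2 from equation (2)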

This time, we use stochastic gradient descent (SGD) as the optimization algorithm.
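
For intuition, SGD nudges each parameter a small step against its gradient. A toy NumPy sketch of the update rule (the values, and the learning rate of 0.01, which is also Chainer's default for optimizers.SGD, are assumptions for illustration):

import numpy as np

lr = 0.01                                        # learning rate
w = np.array([0.5, -0.3], dtype=np.float32)      # toy parameter values
grad_w = np.array([0.2, 0.1], dtype=np.float32)  # hypothetical gradient of the loss
w -= lr * grad_w                                 # step against the gradient
print(w)  # [ 0.498 -0.301]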

Then we train. The number of training iterations this time is 10,000.

>>> from chainer import Variable, optimizers
>>> model = IrisChain()
>>> optimizer = optimizers.SGD()
>>> optimizer.setup(model)
>>> for i in range(10000):
...     x = Variable(xtrain)  # xtrain, ytrain: the training arrays from the data preparation article
...     y = Variable(ytrain)
...     model.zerograds()
...     loss = model(x, y)
...     loss.backward()
...     optimizer.update()

The last four lines,

model.zerograds()
loss = model(x, y)
loss.backward()
optimizer.update()

are where the error is backpropagated: clear the accumulated gradients, compute the loss, backpropagate it, and update the parameters. It is practically a boilerplate pattern. With this, the appropriate parameters w and b have been obtained, and we have a classifier. Next time, I will try this classifier out.
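
To check that this pattern is actually reducing the error, one can print the loss every so often. A minimal sketch of such a loop (the reporting interval of 1,000 is an arbitrary assumption):

for i in range(10000):
    x = Variable(xtrain)
    y = Variable(ytrain)
    model.zerograds()
    loss = model(x, y)
    loss.backward()
    optimizer.update()
    if i % 1000 == 0:
        print(i, loss.data)  # the loss value should trend downward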

References

Takayoshi Yamashita, "Deep Learning Learned through Illustrations" (イラストで学ぶ ディープラーニング), Kodansha
Hiroyuki Shinno, "Practical Deep Learning with Chainer: How to Implement Complex NNs", Ohmsha
