[PYTHON] Learning record No. 24 (28th day)

Started studying: Saturday, December 7

Teaching materials, etc.:
・ Miyuki Oshige, "Details! Python 3 Introductory Note" (Sotec, 2017): read 12/7 (Sat) - 12/19 (Thu)
・ Progate Python course (5 courses in total): finished 12/19 (Thu) - 12/21 (Sat)
・ Andreas C. Müller, Sarah Guido, "Introduction to Machine Learning with Python" (Japanese edition, O'Reilly Japan, 2017): 12/21 (Sat) - 12/23
・ Kaggle: Real or Not? NLP with Disaster Tweets: submitted 12/28 (Sat), tuned until 1/3 (Fri)
・ Wes McKinney, "Python for Data Analysis" (Japanese edition, O'Reilly Japan, 2018): read 1/4 - 1/13 (Mon)
・ **Yasuki Saito, "Deep Learning from Scratch" (O'Reilly Japan, 2016): 1/15 (Wed) ~**

"Deep Learning from scratch"

Read up to p. 239, finishing Chapter 7 (Convolutional Neural Networks).

Chapter 6 Learning Techniques

- **Optimization**: finding parameters that make the value of the loss function as small as possible. Because the parameter space is vast and complex, this is a very hard problem, and several optimizers exist for it.

- **Stochastic gradient descent (SGD)**: the method used up to Chapter 5. Using the gradient of the loss with respect to the parameters, repeatedly update the parameters in the gradient direction and gradually approach a minimum.

W ← W - η\frac{\partial L}{\partial W}

η is the learning rate; the arrow means the left-hand side is updated with the value on the right. The drawback of SGD is that the search path tends to zigzag inefficiently when the loss surface is elongated, that is, not isotropic.
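
As a quick sketch of this update rule (my own minimal version, not the book's exact code; the dict-of-arrays `params`/`grads` interface is an assumption):

```python
class SGD:
    """Vanilla SGD: W <- W - lr * dL/dW."""
    def __init__(self, lr=0.01):
        self.lr = lr

    def update(self, params, grads):
        # params and grads are dicts of NumPy arrays keyed by parameter name
        for key in params:
            params[key] -= self.lr * grads[key]
```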

- **Momentum**: borrows the physical notion of momentum. A new variable v (velocity) is introduced; the αv term acts like friction or air resistance, so the movement gradually slows down when no force (gradient) is applied.

v ← αv - η\frac{\partial L}{\partial W}
W ← W + v
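
A minimal sketch of the Momentum update in the same style (my own version; `momentum` corresponds to α above):

```python
import numpy as np

class Momentum:
    """Momentum SGD: v <- alpha*v - lr*dL/dW,  W <- W + v."""
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum  # alpha in the formula above
        self.v = None

    def update(self, params, grads):
        if self.v is None:
            # velocities start at zero, one array per parameter
            self.v = {key: np.zeros_like(val) for key, val in params.items()}
        for key in params:
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]
```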

- **AdaGrad**: a method that adapts the learning rate as training progresses, starting with larger steps and gradually making them smaller, individually for each parameter element.

h ← h + \frac{\partial L}{\partial W} ⊙ \frac{\partial L}{\partial W}
W ← W - η\frac{1}{\sqrt{h}}\frac{\partial L}{\partial W}

⊙ is the Hadamard product, i.e. element-wise multiplication of matrices. The larger h grows (the more a parameter has moved), the smaller its effective learning rate becomes; in other words, the step size is rescaled per element as the parameters are updated.
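
A minimal AdaGrad sketch following the formulas above (the small 1e-7 term is a common numerical safeguard I added to avoid division by zero, not part of the formula):

```python
import numpy as np

class AdaGrad:
    """AdaGrad: accumulate squared gradients in h and divide each step by sqrt(h)."""
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None

    def update(self, params, grads):
        if self.h is None:
            self.h = {key: np.zeros_like(val) for key, val in params.items()}
        for key in params:
            self.h[key] += grads[key] * grads[key]  # element-wise square (Hadamard product)
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)
```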

- **Adam**: a method that combines the ideas of Momentum and AdaGrad.
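
The note above is all the book's text gives here; for reference, a textbook-style sketch of the standard Adam update (the first moment m plays the Momentum role, the second moment v the AdaGrad role; hyperparameter values are the usual defaults, not taken from the book):

```python
import numpy as np

class Adam:
    """Adam: Momentum-style first moment + AdaGrad-style second moment, with bias correction."""
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr, self.beta1, self.beta2 = lr, beta1, beta2
        self.m, self.v, self.t = None, None, 0

    def update(self, params, grads):
        if self.m is None:
            self.m = {k: np.zeros_like(p) for k, p in params.items()}
            self.v = {k: np.zeros_like(p) for k, p in params.items()}
        self.t += 1
        for key in params:
            self.m[key] = self.beta1 * self.m[key] + (1 - self.beta1) * grads[key]
            self.v[key] = self.beta2 * self.v[key] + (1 - self.beta2) * grads[key] ** 2
            m_hat = self.m[key] / (1 - self.beta1 ** self.t)   # bias-corrected first moment
            v_hat = self.v[key] / (1 - self.beta2 ** self.t)   # bias-corrected second moment
            params[key] -= self.lr * m_hat / (np.sqrt(v_hat) + 1e-7)
```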

- As mentioned above, there are various optimizers, each with its own strengths and weaknesses, so no single one can be called superior in general. (That said, many studies still use SGD as the default choice.)

・ **Weight decay**: a technique that encourages the weight parameters to stay small during training. Smaller weights make overfitting less likely, which improves generalization performance. (Setting the weights to exactly 0 is not an option, however: the symmetric structure of the weights would never be broken and they would all keep the same values.)
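
As a rough sketch of the idea (an L2 penalty added to the loss; the function name and the λ value are my own illustration):

```python
import numpy as np

def l2_penalty(weights, lam=0.1):
    """Weight decay term added to the loss: (lambda / 2) * sum of squared weights."""
    return 0.5 * lam * sum(np.sum(W ** 2) for W in weights)

# during backprop, each weight's gradient gains an extra lam * W term,
# which keeps pushing the weights toward smaller values
W1, W2 = np.random.randn(784, 100), np.random.randn(100, 10)
penalty = l2_penalty([W1, W2])
```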

- **Xavier initialization**: when the preceding layer has n nodes, initialize the weights from a Gaussian distribution with standard deviation 1/√n. It is the de facto standard in deep learning frameworks and suits sigmoid and tanh activations.

- **He initialization**: when the preceding layer has n nodes, initialize the weights from a Gaussian distribution with standard deviation √(2/n). Suited to ReLU: because ReLU zeroes out the negative region, the variance is doubled (relative to Xavier) so that the activations keep a sufficiently broad spread.
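
A small sketch of both initializations (the helper name and layer sizes are my own, chosen for illustration):

```python
import numpy as np

def init_weight(n_in, n_out, activation="relu"):
    """Initialize a weight matrix; the std depends on the previous layer's node count n_in."""
    if activation in ("sigmoid", "tanh"):
        std = np.sqrt(1.0 / n_in)   # Xavier: std = 1 / sqrt(n)
    else:
        std = np.sqrt(2.0 / n_in)   # He: std = sqrt(2 / n), suited to ReLU
    return std * np.random.randn(n_in, n_out)

W1 = init_weight(784, 100, activation="relu")
```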

・ **Batch Normalization**: proposed in 2015 and now widely used in both research methods and competitions. Its advantages: training proceeds faster, the result depends less on the initial weight values, and overfitting is suppressed. A Batch Norm layer inserted between the Affine layer and the activation (ReLU) normalizes the distribution of the data. **Normalization is performed per mini-batch, over each mini-batch used for training.**
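
A minimal sketch of the training-time forward pass only (normalize over the mini-batch, then apply a learnable scale γ and shift β; the inference-time running averages and the backward pass are omitted):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-7):
    """Normalize each feature over the mini-batch, then scale and shift."""
    mu = x.mean(axis=0)                              # per-feature mean over the batch
    x_hat = (x - mu) / np.sqrt(x.var(axis=0) + eps)  # zero mean, unit variance
    return gamma * x_hat + beta                      # learnable scale and shift

x = np.random.randn(32, 100)   # a mini-batch of 32 samples with 100 features
out = batch_norm_forward(x, gamma=np.ones(100), beta=np.zeros(100))
```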

- **Dropout**: like weight decay, a technique for suppressing overfitting. During training, neurons in the hidden layers are selected at random and the selected neurons are deleted (their signals are not propagated). At test time all neurons transmit their signals, but each output is scaled to compensate for the neurons dropped during training. Because a different random set of neurons is erased on every pass, i.e. a different model is effectively trained each time, dropout can be interpreted as a kind of ensemble method.
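
A minimal Dropout sketch (one common convention: at test time the outputs are multiplied by the keep ratio 1 - dropout_ratio):

```python
import numpy as np

class Dropout:
    """Randomly silence neurons during training; scale outputs at test time."""
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            # a neuron survives only if its random draw exceeds the dropout ratio
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        # at test time every neuron fires, scaled by the keep ratio
        return x * (1.0 - self.dropout_ratio)

h = Dropout(0.5).forward(np.random.randn(2, 5), train_flg=True)
```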

- **Hyperparameters**: the number of neurons in each layer, the batch size, the learning rate, the weight decay strength, and so on. Tuning hyperparameters against the test data leads to overfitting on the test set, so dedicated validation data is used instead. You create it yourself (e.g. with np.random.shuffle or scikit-learn's train_test_split, as used in the Kaggle work); a small sketch follows.
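
A small sketch of carving out validation data (the array shapes are dummy placeholders, not the actual dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# dummy data standing in for the real training set
x = np.random.randn(1000, 784)
t = np.random.randint(0, 10, size=1000)

# split off 20% as validation data; the test set stays untouched until the very end
x_train, x_val, t_train, t_val = train_test_split(x, t, test_size=0.2, shuffle=True, random_state=0)
```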

・ For hyperparameter optimization, first set rough ranges, observe the resulting recognition accuracy, and gradually narrow in on the range where good values lie. **For neural networks, random sampling has been reported to give better results than a regular search such as grid search.** The rough ranges are specified on a log (power-of-ten) scale, e.g. about 10^(-3) to 10^(3). It is effective to keep the number of training epochs small, because settings that look bad should be abandoned early. An epoch is the unit in which all the training data has been used once: training on 10,000 samples with mini-batches of 100 means 100 iterations = 1 epoch, as noted in learning record 22.
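
A sketch of random sampling on a log scale (the ranges and number of trials are illustrative, not from the book):

```python
import numpy as np

for trial in range(20):
    # draw each hyperparameter at random on a power-of-ten scale
    lr = 10 ** np.random.uniform(-6, -1)
    weight_decay = 10 ** np.random.uniform(-8, -4)
    # train for only a few epochs with these values, keep the best trials,
    # then narrow the ranges around them and repeat
    print(f"trial {trial}: lr={lr:.2e}, weight_decay={weight_decay:.2e}")
```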

- Bayesian optimization is also effective; I have seen it used many times on Kaggle.

Chapter 7 Convolutional Neural Network

・ **Convolutional neural network (CNN)**: adds the concepts of a **"Convolution layer" and a "Pooling layer"** to an ordinary neural network. Two representative architectures are **LeNet** and **AlexNet**.

- The layer connection "Affine - ReLU (or sigmoid)" is replaced with "Convolution - ReLU (or sigmoid) - (Pooling)". (The layers near the output remain as before.)

- The Affine layer is fully connected: every neuron is connected to every neuron in the adjacent layer. The problem is that all inputs are treated as equivalent neurons (flattened into one dimension), so information about the shape of the data cannot be exploited. A Convolution layer, by contrast, passes the data to the next layer with its shape preserved, so the data can (potentially) be understood more faithfully.

- Convolution operation: slide the filter window over the input data at a fixed interval and apply it at each position. The interval by which the filter is shifted is called the stride.

- Padding: fill the area around the input data with fixed values (such as 0).

- Pooling: an operation that shrinks the data in the height and width directions. For example, a 4x4 input is viewed in 2x2 blocks; in Max pooling, the maximum value of each block is extracted and output.
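
A small sketch of the output-size arithmetic and 2x2 max pooling (the helper name is my own; the formula (H + 2P - FH) / S + 1 is the standard one for filter size FH, padding P, stride S):

```python
import numpy as np

def conv_output_size(input_size, filter_size, stride=1, pad=0):
    """Output size of a convolution/pooling window: (H + 2P - FH) / S + 1."""
    return (input_size + 2 * pad - filter_size) // stride + 1

print(conv_output_size(7, 3, stride=1, pad=0))   # 7x7 input, 3x3 filter -> 5
print(conv_output_size(4, 2, stride=2, pad=0))   # the 2x2 pooling case below -> 2

# 2x2 max pooling with stride 2 on a 4x4 input: keep the maximum of each 2x2 block
x = np.arange(16).reshape(4, 4)
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[ 5  7]
                #  [13 15]]
```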

- **im2col** is a function used to implement these convolution operations. It expands the input data into a form convenient for the filter, unfolding each region the filter is applied to into one row of a large matrix. After expansion the matrix has more elements than the original data and consumes more memory, **but matrix multiplication is highly optimized, so reducing the convolution to the shape of one big matrix product has many benefits.**
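
A deliberately simple (loop-based, unoptimized) sketch of the idea, not the book's vectorized im2col; the function name and shapes are my own illustration:

```python
import numpy as np

def im2col_simple(x, fh, fw, stride=1, pad=0):
    """Unfold an (N, C, H, W) input so that each filter window becomes one row."""
    N, C, H, W = x.shape
    oh = (H + 2 * pad - fh) // stride + 1
    ow = (W + 2 * pad - fw) // stride + 1
    x = np.pad(x, [(0, 0), (0, 0), (pad, pad), (pad, pad)])
    cols = np.zeros((N, oh * ow, C * fh * fw))
    for n in range(N):
        idx = 0
        for i in range(0, H + 2 * pad - fh + 1, stride):
            for j in range(0, W + 2 * pad - fw + 1, stride):
                cols[n, idx] = x[n, :, i:i + fh, j:j + fw].ravel()
                idx += 1
    return cols.reshape(N * oh * ow, -1)

# the convolution then becomes a single matrix product with the flattened filters
x = np.random.randn(1, 3, 7, 7)           # (N, C, H, W)
W_filt = np.random.randn(10, 3, 5, 5)     # 10 filters of shape (C, FH, FW)
col = im2col_simple(x, 5, 5)              # -> (1*3*3, 3*5*5) = (9, 75)
out = col @ W_filt.reshape(10, -1).T      # -> (9, 10)
# out.reshape(1, 3, 3, 10).transpose(0, 3, 1, 2) recovers the (N, FN, OH, OW) layout
```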

- The filters of a convolution layer can extract primitive information such as blobs (locally clustered regions) and edges (boundaries where the color changes) and pass it on to the next layer.
