[Deep Learning from scratch] About hyperparameter optimization

Introduction

This article is an easy-to-understand write-up of **Deep Learning from scratch, Chapter 6: Techniques Related to Learning**. Even coming from a humanities background, I was able to understand it, so I hope you find it comfortable to read. I would also be delighted if you could refer to it while studying this book.

What are hyperparameters?

Hyperparameters are the parameters of a neural network that must be set by hand rather than learned from data: for example, the number of layers, the number of neurons per layer, the learning rate, and the weight-decay coefficient.

Hyperparameters have a great influence on the performance of a neural network, so we definitely want to optimize them, but doing so by hand is very laborious. Instead, let's leave this to the machine: we will implement a procedure that automatically searches for good hyperparameter values.

Put simply, we sample random hyperparameter values, train with them and measure the results, and then use those results to narrow down the range in which the optimal values lie.
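Because good learning rates and weight-decay coefficients can differ by orders of magnitude, random search usually samples them on a logarithmic scale rather than a linear one. A minimal sketch of this idea (the ranges 10^-6 to 10^-2 for the learning rate and 10^-8 to 10^-4 for weight decay follow the book's Chapter 6 example; treat them as starting assumptions):

import numpy as np

# Drawing the exponent uniformly and raising 10 to it gives every
# order of magnitude in the range an equal chance of being sampled.
lr = 10 ** np.random.uniform(-6, -2)            # learning-rate candidate in [1e-6, 1e-2]
weight_decay = 10 ** np.random.uniform(-8, -4)  # weight-decay candidate in [1e-8, 1e-4]
print(lr, weight_decay)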

#Hyperparameter tuning
import numpy as np
from sklearn.model_selection import train_test_split
from common.multi_layer_net import MultiLayerNet  # class from the book's repository

def hyper_tuning(lr_min, lr_max, wd_min, wd_max, x_train, t_train, samples=2):
    lr_list = []
    wd_list = []
    x_train = x_train[:500]
    t_train = t_train[:500]  # use a small subset because training takes a lot of time
    # Split off a validation set used only for evaluating hyperparameters
    train_x, val_x, train_t, val_t = train_test_split(x_train, t_train, test_size=0.2, random_state=0)

    for _ in range(samples):
        train_acc_list = []
        val_acc_list = []
        # Sample the learning rate and weight-decay strength on a log scale
        lr = 10 ** np.random.uniform(lr_min, lr_max)
        weight_decay_lambda = 10 ** np.random.uniform(wd_min, wd_max)
        lr_list.append(lr)
        wd_list.append(weight_decay_lambda)

        network = MultiLayerNet(input_size=784, hidden_size_list=[50], output_size=10,
                                weight_decay_lambda=weight_decay_lambda)

        for i in range(1, 101):
            grads = network.gradient(train_x, train_t)

            # Plain SGD update of every parameter
            for p in ('W1', 'b1', 'W2', 'b2'):
                network.params[p] -= lr * grads[p]

            if i % 100 == 0:
                train_acc_list.append(network.accuracy(train_x, train_t))
                val_acc_list.append(network.accuracy(val_x, val_t))

    #The rest is plotting the results and narrowing down the search range
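As a usage sketch, the function could be called on MNIST loaded via the dataset module that ships with the book's repository (the search ranges below are the ones from the book's Chapter 6 example; you would shrink them as the results narrow down):

from dataset.mnist import load_mnist

(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True)
# Search the learning rate in [1e-6, 1e-2] and weight decay in [1e-8, 1e-4]
hyper_tuning(lr_min=-6, lr_max=-2, wd_min=-8, wd_max=-4,
             x_train=x_train, t_train=t_train, samples=10)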
