[Python] A non-information-major graduate student studies machine learning from scratch #1: Perceptron

Introduction

I am a graduate student from a non-information-science background studying machine learning from scratch, and I am writing these articles to keep a record of what I have studied. I will decide how to proceed as I go, but for the time being I will step up gradually from the basics while following the well-known book "Deep Learning from Scratch". I will use Google Colab as the environment. The first topic is the perceptron, the basis of neural networks.

Table of contents

  1. What is Perceptron?
  2. Logic circuit by perceptron
  3. The limits of perceptron
  4. Multilayer Perceptron (MLP)

1. What is Perceptron?

In the human brain, cells called neurons fire and transmit signals one after another. A neural network is a system that imitates the behavior of these neurons, and the unit that plays the role of a single neuron is called a perceptron (also called an artificial neuron or a simple perceptron). Consider a system with two inputs $x_1$ and $x_2$: it outputs 1 when the sum of the inputs multiplied by their weights $w_1$ and $w_2$ exceeds a certain threshold $\theta$. In other words:

$$
y = \begin{cases} 0 & (w_1 x_1 + w_2 x_2 \leq \theta) \\ 1 & (w_1 x_1 + w_2 x_2 > \theta) \end{cases}
$$

For what follows, I rewrite this in the commonly used notation with a bias term $b$ (moving the threshold to the left-hand side, i.e. $b = -\theta$):

$$
y = \begin{cases} 0 & (w_1 x_1 + w_2 x_2 + b \leq 0) \\ 1 & (w_1 x_1 + w_2 x_2 + b > 0) \end{cases}
$$

The perceptron outputs 1 when the sum of the weighted inputs and the bias exceeds 0 (it fires), and outputs 0 otherwise.

(Figure: パーセプトロン.jpg — diagram of a perceptron)
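This decision rule translates directly into code. The following is a minimal sketch using NumPy (the function name and argument layout are my own choice; the gate implementations below specialize exactly this pattern):


import numpy as np

def perceptron(x, w, b):
    #fires (outputs 1) when the weighted sum plus the bias exceeds 0
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1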

2. Logic circuit by perceptron

Let's implement an AND gate with a perceptron. The AND gate is represented by the following truth table.

x_1 x_2 y
0 0 0
0 1 0
1 0 0
1 1 1

We implement the expression with the bias term in a straightforward way.

AND


import numpy as np

def AND(x1, x2):
    x = np.array([x1, x2])      #input
    w = np.array([0.5, 0.5])    #weight
    b = -0.7                    #bias
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1
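
Checking the function against the truth table gives the expected results (shown in the same style as the XOR checks later):

#AND(0,0) --> 0
#AND(0,1) --> 0
#AND(1,0) --> 0
#AND(1,1) --> 1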

OR gates and NAND gates can also be implemented by changing the weights and bias, for example as follows (written out as full functions after this snippet). This means that AND, OR, and NAND gates can all be built from the same mechanism just by adjusting the perceptron's weights and biases.

OR_NAND


#OR gate
w = np.array([0.5, 0.5])
b = -0.2
#NAND gate
w = np.array([-0.5, -0.5])
b = 0.7
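
Since the XOR implementation below calls OR and NAND as functions, here they are spelled out in full, following the same pattern as the AND function above (my own spelling-out of the snippet; only the weights and bias differ):


def OR(x1, x2):
    x = np.array([x1, x2])      #input
    w = np.array([0.5, 0.5])    #weight
    b = -0.2                    #bias
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1

def NAND(x1, x2):
    x = np.array([x1, x2])      #input
    w = np.array([-0.5, -0.5])  #weight
    b = 0.7                     #bias
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1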

3. The limits of perceptron

So, can any logic circuit be implemented with a perceptron? No, that is not the case. As an example, consider implementing an XOR gate. The XOR gate (exclusive OR) is represented by the following truth table.

x_1 x_2 y
0 0 0
0 1 1
1 0 1
1 1 0

In fact, this truth table cannot be implemented with a (single-layer) perceptron no matter how hard you try. To see why, consider linear separability. The perceptron's decision boundary is the line $w_1 x_1 + w_2 x_2 + b = 0$; substituting the weight and bias values given in the examples above yields

$$
\begin{cases} x_2 = -x_1 + 1.4 & (\text{AND gate}) \\ x_2 = -x_1 + 0.4 & (\text{OR gate}) \end{cases}
$$

(Figure: AND_OR.jpg — decision boundaries of the AND and OR gates)
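A figure like this can be reproduced with a few lines of matplotlib. The following is a minimal sketch (the plot styling, axis ranges, and labels are my own choices, not from the book):


import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(-0.5, 1.5, 100)
plt.plot(x1, -x1 + 1.4, label="AND boundary: x2 = -x1 + 1.4")
plt.plot(x1, -x1 + 0.4, label="OR boundary: x2 = -x1 + 0.4")
#the four possible inputs (0,0), (0,1), (1,0), (1,1)
plt.scatter([0, 0, 1, 1], [0, 1, 0, 1], c="k")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.show()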

As these boundary lines show, for the AND and OR gates the regions of 0 and 1 outputs can be separated linearly. So what about the XOR gate?

(Figure: XOR.jpg — the XOR outputs plotted in the $x_1$-$x_2$ plane)

For the XOR gate, the 0 and 1 regions cannot be separated by a straight line. In other words, a single-layer perceptron can only represent linearly separable problems.

4. Multilayer Perceptron (MLP)

As mentioned in the previous section, a single-layer perceptron cannot represent an XOR gate. The nice thing about perceptrons, however, is that they can be stacked in layers. A stack of multiple perceptrons is called a multi-layer perceptron (MLP). With an MLP, an XOR gate can be built as a combination of AND, OR, and NAND gates. Writing the NAND output as $s_1$ and the OR output as $s_2$, the XOR output $y$ can be expressed as the AND of $s_1$ and $s_2$. That is, it is represented by the following truth table.

x_1 x_2 s_1 s_2 y
0 0 1 0 0
0 1 1 1 1
1 0 1 1 1
1 1 0 1 0

We implement this based on the table above, using the AND, OR, and NAND functions defined earlier.

XOR


def XOR(x1, x2):
    s1 = NAND(x1, x2)
    s2 = OR(x1, x2)
    y = AND(s1, s2)
    return y

#XOR(0,0) --> 0
#XOR(0,1) --> 1
#XOR(1,0) --> 1
#XOR(1,1) --> 0
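
The commented outputs above can be verified with a small loop over all input combinations (a usage sketch, assuming the gate functions defined earlier are in scope):


for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"XOR({x1},{x2}) = {XOR(x1, x2)}")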

In this way, counting the input layer as layer 0, the XOR gate can be implemented as a two-layer MLP: the first layer consists of the NAND and OR gates, and the second layer of the AND gate (counting layers of weights, this is usually called a two-layer network rather than a three-layer one). MLPs lead directly into neural networks, which I plan to summarize next time, so they will come up again there.

References

Deep Learning from Scratch
Deep Learning from Scratch (GitHub repository)
