[Python] Chapter 2: Implementation of the Perceptron (the good parts of "Deep Learning from Scratch")

What is a perceptron?

A perceptron receives multiple signals as inputs and outputs a single signal. A perceptron signal is a binary value, 1 or 0. Here, 0 corresponds to "the signal does not flow" and 1 to "the signal flows".
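Concretely, a two-input perceptron outputs 1 only when the weighted sum of its inputs exceeds a threshold θ; this is exactly the rule the AND implementation below computes:

$$
y = \begin{cases} 0 & (w_1 x_1 + w_2 x_2 \le \theta) \\ 1 & (w_1 x_1 + w_2 x_2 > \theta) \end{cases}
$$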

Gate type

AND

|x1|x2|y|
|---|---|---|
|0|0|0|
|1|0|0|
|0|1|0|
|1|1|1|

NAND

|x1|x2|y|
|---|---|---|
|0|0|1|
|1|0|1|
|0|1|1|
|1|1|0|

OR

|x1|x2|y|
|---|---|---|
|0|0|0|
|1|0|1|
|0|1|1|
|1|1|1|

Implementation of perceptron

Simple implementation

- Define the AND function

def AND(x1, x2):
    w1, w2, theta = 0.5, 0.5, 0.7
    tmp = x1*w1 + x2*w2   # weighted sum of the inputs
    if tmp <= theta:
        return 0
    elif tmp > theta:
        return 1

The parameters w1, w2, and theta are initialized inside the function, and the function returns 1 if the weighted sum of the inputs exceeds the threshold, and 0 otherwise. Let's check that the output matches the AND truth table above (Figure 2-2 in the book).

AND(0, 0) # 0
AND(1, 0) # 0
AND(0, 1) # 0
AND(1, 1) # 1

I got the expected behavior.

Similarly, both NAND and OR can be implemented.
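For example, here is a minimal sketch in the same threshold style; these particular weights and thresholds are one choice among many that satisfy the NAND and OR truth tables:

def NAND(x1, x2):
    # Negated AND: the weights and threshold of AND with their signs flipped.
    w1, w2, theta = -0.5, -0.5, -0.7
    tmp = x1*w1 + x2*w2
    if tmp <= theta:
        return 0
    else:
        return 1

def OR(x1, x2):
    # OR fires when at least one input is 1, so the threshold is lower.
    w1, w2, theta = 0.5, 0.5, 0.2
    tmp = x1*w1 + x2*w2
    if tmp <= theta:
        return 0
    else:
        return 1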

Introduction of weights and biases

Replacing the threshold θ with a bias b = -θ and moving it to the left-hand side gives an equivalent formulation:

$$
y = \begin{cases} 0 & (b + w_1 x_1 + w_2 x_2 \le 0) \\ 1 & (b + w_1 x_1 + w_2 x_2 > 0) \end{cases}
$$
Checking this formulation in the Python interpreter:

>>> import numpy as np
>>> x = np.array([0, 1])
>>> w = np.array([0.5, 0.5])
>>> b = -0.7
>>> w*x
array([0. , 0.5])
>>> np.sum(w*x)
0.5
>>> np.sum(w*x) + b
-0.19999999999999998

The result is approximately -0.2; the small deviation comes from floating-point rounding.

Implementation with weights and a bias

import numpy as np

def AND(x1, x2):
    x = np.array([x1, x2])
    w = np.array([0.5, 0.5])
    b = -0.7
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1

def NAND(x1, x2):
    x = np.array([x1, x2])
    w = np.array([-0.5, -0.5])  # only the weights and bias differ from AND
    b = 0.7
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1

def OR(x1, x2):
    x = np.array([x1, x2])
    w = np.array([0.5, 0.5])
    b = -0.2
    tmp = np.sum(w*x) + b
    if tmp <= 0:
        return 0
    else:
        return 1
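As a quick sanity check (my addition, not in the book's text), the following loop prints each gate's output over all four input pairs, which should reproduce the truth tables above:

for gate in (AND, NAND, OR):
    for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        print(gate.__name__, x1, x2, '->', gate(x1, x2))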

XOR gate

The XOR gate is a logic circuit also known as the exclusive OR: it outputs 1 only when exactly one of x1 and x2 is 1.

XOR

|x1|x2|y|
|---|---|---|
|0|0|0|
|1|0|1|
|0|1|1|
|1|1|0|

A single-layer perceptron cannot represent this truth table: a perceptron separates the input space with one straight line (b + w1x1 + w2x2 = 0), but no single line can separate XOR's 1 outputs, (1, 0) and (0, 1), from its 0 outputs, (0, 0) and (1, 1). In other words, XOR is not linearly separable.

Multilayer perceptron

A single-layer perceptron cannot represent the XOR gate, but it becomes possible by stacking layers.

Combination of existing gates

The XOR gate can be represented by combining NAND, OR, and AND gates: feed x1 and x2 into both a NAND gate and an OR gate, then feed their outputs s1 and s2 into an AND gate.

XOR

|x1|x2|s1|s2|y|
|---|---|---|---|---|
|0|0|1|0|0|
|1|0|1|1|1|
|0|1|1|1|1|
|1|1|0|1|0|

XOR implementation

def XOR(x1, x2):
    s1 = NAND(x1, x2)  # first layer
    s2 = OR(x1, x2)    # first layer
    y = AND(s1, s2)    # second layer
    return y

XOR(0, 0) # 0
XOR(1, 0) # 1
XOR(0, 1) # 1
XOR(1, 1) # 0

XOR is a two-layer perceptron. A perceptron made of multiple stacked layers is called a multi-layer perceptron, and stacking layers makes it possible to represent things more flexibly, as the vectorized sketch below illustrates.
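As an illustration (my own sketch, not code from the book), the same XOR network can be written as two weight matrices with a step activation, which makes the two-layer structure explicit:

import numpy as np

def step(x):
    # Step activation: 1 where the input is positive, 0 otherwise.
    return (x > 0).astype(int)

def XOR_mlp(x1, x2):
    x = np.array([x1, x2])
    # Layer 1: a NAND neuron and an OR neuron, computed together.
    W1 = np.array([[-0.5, -0.5],   # NAND weights
                   [ 0.5,  0.5]])  # OR weights
    b1 = np.array([0.7, -0.2])
    s = step(W1 @ x + b1)          # s = [s1, s2]
    # Layer 2: an AND neuron combining s1 and s2.
    w2 = np.array([0.5, 0.5])
    b2 = -0.7
    return step(w2 @ s + b2)

XOR_mlp reproduces the same truth table as the XOR function above.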

Summary

- A perceptron is an algorithm with inputs and outputs. Given a certain input, a fixed value is output.
- A perceptron has weights and a bias as parameters.
- Perceptrons can express logic circuits such as AND and OR gates.
- The XOR gate cannot be represented by a single-layer perceptron.
- The XOR gate can be represented by a two-layer perceptron.
- A single-layer perceptron can only represent linear regions, whereas a multi-layer perceptron can represent non-linear regions.
- Multi-layer perceptrons can, in principle, represent a computer.
