# Introduction

I have started working through Chapter 2 of "Deep Learning from Scratch: The Theory and Implementation of Deep Learning Learned with Python". This is a memo of that journey.

The execution environment is macOS Mojave + Anaconda 2019.10. For details, see Chapter 1 of this memo.

(To other chapters of this memo: Chapter 1 / Chapter 2 / Chapter 3 / Chapter 4 / Chapter 5 / [Chapter 6](https://qiita.com/segavvy/items/ca4ac4c9ee1a126bff41) / Chapter 7 / Chapter 8 / Summary)

# Chapter 2 Perceptron

This chapter describes the perceptron.

# 2.1 What is a perceptron?

I didn't hit any particular stumbling blocks here, but I did wonder: why are they called neurons? I thought the explanation would be more interesting with a little more background, so let me supplement it as far as I understand.

In 1943, the neurophysiologist/surgeon McCulloch and the logician/mathematician Pitts teamed up to propose a model that realizes the nerve cells of the brain on a computer [^1]. The idea: if we can investigate the mechanism of the brain and reproduce it on a computer, we should be able to build an AI that can think like a human! It's a grand approach.

In the human brain, it is known that many nerve cells are intricately interconnected to form a network. An individual nerve cell is called a neuron in English, and the perceptron (artificial neuron, simple perceptron) is one of the models that tries to realize such a nerve cell on a computer. That is why the word neuron appears in explanations of the perceptron.

A nerve cell receives electrical signals from other nerve cells, and when the total exceeds a certain amount, it momentarily enters a firing (excited) state and transmits a signal to other nerve cells. The perceptron expresses exactly this mechanism. The input signals $x_1$ and $x_2$ represent inputs from other neurons, the weights $w_1$ and $w_2$ represent how easily the firing state is transmitted from those neurons, and the threshold $\theta$ represents how much signal the cell needs in order to fire.
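The firing rule above can be sketched directly in code: the neuron outputs 1 when the weighted sum of its inputs exceeds the threshold, and 0 otherwise. A minimal sketch (the function name and parameter values are my own, following the formulation above):

```python
def perceptron(x1, x2, w1, w2, theta):
    """Fire (return 1) when the weighted sum of inputs exceeds the threshold."""
    if w1 * x1 + w2 * x2 > theta:
        return 1
    return 0

# With w1 = w2 = 0.5 and theta = 0.7, only (1, 1) pushes the sum past the threshold.
print(perceptron(1, 1, 0.5, 0.5, 0.7))  # 1
print(perceptron(1, 0, 0.5, 0.5, 0.7))  # 0
```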

The neural network that appears in the next chapter is, in turn, an attempt to realize a network of many such nerve cells (neurons) on a computer, in order to model the human brain.

The approach of imitating the human brain is interesting, isn't it? That said, since the mechanism of the human brain has not yet been fully clarified, the algorithms in use today do not faithfully reproduce it, and AI researchers do not seem particularly intent on reproducing the brain faithfully either.

# 2.2 Simple logic circuit

You can't do much with a single neuron, but it is interesting that you can realize three kinds of computation, AND, OR, and NAND, simply by changing the parameters $w_1$, $w_2$, and $\theta$, without changing the logic at all.
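This point can be made concrete with a sketch: one function holds the logic, and each gate is just a different parameter set. The particular values below are one of many possible choices, not the only ones:

```python
def gate(x1, x2, w1, w2, theta):
    # The logic never changes; only the parameters do.
    return 1 if w1 * x1 + w2 * x2 > theta else 0

def AND(x1, x2):
    return gate(x1, x2, 0.5, 0.5, 0.7)

def OR(x1, x2):
    return gate(x1, x2, 0.5, 0.5, 0.2)

def NAND(x1, x2):
    return gate(x1, x2, -0.5, -0.5, -0.7)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, AND(x1, x2), OR(x1, x2), NAND(x1, x2))
```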

# 2.3 Implementation of Perceptron

The appearance of the bias $b$ is a little confusing, but it plays the same role as $\theta$ in that it expresses how easily the neuron fires, so let's proceed without worrying too much about the difference in notation.

# 2.4 Limits of Perceptron

As the book shows, no straight line can separate the ◯ and △ of XOR, no matter how you try. This is the limit of a perceptron with a single neuron.

As for the words linear and nonlinear, it seems fine to proceed with the book's understanding: separable or not separable by a straight line.

# 2.5 Multilayer Perceptron

It turns out that stacking layers widens the range of what can be represented, just as XOR became possible with multiple layers.
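The XOR construction can be sketched by composing single-neuron gates: a first layer of NAND and OR, whose outputs feed a second-layer AND. The helper and its parameter values are my own choices in the bias form; only the layering matters:

```python
def step(x1, x2, w1, w2, b):
    # Single neuron in bias form: fire when w1*x1 + w2*x2 + b > 0.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def NAND(x1, x2):
    return step(x1, x2, -0.5, -0.5, 0.7)

def OR(x1, x2):
    return step(x1, x2, 0.5, 0.5, -0.2)

def AND(x1, x2):
    return step(x1, x2, 0.5, 0.5, -0.7)

def XOR(x1, x2):
    # Layer 1: NAND and OR in parallel; layer 2: AND of their outputs.
    s1 = NAND(x1, x2)
    s2 = OR(x1, x2)
    return AND(s1, s2)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, XOR(*x))  # 0, 1, 1, 0
```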

# 2.6 From NAND to computer

I didn't quite follow the claim that a computer can be built from NAND gates alone, but the book introduced there, "From NAND to Tetris," apparently resolves the question. I can't spare the time right now, but I'd love to read it if I get the chance.

# 2.7 Summary

I didn't stumble over anything in Chapter 2 either.

That's all for this chapter. If you notice any mistakes, I would appreciate it if you could point them out. (To other chapters of this memo: Chapter 1 / Chapter 2 / Chapter 3 / Chapter 4 / Chapter 5 / [Chapter 6](https://qiita.com/segavvy/items/ca4ac4c9ee1a126bff41) / Chapter 7 / Chapter 8 / Summary)

[^1]: [Wikipedia: Formal Neuron](https://ja.wikipedia.org/wiki/%E5%BD%A2%E5%BC%8F%E3%83%8B%E3%83%A5%E3%83%BC%E3%83%AD%E3%83%B3)