[PYTHON] Introduction to Deep Learning ~ Backpropagation ~

Target person

This is a continuation of the [previous article](https://qiita.com/kuroitu/items/d22c8750e34d5d75fb6c). In this article, we will briefly explain backpropagation using computational graphs. The chain rule is briefly covered [here](https://qiita.com/kuroitu/items/221e8c477ffdd0774b6b#%E9%80%A3%E9%8E%96%E5%BE%8B), so I will omit it. Backpropagation tends to involve a lot of formulas, but I would like to explain it together with the implementation so that it can be understood as intuitively as possible.

Table of contents

- Backpropagation in scalar
- Backpropagation theory in scalar
- Backpropagation implementation in scalar
- Backpropagation in matrix
- Backpropagation theory in matrix
- Backpropagation theory in matrix: batch version
- Backpropagation implementation in matrix
- Output layer override
- Implementation of the __init__ method
- In conclusion

Backpropagation in scalar

As usual, let's start with backpropagation with scalars. Even with scalars, backpropagation is fairly hard to follow theoretically, but I will explain it as briefly as possible.

Backpropagation theory in scalar

First, let's briefly consider the following computational graph.

(Figure: backprop_neuron.png)

Writing out the forward propagation in the figure carefully, we get

\begin{align}
  m &= wx \\
  n &= m + b \\
  y &= \sigma(n)
\end{align}

Now let's consider its backpropagation. First, take the partial derivative at each computation node:

\begin{align}
  \frac{\partial m}{\partial x} &= w & \frac{\partial m}{\partial w} &= x \\
  \frac{\partial n}{\partial m} &= 1 & \frac{\partial n}{\partial b} &= 1 \\
  \frac{\partial y}{\partial n} &= \sigma'(n)
\end{align}

Now, from here, let's compute the partial derivatives of the output $y$ with respect to the input $x$, the weight $w$, and the bias $b$ using the [chain rule](https://qiita.com/kuroitu/items/221e8c477ffdd0774b6b#%E9%80%A3%E9%8E%96%E5%BE%8B).

\begin{align}
  \frac{\partial y}{\partial x} &= \frac{\partial y}{\partial n}\frac{\partial n}{\partial m}\frac{\partial m}{\partial x} = \sigma'(n) \times 1 \times w \\
  \frac{\partial y}{\partial w} &= \frac{\partial y}{\partial n}\frac{\partial n}{\partial m}\frac{\partial m}{\partial w} = \sigma'(n) \times 1 \times x \\
  \frac{\partial y}{\partial b} &= \frac{\partial y}{\partial n}\frac{\partial n}{\partial b} = \sigma'(n) \times 1
\end{align}


If the error $E$ flowing from upstream is passed through this, backpropagation proceeds as shown in the figure. There seem to be a lot of formulas and it looks complicated, but in the end a **computation node simply multiplies the incoming error by its own partial derivative** as it passes through. Since this is still the scalar case, it is relatively easy to follow.
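
As a quick sanity check, here is a minimal standalone sketch (not the article's layer code) that verifies these three chain-rule results numerically, assuming $\sigma$ is the sigmoid function:

    import numpy as np

    # Minimal sketch: check dy/dx, dy/dw, dy/db for y = sigma(wx + b)
    # against central differences, assuming sigma is the sigmoid function.
    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def forward(x, w, b):
        return sigmoid(w*x + b)  # m = wx, n = m + b, y = sigma(n)

    x, w, b = 0.5, -1.2, 0.3
    y = forward(x, w, b)
    dsig = y*(1 - y)             # sigma'(n) for the sigmoid

    # Analytic gradients from the chain rule above.
    dy_dx, dy_dw, dy_db = dsig*w, dsig*x, dsig

    # Numerical gradients via central differences.
    eps = 1e-6
    num_dx = (forward(x + eps, w, b) - forward(x - eps, w, b)) / (2*eps)
    num_dw = (forward(x, w + eps, b) - forward(x, w - eps, b)) / (2*eps)
    num_db = (forward(x, w, b + eps) - forward(x, w, b - eps)) / (2*eps)

    print(np.isclose(dy_dx, num_dx),
          np.isclose(dy_dw, num_dw),
          np.isclose(dy_db, num_db))  # True True True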

Backpropagation implementation in scalar

Let's implement it, coding exactly according to the formulas. The code this is added to is [here](https://qiita.com/kuroitu/items/884c62c48c2daa3def08).

baselayer.py


    def backward(self, grad):
        """
        Implementation of backpropagation
        """
        # Gradient passed through the activation function
        dact = grad*self.act.backward(self.x, self.y)
        # Gradients with respect to the weight, bias, and input
        self.dw = dact*self.x
        self.db = dact
        self.dx = dact*self.w

        return self.dx

The middle (hidden) layers are fine as is, but the output layer needs some additional processing. I will touch on that later.

Backpropagation in a matrix

Next, let's consider backpropagation with matrices. There are a lot of formulas and it looks difficult, but if you follow them carefully you can make sense of it. As before, I will lay the formulas out so that the correspondences are easy to see at a glance. ~~It feels heavy...~~

Backpropagation theory in matrix

Consider the following computational graph.

(Figure: backprop_layer.png)

It is the same one used in [forward propagation](https://qiita.com/kuroitu/items/d22c8750e34d5d75fb6c). It is a layer model with just two neurons, but as you can see, there are already quite a lot of formulas. First, let's find the partial derivative of each part, just as in the scalar case.

\begin{align}
  \frac{\partial l}{\partial x_1} &= w_{1, 1} & \frac{\partial l}{\partial w_{1, 1}} &= x_1 \\
  \frac{\partial m}{\partial l} &= 1 & \frac{\partial m}{\partial b_1} &= 1 & \frac{\partial m}{\partial r} &= 1 \\
  \frac{\partial n}{\partial x_1} &= w_{1, 2} & \frac{\partial n}{\partial w_{1, 2}} &= x_1 & \frac{\partial y_1}{\partial m} &= \sigma'_1(m) \\
  \frac{\partial p}{\partial x_2} &= w_{2, 2} & \frac{\partial p}{\partial w_{2, 2}} &= x_2 \\
  \frac{\partial q}{\partial p} &= 1 & \frac{\partial q}{\partial b_2} &= 1 & \frac{\partial q}{\partial n} &= 1 \\
  \frac{\partial r}{\partial x_2} &= w_{2, 1} & \frac{\partial r}{\partial w_{2, 1}} &= x_2 & \frac{\partial y_2}{\partial q} &= \sigma'_2(q)
\end{align}

I'm getting tired of all these formulas... but there is no way around it. Let's apply the chain rule.

\begin{align}
  \frac{\partial y_1}{\partial w_{1, 1}} &= \frac{\partial y_1}{\partial m}\frac{\partial m}{\partial l}\frac{\partial l}{\partial w_{1, 1}} = \sigma'_1(m) \times 1 \times x_1 \\
  \frac{\partial y_2}{\partial w_{1, 2}} &= \frac{\partial y_2}{\partial q}\frac{\partial q}{\partial n}\frac{\partial n}{\partial w_{1, 2}} = \sigma'_2(q) \times 1 \times x_1 \\
  \frac{\partial y_1}{\partial w_{2, 1}} &= \frac{\partial y_1}{\partial m}\frac{\partial m}{\partial r}\frac{\partial r}{\partial w_{2, 1}} = \sigma'_1(m) \times 1 \times x_2 \\
  \frac{\partial y_2}{\partial w_{2, 2}} &= \frac{\partial y_2}{\partial q}\frac{\partial q}{\partial p}\frac{\partial p}{\partial w_{2, 2}} = \sigma'_2(q) \times 1 \times x_2
\end{align}

\Rightarrow
\boldsymbol{dW} =
\left(
  \begin{array}{cc}
    \cfrac{\partial y_1}{\partial w_{1, 1}} & \cfrac{\partial y_2}{\partial w_{1, 2}} \\
    \cfrac{\partial y_1}{\partial w_{2, 1}} & \cfrac{\partial y_2}{\partial w_{2, 2}}
  \end{array}
\right)
=
\left(
  \begin{array}{c}
    x_1 \\
    x_2
  \end{array}
\right)
\left(
  \begin{array}{cc}
    \sigma'_1(m) & \sigma'_2(q)
  \end{array}
\right)
=
\boldsymbol{X}^{\top}\boldsymbol{\sigma'}

\begin{align}
  \frac{\partial y_1}{\partial b_1} &= \frac{\partial y_1}{\partial m}\frac{\partial m}{\partial b_1} = \sigma'_1(m) \times 1 \\
  \frac{\partial y_2}{\partial b_2} &= \frac{\partial y_2}{\partial q}\frac{\partial q}{\partial b_2} = \sigma'_2(q) \times 1
\end{align}

\Rightarrow
\boldsymbol{dB} =
\left(
  \begin{array}{cc}
    \cfrac{\partial y_1}{\partial b_1} & \cfrac{\partial y_2}{\partial b_2}
  \end{array}
\right)
=
\left(
  \begin{array}{cc}
     \sigma'_1(m) & \sigma'_2(q)
  \end{array}
\right)

\begin{align}
  \frac{\partial y_1}{\partial x_1} + \frac{\partial y_2}{\partial x_1} &= \frac{\partial y_1}{\partial m}\frac{\partial m}{\partial l}\frac{\partial l}{\partial x_1} + \frac{\partial y_2}{\partial q}\frac{\partial q}{\partial n}\frac{\partial n}{\partial x_1} = \sigma'_1(m) \times 1 \times w_{1, 1} + \sigma'_2(q) \times 1 \times w_{1, 2} \\
  \frac{\partial y_1}{\partial x_2} + \frac{\partial y_2}{\partial x_2} &= \frac{\partial y_1}{\partial m}\frac{\partial m}{\partial r}\frac{\partial r}{\partial x_2} + \frac{\partial y_2}{\partial q}\frac{\partial q}{\partial p}\frac{\partial p}{\partial x_2} = \sigma'_1(m) \times 1 \times w_{2, 1} + \sigma'_2(q) \times 1 \times w_{2, 2}
\end{align}

\Rightarrow
\boldsymbol{dX} =
\left(
  \begin{array}{cc}
    \cfrac{\partial y_1}{\partial x_1} + \cfrac{\partial y_2}{\partial x_1} & \cfrac{\partial y_1}{\partial x_2} + \cfrac{\partial y_2}{\partial x_2}
  \end{array}
\right)
=
\left(
  \begin{array}{cc}
    \sigma'_1(m) & \sigma'_2(q)
  \end{array}
\right)
\left(
  \begin{array}{cc}
    w_{1, 1} & w_{2, 1} \\
    w_{1, 2} & w_{2, 2}
  \end{array}
\right)
=
\boldsymbol{\sigma'}\boldsymbol{W}^{\top}

This is what we get. By multiplying these by the error coming from upstream via the [element-wise product](https://qiita.com/kuroitu/items/d22c8750e34d5d75fb6c#%E8%A1%8C%E5%88%97%E3%81%AE%E8%A6%81%E7%B4%A0%E7%A9%8D), the error propagates as shown in the figure above. As you can see from the fact that $x_1$ and $x_2$ branch and send their values to multiple computation nodes, the partial derivatives flowing back through those branches have to be added together. That is why

\left(
  \begin{array}{cc}
    \cfrac{\partial y_1}{\partial x_1} + \cfrac{\partial y_2}{\partial x_1} & \cfrac{\partial y_1}{\partial x_2} + \cfrac{\partial y_2}{\partial x_2}
  \end{array}
\right)

is the expression that appears. This formula intuitively (and theoretically) represents the effect of the inputs $x_1$ and $x_2$ on the outputs $y_1$ and $y_2$.
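
To make this concrete, here is a small standalone sketch (with made-up numbers and a sigmoid activation, not the article's code) that checks $\boldsymbol{dW} = \boldsymbol{X}^{\top}\boldsymbol{\sigma'}$ against a numerical gradient for the two-neuron layer:

    import numpy as np

    # Sketch: check dW = X^T sigma' for the 2-neuron layer above,
    # using hypothetical values and assuming a sigmoid activation.
    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def forward(X, W, B):
        return sigmoid(X @ W + B)          # y = sigma(XW + B)

    X = np.array([[0.5, -0.3]])            # (1, 2): x_1, x_2
    W = np.array([[0.2, -0.4],
                  [0.7,  0.1]])            # (2, 2): w_{i, j}
    B = np.array([[0.05, -0.02]])          # (1, 2): b_1, b_2

    Y = forward(X, W, B)
    sigma_prime = Y*(1 - Y)                # sigma'(m), sigma'(q)

    dW = X.T @ sigma_prime                 # X^T sigma'
    dX = sigma_prime @ W.T                 # sigma' W^T

    # Numerical gradient of y_1 + y_2 with respect to each w_{i, j}.
    eps = 1e-6
    num_dW = np.zeros_like(W)
    for i in range(2):
        for j in range(2):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            num_dW[i, j] = (forward(X, Wp, B).sum()
                            - forward(X, Wm, B).sum()) / (2*eps)

    print(np.allclose(dW, num_dW))         # True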

Backpropagation theory in matrix: batch version

By the way, up to this point we have implicitly assumed a batch size of $N = 1$. So what happens when $N \ne 1$? The answer is simple: only the error with respect to the bias $\boldsymbol{B}$ changes. Let's look at this with formulas, taking the input to be $N \times L$ and the output to be $N \times M$.

\begin{align}
  \underbrace{\boldsymbol{dW}}_{L \times M} &= \underbrace{\boldsymbol{X}^{\top}}_{L \times N} \underbrace{\boldsymbol{\sigma'}}_{N \times M} \\
  \underbrace{\boldsymbol{dB}}_{1 \times M} &= \underbrace{\boldsymbol{\sigma'}}_{N\times M} \\
  \underbrace{\boldsymbol{dX}}_{N \times L} &= \underbrace{\boldsymbol{\sigma'}}_{N \times M} \underbrace{\boldsymbol{W}^{\top}}_{M \times L}
\end{align}

This is what we would write at first. However, each gradient that flows back must have the same shape as the corresponding quantity in the forward pass. Of $\underbrace{\boldsymbol{dW}}_{L \times M}$, $\underbrace{\boldsymbol{dB}}_{1 \times M}$, and $\underbrace{\boldsymbol{dX}}_{N \times L}$, only $\underbrace{\boldsymbol{dB}}_{1 \times M}$ does not match the shape written above. So what should we do? Recall that when the bias $\underbrace{\boldsymbol{B}}_{1 \times M}$ propagates forward, the broadcast function automatically expands it to $\underbrace{\boldsymbol{B}}_{N \times M}$. In other words, applying exactly the same bias to every piece of batch data is equivalent to branching the same bias into $N$ copies, so all we need to do is take the sum over the batch axis.

\underbrace{\boldsymbol{dB}}_{1 \times M} = \sum_{i=1}^{N}{\underbrace{\boldsymbol{\sigma'}_i}_{1\times M}} \xrightarrow{\textrm{coding}} \textrm{sum}(\boldsymbol{\sigma'}, \textrm{axis}=0)

This concludes the theory for implementation.
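
Before moving on to the code, here is a short shape check (arbitrary sizes $N=4$, $L=3$, $M=2$ with random stand-in values, not the article's code) showing that summing over the batch axis restores the bias shape:

    import numpy as np

    # Shape check for the batch case: B of shape (1, M) is broadcast to
    # (N, M) in the forward pass, so its gradient is summed over axis 0.
    N, L, M = 4, 3, 2
    rng = np.random.default_rng(0)

    X = rng.normal(size=(N, L))
    W = rng.normal(size=(L, M))
    sigma_prime = rng.normal(size=(N, M))   # stand-in for the upstream gradient

    dW = X.T @ sigma_prime                  # (L, N) @ (N, M) -> (L, M), same shape as W
    dB = np.sum(sigma_prime, axis=0)        # (N, M) -> (M,),  matches the bias before broadcasting
    dX = sigma_prime @ W.T                  # (N, M) @ (M, L) -> (N, L), same shape as X

    print(dW.shape, dB.shape, dX.shape)     # (3, 2) (2,) (4, 3)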

Backpropagation implementation in matrix

Let's implement it by rewriting the scalar implementation.

baselayer.py


    def backward(self, grad):
        """
        Implementation of backpropagation
        """
        # Gradient passed through the activation function
        dact = grad*self.act.backward(self.x, self.y)
        # Matrix versions of the scalar gradients
        self.dw = self.x.T@dact
        self.db = np.sum(dact, axis=0)
        self.dx = dact@self.w.T

        return self.dx

Each computation has changed to a matrix operation. Also note that the `numpy.sum` function returns the sum of all elements unless you specify `axis=0`.
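
For example (a trivial standalone illustration, not part of the layer code):

    import numpy as np

    dact = np.array([[1., 2.],
                     [3., 4.]])
    print(np.sum(dact))          # 10.0 -> sums every element
    print(np.sum(dact, axis=0))  # [4. 6.] -> sums down the batch axis, one value per neuron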

Output layer override

A little processing needs to be added to the back propagation of the output layer.

outputlayer.py


    def backward(self, t):
        """
        Implementation of backpropagation
        """
        # When the activation function of the output layer is the softmax function
        # and the loss function is the cross entropy error,
        # the backpropagated error is handled as a special case.
        if isinstance(self.act, Softmax) \
        and isinstance(self.errfunc, CrossEmtropy):
            dact = self.y - t
            self.dw = self.x.T@dact
            self.db = np.sum(dact, axis=0)
            self.dx = dact@self.w.T

            return self.dx
        else:
            grad = self.errfunc.backward(self.y, t)
            return super().backward(grad)

The content of the override itself is simple, isn't it? When the activation function of the output layer is the softmax function and the loss function is the cross entropy error, the gradient through the activation function simplifies to $y - t$, as in the code. In any other case, the derivative of the loss function must be calculated and passed on as the upstream error.
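
If you want to convince yourself of the $y - t$ shortcut, here is a small standalone sketch (not the article's activation and loss classes) that compares it with a numerical gradient of the cross entropy loss with respect to the pre-activation:

    import numpy as np

    # Sketch: for softmax + cross entropy, the gradient with respect to the
    # pre-activation a simplifies to y - t (hypothetical values below).
    def softmax(a):
        a = a - a.max()                      # shift for numerical stability
        e = np.exp(a)
        return e / e.sum()

    def cross_entropy(y, t):
        return -np.sum(t*np.log(y + 1e-12))

    a = np.array([0.3, -1.2, 0.8])           # pre-activation
    t = np.array([0.0, 0.0, 1.0])            # one-hot target

    y = softmax(a)
    analytic = y - t                         # the shortcut used in backward()

    # Numerical gradient of the loss with respect to a.
    h = 1e-6
    numeric = np.zeros_like(a)
    for i in range(a.size):
        ap, am = a.copy(), a.copy()
        ap[i] += h
        am[i] -= h
        numeric[i] = (cross_entropy(softmax(ap), t)
                      - cross_entropy(softmax(am), t)) / (2*h)

    print(np.allclose(analytic, numeric))    # True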

Implementation of the __init__ method

Now, let's give the class the `errfunc` member used above.

baselayer.py


    def __init__(self, *, prev=1, n=1,
                 name="", wb_width=1,
                 act="ReLU", err_func="square",
                 **kwds):
        self.prev = prev  # Number of outputs of the previous layer = number of inputs to this layer
        self.n = n        # Number of outputs of this layer = number of inputs to the next layer
        self.name = name  # Name of this layer

        # Set the weights and the bias
        self.w = wb_width*np.random.randn(prev, n)
        self.b = wb_width*np.random.randn(n)

        # Get the activation function (class)
        self.act = get_act(act)

        # Get the loss function (class)
        self.errfunc = get_errfunc(err_func)

Eventually I will write an article about the loss function.
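
As a quick check of what this initializer sets up, the weight and bias shapes follow directly from `prev` and `n`. Here is a minimal stand-in sketch (arbitrary values, without the `get_act`/`get_errfunc` helper modules):

    import numpy as np

    # Stand-in for the initializer above: with prev=3 inputs and n=2 neurons,
    # the weight matrix is (3, 2) and the bias vector is (2,).
    prev, n, wb_width = 3, 2, 1
    w = wb_width*np.random.randn(prev, n)
    b = wb_width*np.random.randn(n)
    print(w.shape, b.shape)  # (3, 2) (2,)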

In conclusion

I wonder if the difficulty of coloring mathematical formulas can be solved ...

Deep learning series

- Introduction to Deep Learning ~ Basics ~
- Introduction to Deep Learning ~ Coding Preparation ~
- Introduction to Deep Learning ~ Forward Propagation ~
- Introduction to Deep Learning ~ Backpropagation ~
- List of activation functions (2020)
- Thorough understanding of im2col
- Complete understanding of numpy.pad function
