[PYTHON] Deep Learning with Shogi AI on Mac and Google Colab Chapter 7 1-4


Chapter 7 Policy Network

7.1~7.4 policy.py

Meaning of 192 filters

Consider the input layer: the input is 104 board-position planes of 9x9 (one 9x9 "image" per channel, 104ch in total), and the number of filters is 192 (the constant ch in the code). Apply the first filter to each of the 104 input channels to get 104 filtered 9x9 images, then sum them into a single 9x9 image. That is 1 output channel. The second filter produces the second output channel in the same way, and so on up to the 192nd filter. The result is 192 images, that is, 192 output channels.
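The channel counting above can be checked with a small NumPy sketch. This is a naive convolution written for clarity, not the Chainer implementation; only the shapes match the network here:

```python
import numpy as np

def naive_conv2d(x, w):
    # x: (C_in, H, W) input planes; w: (C_out, C_in, kH, kW) filters.
    # Each filter spans all C_in channels; its per-channel results are
    # summed, so every filter yields exactly one output channel.
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                out[o, y, z] = np.sum(x[:, y:y + kh, z:z + kw] * w[o])
    return out

x = np.random.rand(104, 9, 9)        # 104 input planes of 9x9
w = np.random.rand(192, 104, 3, 3)   # 192 filters of 3x3 spanning 104 channels
y = naive_conv2d(x, w)
print(y.shape)  # (192, 7, 7) -- with pad=1, as in policy.py, it stays 9x9
```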

Meaning of 1x1 filter

Pointwise convolution. Its purpose is to reduce (or remap) the number of channels. Picture an elongated filter of "1 pixel x 1 pixel x number of channels" (https://www.robotech-note.com/entry/2017/12/24/191936). Input: one pixel value from each of the 192 input channels; output: a single pixel value. Doing this for every pixel yields one output image. Repeating the whole procedure with a different set of weights for each output channel yields as many output images as there are output channels. In the end it is the same mechanism as the 192 filters above, only with a filter size of 1x1.
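A 1x1 convolution is just a per-pixel linear mix of the channels. A minimal NumPy sketch, with shapes chosen to match l13 in the code below (192 channels in, 27 out):

```python
import numpy as np

c_in, c_out, h, w = 192, 27, 9, 9
x = np.random.rand(c_in, h, w)    # 192 input planes
wt = np.random.rand(c_out, c_in)  # a 1x1 filter is one weight per input channel

# For every pixel, mix its 192 channel values into 27 values:
y = np.einsum('oc,chw->ohw', wt, x)
print(y.shape)  # (27, 9, 9): spatial size unchanged, only channels remapped

# Pixel-by-pixel view of the same computation:
assert np.allclose(y[:, 3, 4], wt @ x[:, 3, 4])
```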

python-dlshogi\pydlshogi\network\policy.py


#!/usr/bin/env python3
# -*- coding: utf-8 -*-

from chainer import Chain
import chainer.functions as F
import chainer.links as L

from pydlshogi.common import *

ch = 192
class PolicyNetwork(Chain):
# Input: 104 channels, i.e. 104 board-position planes of 9x9.
# Number of filters: 192.
# For each filter, the 9x9 results of filtering every input channel are
# summed over the input channels and output as 1 channel.
# Since there are 192 filters, the number of output channels is 192.
    def __init__(self):
        super(PolicyNetwork, self).__init__()
        with self.init_scope():
            self.l1 = L.Convolution2D(in_channels = 104, out_channels = ch, ksize = 3, pad = 1)
            self.l2 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l3 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l4 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l5 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l6 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l7 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l8 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l9 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l10 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l11 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l12 = L.Convolution2D(in_channels = ch, out_channels = ch, ksize = 3, pad = 1)
            self.l13 = L.Convolution2D(in_channels = ch, out_channels = MOVE_DIRECTION_LABEL_NUM,
                                        ksize = 1, nobias = True)
            # Meaning of filter size 1x1 (ksize=1):
            # input: one pixel value from each of the 192 channels, output: one pixel value.
            # Doing this for every pixel yields one output image; repeating it with
            # different weights per output channel yields one image per output channel.
            self.l13_bias = L.Bias(shape=(9*9*MOVE_DIRECTION_LABEL_NUM))
            # MOVE_DIRECTION_LABEL_NUM = 27: 20 movement directions + 7 droppable piece types.

    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        h3 = F.relu(self.l3(h2))
        h4 = F.relu(self.l4(h3))
        h5 = F.relu(self.l5(h4))
        h6 = F.relu(self.l6(h5))
        h7 = F.relu(self.l7(h6))
        h8 = F.relu(self.l8(h7))
        h9 = F.relu(self.l9(h8))
        h10 = F.relu(self.l10(h9))
        h11 = F.relu(self.l11(h10))
        h12 = F.relu(self.l12(h11))
        h13 = self.l13(h12)
        return self.l13_bias(F.reshape(h13,(-1, 9*9*MOVE_DIRECTION_LABEL_NUM)))
        # The softmax is not applied at the output because training uses
        # F.softmax_cross_entropy, which computes the softmax and the
        # cross-entropy error at the same time.

common.py bb_rotate_180() Argument: an 81-bit integer bitboard (one element of piece_bb or occupied). Output: the same bitboard with its 81 bits reversed, i.e. the board rotated 180 degrees.
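A standalone sketch of the same idea, without python-shogi: rotating the board 180 degrees is just reversing the 81 bits, so square pos maps to square 80 - pos.

```python
def rotate_180(bb):
    # bb: an 81-bit integer bitboard; returns it with the bit order reversed.
    bb_r180 = 0
    for pos in range(81):
        if bb & (1 << pos):
            bb_r180 |= 1 << (80 - pos)  # square pos maps to square 80 - pos
    return bb_r180

bb = (1 << 0) | (1 << 5)      # pieces on squares 0 and 5
print(bin(rotate_180(bb)))    # bits 80 and 75 are now set
assert rotate_180(rotate_180(bb)) == bb  # rotating twice restores the board
```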

python-dlshogi\pydlshogi\common.py


#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import shogi

#Movement constants
# Defined as UP=0, UP_LEFT=1, ..., UP_RIGHT_PROMOTE=19.
# For example, writing X = UP assigns 0 to the variable X.
MOVE_DIRECTION = [
    UP, UP_LEFT, UP_RIGHT, LEFT, RIGHT, DOWN, DOWN_LEFT, DOWN_RIGHT,
    UP2_LEFT, UP2_RIGHT,
    UP_PROMOTE, UP_LEFT_PROMOTE, UP_RIGHT_PROMOTE, LEFT_PROMOTE, RIGHT_PROMOTE,
    DOWN_PROMOTE, DOWN_LEFT_PROMOTE, DOWN_RIGHT_PROMOTE,
    UP2_LEFT_PROMOTE, UP2_RIGHT_PROMOTE
] = range(20)

#Conversion table
# The *_PROMOTE names are already defined in MOVE_DIRECTION above,
# as UP_PROMOTE=10, ..., UP_RIGHT_PROMOTE=19.
MOVE_DIRECTION_PROMOTED = [
    UP_PROMOTE, UP_LEFT_PROMOTE, UP_RIGHT_PROMOTE, LEFT_PROMOTE, RIGHT_PROMOTE,
    DOWN_PROMOTE, DOWN_LEFT_PROMOTE, DOWN_RIGHT_PROMOTE,
    UP2_LEFT_PROMOTE, UP2_RIGHT_PROMOTE
]

#Number of labels representing moves
MOVE_DIRECTION_LABEL_NUM = len(MOVE_DIRECTION) + 7 # the 7 is the number of droppable piece types

# rotate 180 degrees
# shogi.I1 is 80, ..., shogi.A9 is 0. In other words, SQUARES_R180 = [80, 79, ..., 1, 0].
SQUARES_R180 = [
    shogi.I1, shogi.I2, shogi.I3, shogi.I4, shogi.I5, shogi.I6, shogi.I7, shogi.I8, shogi.I9,
    shogi.H1, shogi.H2, shogi.H3, shogi.H4, shogi.H5, shogi.H6, shogi.H7, shogi.H8, shogi.H9,
    shogi.G1, shogi.G2, shogi.G3, shogi.G4, shogi.G5, shogi.G6, shogi.G7, shogi.G8, shogi.G9,
    shogi.F1, shogi.F2, shogi.F3, shogi.F4, shogi.F5, shogi.F6, shogi.F7, shogi.F8, shogi.F9,
    shogi.E1, shogi.E2, shogi.E3, shogi.E4, shogi.E5, shogi.E6, shogi.E7, shogi.E8, shogi.E9,
    shogi.D1, shogi.D2, shogi.D3, shogi.D4, shogi.D5, shogi.D6, shogi.D7, shogi.D8, shogi.D9,
    shogi.C1, shogi.C2, shogi.C3, shogi.C4, shogi.C5, shogi.C6, shogi.C7, shogi.C8, shogi.C9,
    shogi.B1, shogi.B2, shogi.B3, shogi.B4, shogi.B5, shogi.B6, shogi.B7, shogi.B8, shogi.B9,
    shogi.A1, shogi.A2, shogi.A3, shogi.A4, shogi.A5, shogi.A6, shogi.A7, shogi.A8, shogi.A9,
]

def bb_rotate_180(bb):
    # bb is one element of piece_bb or occupied, i.e. an 81-bit integer bitboard.
    bb_r180 = 0
    for pos in shogi.SQUARES: # SQUARES is range(0, 81)
        if bb & shogi.BB_SQUARES[pos] > 0:
        # BB_SQUARES = [0b000...0001, 0b000...0010, 0b000...0100, ..., 0b100...0000], 81 elements.
        # & is the bitwise AND operator.
            bb_r180 += 1 << SQUARES_R180[pos]
            # a << b shifts the bits of a left by b places.
    return bb_r180

features.py make_input_features() Arguments: piece_bb, occupied, pieces_in_hand. Output: a list of 9x9 planes, first for the first player's board pieces and pieces in hand, then the same for the second player: [(9x9 matrix), (9x9 matrix), ...] with, per color, 14 board-piece planes plus (18 + 4 + 4 + 4 + 4 + 2 + 2) pieces-in-hand planes.
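The plane counts can be tallied directly. The 14 board piece types and the hand maxima come from python-shogi; the arithmetic itself is just this:

```python
board_planes = 14                          # 8 basic + 6 promoted piece types
hand_planes = 18 + 4 + 4 + 4 + 4 + 2 + 2   # pawn, lance, knight, silver, gold, bishop, rook
per_color = board_planes + hand_planes
total = 2 * per_color
print(per_color, total)  # 52 104 -- matching in_channels=104 in policy.py
```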

make_input_features_from_board() Takes board as an argument, extracts piece_bb, occupied, pieces_in_hand (and turn) from it, and calls make_input_features.

make_output_label() Arguments: move (its move_from and move_to) and color (first or second player). color is used so that move_direction works from both players' points of view.

Output: think of it as a base-81 number, with move_direction as the 81s digit and move_to as the 1s digit. The 81s digit takes 27 values (MOVE_DIRECTION_LABEL_NUM); the 1s digit takes 9x9 = 81 values.

In other words, the move is converted into a single number. Incidentally, this number is also an index into the output of the NN, used when you want to extract the desired element from it.
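A minimal sketch of the encoding and its inverse. The constant matches common.py; encode and decode are illustrative names, not functions from the source:

```python
MOVE_DIRECTION_LABEL_NUM = 27  # 20 directions + 7 droppable piece types

def encode(move_direction, move_to):
    # base-81: move_direction is the upper digit, move_to the lower
    return 9 * 9 * move_direction + move_to

def decode(move_label):
    return divmod(move_label, 9 * 9)  # (move_direction, move_to)

label = encode(10, 40)        # direction 10 (UP_PROMOTE), destination square 40
print(label)                  # 850
assert decode(label) == (10, 40)
assert encode(26, 80) == MOVE_DIRECTION_LABEL_NUM * 81 - 1  # largest label, 2186
```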

make_features() Takes position as an argument, extracts piece_bb, occupied, pieces_in_hand from it, and calls make_input_features.

python-dlshogi\pydlshogi\features.py


#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import numpy as np
import shogi
import copy

from pydlshogi.common import *

def make_input_features(piece_bb, occupied, pieces_in_hand):
    features = []
    for color in shogi.COLORS:
        # board pieces
        for piece_type in shogi.PIECE_TYPES_WITH_NONE[1:]: # skip NONE; 14 piece types remain
            bb = piece_bb[piece_type] & occupied[color] # positions of this color's pieces of this type (& is bitwise AND)
            feature = np.zeros(9*9)
            for pos in shogi.SQUARES: # SQUARES is range(0, 81)
                if bb & shogi.BB_SQUARES[pos] > 0: # BB_SQUARES = [0b1, 0b10, 0b100, ..., 1 << 80], 81 elements
                    feature[pos] = 1                # each bit of the bitboard becomes one element, e.g. 0b1010 sets feature[1] and feature[3]
            features.append(feature.reshape((9, 9))) # return the decomposed bitboard as a 9x9 matrix

        # pieces in hand
        for piece_type in range(1, 8):
            for n in range(shogi.MAX_PIECES_IN_HAND[piece_type]):
            # shogi.MAX_PIECES_IN_HAND gives the maximum number of each piece in hand.
            # Indices 1 to 7 probably look like this:
            # shogi.MAX_PIECES_IN_HAND[1] = 18: Pawn
            # shogi.MAX_PIECES_IN_HAND[2] = 4: Lance
            # shogi.MAX_PIECES_IN_HAND[3] = 4: Knight
            # shogi.MAX_PIECES_IN_HAND[4] = 4: Silver
            # shogi.MAX_PIECES_IN_HAND[5] = 4: Gold
            # shogi.MAX_PIECES_IN_HAND[6] = 2: Bishop
            # shogi.MAX_PIECES_IN_HAND[7] = 2: Rook
                if piece_type in pieces_in_hand[color] and n < pieces_in_hand[color][piece_type]:
                    feature = np.ones(9*9)
                else:
                    feature = np.zeros(9*9)
                features.append(feature.reshape((9, 9)))

    return features
    # Output: first player's board pieces and pieces in hand,
    # then the same for the second player:
    # [(9x9 matrix),
    #  (9x9 matrix), ... 14 board planes, then (18 + 4 + 4 + 4 + 4 + 2 + 2) hand planes,
    #  (9x9 matrix),
    #  (9x9 matrix), ... the same again for the other color]

def make_input_features_from_board(board): # extract the inputs from board and call make_input_features
    if board.turn == shogi.BLACK:
        piece_bb = board.piece_bb
        occupied = (board.occupied[shogi.BLACK], board.occupied[shogi.WHITE])
        pieces_in_hand = (board.pieces_in_hand[shogi.BLACK], board.pieces_in_hand[shogi.WHITE])
    else:
        piece_bb = [bb_rotate_180(bb) for bb in board.piece_bb]
        occupied = (bb_rotate_180(board.occupied[shogi.WHITE]), bb_rotate_180(board.occupied[shogi.BLACK]))
        pieces_in_hand = (board.pieces_in_hand[shogi.WHITE], board.pieces_in_hand[shogi.BLACK])

    return make_input_features(piece_bb, occupied, pieces_in_hand)


def make_output_label(move, color):
    move_to = move.to_square
    move_from = move.from_square
                # Move class:
                #   from_square: the source square, with the board numbered 0 to 80.
                #     Dividing by 9, the quotient is the y coordinate and the
                #     remainder is the x coordinate (both 0-origin).
                #   to_square: same as above (the destination).
                #
                #   x coordinate                         y coordinate
                #   0   1   2   3   4   5   6   7   8
                #   0   1   2   3   4   5   6   7   8      0
                #   9   10  11  12  13  14  15  16  17     1
                #   18  19  20  21  22  23  24  25  26     2
                #   27  28  29  30  31  32  33  34  35     3
                #   36  37  38  39  40  41  42  43  44     4
                #   45  46  47  48  49  50  51  52  53     5
                #   54  55  56  57  58  59  60  61  62     6
                #   63  64  65  66  67  68  69  70  71     7
                #   72  73  74  75  76  77  78  79  80     8

    #If white, rotate the board
    if color == shogi.WHITE:
        move_to = SQUARES_R180[move_to]
        if move_from is not None: # a move of a piece on the board, not a drop from hand
            move_from = SQUARES_R180[move_from]

    # move direction
    if move_from is not None: # a move of a piece on the board, not a drop from hand
        to_y, to_x = divmod(move_to, 9)
        from_y, from_x = divmod(move_from, 9)
        dir_x = to_x - from_x
        dir_y = to_y - from_y
        if dir_y < 0 and dir_x == 0:
            move_direction = UP
        elif dir_y == -2 and dir_x == -1:
            move_direction = UP2_LEFT
        elif dir_y == -2 and dir_x == 1:
            move_direction = UP2_RIGHT
        elif dir_y < 0 and dir_x < 0:
            move_direction = UP_LEFT
        elif dir_y < 0 and dir_x > 0:
            move_direction = UP_RIGHT
        elif dir_y == 0 and dir_x < 0:
            move_direction = LEFT
        elif dir_y == 0 and dir_x > 0:
            move_direction = RIGHT
        elif dir_y > 0 and dir_x == 0:
            move_direction = DOWN
        elif dir_y > 0 and dir_x < 0:
            move_direction = DOWN_LEFT
        elif dir_y > 0 and dir_x > 0:
            move_direction = DOWN_RIGHT

        # promote
        if move.promotion:
            move_direction = MOVE_DIRECTION_PROMOTED[move_direction]
    else:
        # Drop from hand.
        # len(MOVE_DIRECTION) is 20.
        # move.drop_piece_type is the type of the dropped piece (1 to 7),
        # and the -1 makes the drop labels start right after the 20 directions.
        move_direction = len(MOVE_DIRECTION) + move.drop_piece_type - 1


    move_label = 9 * 9 * move_direction + move_to
    # Think of it as a base-81 number: move_direction is the 81s digit
    # and move_to the 1s digit.
    # The 81s digit takes 27 values; the 1s digit takes 9x9 = 81 values.

    return move_label

def make_features(position):
    piece_bb, occupied, pieces_in_hand, move, win = position
    features = make_input_features(piece_bb, occupied, pieces_in_hand)

    return(features, move, win)
