[PYTHON] I tried to classify Shogi players Takami 7th Dan and Masuda 6th Dan by CNN [For CNN beginners]

What I did in this article

- **Implemented image classification with a CNN in Keras**
- **Tried to classify Takami 7th Dan and Masuda 6th Dan, who are said to look alike in the shogi world**
- After training, the classification of Takami 7th Dan and Masuda 6th Dan turned out poorly.
- In contrast, "Takami 7th Dan vs. Garry Kasparov" and "Takami 7th Dan vs. Fujii Nikan" could be classified well.
- So, after all, Takami 7th Dan and Masuda 6th Dan really are similar...?

Introduction

This may be out of the blue, but do you know Takami 7th Dan and Masuda 6th Dan, who are said to look alike in the shogi world? (The upper photo is Takami 7th Dan, the lower is Masuda 6th Dan.) 高見七段 images (9).jfif I feel they share many similar features, such as narrow eyes and black-rimmed glasses. I like shogi myself, and until a few years ago I genuinely mixed the two up.

**Having learned CNNs with Keras, I tried to classify these two players.**

Implementation overview

The libraries used are listed below. The implementation was done on Google Colab.

python


import cv2
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import random
import pandas as pd

from PIL import Image
from sklearn.model_selection import train_test_split
from google.colab.patches import cv2_imshow

Cut out the face

First, I collected images via Google image search. **About 40 images were gathered for each person.** That is not many, but there simply are not that many photos available, so this was the limit...
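The download step itself is not shown in the article (the images came from a Google image search). As a rough sketch, not what the author actually did, the collection could be automated with the third-party icrawler library; the keyword and folder name here are assumptions.

python


#Sketch only: bulk-download images with icrawler (not used in the article)
from icrawler.builtin import GoogleImageCrawler

crawler = GoogleImageCrawler(storage={"root_dir": "masuda_orig"})  #save into masuda_orig
crawler.crawl(keyword="Masuda Yasuhiro shogi", max_num=40)         #keyword is an assumption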

Next, crop out **only the face region** from each downloaded image. The trained model needed for face detection can be downloaded from here: https://github.com/opencv/opencv/tree/master/data/haarcascades I referred to the following site for how to use it.

-[Python] Face recognition using OpenCV was easier than expected: https://chusotsu-program.com/opencv-frontalface/

python


#Download from github
HAAR_FILE = "haarcascade_frontalface_alt2.xml"
cascade = cv2.CascadeClassifier(HAAR_FILE)

#These two constants are defined elsewhere in the notebook; the values here are assumptions
size_im = 64        #edge length (px) the cropped faces are resized to
DATA_DIR = "./"     #root directory for the output folders

m_list = os.listdir("masuda_orig")

#Crop Masuda 6-dan's face from each downloaded image
for m_num, m in enumerate(m_list):
  image = cv2.imread("masuda_orig/" + m)
  face_list = cascade.detectMultiScale(image, minSize=(10, 10))
  for i, (x, y, w, h) in enumerate(face_list):
    trim = image[y: y+h, x: x+w]
    trim = cv2.resize(trim, (size_im, size_im))
    cv2.imwrite(DATA_DIR + 'masuda_tmp/masuda' + str(m_num) + "_" + str(i+1) + '.jpg', trim)

The faces cropped from the images in the "masuda_orig" folder are saved to the "masuda_tmp" folder. The same processing was done for Takami 7th Dan (a small helper that handles both players is sketched below). As an example, the photo of Takami 7th Dan shown above is cropped out like this: takami_40_1.jpg
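Since the per-player processing is identical, it could be wrapped in a small helper. This is only a sketch, not the article's code: crop_faces is a hypothetical function name, the "takami_orig" folder name is assumed, and it relies on cascade, size_im, and DATA_DIR from the block above.

python


#Sketch only: crop every detected face in src_dir and save it under dst_dir
def crop_faces(src_dir, dst_dir, prefix):
  os.makedirs(dst_dir, exist_ok=True)
  for num, fname in enumerate(os.listdir(src_dir)):
    image = cv2.imread(os.path.join(src_dir, fname))
    if image is None:   #skip files OpenCV cannot read
      continue
    for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(image, minSize=(10, 10))):
      trim = cv2.resize(image[y: y+h, x: x+w], (size_im, size_im))
      cv2.imwrite(os.path.join(dst_dir, prefix + str(num) + "_" + str(i+1) + ".jpg"), trim)

crop_faces("masuda_orig", DATA_DIR + "masuda_tmp", "masuda")
crop_faces("takami_orig", DATA_DIR + "takami_tmp", "takami")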

Creation of training data and test data

**Next, from the cropped images, I removed faces that were misdetected and faces of other people who happened to be in the photos.** This part was done by hand.

The images obtained this way are then split into training data and test data.

python


m_list = os.listdir("masuda")
t_list = os.listdir("takami")

X = []
y = []

#Masuda 6-dan: label 1
for m in m_list:
  image = Image.open("masuda/" + m)
  image = image.convert("RGB")
  image = np.asarray(image)
  X.append(image)
  y.append([1])

#Takami 7-dan: label 0
for t in t_list:
  image = Image.open("takami/" + t)
  image = image.convert("RGB")
  image = np.asarray(image)
  X.append(image)
  y.append([0])

X = np.asarray(X)
y = np.asarray(y)

#Split into training and test data (70% / 30%)
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True, test_size=0.3)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

#Scale pixel values to [0, 1]
X_train = X_train.astype("float") / 255
X_test  = X_test.astype("float")  / 255

Creating a learning model and executing learning

The model runs the image through convolutional layers and then through a fully connected layer, ending in a single sigmoid unit for binary classification. I referred to the following sites for the model.

- Create an AI that "selects Ayataka" with image recognition: https://qiita.com/tomo_20180402/items/e8c55bdca648f4877188
- Introduction to Keras learned from brain death: https://qiita.com/wataoka/items/5c6766d3e1c674d61425

python


from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPool2D
from keras.optimizers import Adam

from keras.layers import Dense, Dropout, Activation, Flatten


#Four Conv + MaxPool + Dropout blocks, then a fully connected layer
#and a single sigmoid unit for binary classification
model = Sequential()
model.add(Conv2D(32,(3,3),activation="relu",input_shape=(size_im,size_im,3)))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.1))
model.add(Conv2D(64,(3,3),activation="relu"))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.1))
model.add(Conv2D(128,(3,3),activation="relu"))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.1))
model.add(Conv2D(128,(3,3),activation="relu"))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.1))
model.add(Flatten())
model.add(Dense(512,activation="relu"))
model.add(Dense(1,activation="sigmoid"))

model.summary()
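With the assumed size_im = 64 from earlier, the feature map shrinks from 64x64 down to 2x2 over the four blocks, so 2 x 2 x 128 = 512 values enter the Dense layer, which is what model.summary() should show. A quick sanity check of that arithmetic (under the size_im = 64 assumption):

python


#Sketch only: trace the feature-map size for the assumed size_im = 64
s = 64
for _ in range(4):
  s = (s - 2) // 2   #3x3 valid conv (-2), then 2x2 max pooling (//2)
print(s, "x", s, "x 128 =", s * s * 128, "features into the Dense layer")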

Classification result after learning

**After training, check the model's accuracy on the training and test data, and look at which images were misclassified.**

python


#Compile and train the model
optim = Adam()
model.compile(loss="binary_crossentropy",
              optimizer=optim,
              metrics=["acc"])

model.fit(X_train, y_train,
          epochs=20,
          batch_size=2,
          validation_data=(X_test, y_test))
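To see at a glance how the training and test accuracy diverge over the epochs (the overfitting discussed below), the return value of the fit call above can be captured and plotted. A minimal sketch, assuming the same fit call as above:

python


#Sketch only: capture the training history and plot the learning curves
history = model.fit(X_train, y_train,
                    epochs=20,
                    batch_size=2,
                    validation_data=(X_test, y_test))

plt.plot(history.history["acc"], label="train acc")
plt.plot(history.history["val_acc"], label="test acc")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()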

python


#Display the images the model misclassified
df = pd.DataFrame()
df["pred"] = model.predict(X_test).flatten()
df["test"] = y_test.flatten()
df["pred"] = df["pred"].apply(lambda x: 0 if x < 0.5 else 1)  #threshold the sigmoid output at 0.5
df["acc"] = df["pred"] == df["test"]

mistake_list = df[df["acc"] == 0].index  #indices of the misclassified test images
fig, ax = plt.subplots(1, len(mistake_list), figsize=(20,5))
for i, test_i in enumerate(mistake_list):
  ax[i].imshow(X_test[test_i, ...])
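Since seaborn is already imported, a confusion matrix is another quick way to see how the errors are split between the two players. A minimal sketch, not part of the article's code:

python


#Sketch only: confusion matrix of thresholded predictions vs. true labels
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(df["test"], df["pred"])
sns.heatmap(cm, annot=True, fmt="d",
            xticklabels=["Takami (0)", "Masuda (1)"],
            yticklabels=["Takami (0)", "Masuda (1)"])
plt.xlabel("predicted")
plt.ylabel("actual")
plt.show()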

Takami 7th Dan and Masuda 6th Dan

Training for 20 epochs, the accuracy came out to **90% on the training data and 67% on the test data** (**40 training images and 19 test images**). You can see a tendency to **overfit the training data**, and 67% for a binary classification is quite low. The misclassified images are shown below; the two on the left do indeed look hard to classify.

ダウンロード (3).png

Takami 7th Dan and Fujii Nikan

To check whether the accuracy was low because Takami 7th Dan and Masuda 6th Dan really are that similar, I also tried classifying "Takami 7th Dan vs. Fujii Nikan". **40 training images and 19 test images.**

**The result was 92% on the training data and 88% on the test data.** The accuracy is clearly higher than for the classification against Masuda 6th Dan. So it really is hard to tell Takami 7th Dan and Masuda 6th Dan apart...? The misclassified images are below; nothing in particular stands out... ダウンロード (2).png

Takami 7th Dan and Garry Kasparov

As someone who looks completely different, I also tried classification against former chess world champion Garry Kasparov. images (19).jfif The training and test data were 48 and 21 images, respectively.

The result was 100% on both the training data and the test data. 100% is a little suspicious, but it is still more accurate than the classification against Masuda 6th Dan or Fujii Nikan.

Summary

The classification results (test accuracy) were as follows.

- "Takami 7th Dan vs. Masuda 6th Dan": 67%
- "Takami 7th Dan vs. Fujii Nikan": 88%
- "Takami 7th Dan vs. Garry Kasparov": 100%

**So even from the machine's point of view, these two seem to look similar.** However, a big problem this time was that the training data was very small, about 40 images per person. As a result, the numbers themselves are not very reliable and will vary depending on how the data happens to be split into training and test sets. With more data, the results would likely become more accurate.
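One rough way to see how much the score depends on the split (not done in the article) is to retrain on several random splits and compare the test accuracies. In the sketch below, build_model() is a hypothetical helper assumed to rebuild the CNN defined above.

python


#Sketch only: estimate how much the test accuracy varies with the train/test split
accs = []
for seed in range(5):
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=True,
                                            test_size=0.3, random_state=seed)
  X_tr, X_te = X_tr.astype("float") / 255, X_te.astype("float") / 255
  m = build_model()                                  #hypothetical: rebuilds the CNN above
  m.compile(loss="binary_crossentropy", optimizer=Adam(), metrics=["acc"])
  m.fit(X_tr, y_tr, epochs=20, batch_size=2, verbose=0)
  accs.append(m.evaluate(X_te, y_te, verbose=0)[1])  #accuracy on this split

print("test accuracy per split:", accs)
print("mean:", np.mean(accs), "std:", np.std(accs))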

Code used

The code actually used is available here: https://colab.research.google.com/drive/14Dg2-uQWSf4NT2OnxTWGVSEDST3O68d8?usp=sharing
