[PYTHON] I wanted to classify Shadowverse card images by leader class


This article is part of Link Information Systems' "2021 New Year Advent Calendar TechConnect!", a relay-style Advent calendar run voluntarily by an in-house group called engineer.hanzomon. (Link Information Systems' Facebook page is here.)


Introduction

As with my previous entry, the topic of this article is free-form. Since I attended an AI seminar, as a review I used a CNN built with scikit-learn and keras to classify Shadowverse card images by leader class.

Results

It is hard to describe, but this was the best result I got. (Is a 100% accuracy on the training data even acceptable?) Train is the accuracy on the training data, and Test is the accuracy on the test data. [Figure_1.png: Train/Test accuracy plot]

Testing with the latest cards

Here is the number of correct answers when classifying 11 cards from each class of the latest expansion (Tenten Awakening), none of which are included in the dataset. [予測結果(数).png: prediction results (counts per class)] All of the Shadow (Necromancer) cards were classified as Shadow. On the other hand, only three Rune (Witch) cards were correct. Overall, I feel the classes whose card art has distinctive features are handled well.
・ Necro: lots of skulls and other distinctive motifs, and the overall darkness of the art may also be a telling feature.
・ Witch: human cards seem to be confused with Royal/Nemesis, and monster cards with Necro/Dragon.

The wrong answers are interesting

Looking at the misclassified cards, they really do look like cards of the class they were mistaken for, which is interesting.

Example: a Rune card misclassified as Shadow (Elmott does look like Necro ...) https://shadowverse-portal.com/card/119321010?lang=ja
Example: a Dragon card misclassified as Sword (a humanoid with a sword, so Royal) https://shadowverse-portal.com/card/119411010?lang=ja
Example: a Portal card misclassified as Rune (if anything, the CNN has a point: why isn't this a Witch card?) https://shadowverse-portal.com/card/119831010?lang=ja

Preparation

First, prepare the Shadowverse card image data. (This time, from the Basic set up to the Collapse of Natera.) [Screenshot 2021-01-11 185800.png]

Since I didn't think about organization when collecting them, the files were named with the Japanese card name and not separated by leader class. I organized them as follows (a small sorting sketch appears after the next paragraph):
・ Store the images in one folder per leader class [Screenshot 2021-01-11 190315.png]
・ Rename each file to XXX_classname [Screenshot 2021-01-11 190529.png]

Neutral cards have no common visual features and would likely interfere with learning, so I discarded them.
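For reference, here is a minimal sketch of that sorting step. It assumes the images have already been renamed to the XXX_classname convention and sit in a single flat folder (the ./img_all/ path is hypothetical); it simply moves each file into ./img_class/<classname>/.

import os
import shutil

classes = ['forest', 'sword', 'rune', 'dragon', 'shadow', 'blood', 'haven', 'portal']

src_dir = './img_all/'     # hypothetical: flat folder of renamed card images
dst_root = './img_class/'  # per-class folders used by the training code below

# Create one destination folder per leader class
for class_name in classes:
    os.makedirs(os.path.join(dst_root, class_name), exist_ok=True)

# Move each "XXX_classname.png" into its class folder
for file_name in os.listdir(src_dir):
    if not file_name.endswith('.png'):
        continue
    class_name = file_name.rsplit('.', 1)[0].rsplit('_', 1)[-1]
    if class_name in classes:
        shutil.move(os.path.join(src_dir, file_name),
                    os.path.join(dst_root, class_name, file_name))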

Training

First, format the images using OpenCV. (Stretch them into squares so they fit the CNN.)

from glob import glob
import cv2
import os

path = glob('./img_class/*/')
data  = [] # image data
label = [] # labels (classes)

class_num = 0    # classification ID (0-7, since there are 8 leaders)
image_size = 256 # side length of the square to resize to

for directory in path:
    files = os.listdir(directory)
    for file_name in files: # format every image in the directory and add it to the lists
        if file_name.endswith(".png"):
            image = cv2.imread(directory + file_name)
            image = cv2.resize(image, (image_size, image_size))
            image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # convert to RGB just in case (going with the flow)
            image_rgb = image_rgb.flatten() # flatten
            data.append(image_rgb)
            label.append(class_num)
    class_num += 1

You now have a set of card images stretched to 256 x 256, together with their labels.

Next, with the help of scikit-learn, shape the data into training data for the CNN.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split

#Change data type
data = np.array(data, dtype=np.float32)
label = np.array(label, dtype=np.float32)

# Split into training and test data (8:2)
train_data, test_data, train_label, test_label = train_test_split(
        data,label, random_state = 1, stratify = label, test_size = 0.2)

# Reshape into [number of samples, 256, 256, 3 (RGB channels)]
train_data = train_data.reshape((len(train_data)),image_size,image_size,3)
test_data = test_data.reshape((len(test_data)),image_size,image_size,3)

#Normalization
train_data /= 255
test_data /= 255

# One-hot encode the labels (8 classes, one per leader)
train_label = to_categorical(train_label, num_classes = 8)
test_label = to_categorical(test_label, num_classes = 8)

The training data is now ready. Next, build the CNN. keras makes this amazingly easy.

from tensorflow.keras import models,layers
from tensorflow.keras import optimizers

model = models.Sequential()

model.add(layers.Conv2D(20, (5, 5), activation='relu', padding='same', input_shape=(image_size, image_size, 3)))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Conv2D(50, (5, 5), activation='relu', padding='same'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(500, activation='relu'))
model.add(layers.Dense(500, activation='relu'))
model.add(layers.Dense(8, activation='softmax'))

model.compile(optimizer='adam',loss = 'categorical_crossentropy',metrics = ["accuracy"])

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 256, 256, 20)      1520
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 128, 128, 20)      0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 128, 128, 50)      25050
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 64, 50)        0
_________________________________________________________________
flatten (Flatten)            (None, 204800)            0
_________________________________________________________________
dense (Dense)                (None, 500)               102400500
_________________________________________________________________
dense_1 (Dense)              (None, 500)               250500
_________________________________________________________________
dense_2 (Dense)              (None, 8)                 4008
=================================================================
Total params: 102,681,578
Trainable params: 102,681,578
Non-trainable params: 0

Now train. (As the first result showed, the loss drops to 0 almost immediately, so EarlyStopping is needed.)


epoch = 100
batchsize = 25

# EarlyStopping
from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss',
                               patience=10,
                               min_delta= 0.01,
                               verbose=1)

hist = model.fit(train_data, train_label, batch_size=batchsize, 
                    epochs=epoch, verbose=2, 
                    validation_data=(test_data, test_label),
                    callbacks=[early_stopping])

Training takes 15 to 16 minutes, so save the result. I also plot the training history.

import matplotlib.pyplot as plt

model.save('./my_model.h5')

plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

After various trials such as tweaking the hyperparameters, changing the image size, and trying grayscale, the procedure above felt best because its training time was short.

Incidentally, the worst result was with an image size of 128 and grayscale input, where the test accuracy was 0.2. (A sketch of that grayscale variant is shown below.)
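For reference, here is a rough sketch of that grayscale variant (not the final pipeline): images resized to 128 x 128 and converted to a single channel. The function name preprocess_gray is made up for illustration; with this variant the CNN's input_shape would become (128, 128, 1).

# Sketch of the grayscale preprocessing that was tried (the worst result), not the final pipeline
import cv2

image_size_gray = 128 # smaller square used in the grayscale experiment

def preprocess_gray(file_path):
    image = cv2.imread(file_path)
    image = cv2.resize(image, (image_size_gray, image_size_gray))
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # single channel instead of RGB
    return gray.flatten()

# With this variant, the first Conv2D layer would use input_shape=(image_size_gray, image_size_gray, 1)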

Classifying with the trained model

Let's classify cards from the latest expansion (as of 2021/1/11), "Tenten Awakening". I'll try it on the guy who is the face of the pack. (The correct answer is Sword.) https://shadowverse-portal.com/card/119241020?lang=ja


# Load the files, resize them to the same size as in training, and format them for the CNN
data = []
file_names = glob('./img_new/*.png') # hypothetical folder holding the new card images

for file_path in file_names:
    image = cv2.imread(file_path)
    image = cv2.resize(image, (image_size, image_size))
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image_rgb = image_rgb.flatten()
    data.append(image_rgb)

data = np.array(data, dtype=np.float32)
data = data.reshape((len(data)), image_size, image_size, 3)
data /= 255 # scale to 0-1

# Load the model
mymodel = keras.models.load_model('./Color_my_model.h5', compile=False)
mymodel.summary()

# Predict with the model
predict = mymodel.predict(data)

# The result is returned as an index 0-7, so convert it to a class name and display it
base = ['Forest', 'Sword', 'Rune', 'Dragon', 'Shadow', 'Blood', 'Haven', 'Portal']
score = [] # predicted class indices
for fil, pred in zip(file_names, predict):
    pred_index = np.argmax(pred)
    print(str(fil) + " :" + base[pred_index])
    score.append(pred_index)
 Seofon.png:Sword

It classifies him correctly!!! (The per-class correct counts on the new cards were shown earlier; a sketch of how the overall correct answer rate could be computed follows.)
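A minimal sketch, assuming a hypothetical true_labels list holding the correct class index (0-7) for each new card in the same order as the score list collected above:

def overall_accuracy(predicted_indices, true_labels):
    # fraction of new cards whose predicted class index matches the true class index
    return np.mean(np.array(predicted_indices) == np.array(true_labels))

# usage (true_labels is hypothetical): print(f"{overall_accuracy(score, true_labels):.2%}")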

In closing

When you prepare the data yourself and try things out, it is interesting no matter how the result turns out. There is real entertainment value in being able to understand a wrong answer just by looking at the card art.

Next up is @hs-lis.
