[PYTHON] I tried to get the batting results of Hachinai using image processing

Postscript (2017/09/02)

I revised the program because walks were added to the displayed stats.

What is Hachinai

This is a youth-experience baseball social game released in June 2017. The official title is "August Cinderella Nine" (Hachigatsu no Cinderella Nine), hence "Hachinai".

In a nutshell, it's a game where you train characters, set the batting order, and watch the games play out, a bit like the auto-played games in Power Pros. The game's simulation seems well made, and as shown below (though a few things are questionable) you can check each player's batting results for every game.

Screenshot_20170901-014026.png

However, the game itself does not let you check a player's cumulative stats. To build a good batting order, you need to track stats, condition, and matchups yourself. Since entering the numbers by hand is tedious, I wrote a program that extracts them automatically with image processing.

Target

--Get each starting player's condition (on a 5-level scale) and stats from the screenshot above.
--Pinch hitters and relief pitchers are not considered.
--Make the extracted stats pasteable into Excel.

Environment

Language used

Python 3.5

Library used

--OpenCV
--NumPy
--PyQt5
--pandas

Specific method

Number detection

This time, numbers are detected with template matching, without resorting to anything as involved as machine learning. Template matching searches the target image by sliding the template over it step by step and finding where it matches, as in the figure below. テンプレートマッチング.PNG

Template matching is done with the function OpenCV provides. The following template images were prepared for it. Since everything is converted to grayscale before processing, I don't think the differences in color matter much. テンプレ群.PNG

Recognizing condition

Recognizing condition is even simpler: take the difference between the mean pixel values of the target cell and each condition template, and pick the template with the smallest difference. 差分計算.PNG
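The difference calculation above can be sketched with plain NumPy (solid-color arrays stand in for the real template crops, so the values are illustrative):

```python
import numpy as np

# Compare the mean BGR color of the cropped cell against each candidate
# template and pick the smallest sum of absolute differences (SAD).
templates = [np.full((84, 71, 3), v, dtype=np.uint8) for v in (40, 90, 140, 190, 240)]
cell = np.full((84, 71, 3), 150, dtype=np.uint8)  # tone closest to the 140 template

sads = [np.sum(np.abs(cell.astype(np.float32).mean(axis=(0, 1))
                      - t.astype(np.float32).mean(axis=(0, 1))))
        for t in templates]
best = int(np.argmin(sads))
print(best + 1)  # condition level on the 1-5 scale; here 3
```

Comparing mean colors rather than per-pixel differences makes the check robust to small positional shifts of the condition icon within the cell.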

Source code

# -*- coding: utf-8 -*-

import sys
import cv2
import numpy as np
import PyQt5.QtCore as QtCore
import PyQt5.QtGui as QtGui
import PyQt5.QtWidgets as QtWidgets
import pandas as pd

# Convert an OpenCV image so it can be displayed with PyQt.
# Based on this source code:
# http://qiita.com/odaman68000/items/c8c4093c784bff43d319
def create_QPixmap(image):
    qimage = QtGui.QImage(image.data, image.shape[1], image.shape[0], image.shape[1] * image.shape[2], QtGui.QImage.Format_RGB888)
    pixmap = QtGui.QPixmap.fromImage(qimage)
    return pixmap

#Perform template matching
def matching(img,num,threshold,img_res,cell_y,cell_x):
    template = cv2.imread('./template/number/{}.png'.format(num),0)
    template = template[6:-6,:]
    w, h = template.shape[::-1]

    res = cv2.matchTemplate(img,template,cv2.TM_CCOEFF_NORMED)
    loc = np.where( res >= threshold)
    res_loc = []
    for pt in zip(*loc[::-1]):
        # Skip detections that overlap an already-accepted match
        flag=True
        for pt2 in res_loc:
            if pt2[0] + w > pt[0]:
                flag = False
        if flag:
            res_loc.append(pt)
            #Draw the detected numbers and frame on the original image
            cv2.rectangle(img_res, (pt[0]+cell_x, pt[1]+cell_y), (pt[0]+cell_x+w, pt[1]+cell_y+h), (0,0,255), 2)
            n = "-" if num == "mai" else num
            cv2.putText(img_res, str(n), (pt[0]+cell_x,pt[1]+cell_y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 3)
    return res_loc

#The window that opens when you drop an image
class Add_widget(QtWidgets.QDialog):

    def __init__(self,frame,clipboard,parent=None):
        super(Add_widget, self).__init__(parent)
        self.initUI(frame,clipboard,parent)

    def initUI(self,frame,clipboard,parent):
        self.lbl = QtWidgets.QLabel()
        self.frame = frame

        self.datatable = QtWidgets.QTableWidget()
        self.datatable.setColumnCount(9)
        self.datatable.setRowCount(9)

        self.spinlbl = QtWidgets.QLabel("threshold")
        self.spinbox = QtWidgets.QDoubleSpinBox()
        self.spinbox.setRange(0,1)
        self.spinbox.setSingleStep(0.01)
        self.spinbox.setValue(0.90)
        self.spinbox.valueChanged.connect(self.get_result)
        self.sbin_hbox = QtWidgets.QHBoxLayout()
        self.sbin_hbox.addWidget(self.spinlbl)
        self.sbin_hbox.addWidget(self.spinbox)
        self.sbin_hbox.addStretch(1)

        self.button = QtWidgets.QPushButton("copy to clipboard")
        self.button.clicked.connect(self.copy_to_clipboard)

        self.vbox = QtWidgets.QVBoxLayout()
        self.vbox.addWidget(self.lbl)
        self.vbox.addWidget(self.datatable)
        self.vbox.addLayout(self.sbin_hbox)
        self.vbox.addWidget(self.button)
        self.setLayout(self.vbox)
        self.setWindowTitle('result')
        self.clipboard = clipboard

        self.get_result()

    # Fill the table widget with the extracted stats
    def update_table(self,df):
        for i in range(len(df.index)):
            for j in range(len(df.columns)):
                self.datatable.setItem(i,j,QtWidgets.QTableWidgetItem(str(df.iat[i, j])))

    # Identify the condition and detect the numbers
    def detection_value(self,frame,threshold):
        img_res = frame.copy()
        img_gray = cv2.cvtColor(img_res, cv2.COLOR_BGR2GRAY)

        df = pd.DataFrame()
        li=[0,2,3,2,2,3,2,3,2]

        #Get grades line by line
        for row in range(9):
            player_list = []

            # Identify the condition
            condi_cell = frame[210+sum(li[:row+1])+(84*(row)):210+sum(li[:row+1])+(84*(row+1)),687:758]
            condi_list = np.zeros(5)

            for i in range(5):
                condi = cv2.imread("./template/condition/{}.png".format(i))
                #Calculate the difference value
                sad = np.sum(np.abs(np.mean(condi_cell.astype(np.float32),axis=(0,1))-np.mean(condi.astype(np.float32),axis=(0,1))))
                #sad = np.sum(np.abs(condi_cell.astype(np.float32) - condi.astype(np.float32)))
                condi_list[i] = sad
            #Select the image with the smallest difference
            c = np.argmin(condi_list)
            player_list.append(c+1)
            cv2.putText(img_res, str(c+1), (687, 210+sum(li[:row+1])+(84*(row+1))), cv2.FONT_HERSHEY_PLAIN, 4, (0, 0, 0), 5)

            #Split by column
            for col in range(8):
                cell_y = 210+sum(li[:row+1])+(84*(row))
                cell_width = 105 if col < 7 else 128
                cell_x = 759+col*105
                img_cell = img_gray[cell_y:cell_y+84,cell_x:cell_x+cell_width]
                list_num = []

                #0~Perform template matching up to 9
                for num in range(10):
                    loc = matching(img_cell,num,threshold,img_res,cell_y,cell_x)
                    for pt in loc:
                        list_num.append([num,pt[0],pt[1]])

                #Sort by x coordinate
                list_num.sort(key=lambda x:(x[1]))   

                #Concatenate numbers sorted by x coordinate
                s = ""
                for i in range(len(list_num)):
                    # For batting average, prepend "0."
                    if col == 6 and i == 0:
                        s += "0."
                    s += "{}".format(list_num[i][0])
                    # For RC, insert "." after the first digit (assuming RC is rarely double digits)
                    if col == 7 and i == 0:
                        s += "."
                # If the concatenated average comes out as "0.100", it is really "1.00" (assuming no one goes exactly 1-for-10 in a single game)
                if col == 6 and s == "0.100":
                    s = "1.00"
                # If no number could be detected, store -10000
                try:
                    res_num = float(s)
                except ValueError:
                    res_num = -10000.0
                # For RC, also template-match the minus sign and negate the value if one is found
                if col == 7:
                    loc = matching(img_cell,"mai",threshold,img_res,cell_y,cell_x)
                    if len(loc) > 0:
                        res_num *= -1
                player_list.append(res_num)
            # Append this player's row to the DataFrame
            se = pd.Series(player_list)
            df = df.append(se, ignore_index=True)

        return img_res, df

    #Copy the contents of the table to the clipboard
    def copy_to_clipboard(self):
        s = ""
        for r in range(self.datatable.rowCount()):
            for c in range(self.datatable.columnCount()):
                try:
                    s += str(self.datatable.item(r,c).text()) + "\t"
                except AttributeError:
                    s += "\t"
            s = s[:-1] + "\n"
        self.clipboard.setText(s)

    # Extract the stats and refresh the display
    def get_result(self):
        img_res, df = self.detection_value(self.frame,self.spinbox.value())
        self.update_table(df)

        img_res = cv2.cvtColor(img_res, cv2.COLOR_BGR2RGB)
        img_res = cv2.resize(img_res, (1280,720))
        qt_img = create_QPixmap(img_res)
        self.lbl.setPixmap(qt_img)

    def show(self):
        self.exec_()

#QLabel class for drag and drop
class DropLabel(QtWidgets.QLabel):
    def __init__(self,parent):
        super().__init__(parent)
        self.parent = parent
        self.setAcceptDrops(True)
        self.setAlignment(QtCore.Qt.AlignCenter)
        self.setText("Drop here")

    def dragEnterEvent(self, e):
        e.accept()

    def dropEvent(self, e):
        mimeData = e.mimeData()
        files = [u.toLocalFile() for u in mimeData.urls()]
        for f in files:
            print("loading {}".format(f))
            #Load the dropped image
            frame = cv2.imread(f)
            #If reading fails, no processing is performed
            if frame is not None:
                frame = cv2.resize(frame, self.parent.size)
                add_widget = Add_widget(frame,self.parent.clipboard)
                add_widget.show()

#Window to drop an image
class Image_widget(QtWidgets.QWidget):

    def __init__(self,clipboard):
        super().__init__()

        self.initUI(clipboard)

    def initUI(self,clipboard):
        self.height = 1080
        self.width = 1920
        self.size = (self.width,self.height)
        self.clipboard = clipboard

        self.lbl = DropLabel(self)
        self.lbl.resize(640,480)

        self.vbox = QtWidgets.QVBoxLayout()
        self.vbox.addWidget(self.lbl)
        self.setWindowTitle('hachinai')
        self.show()
        sys.exit(app.exec_())

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    clipboard = app.clipboard()
    screen = Image_widget(clipboard)

Program execution

To run this program, the template images shown in the figures of the "Specific method" section must be placed in "template/number/" and "template/condition/". When you run it, the following bare-bones window appears. 起動.png
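If you want to verify the layout before running, a small helper (my own addition, not part of the program above) can list which of the 16 expected template files are missing:

```python
import os

# Check for the templates the program reads: template/number/0.png ... 9.png
# plus mai.png (the minus sign), and template/condition/0.png ... 4.png.
def missing_templates(base="./template"):
    expected = [os.path.join(base, "number", "{}.png".format(n))
                for n in list(range(10)) + ["mai"]]
    expected += [os.path.join(base, "condition", "{}.png".format(i))
                 for i in range(5)]
    return [p for p in expected if not os.path.exists(p)]

print(missing_templates())  # empty list once all 16 templates are in place
```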

Drag and drop a screenshot onto this window to extract the stats. Note that images whose path contains Japanese characters cannot be read. 取得.png

Since the stat cells are read from absolute positions, recognition fails if the table is shifted significantly. If recognition fails in other cases, adjusting the threshold may help. Click "copy to clipboard" to copy the stats so they can be pasted into Excel. エクセル.PNG

In conclusion

For now, I achieved my goal. Having actually used it, I think combining it with an Excel macro would make managing stats a little easier.

References

--Template matching using OpenCV
--Display image data handled by OpenCV / numpy on a Qt Widget
--[GUI with Python] PyQt5 -Widget II-
--Introduction to cross-platform GUI application creation with PyQt in Python
--Fastest way to populate QTableView from Pandas data frame
--copy pyqt table selection, including column and row headers
--Drag and drop files with PyQt
