[PYTHON] Find card illustrations from images using feature point matching

Purpose

This is a memorandum for finding card illustrations from images using feature point matching.

Preparation

1. Card illustration
2. Image containing the card illustration
3. Code to find the card illustration from that image

(Example) 1. Card illustration: test2.png (blueeyes2.jpg). Source: [[All about Blue-Eyes White Dragon] The locus of the legendary dragon with the title of Yu-Gi-Oh!](https://yu-gi-oh.xyz/ Blue-Eyes White Dragon / post-49725 /)

(Example) 2. Image containing the card illustration: test1.png (blueeyes1.jpg). Source: Adere

3. Code to find the card illustration from the image containing it

Code that calculates feature points and extracts the card illustration from the image. Reference: "Detect Mickey by feature point matching".

AKAZE is used for the feature points.

sample.py


import cv2
import numpy as np

fname_img1 = 'test1.png'  # image containing the card (scene)
fname_img2 = 'test2.png'  # card illustration (template)

img1 = cv2.imread(fname_img1)
img2 = cv2.imread(fname_img2)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
# Detect AKAZE keypoints and compute binary descriptors for both images
akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(gray1, None)
kp2, des2 = akaze.detectAndCompute(gray2, None)

#img1_sift = cv2.drawKeypoints(gray1, kp1, None, flags=4)
#img2_sift = cv2.drawKeypoints(gray2, kp2, None, flags=4)
#cv2.imwrite('out1.png', img1_sift)
#cv2.imwrite('out2.png', img2_sift)

# Brute-force matching with Hamming distance (AKAZE descriptors are binary)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)
# Draw the 10 best matches for visual inspection
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)
cv2.imwrite('out-match.png', img3)

# Homography: keep the best 15% of matches as "good" matches
good_match_rate = 0.15
good = matches[:int(len(matches) * good_match_rate)]

min_match = 10
if len(good) > min_match:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Estimate the homography that maps the card image (img2) onto the scene image (img1)
    M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC)
    matchesMask = mask.ravel().tolist()

    print(M)
    print(M[0][2])
    print(M[1][2])

    height = img2.shape[0]
    width = img2.shape[1]
    # Use only the translation part of the homography as the frame position (fixed frame size)
    top_left = (int(M[0][2] + 0.5), int(M[1][2] + 0.5))  # tx, ty
    bottom_right = (top_left[0] + width, top_left[1] + height)

    # Result: draw a rectangle around the detected card on the scene image
    result = cv2.imread(fname_img1)
    cv2.rectangle(result, top_left, bottom_right, (255, 0, 0), 10)
    cv2.imwrite("result.png", result)

Test

Execution result

$ python sample.py
[[ 6.43984577e-01 -4.58173428e-02  1.61203033e+02]
 [ 2.85230902e-02  4.12312421e-01  1.50286636e+02]
 [ 4.54533377e-04 -5.75705548e-04  1.00000000e+00]]
161.2030333928222
150.28663624674783

Calculate and match feature points

out-match.png

Detected frame

result.png

For webcam images (1 card)

A single card seems to be detectable.

out-match.png result.png

The frame offset needs adjustment (the frame size is currently a fixed value); see the sketch below.
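A minimal sketch of one way to draw a tighter frame, assuming the M, img1, and img2 variables from sample.py above: instead of using only the translation terms of the homography, project the four corners of the card image into the scene with cv2.perspectiveTransform and draw the resulting quadrilateral.

# Sketch: draw the projected card outline instead of a fixed-size rectangle.
# Assumes M, img1 and img2 from sample.py (M maps card coordinates to scene coordinates).
import cv2
import numpy as np

h, w = img2.shape[:2]
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
projected = cv2.perspectiveTransform(corners, M)  # card corners in scene coordinates
outline = img1.copy()
cv2.polylines(outline, [np.int32(projected)], True, (255, 0, 0), 10)
cv2.imwrite('result-outline.png', outline)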

For webcam images (multiple cards)

Multiple cards are difficult (a rough sketch of one possible approach follows the images below).

out-match.png result.png
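One possible way to handle multiple cards, sketched under the assumption that kp1, kp2, des1, des2 and the BFMatcher bf from sample.py are available: estimate a homography with RANSAC, drop its inlier matches, and repeat until too few matches remain. This is not tested here, just a rough starting point.

# Sketch: repeatedly fit a homography and remove its inliers to find several cards.
# Assumes kp1, kp2, des1, des2 and bf from sample.py.
import cv2
import numpy as np

MIN_MATCH = 10
remaining = sorted(bf.match(des1, des2), key=lambda x: x.distance)
remaining = remaining[:int(len(remaining) * 0.3)]  # keep the better matches

homographies = []
while len(remaining) > MIN_MATCH:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in remaining]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in remaining]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
    if M is None or int(mask.sum()) < MIN_MATCH:
        break
    homographies.append(M)  # one homography per detected card
    # Keep only the matches that were not inliers of this card and try again
    remaining = [m for m, ok in zip(remaining, mask.ravel()) if not ok]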

Tasks

- Support for multiple cards
- Video support (see the sketch below)
- Frame misalignment adjustment
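For the video-support item, a minimal sketch assuming the per-image detection above is wrapped in a hypothetical helper find_card(frame, kp2, des2, akaze, bf) that returns the projected card corners (or None); the same matching is simply run on every frame from cv2.VideoCapture.

# Sketch: run the card detection on each webcam frame.
# find_card() is a hypothetical helper wrapping the matching code from sample.py.
import cv2

cap = cv2.VideoCapture(0)  # webcam; pass a file name instead for a video file
while True:
    ret, frame = cap.read()
    if not ret:
        break
    corners = find_card(frame, kp2, des2, akaze, bf)  # hypothetical helper
    if corners is not None:
        cv2.polylines(frame, [corners], True, (255, 0, 0), 5)
    cv2.imshow('detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()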

It might be usable for Yu-Gi-Oh! remote duels.

Coding errors and countermeasures

Nothing in particular.

References

- [[All about Blue-Eyes White Dragon] The locus of the legendary dragon with the title of Yu-Gi-Oh!](https://yu-gi-oh.xyz/ Blue-Eyes White Dragon / post-49725 /)
- Adere
- Detect Mickey by feature point matching
- AKAZE
