This article was posted as the 16th-day entry of the Fujitsu Systems Web Technology Advent Calendar 2019. The content is the author's personal opinion, and responsibility for it lies with the author alone, not with the organization the author belongs to.
For this 16th-day entry, since I have never once managed to clear Saizeriya's spot-the-difference puzzle on my own, I would like to write about how to make
**a program that takes the difference between two images and displays that difference in an easy-to-understand way.**
(No guarantees, but if you have Python and the various libraries available, it should just work, so please give it a try.)
(For detailed installation steps, searching for '[anaconda navigator jupyter install](https://www.google.com/search?q=anaconda+navigator+jupyter+%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB)' should turn up plenty of guides.)
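If you want to check beforehand whether the libraries used below are available, a minimal check like the following works (this assumes OpenCV was installed with the contrib modules, e.g. via `opencv-contrib-python`, since `cv2.bgsegm` lives there):

```python
# Quick environment check (assumes opencv-contrib-python is installed)
import cv2

print(cv2.__version__)          # OpenCV version
print(hasattr(cv2, "bgsegm"))   # True only if the contrib bgsegm module is available
```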
Now, here is the actual source code. It has three main parts and goes roughly like this.
```python
import os
import cv2
import numpy as np
from IPython.display import Image, display_png

# Original image 1
img_01 = "sample1.png"
# Original image 2
img_02 = "sample2.png"
# Name to give the output image
outImg = "output.png"

# Make sure the output directory used below exists
os.makedirs("output", exist_ok=True)

# Read original image 1 in color
img_src01 = cv2.imread(img_01, 1)
# Read original image 2 in color
img_src02 = cv2.imread(img_02, 1)

# MOG: a foreground/background segmentation algorithm based on a Gaussian mixture model
# (cv2.bgsegm is part of the opencv-contrib modules)
bg = cv2.bgsegm.createBackgroundSubtractorMOG()

# Generate the mask image: the first apply() learns image 1 as the background,
# and the second apply() marks the parts of image 2 that differ from it
mask = bg.apply(img_src01)
mask = bg.apply(img_src02)

# Output the mask image
cv2.imwrite("output/machigai_mask.png", mask)

print("Original image 1")
display_png(Image(img_01))
print("Original image 2")
display_png(Image(img_02))
print("Mask image")
display_png(Image("output/machigai_mask.png"))
```
The output result here is as follows.
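As an aside, since we are comparing just two still images here rather than video frames, another common approach (not the one this article uses) is to take a simple absolute per-pixel difference with `cv2.absdiff` and threshold it. A minimal sketch:

```python
# Sketch of a simpler alternative: absolute difference + threshold
diff = cv2.absdiff(img_src01, img_src02)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, simple_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # 30 is an arbitrary threshold
cv2.imwrite("output/simple_mask.png", simple_mask)
```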
```python
mask_img = cv2.imread("output/machigai_mask.png", 1)

# Replace the white parts of the mask with red
red = [240, 20, 20]
white = [255, 255, 255]
mask_img[np.where((mask_img == white).all(axis=2))] = red

# The red value above is written in RGB order; OpenCV handles images in BGR order,
# so swap the channels here so the saved image actually comes out red
mask_img = cv2.cvtColor(mask_img, cv2.COLOR_BGR2RGB)

# Save the result
cv2.imwrite("output/color_mask_img.png", mask_img)

color_mask_img = cv2.imread("output/color_mask_img.png", 1)
display_png(Image("output/color_mask_img.png"))
```
The output at this point is as follows.
One last step and it is complete.
```python
# src1 = original image 1, src2 = colored mask image.
# alpha and beta seem to work best when they add up to about 1; gamma = 0 felt right here.
diff_image = cv2.addWeighted(src1=img_src01, alpha=0.3, src2=color_mask_img, beta=0.7, gamma=0)
cv2.imwrite(outImg, diff_image)
display_png(Image(outImg))
```
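For reference, `cv2.addWeighted` just computes a per-pixel weighted sum, `dst = src1 * alpha + src2 * beta + gamma`, so the alpha/beta ratio controls how strongly the red mask shows through the original. A small sketch for trying a few ratios (the output file names are made up for this example):

```python
# Try several blend ratios and save each result for comparison
for alpha in (0.2, 0.3, 0.5):
    beta = 1.0 - alpha
    blended = cv2.addWeighted(src1=img_src01, alpha=alpha, src2=color_mask_img, beta=beta, gamma=0)
    cv2.imwrite(f"output/blend_{int(alpha * 100)}.png", blended)
```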
And with that, it's done.
I prepared a spot-the-difference puzzle just for this purpose. First, please try it yourself (there are 6 differences).
Did you find them all?
Next, let's look at the result of having the fully automatic ~~egg-breaking machine~~ spot-the-difference solver handle it.
Here it is.
How about that? The hit rate is pretty impressive.
However, the detection accuracy for **the teacher's wig turning into a watermelon** is not so good.
The background subtraction method used here is, in the first place, a technique for detecting moving objects in video.
In short,
it is the kind of method a fixed-point camera uses to decide **"oh, a person just walked by."**
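For comparison, here is a minimal sketch of how the same MOG subtractor is normally used, frame by frame, on footage from a fixed camera (the file name `fixed_camera.mp4` is just a placeholder):

```python
import cv2

# Typical use of background subtraction: feed consecutive video frames
cap = cv2.VideoCapture("fixed_camera.mp4")   # placeholder path
bg = cv2.bgsegm.createBackgroundSubtractorMOG()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # White pixels mark moving objects (e.g. a person walking past)
    fg_mask = bg.apply(frame)
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:  # stop on Esc
        break

cap.release()
cv2.destroyAllWindows()
```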
There are two major problems with using this background subtraction method for spot-the-difference puzzles.
So if you want something that works in the general case,
it feels like you would have to build that kind of software yourself ()
Let's do our best and build it next time (eyes rolling)
References:
- Background subtraction method
- OpenCV basics
- Image composition / transparency
- Image fill