This article is the day 2 entry of the WebCrew Advent Calendar 2017. Yesterday's entry was @kouares's "Implement CRUD with PlayFramework 2.6.7, play-slick 3.0.2 (Slick 3.2)".
@Hahegawa is in charge of day 2. Thank you.
- In my work I needed to detect an object in an image, so I tried OpenCV.
- There are plenty of OpenCV implementation examples and explanations in Python and C++ out there, but considering maintainability for business use I thought it would be better in Java, so I tried it in Java.
- Also, there were few articles that covered the overall flow from installation to execution when using OpenCV from Java, so I tried to summarize it with that in mind.
- This time I tried "feature detection" and "feature matching".
- For the mechanism of OpenCV itself and an image of the implementation, I referred to the following.
  - Mechanism: http://labs.eecs.tottori-u.ac.jp/sd/Member/oyamada/OpenCV/html/py_tutorials/py_feature2d/py_table_of_contents_feature2d/py_table_of_contents_feature2d.html#py-table-of-content-feature2d
  - How to implement: https://qiita.com/hmichu/items/f5f1c778a155c7c414fd
The environment is a little old, but I tried it with the following:
- Windows 10
- Eclipse Oxygen
- OpenCV 3.1
- Apache Tomcat 7.0
Download the OpenCV library
From the page below, download the version you want to use and unzip it to a suitable location: https://opencv.org/releases.html
4. Right-click the Eclipse project → Properties → set the build path
6. Add a run configuration. When running OpenCV as a web application, add the same user library as in step 5 to the classpath of the run configuration.
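Once the steps above are done, you can check that the setup works with a small sketch like the one below. The class name here is just a placeholder; it assumes the OpenCV jar is on the classpath and that the directory containing the native DLL is registered as the user library's native library location (or passed via -Djava.library.path).

import org.opencv.core.Core;

public class OpenCvSetupCheck {
    public static void main(String[] args) {
        //Throws UnsatisfiedLinkError if the native library cannot be found
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        //Prints the version bundled with the jar, e.g. 3.1.0
        System.out.println("OpenCV version: " + Core.VERSION);
    }
}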
(2)–(5) Source code
//Convert the image sent from the screen to a Mat (assumes a 3-byte BGR BufferedImage so its data buffer matches CV_8UC3)
Mat src = new Mat(bufImg.getHeight(), bufImg.getWidth(), CvType.CV_8UC3);
byte[] bufImgBinary = ((DataBufferByte)bufImg.getRaster().getDataBuffer()).getData();
src.put(0, 0, bufImgBinary);
//Convert the template image used for feature detection to a Mat
Mat tempSrc = new Mat(tmpImg.getHeight(), tmpImg.getWidth(), CvType.CV_8UC3);
byte[] tempImgBinary = ((DataBufferByte)tmpImg.getRaster().getDataBuffer()).getData();
tempSrc.put(0, 0, tempImgBinary);
//Convert to grayscale for feature detection
Mat srcGray = new Mat(bufImg.getHeight(), bufImg.getWidth(), CvType.CV_8UC1);
Imgproc.cvtColor(src, srcGray, Imgproc.COLOR_BGR2GRAY);
Core.normalize(srcGray, srcGray, 0, 255, Core.NORM_MINMAX);
Mat tempSrcGray = new Mat(tmpImg.getHeight(), tmpImg.getWidth(), CvType.CV_8UC1);
Imgproc.cvtColor(tempSrc, tempSrcGray, Imgproc.COLOR_BGR2GRAY);
Core.normalize(tempSrcGray, tempSrcGray, 0, 255, Core.NORM_MINMAX);
MatOfKeyPoint keyPointSrc = new MatOfKeyPoint();
MatOfKeyPoint keyPointTemp = new MatOfKeyPoint();
Mat descriptersSrc = new Mat(src.rows(), src.cols(), src.type());
Mat descriptersBase = new Mat(tempSrc.rows(), tempSrc.cols(), tempSrc.type());
//Create the AKAZE feature detector and descriptor extractor
FeatureDetector akazeDetector = FeatureDetector.create(FeatureDetector.AKAZE);
DescriptorExtractor akazeExtractor = DescriptorExtractor.create(DescriptorExtractor.AKAZE);
akazeDetector.detect(srcGray, keyPointSrc);
akazeDetector.detect(tempSrcGray, keyPointTemp);
akazeExtractor.compute(srcGray, keyPointSrc, descriptersSrc);
akazeExtractor.compute(tempSrcGray, keyPointTemp, descriptersBase);
//Create the descriptor matcher (brute force; AKAZE's binary descriptors are also often matched with BRUTEFORCE_HAMMING)
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
MatOfDMatch matche = new MatOfDMatch();
List<MatOfDMatch> matches = new ArrayList<MatOfDMatch>();
List<DMatch> dmatch = new ArrayList<DMatch>();
//Feature matching with cross-check
if(CROSS_CHECK_FLG){
MatOfDMatch srcToBase = new MatOfDMatch();
MatOfDMatch baseToSrc = new MatOfDMatch();
matcher.match(descriptersSrc, descriptersBase, srcToBase);
matcher.match(descriptersBase, descriptersSrc, baseToSrc);
List<DMatch> ldm_srcToBase = srcToBase.toList();
List<DMatch> ldm_baseToSrc = baseToSrc.toList();
for(int i=0; i<ldm_srcToBase.size(); i++) {
DMatch forward = ldm_srcToBase.get(i);
DMatch backward = ldm_baseToSrc.get(forward.trainIdx);
if(backward.trainIdx == forward.queryIdx){
dmatch.add(forward);
}
}
matche.fromList(dmatch);
}
//Feature matching without cross-check
else{
System.out.println("Features Normal check execution");
matcher.match(descriptersSrc, descriptersBase, matche);
}
Mat matchResult = new Mat(src.rows()*2, src.cols()*2, src.type());
Features2d.drawMatches(src, keyPointSrc, tempSrc, keyPointTemp, matche, matchResult);
//Output the matching result as an image for verification
BufferedImage image = ImageConverter.convertMToBI(matchResult);
File output = new File("C:\\testlog\\sample_20171202.jpg");
ImageIO.write(image, "jpg", output);
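Note that ImageConverter.convertMToBI above is not an OpenCV API but a small helper in this project that turns a Mat back into a BufferedImage. As a rough sketch of what such a helper might look like (the class and method names simply mirror the call above), you can encode the Mat to PNG in memory with Imgcodecs and read it back with ImageIO:

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;

public class ImageConverter {
    //Convert an OpenCV Mat to a BufferedImage by encoding it to PNG in memory
    public static BufferedImage convertMToBI(Mat mat) throws IOException {
        MatOfByte buffer = new MatOfByte();
        Imgcodecs.imencode(".png", mat, buffer);
        return ImageIO.read(new ByteArrayInputStream(buffer.toArray()));
    }
}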
Load OpenCV library
static{
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
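Outside of a web application, the same flow can be exercised from a plain main method. The sketch below is only an illustration: the input paths are hypothetical, and matchFeatures is a placeholder name for a method containing the detection and matching code shown earlier. The point is that System.loadLibrary must run before any OpenCV class such as Mat is used.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.opencv.core.Core;

public class FeatureMatchingSample {
    //Load the native OpenCV library once, before any OpenCV class is touched
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    public static void main(String[] args) throws Exception {
        //Hypothetical input paths; replace them with real image locations
        BufferedImage bufImg = ImageIO.read(new File("C:\\testlog\\input.jpg"));
        BufferedImage tmpImg = ImageIO.read(new File("C:\\testlog\\template.jpg"));
        matchFeatures(bufImg, tmpImg);
    }

    private static void matchFeatures(BufferedImage bufImg, BufferedImage tmpImg) {
        //Paste the feature detection and matching code from the section above here
    }
}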
Although there were some false matches, the features could be detected.
- Because I am slow at coming up with and writing code it took me quite a while, but if you are quick at that, I felt you could get image processing working easily once you understand the basics and the methods described in this article.
- When you plan this for real work, what you really want to implement is the processing around feature detection rather than the detection itself, so I would be glad if this article helps you focus on the processing you actually want to build without spending extra man-hours here.
Tomorrow is @wc_moriyama. Thank you.