Using an RGBD sensor such as the RealSense, this package detects and segments multiple arbitrary objects with deep learning (Mask R-CNN) and computes the maximum and minimum diameters passing through each object's center of gravity on the image. The result is published as a ROS topic (mrcnn/result). https://github.com/moriitkys/mrcnn_measurement If you find it helpful, please give it an LGTM.
This is how the size of snap peas is measured. I also trained the bolts and logs shown below myself. For the training procedure, please refer to https://github.com/matterport/Mask_RCNN (I will write an explanatory article if requested). A rosbag of snap peas can also be downloaded with mrcnn_measurement.
Bolt measurement.
Measurement of raw wood (logs). There are so many objects here that real-time execution is difficult.
Finally, it can also run with the COCO weights.
The execution speed is dominated by the inference speed of Mask R-CNN (about 0.2 s per inference in my environment), so improving the image grabber or switching the backbone to a mobile network (MobileNet, etc.) will improve real-time performance. Note that the time required for diameter measurement increases as the number of objects increases.
If you have a sensor
# 0, Download h5 & rosbag data <= First time only
sh download_files.sh # in this package directory
cd ~/catkin_ws
catkin_make
# 1, Turn on RealSense D435
roslaunch realsense2_camera rs_aligned_depth.launch
# 2, Start mrcnn_measurement
roslaunch mrcnn_measurement mrcnn_measurement.launch
If you have no sensor, test with rosbag
# 0, Download h5 & rosbag data <= First time only
sh download_files.sh # in this package directory
cd ~/catkin_ws
catkin_make
# 1, Start rosbag & mrcnn_measurement
roslaunch mrcnn_measurement mrcnn_measurement_rosbag.launch
- scripts/print_result.py is a sample program that reads and displays the contents of the mrcnn/result topic. With a single object the output is easy to follow, and it is easy to write to CSV, etc. (this package is primarily intended for measuring a single object anyway).
- scripts/mrcnn_measurement may need to be made executable (e.g. `chmod +x mrcnn_measurement` in that directory).
- Running the sh file downloads the h5 file and the rosbag. Since it is several hundred GB, the download may take a while depending on your connection.
- If mymodel in the launch file is set to coco, mask_rcnn_coco.h5 is loaded.
- Note that the estimation result changes greatly depending on the image size input to Mask R-CNN. For example, in this package the image size is 320×240 when using the COCO weights; feeding it 640×480 changes the result considerably.
- In the definition of self.array_lines (the array used for diameter measurement), the angular resolution self.angle_step is set to 6 degrees; to capture the shape's maximum and minimum more finely, decrease self.angle_step.
- You can change "object" in scripts/files/mymodel_classes.csv to another name.
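As a sketch of what a print_result.py-style consumer might do (the field names `class_name`, `max_diameter_m`, and `min_diameter_m` are assumptions for illustration, not the package's actual message definition), per-object results can be flattened into CSV rows like this:

```python
import csv
import io

def result_to_csv_rows(result):
    """Flatten a detection result (one dict per object) into CSV rows.

    `result` is assumed to be a list of dicts with hypothetical keys
    'class_name', 'max_diameter_m', 'min_diameter_m' standing in for
    the actual fields of the mrcnn/result message.
    """
    rows = []
    for i, obj in enumerate(result):
        rows.append([i, obj['class_name'],
                     round(obj['max_diameter_m'], 4),
                     round(obj['min_diameter_m'], 4)])
    return rows

# Example: write one detected object to an in-memory CSV.
detections = [
    {'class_name': 'object', 'max_diameter_m': 0.0123, 'min_diameter_m': 0.0098},
]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['id', 'class', 'max_d_m', 'min_d_m'])
writer.writerows(result_to_csv_rows(detections))
```

In a real node, the same formatting function would be called from the topic callback and the rows appended to a file on disk.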
1. Receive the RGB and depth topics from the RealSense (message_filters).
2. Run Mask R-CNN (ResNet backbone) inference on the RGB image.
3. At the mask's center of gravity, take the Hadamard product of the edges of the inferred mask image (mask_i_xy_edges) and the diameter-measurement array (self.array_lines) created at class initialization, compute the object diameter at each angle, and append it to diameter_points.
4. Find the maximum diameter max(diameter_points) and minimum diameter min(diameter_points) passing through the object's center of gravity, and geometrically compute the object's real-world diameter from the RealSense camera parameter fx and the average depth over the mask (excluding outliers).
5. Visualize the result.
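The diameter measurement and the pixel-to-metric conversion in the steps above can be sketched as follows. This is a simplified, NumPy-only reconstruction under assumed names (it walks rays outward from the centroid instead of using the package's precomputed self.array_lines, but measures the same chords):

```python
import numpy as np

def chord_lengths(mask, cy, cx, angle_step_deg=6):
    """For each angle, measure the chord of `mask` through (cy, cx)
    in pixels: step outward in both directions until the ray leaves
    the mask, and sum the two radii."""
    h, w = mask.shape
    diameters = []
    for deg in range(0, 180, angle_step_deg):
        t = np.deg2rad(deg)
        dy, dx = np.sin(t), np.cos(t)
        length = 0
        for sign in (1, -1):
            r = 0
            while True:
                y = int(round(cy + sign * r * dy))
                x = int(round(cx + sign * r * dx))
                if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                    break
                r += 1
            length += r
        diameters.append(length)
    return diameters

def pixel_to_metric(diameter_px, depth_m, fx):
    """Pinhole-camera conversion: size_m = size_px * depth / fx,
    with depth taken as the outlier-free mean over the mask."""
    return diameter_px * depth_m / fx

# Synthetic check: a filled circle of radius 20 px centered at (50, 50).
yy, xx = np.mgrid[0:100, 0:100]
mask = (yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2
d = chord_lengths(mask, 50, 50)
# max(d) and min(d) should both be near the true diameter (~40 px).
```

With fx from the RealSense camera_info and the mean mask depth, `pixel_to_metric(max(d), depth, fx)` then gives the real-world maximum diameter.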
I am still investigating, but accuracy depends on the quality of the training, so it is safer to assume an error of about 10%. For circular cross-sections like logs, it may be more accurate.
Reduce the measurement error. Reduce the time required for diameter measurement (in particular, the for loop over each mask ROI is a bottleneck).
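One possible way to attack that per-ROI loop, sketched here under assumed data layouts rather than the package's actual code: precompute a stack of 1-pixel-wide line masks (one per angle) once at initialization, then count mask pixels along all angles with a single broadcasted operation instead of looping:

```python
import numpy as np

def build_line_stack(size, angle_step_deg=6):
    """Precompute one binary line-through-center image per angle,
    stacked into shape (n_angles, size, size). Done once at init."""
    c = size // 2
    yy, xx = np.mgrid[0:size, 0:size] - c
    angles = np.deg2rad(np.arange(0, 180, angle_step_deg))
    # Perpendicular distance of each pixel from the line with
    # direction (cos t, sin t): |(-sin t) * x + (cos t) * y|
    dist = np.abs(-np.sin(angles)[:, None, None] * xx
                  + np.cos(angles)[:, None, None] * yy)
    return dist <= 0.5  # keep a 1-pixel-wide line

def diameters_vectorized(mask_roi, line_stack):
    """Count mask pixels along every line in one broadcast instead of
    a per-angle Python loop. Note: the raw pixel count underestimates
    slanted chords by up to sqrt(2); divide by max(|cos t|, |sin t|)
    if Euclidean chord length is needed."""
    return (line_stack & mask_roi[None]).sum(axis=(1, 2))

# ROI cropped around the object's centroid, same size as the stack.
size = 101
stack = build_line_stack(size)
yy, xx = np.mgrid[0:size, 0:size]
mask = (yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2
d = diameters_vectorized(mask, stack)
```

The stack costs memory (n_angles × ROI² booleans) but moves the inner loop into NumPy, which is usually a large win over per-angle Python iteration.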
I will expand the explanation over time. **This article will be updated.**
Requirements
- ROS Kinetic
- h5py==2.7.0
- Keras==2.1.3
- scikit-image==0.13.0
- scikit-learn==0.19.1
- scipy==0.19.1
- tensorflow-gpu==1.4.0 (GTX 1060, cuDNN 6.0, CUDA 8.0)
- realsense2_camera (http://wiki.ros.org/realsense2_camera)
- http://wiki.ros.org/ja/ROS/Tutorials/WritingPublisherSubscriber%28python%29
- http://wiki.ros.org/realsense2_camera
- https://github.com/matterport/Mask_RCNN
- https://qiita.com/_akio/items/5469913fce7fdf0c732a
moriitkys (Takayoshi Morii). I make robots. I'm interested in AI / Robotics / 3D graphics. Recently I've been thinking about how to make money, and I plan to build hardware with it. (Challenging the E qualification) Qualifications / certifications: G Test, Python Engineer Certification Data Analysis Exam, AI Implementation Test Grade A, TOEIC: 810 (2019/01/13).