Pepper has a 3D sensor. NAOqi exposes this sensor's data through APIs such as ALSegmentation3D and ALCloseObjectDetection, but each of them is limited to a specific purpose (segmentation and close-object detection, respectively), which makes them awkward for more elaborate uses.
Meanwhile, a library called PCL (Point Cloud Library) has been attracting attention as a way to handle the 3D shapes that such sensors can capture. In this article, I will introduce how to process data acquired from Pepper's 3D sensor with PCL.
PCL can obtain a Point Cloud from 3D sensors such as Kinect via OpenNI, but you can also build Point Cloud objects yourself or load Point Cloud files (PCD files) directly. Since running PCL directly on Pepper looked difficult, this time I will fetch a Depth Image from Pepper's 3D sensor with a Python script on my local PC and convert it into Point Cloud data, then display that Point Cloud with PCL on the local PC.
The full source code is available on GitHub; NAOqi's Python SDK is required to actually run it. PCL itself will not be explained here, so please refer to resources such as DERiVE's article series ["Let's touch PCL!"](http://derivecv.tumblr.com/tagged/PCL%E3%82%92%E8%A7%A6%E3%81%A3%E3%81%A6%E3%81%BF%E3%82%88%E3%81%86%EF%BC%81).
A Depth Image can be obtained in the same way as images from the 2D cameras by using `ALVideoDevice`.
NAME = "depth_camera"
CAMERA_ID = 2 # depth
RESOLUTION = 1 # 320*240
FRAMERATE = 15
COLOR_SPACE = 17 # mono16
video = ALProxy('ALVideoDevice', IP, PORT)
client = video.subscribeCamera(NAME, CAMERA_ID, RESOLUTION, COLOR_SPACE, FRAMERATE)
To get images from the 3D sensor, specify 2 as the camera ID in `ALVideoDevice.subscribeCamera()`. These constant definitions can be found in the "Pepper - 3D Sensor" chapter of the documentation bundled with Choregraphe. The one exception is Color Space = 17, which is not described in the documentation; specifying 17 returns an image in which each pixel is represented as a 16-bit grayscale value. This is based on the source code of the ROS module for Romeo.
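Subscribing alone does not fetch a frame; the raw buffer and its dimensions used in the conversion loop below still have to be retrieved. A minimal sketch of that step with `ALVideoDevice.getImageRemote()` (the names `width`, `height`, and `array` are chosen here to match the later code):

```python
# getImageRemote() returns a list whose elements include
# [0] width, [1] height and [6] the raw image buffer (a binary string).
result = video.getImageRemote(client)
width = result[0]
height = result[1]
array = result[6]

# Release the subscription once the frame has been grabbed.
video.unsubscribe(client)
```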
Camera parameters are required to convert a Depth Image into a Point Cloud, i.e. a set of 3D coordinates. Camera parameters describe characteristics of the camera such as its focal length and the optical center of the image. Since Pepper's 3D sensor is a fixed Xtion, we again use the known values found in the ROS source code.
```python
# Focal length
FX = 525.0 / 2
FY = 525.0 / 2
# Optical center
CX = 319.5 / 2
CY = 239.5 / 2
```
All that remains is to convert each pixel of the Depth Image into 3D coordinates by the usual pinhole-camera back-projection. Note that each pixel's depth value is split across two bytes, a lower and an upper one.
```python
UNIT_SCALING = 0.001  # assumed: the sensor reports depth in millimeters; convert to meters

cloud = []
for v in range(height):
    for u in range(width):
        # Each pixel occupies 2 bytes, low byte first.
        offset = (v * width + u) * 2
        depth = ord(array[offset]) + ord(array[offset+1]) * 256
        # Back-project pixel (u, v) with its depth to camera coordinates.
        x = (u - CX) * depth * UNIT_SCALING / FX
        y = (v - CY) * depth * UNIT_SCALING / FY
        z = depth * UNIT_SCALING
        cloud.append((x, y, z))
```
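One practical refinement (my own addition, not part of the original script): the Xtion reports a depth of 0 for pixels it could not measure, and those points all collapse onto the origin. They can be skipped inside the inner loop, right after `depth` is decoded:

```python
        # Hypothetical addition: drop pixels with no valid depth reading.
        if depth == 0:
            continue
```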
Finally, write the acquired Point Cloud out as a PCD file. Here the PCD header and the actual data are simply emitted as text.
```python
OUTPUT_FILE = 'output.pcd'  # any path will do

header = '''# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH %d
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS %d
DATA ascii'''

f = open(OUTPUT_FILE, 'w')
num = len(cloud)
f.write(header % (num, num))
f.write("\n")
for c in cloud:
    f.write('%f %f %f' % (c[0], c[1], c[2]))
    f.write("\n")
f.close()
```
That is all it takes to save the data as a PCD file.
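As one way to inspect the result, here is a sketch that assumes the third-party python-pcl bindings are installed (the original article does not specify a particular viewer):

```python
import pcl
import pcl.pcl_visualization

# Load the PCD file written above and show it in a simple viewer window.
cloud = pcl.load('output.pcd')
viewer = pcl.pcl_visualization.CloudViewing()
viewer.ShowMonochromeCloud(cloud)

# Keep the window open until the user closes it.
while not viewer.WasStopped():
    pass
```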
When you display the PCD file with a PCL visualizer (the `pcl_viewer` command-line tool bundled with PCL also works), you can see that the 3D shape is reproduced, as shown below.
For reference, here is a photo taken with Pepper's 2D camera at the same moment.
In this article I introduced how to get a Point Cloud from Pepper's 3D sensor. Process it cleverly with PCL and you can do all sorts of interesting things!