Run AzureKinect in Python on Christmas Eve.

This is my article for day 23 of the 3D Sensor Advent Calendar 2019.

Overview

I'm a beginner who has only been working with 3D sensors for about three months, but since I have an Azure Kinect at hand, I decided to give the Advent Calendar my best shot. This is my fourth article: I've been posting one a week on Qiita and have somehow made it this far. This time my goal is to use the Azure Kinect Sensor SDK at a beginner level, connecting to it from Python and getting it to work.

Environment

My environment is Windows 10 with the Azure Kinect Sensor SDK 1.2.0 already installed. (Note that it is 1.2.0, not 1.3.0!)

First, install Python and Open3D on Windows

First, install Python. There are many ways to do this, but for a beginner installing on Windows, the main options are:

  1. Download and install Python
  2. Download and install Anaconda (a package that contains various things such as machine learning and data science)
  3. Download and install Miniconda (minimum installation version of Anaconda)

All of them are easy to install, but this time I will use the latest version of Miniconda (Miniconda3 4.7.12 at the time of writing). (screenshot)

Adding it to the PATH environment variable is not recommended, so instead check the option shown below to register it in the registry, and then install. (screenshot of the installer options)

Next, to use Azure Kinect from Python, the easiest route seemed to be installing a library called Open3D.

Open3D is an open-source library that supports the development of software handling 3D data.
Open3D provides a carefully selected set of data structures and algorithms through both C++ and Python front ends,
so it can be used in either environment.

...or so the description goes. In fact, the latest version of this library, 0.8.0, supports Azure Kinect, so this should be enough to get things working.

Open3D installation

Open3D 0.8.0 officially supports Python 2.7, 3.5, and 3.6, but it also worked for me with the latest 3.7.4. (Open3D itself runs on Mac, but unfortunately the Azure Kinect modules appear to be Windows and Ubuntu only, sorry!) For now, the following command installs it in one shot.

(base) D:\>conda install -c open3d-admin open3d
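
If you prefer pip, recent Open3D releases are also published on PyPI. I haven't verified this route in my environment, so treat it as an alternative to try (older releases were published under the name open3d-python):

(base) D:\>pip install open3d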

Run the Open3D sample

First, you can check whether Open3D is recognized correctly with the following command:

python -c "import open3d"

This command prints nothing; as long as no error appears, the import succeeded.
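
If you also want to confirm which version is installed (we want 0.8.0 or later here), printing the version attribute should work; as far as I know, recent Open3D releases expose open3d.__version__:

python -c "import open3d; print(open3d.__version__)"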

Next, let's run a sample script; if this works, the Open3D installation is complete. Download the samples from the following GitHub repository: https://github.com/intel-isl/Open3D

The steps below assume the downloaded repository is placed directly under the D: drive. Running the sample also requires matplotlib, which can be installed right away with conda.

(base) D:\>cd D:\Open3D-master\examples\Python\Basic

(base) D:\Open3D-master\examples\Python\Basic>conda install matplotlib

(base) D:\Open3D-master\examples\Python\Basic>python rgbd_redwood.py

Hopefully you will see a screen like the one below. (screenshot of the sample output)
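
For reference, here is a minimal sketch of the kind of processing rgbd_redwood.py performs, written from memory against the 0.8-series API rather than copied from the sample; the color.jpg / depth.png paths are placeholders for a registered color/depth pair:

    import open3d as o3d

    # Read a registered color/depth image pair (placeholder paths).
    color = o3d.io.read_image("color.jpg")
    depth = o3d.io.read_image("depth.png")

    # Combine them into an RGBD image and back-project it to a point cloud
    # using a default pinhole camera intrinsic.
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth)
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

    # Flip the cloud so it is not upside down, then display it.
    pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
    o3d.visualization.draw_geometries([pcd])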

Run the AzureKinect sample module

Now, let's run the Azure Kinect module we've been waiting for! The sample to run is azure_kinect_viewer.py:

(base) D:\Open3D-master>python examples/Python/ReconstructionSystem/sensors/azure_kinect_viewer.py --align_depth_to_color

However, there are some caveats here.

  - The program seems to reference the SDK 1.2.0 folder name directly, so it does not work with SDK 1.3.0. (It might be possible to work around this by renaming the folder, but then environment variables and so on would also need to change.)
  - In my environment, I got the following error:

  File "examples/Python/ReconstructionSystem/sensors/azure_kinect_viewer.py", line 72, in <module>
    v.run()
  File "examples/Python/ReconstructionSystem/sensors/azure_kinect_viewer.py", line 36, in run
    vis.update_geometry()
TypeError: update_geometry(): incompatible function arguments. The following argument types are supported:
    1. (self: open3d.open3d.visualization.Visualizer, arg0: open3d.open3d.geometry.Geometry) -> bool

Invoked with: VisualizerWithKeyCallback with name viewer

I checked that part of the code and it doesn't look like something that should raise an error, so for now I can only guess that update_geometry() is being passed the wrong arguments each time the loop updates. As a stopgap, by deleting vis.update_geometry() on line 36 I could at least confirm that the viewer runs, although the display no longer updates.

azure_kinect_viewer.py excerpt


        vis_geometry_added = False
        while not self.flag_exit:
            rgbd = self.sensor.capture_frame(self.align_depth_to_color)
            if rgbd is None:
                continue

            if not vis_geometry_added:
                vis.add_geometry(rgbd)
                vis_geometry_added = True

            vis.update_geometry()  # <== delete this line
            vis.poll_events()
            vis.update_renderer()

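Incidentally, judging purely from the error message (the installed build's update_geometry() expects a geometry argument), an alternative to deleting the line might be to pass the captured frame explicitly. I have not verified this, so treat it as a guess rather than a confirmed fix:

            vis.update_geometry(rgbd)  # untested guess: pass the geometry that was added
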
With the delete-the-line workaround, although the display is only a still image, I was able to confirm that Azure Kinect is working.

(screenshot: Azure Kinect viewer output)

There is actually a color image on the left as well, but my room was a mess, so I'm only showing the depth side... By the way, the picture above shows the depth image with the camera correction already applied. As I introduced in Part 3, Azure Kinect has a function that applies the correction using its internal camera parameters, so you can obtain the image already corrected. Before correction, it looks like this: (screenshot of the uncorrected depth image)

Proper 3D alignment isn't possible unless the image is in this corrected state, so it's very helpful that Azure Kinect makes it this easy.
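
For reference, capturing an aligned frame directly from the sensor via Open3D's Azure Kinect support looks roughly like the following. This is a condensed sketch based on what the sample does rather than a verified standalone program; the device index 0 and the default configuration are assumptions:

    import open3d as o3d

    # Default sensor configuration; connect to device index 0 (assumption).
    config = o3d.io.AzureKinectSensorConfig()
    sensor = o3d.io.AzureKinectSensor(config)
    if not sensor.connect(0):
        raise RuntimeError("Failed to connect to Azure Kinect sensor")

    # capture_frame(True) returns an RGBD image with depth aligned to color,
    # or None if no frame is ready yet.
    rgbd = None
    while rgbd is None:
        rgbd = sensor.capture_frame(True)

    print(rgbd)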

...By the way, Open3D really is easy to use. In particular, setting up the drawing takes only three lines (four if you count the print statement), so the whole script stays very small. The part below really is this simple.

python


        vis = o3d.visualization.VisualizerWithKeyCallback()
        vis.register_key_callback(glfw_key_escape, self.escape_callback)
        vis.create_window('viewer', 1920, 540)
        print("Sensor initialized. Press [ESC] to exit.")

I managed to display it in Python ...

Looking back

  - This time I hoped it would work not only in Python but also on Mac, but that didn't pan out.
  - Open3D looks like a promising library. However, probably because it is evolving so quickly, there seem to be a few rough edges (the documentation even mentions unimplemented features). I'm looking forward to where it goes, and I expect it will keep improving as more people use it.
  - Handling Azure Kinect from Python seems easy enough, but the drawing is heavy and asynchronous handling of the sensor looks tricky, so I'm vaguely worried about whether it can really be controlled from Python. Still, connecting from Python opens the door to machine learning and the like, so it looks promising.

So, this time too, it ended at a very rudimentary level; across all four articles I never got out of being a beginner among beginners. Still, I would like to keep up activities that deepen my learning about 3D sensors. I'd like to thank Mr. Yurufuwa UNA for providing this venue, as well as everyone who read and supported me. Thank you very much.

Tomorrow, the 24th, Christmas Eve, brings pisa-kun's "Let's summarize the RealSense L515 pre-order commemorative LiDAR". The RealSense L515 looks good, doesn't it? I have a feeling I'll end up buying one when it goes on sale in Japan. Next year, too, I will keep doing my best with xR and 3D sensor related technologies.
