Install the DeepStream SDK Python bindings on Jetson Nano and try object detection

Overview

DeepStream

With the DeepStream SDK, high-speed inference is possible even on a Jetson Nano (YOLOv3-Tiny reportedly reaches about 25 FPS). In this article, I will write a program that retrieves inference results from Python using the DeepStream SDK. Incidentally, Macnica has published a detailed article on the DeepStream SDK, which is worth referring to.

Environment

Device: Jetson Nano
Image: JetPack 4.2.2
DeepStream SDK: v4.0.2
Python Binding: v0.5α

DeepStream installation

For installing the DeepStream SDK itself, see [DeepStream SDK with Jetson Nano to detect objects in real time and deliver RTSP](https://www.space-i.com/post-blog/jetson-nano%E3%81%A7deepstream-sdk%E3%80%80usb%E3%82%AB%E3%83%A1%E3%83%A9%E6%98%A0%E5%83%8F%E3%81%8B%E3%82%89%E6%A4%9C%E7%9F%A5%EF%BC%86iphone%E3%81%A7%E3%82%B9%E3%83%88%E3%83%AA%E3%83%BC%E3%83%A0/)

DeepStream Python Binding installation

Download "DeepStream Python Apps and Bindings" from this site, save it to any location, and extract it.

$ tar -xjvf deepstream_python_v0.5.tbz2 
deepstream_python_v0.5/
deepstream_python_v0.5/LICENSE.txt
deepstream_python_v0.5/ds_pybind_0.5.tbz2
deepstream_python_v0.5/LicenseAgreement.pdf
deepstream_python_v0.5/README

Next, extract the inner archive, "ds_pybind_0.5.tbz2", as well.

$ cd deepstream_python_v0.5/
$ ls
LICENSE.txt  LicenseAgreement.pdf  README  ds_pybind_0.5.tbz2
~/deepstream_python_v0.5$ tar -xjvf ds_pybind_0.5.tbz2 

Proceed with the installation by following deepstream_python_v0.5/README. First, copy the Python bindings into the DeepStream SDK installation directory.

$ tar -xjvf ds_pybind_0.5.tbz2
$ sudo cp -r python /opt/nvidia/deepstream/deepstream-4.0/sources/

At this point, check that the folder structure looks like this:

/opt/nvidia/deepstream/deepstream-4.0/sources/python/bindings
/opt/nvidia/deepstream/deepstream-4.0/sources/python/apps$ ls
common            deepstream-test2  deepstream-test4  
deepstream-test1  deepstream-test3  

Install Gst-python.

   $ sudo apt-get install python-gi-dev
   $ export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
   $ # On Jetson (aarch64) the glib include dir is aarch64-linux-gnu, not x86_64
   $ export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include"
   $ git clone https://github.com/GStreamer/gst-python.git
   $ cd gst-python
   $ git checkout 1a8f48a
   $ ./autogen.sh PYTHON=python3
   $ ./configure PYTHON=python3
   $ make
   $ sudo make install
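To confirm that the copy and build above actually landed in the right places, a short script can check for the expected directories. This is a minimal sketch, assuming the default DeepStream 4.0 install prefix used throughout this article; `missing_paths` is just a hypothetical helper name, and you should adjust the paths if you installed elsewhere.

```python
import os

def missing_paths(paths):
    """Return the subset of the given paths that do not exist on this machine."""
    return [p for p in paths if not os.path.exists(p)]

# Expected locations after copying the bindings (assumption: default prefix).
expected = [
    "/opt/nvidia/deepstream/deepstream-4.0/sources/python/bindings",
    "/opt/nvidia/deepstream/deepstream-4.0/sources/python/apps/deepstream-test1",
]

if __name__ == "__main__":
    missing = missing_paths(expected)
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("Python bindings look to be in place.")
```

If anything is reported missing, redo the `cp -r python ...` step before moving on.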

Modify the config file for Jetson Nano.

This time, we will use the sample program deepstream-test1 as an example. First, move into the deepstream-test1 directory.

$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/python/apps/deepstream-test1
$ ls
README                dstest1_pgie_config.txt  
deepstream_test_1.py  

The "dstest1_pgie_config.txt" here is the default configuration file, but it targets Jetson AGX Xavier, so it needs to be adapted for Jetson Nano. Create a new file named "dstest_jetson_nano_config.txt" in the same directory and paste in the following content as is.

dstest_jetson_nano_config.txt


[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector_Nano/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
labelfile-path=../../../../samples/models/Primary_Detector_Nano/labels.txt
batch-size=8
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1

[class-attrs-all]
threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
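One detail worth noticing in this config: net-scale-factor is (the float32 rounding of) 1/255, the factor that normalizes 8-bit pixel values into [0, 1] before inference. Since the file uses standard INI syntax, Python's stdlib configparser can read it, which makes a quick sanity check possible before launching. A minimal sketch, assuming a hypothetical helper name `check_pgie_config`; the inline sample is an excerpt of the [property] section above:

```python
import configparser

def check_pgie_config(text):
    """Parse a pgie config and return (batch_size, net_scale_factor)."""
    cp = configparser.ConfigParser(strict=False)
    cp.read_string(text)
    prop = cp["property"]
    return int(prop["batch-size"]), float(prop["net-scale-factor"])

# Excerpt of the [property] section from dstest_jetson_nano_config.txt above.
sample = """
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
batch-size=8
network-mode=2
"""

batch, scale = check_pgie_config(sample)
# net-scale-factor is 1/255 up to float32 rounding (8-bit pixel normalization).
assert abs(scale - 1 / 255) < 1e-8
print(batch, scale)
```

In a real check you would pass `open("dstest_jetson_nano_config.txt").read()` instead of the inline string.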

Also, modify deepstream_test_1.py as follows.

deepstream_test_1.py


    - pgie.set_property('config-file-path', "dstest1_pgie_config.txt")
    + pgie.set_property('config-file-path', "dstest_jetson_nano_config.txt")

    - streammux.set_property('width', 1920)
    + streammux.set_property('width', <video width>)
    - streammux.set_property('height', 1080)
    + streammux.set_property('height', <video height>)

Checking and hard-coding the video resolution in advance is tedious, so it is also worth fetching the resolution with OpenCV. For example:

deepstream_test_1.py


    import cv2

    # Read one frame to determine the input resolution
    # (sys is already imported at the top of deepstream_test_1.py).
    cap = cv2.VideoCapture(args[1])
    ret, frame = cap.read()
    if not ret:
        sys.stderr.write(f"Could not read a frame from {args[1]}\n")
        sys.exit(1)
    height, width = frame.shape[:2]
    cap.release()
    print(f"width:{width}, height:{height}")
    streammux.set_property('width', width)
    streammux.set_property('height', height)

Run

Place a test video file in the same directory as deepstream_test_1.py and run the following command; DeepStream's inference results are displayed on screen. Note that the first run can take around five minutes before results appear, since nvinfer builds the TensorRT engine from the Caffe model when the cached .engine file is not found.

$ python3 deepstream_test_1.py "Video file"


If it doesn't work

If it doesn't work, check the following points.

- Is the Python binding in the correct location under the DeepStream SDK?
  → Do the deepstream-test* folders exist under deepstream-4.0/sources/python/apps?
- Is the video an H.264 stream?
- Are the versions of the DeepStream SDK, JetPack, and the Python bindings a supported combination?
  → At the time of writing, the latest combination is JetPack 4.2.2, DeepStream SDK 4.0.2, and Python Binding v0.5α.
- If you still can't find the cause, enable the GStreamer debug output:
  running `$ GST_DEBUG=3 python3 deepstream_test_1.py "video file"` prints a detailed startup log to the console.
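On the "Is the video H.264?" point: deepstream_test_1.py expects a raw H.264 elementary stream, and Annex-B elementary streams begin with a NAL start code (00 00 00 01 or 00 00 01). A quick heuristic check can be written in Python; this is a hedged sketch (`looks_like_h264_annexb` is my own helper name, and the demo fabricates a tiny file rather than using a real video):

```python
import os
import tempfile

def looks_like_h264_annexb(path):
    """Heuristic: Annex-B H.264 elementary streams start with a NAL start code."""
    with open(path, "rb") as f:
        head = f.read(4)
    return head.startswith(b"\x00\x00\x00\x01") or head.startswith(b"\x00\x00\x01")

# Self-contained demo: a 4-byte start code followed by a dummy NAL header byte.
with tempfile.NamedTemporaryFile(suffix=".h264", delete=False) as tmp:
    tmp.write(b"\x00\x00\x00\x01\x67")
print(looks_like_h264_annexb(tmp.name))  # True for a valid start code
os.remove(tmp.name)
```

A container file (MP4, MKV, ...) will fail this check; in that case, extract the raw stream first, e.g. with ffmpeg.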
