LINE notification from real-time video recognition with DeepStream SDK

This article is the 7th day of DoCoMo Advent Calendar 2019.

My name is Sakai, from the NTT DoCoMo Service Innovation Department. My work is research and development of a deep-learning-based image recognition engine and turning it into a service. This time, I used the GPU-based video recognition acceleration tool "DeepStream SDK" to detect vehicles in video at high speed and send the results to LINE. The same approach could be used to build something like an AI camera, detect people in surveillance footage, or spot illegally parked cars in camera images.

(video GIF)

I also took on messaging with the DeepStream SDK and AMQP, a combination for which I could find few examples on the web.

workflow.jpg

Motivation

In recent years, as the use of deep learning in image recognition has expanded, recognition on video data has also gradually become more common. However, video recognition with deep learning demands higher machine specs than ordinary image recognition, and processing tends to be slow.

In such cases, NVIDIA's DeepStream SDK lets you use the GPU to accelerate video recognition and video processing in general. Now that it can be used with docker, installation has become much easier, so I tried it out.

DeepStream SDK

GPUs are commonly used to speed up deep learning inference (and training). The DeepStream SDK is an SDK for high-speed video processing using the GPU. It provides the various functions required for video processing as plug-ins for GStreamer, the framework commonly used for stream processing. Its main features are listed below; a minimal pipeline sketch follows the list.

- Speeds up video encoding/decoding using dedicated hardware on GPUs and edge GPU devices
- Speeds up deep-learning-based object detection and image classification using the GPU and TensorRT[^1]
- Also provides a tracking function
- Image recognition results can be sent out via Kafka, MQTT, AMQP, etc.
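
To make the plug-in model concrete, here is a minimal sketch of how DeepStream elements slot into a GStreamer pipeline, written with the GStreamer Python bindings. This is not the sample app used later, just an illustration: it assumes the DeepStream 4.0 container environment, and the `nvinfer` config path is a placeholder for one of the configs that ship with the samples.

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Decode H.264, batch frames with nvstreammux, detect objects with nvinfer,
# then draw boxes with nvdsosd and render. All nv* elements come from the
# DeepStream SDK; the config file path here is illustrative.
pipeline = Gst.parse_launch(
    "filesrc location=videos/test.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=dstest1_pgie_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```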

Method

Environment

We prepared a GPU server that can use nvidia-docker 2.0.

GPU support became native in docker 19.03, but docker-compose does not yet seem to be able to specify GPUs, so I adopted the old `runtime`-style specification method.

File organization

deepstream-line
├── docker
│   ├── Dockerfile
│   ├── requirements.txt
│   └── src
│       └── receive.py
├── docker-compose-amqp.yml
├── docker-compose.yml
├── rabbitmq
│   ├── custom_definitions.json
│   └── rabbitmq.conf
└── videos
    └── test.h264

Video data can be obtained from the free video material site Mixkit. I am using Chinatown street at night.

Since the input must be a raw H.264 stream, convert it using ffmpeg.

ffmpeg -i ./2012-1080.mp4  -vcodec copy -an -bsf:v h264_mp4toannexb videos/test.h264

Start AMQP server

The DeepStream SDK can send recognition results as messages to Kafka, MQTT, AMQP, etc. This time I decided to launch RabbitMQ using docker. RabbitMQ is a pub/sub messaging service made up of the following elements.

rabbitmq.jpg

- producer: the sender of messages. This time, the DeepStream SDK plays this role
- exchange: routes each incoming message to the appropriate queue
- queue: holds the messages
- consumer: fetches messages from a queue and uses them. This time, a Python script that sends notifications to LINE

Prepare the AMQP configuration file. Both the user and password are guest.

rabbitmq/custom_definitions.json


{
    "rabbit_version": "3.8.0",
    "users": [{
        "name": "guest",
        "password_hash": "CV/8dVC+gRd5cY08tFu4h/7YlGWEdE5lJo4sn4CfbCoRz8ez",
        "hashing_algorithm": "rabbit_password_hashing_sha256",
        "tags": "administrator"
    }],
    "vhosts": [{
        "name": "/"
    }],
    "permissions": [{
        "user": "guest",
        "vhost": "/",
        "configure": ".*",
        "write": ".*",
        "read": ".*"
    }],
    "topic_permissions": [],
    "parameters": [],
    "global_parameters": [{
        "name": "cluster_name",
        "value": "rabbit@rabbitmq"
    }],
    "policies": [],
    "queues": [{
        "name": "test",
        "vhost": "/",
        "durable": true,
        "auto_delete": false,
        "arguments": {
            "x-queue-type": "classic"
        }
    }],
    "exchanges": [],
    "bindings": [{
        "source": "amq.topic",
        "vhost": "/",
        "destination": "test",
        "destination_type": "queue",
        "routing_key": "deepstream",
        "arguments": {}
    }]
}

In `queues`, the queue name is set to `test`. The exchange uses `amq.topic`, which exists by default, and the `bindings` entry routes any message arriving with the topic (routing key) `deepstream` to the `test` queue. A quick pika-based check of this routing is sketched below.
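
This is a minimal sketch, assuming the broker defined here has already been started (as described below) and is reachable on localhost with the guest/guest credentials from the JSON above:

```python
import time
import pika

# connect with the guest/guest credentials defined in custom_definitions.json
conn = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost',
    credentials=pika.PlainCredentials('guest', 'guest')))
ch = conn.channel()

# publish to amq.topic with the routing key "deepstream";
# the binding should route the message to the "test" queue
ch.basic_publish(exchange='amq.topic', routing_key='deepstream', body=b'hello')
time.sleep(1)  # give the broker a moment to route

method, properties, body = ch.basic_get('test', auto_ack=True)
print(body)  # b'hello' if the binding worked
conn.close()
```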

Also prepare a conf file to read the above json.

rabbitmq/rabbitmq.conf


loopback_users.guest = false
listeners.tcp.default = 5672
management.tcp.port = 15672
management.load_definitions = /etc/rabbitmq/custom_definitions.json

With this, the broker loads the JSON definitions at startup; messages are exchanged over AMQP on port 5672, and the management API/UI listens on port 15672.

Start RabbitMQ using docker-compose. We pull the official RabbitMQ image and use the volumes option to mount the conf/json files so they are visible from inside the container.

docker-compose-amqp.yml


version: "2.3"
services:
  rabbitmq:
    image: rabbitmq:3.8-management
    hostname: rabbitmq
    container_name: rabbitmq
    expose: # ports visible to other containers on the network
      - "5672"
      - "15672"
    ports: # ports published to the host
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ./rabbitmq:/etc/rabbitmq
networks:
  default:

Also, by defining a network here, the DeepStream SDK and consumer containers can connect to it later. Start it with the following command.

docker-compose -f docker-compose-amqp.yml up -d
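
If you want to confirm that the definitions were actually loaded, the management API on port 15672 offers a quick check. A minimal sketch, assuming the broker is reachable on localhost (`%2F` is the URL-encoded default vhost `/`):

```python
import requests

# ask the RabbitMQ management API for the "test" queue on the default vhost "/"
r = requests.get('http://localhost:15672/api/queues/%2F/test',
                 auth=('guest', 'guest'))
print(r.json()['name'])  # 'test' if the queue was created from the JSON
```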

Preparing the DeepStream SDK

For the DeepStream SDK, we will use the official image provided by NVIDIA GPU CLOUD.

The image to use is nvcr.io/nvidia/deepstream:4.0.1-19.09-samples.

Pull it in advance.

docker pull nvcr.io/nvidia/deepstream:4.0.1-19.09-samples

This image contains everything you need, including the compiled DeepStream SDK, samples, TensorRT, and GStreamer. This time, we will use DeepStream Test 4 from the sample applications included in this docker image. It is an application that detects and recognizes cars and people, then sends the recognition result as a message once every 30 frames.

Directory: /sources/apps/sample_apps/deepstream-test4 Description: This builds on top of the deepstream-test1 sample for single H.264 stream - filesrc, decode, nvstreammux, nvinfer, nvosd, renderer to demonstrate the use of "nvmsgconv" and "nvmsgbroker" plugins in the pipeline for IOT connection. For test4, user have to modify kafka broker connection string for successful connection. Need to setup analytics server docker before running test4. The DeepStream Analytics Documentation has more information on setting up analytics servers.

You can check the README and Usage with the following commands.

docker run --gpus 0 --rm -it \
    nvcr.io/nvidia/deepstream:4.0.1-19.09-samples \
    cat /root/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test4/README
#README is displayed
docker run --gpus 0 --rm -it \
    nvcr.io/nvidia/deepstream:4.0.1-19.09-samples \
    /opt/nvidia/deepstream/deepstream-4.0/bin/deepstream-test4-app
# /opt/nvidia/deepstream/deepstream-4.0/bin/deepstream-test4-app \
# -i <H264 filename> -p <Proto adaptor library> --conn-str=<Connection string>

According to the README, you can use AMQP by setting the --proto-lib, --conn-str, and --topic options.

1.Use --proto-lib or -p command line option to set the path of adaptor library. Adaptor library can be found at /opt/nvidia/deepstream/deepstream-/lib

kafka lib - libnvds_kafka_proto.so
azure device client - libnvds_azure_proto.so
AMQP lib - libnvds_amqp_proto.so

2.Use --conn-str command line option as required to set connection to backend server. For Azure - Full Azure connection string. For Kafka - Connection string of format: host;port;topic. For Amqp - Connection string of format: host;port;username. Password to be provided in cfg_amqp.txt

3.Use --topic or -t command line option to provide message topic (optional). Kafka message adaptor also has the topic param embedded within the connection string format In that case, "topic" from command line should match the topic within connection string

In [Start AMQP server](#start-amqp-server), the host is rabbitmq, the AMQP port is 5672, and the user is guest, so the options become `--conn-str rabbitmq;5672;guest`, `-t deepstream`, and `-c cfg_amqp.txt` (the AMQP password goes in cfg_amqp.txt).

libnvds_amqp_proto.so is stored in /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_amqp_proto.so in the docker image.

Consumer preparation

Prepare a container that fetches messages from the AMQP server and sends notifications to LINE. We use LINE Notify for this: when you link it to a web service, notifications arrive from the official "LINE Notify" account provided by LINE. We write the equivalent of that "web service" in Python. You can send a notification by obtaining a dedicated TOKEN and POSTing a message to a specific URL, authenticating with that TOKEN. I obtained the TOKEN by referring to this article. The acquired TOKEN is used in [Run](#run), so make a note of it somewhere.
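
Stripped to its essentials, the LINE Notify call is just an authenticated POST. A minimal sketch, assuming the token is in the LINE_TOKEN environment variable (the full consumer below wraps exactly this call):

```python
import os
import requests

token = os.environ['LINE_TOKEN']  # the LINE Notify token obtained above
r = requests.post(
    'https://notify-api.line.me/api/notify',
    headers={'Authorization': 'Bearer {}'.format(token)},
    params={'message': 'test notification'},
)
print(r.status_code)  # 200 on success
```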

docker/src/receive.py


import pika
import json
import requests
import os

LINE_URL = "https://notify-api.line.me/api/notify"
line_token = os.environ['LINE_TOKEN']  # read the TOKEN from an environment variable
message_format = '''
Time: {time:s}
Place: {place:s}
Camera: {camera:s}
Detected object: {object:s}
'''


def main():
    # connect to RabbitMQ over AMQP
    credentials = pika.PlainCredentials('guest', 'guest')
    connect_param = pika.ConnectionParameters(
        host='rabbitmq',  # hostname given to the container in docker-compose-amqp.yml
        credentials=credentials
    )
    connection = pika.BlockingConnection(connect_param)
    channel = connection.channel()

    # Authorization header for LINE Notify
    line_headers = {"Authorization": "Bearer {}".format(line_token)}

    for method_frame, properties, body in channel.consume(
        'test',
        inactivity_timeout=30  # stop if no new message arrives for 30 seconds
    ):
        if method_frame is None:
            break

        message = json.loads(body)
        if 'vehicle' in message['object'].keys():
            obj_data = '{} {} {}'.format(
                message['object']['vehicle']['make'],
                message['object']['vehicle']['model'],
                message['object']['vehicle']['color'],
            )
        elif 'person' in message['object'].keys():
            obj_data = 'Person {} {}'.format(
                message['object']['person']['age'],
                message['object']['person']['gender']
            )
        else:
            obj_data = ''
        payload = {
            "message": message_format.format(
                time=message['@timestamp'],
                place='{}_{}({})'.format(
                    message['place']['id'],
                    message['place']['name'],
                    message['place']['type'],
                ),
                camera=message['sensor']['id'],
                object=obj_data,
            )
        }
        r = requests.post(
            LINE_URL,
            headers=line_headers,
            params=payload
        )
        channel.basic_ack(method_frame.delivery_tag)

    # cancel the consumer; pending messages are requeued
    requeued_messages = channel.cancel()
    print('Requeued %i messages' % requeued_messages)
    connection.close()


if __name__ == '__main__':
    main()

The `payload =` block processes the message sent from the DeepStream SDK via AMQP. DeepStream Test 4 sends one message every 30 frames in JSON format. Each message contains information about one detected object, in the following format. I decided to extract the shooting-location information stored in `place`, the camera information stored in `sensor`, the timestamp, and the object information; a short worked example of pulling these fields out follows the JSON.

{
    "messageid": "a8c57c62-5e25-478d-a909-3ab064cbf11f",
    "mdsversion": "1.0",
    "@timestamp": "2019-11-17T13:42:39.807Z",
    "place": {
        "id": "1",
        "name": "XYZ",
        "type": "garage",
        "location": {
            "lat": 30.32,
            "lon": -40.55,
            "alt": 100.0
        },
        "aisle": {
            "id": "walsh",
            "name": "lane1",
            "level": "P2",
            "coordinate": {
                "x": 1.0,
                "y": 2.0,
                "z": 3.0
            }
        }
    },
    "sensor": {
        "id": "CAMERA_ID",
        "type": "Camera",
        "description": "\"Entrance of Garage Right Lane\"",
        "location": {
            "lat": 45.293701447,
            "lon": -75.8303914499,
            "alt": 48.1557479338
        },
        "coordinate": {
            "x": 5.2,
            "y": 10.1,
            "z": 11.2
        }
    },
    "analyticsModule": {
        "id": "XYZ",
        "description": "\"Vehicle Detection and License Plate Recognition\"",
        "source": "OpenALR",
        "version": "1.0",
        "confidence": 0.0
    },
    "object": {
        "id": "-1",
        "speed": 0.0,
        "direction": 0.0,
        "orientation": 0.0,
        "vehicle": {
            "type": "sedan",
            "make": "Bugatti",
            "model": "M",
            "color": "blue",
            "licenseState": "CA",
            "license": "XX1234",
            "confidence": 0.0
        },
        "bbox": {
            "topleftx": 585,
            "toplefty": 472,
            "bottomrightx": 642,
            "bottomrighty": 518
        },
        "location": {
            "lat": 0.0,
            "lon": 0.0,
            "alt": 0.0
        },
        "coordinate": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0
        }
    },
    "event": {
        "id": "4f8436ab-c611-4257-8b83-b9134c6cab0d",
        "type": "moving"
    },
    "videoPath": ""
}
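
As a worked example, here is the sample message above run through the same extraction logic as receive.py. The file name sample_message.json is hypothetical, just a stand-in for the JSON shown:

```python
import json

# "sample_message.json" is a hypothetical file containing the JSON above
with open('sample_message.json') as f:
    message = json.load(f)

vehicle = message['object']['vehicle']
obj_data = '{} {} {}'.format(vehicle['make'], vehicle['model'], vehicle['color'])
print(obj_data)                  # Bugatti M blue
print(message['@timestamp'])     # 2019-11-17T13:42:39.807Z
print(message['place']['name'])  # XYZ
print(message['sensor']['id'])   # CAMERA_ID
```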

The container image that runs the above consumer is built from the following Dockerfile. After installing python3, the required Python packages are installed with pip. Finally, the source code is copied in and the image is complete.

docker/Dockerfile


FROM ubuntu:18.04

RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

COPY src /opt/src
WORKDIR /opt/src

CMD ["python3", "./receive.py"]

requirements.txt


pika
requests

Run

Launch the DeepStream SDK and consumer using docker-compose.

docker-compose.yml


version: "2.3"
services:
  deepstream:
    image: nvcr.io/nvidia/deepstream:4.0.1-19.09-samples
    runtime: nvidia
    hostname: deepstream
    container_name: deepstream
    command: >
      /opt/nvidia/deepstream/deepstream-4.0/bin/deepstream-test4-app
      -i /root/videos/test.h264
      -p /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_amqp_proto.so
      -c cfg_amqp.txt
      -t deepstream
      --conn-str rabbitmq;5672;guest
    cap_add:
      - SYSLOG
    working_dir: /root/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test4
    networks:
      - deepstream-line_default
    volumes:
      - ./videos:/root/videos
      - /tmp/.X11-unix:/tmp/.X11-unix
    environment:
      - DISPLAY=$DISPLAY
  consumer:
    build: ./docker
    hostname: consumer
    container_name: consumer
    environment:
      LINE_TOKEN: "XXX"
    networks:
      - deepstream-line_default
networks:
  deepstream-line_default:
    external: true

"XXXXXXXX"Paste the obtained TOKEN in the place. Now you will be notified on your LINE.



The process starts when you execute the following.

```bash
xhost + # allow the result window to be displayed from inside docker
docker-compose build # build the consumer image
docker-compose up -d
```

This assumes an environment with a display, so a slight modification[^2] is required to run it on a server or other headless machine.

Result

When you run docker-compose up, the video recognition process starts as shown below, and notifications are sent to LINE.

result_all.gif

The above is the PC version of LINE, but if you use a smartphone, you will receive a notification like the one below.

Screenshot_20191206-212712_LINE.jpg

In closing

We introduced how to send LINE notifications from high-speed video recognition using the DeepStream SDK. This time I used an ordinary server, but by using the deepstream-l4t container, you should be able to do the same on edge GPU machines such as the Jetson. The dream of building something like an AI camera grows.

Also, this time I did not get as far as using a recognition model of my own choice in the DeepStream-side application, or changing the format of the output data. I would like to cover these if I get the opportunity.

Thank you for reading to the end.

[^1]: A library for accelerating deep learning inference on NVIDIA GPUs. I have written about TensorRT 5.0 before, in articles such as [Speeding up inference with TensorRT 5.0.6 and JETSON NANO](https://qiita.com/104ki/items/17434efa7ce045f767dd).

[^2]: If you want to run on a server without a display, remove `environment` from the `deepstream` service in docker-compose.yml and add the `--no-display` option to `command`.
