[PYTHON] [Remote development] Video analysis application (Practice 3)

Last time, in Exercise 2, we practiced the components of a "control" application. This time, we will explain the components of a video analysis application, taking "reading barcodes and QR codes" as the theme.

Create a new app

Create a new app as you did in the previous exercises.

(Image: 1 アプリの新規作成.png)

Add video source

From the menu at the bottom left of the screen on the "Configuration" tab, select "Media" to add an element to use as the video input. For "Category" you can choose Standard (USB), URL (network file), Browser, Desktop, or Custom, and for "Type" you can choose Video, Audio, or Video and audio. In this exercise we will use only the video (no audio) from the camera built into the laptop, or from a camera connected via USB, so configure it as shown in the screens below.

(Image: 2 メディアの追加ドロップダウン.png)
(Image: 3 メディアの追加ダイアログ.png)

The "Settings" tab of this element offers a variety of options. In this exercise, let's raise the "Maximum frame rate" from the default of 2 to 5.

(Image: 4 最大フレームレート.png)

Add video analysis

Now that you've added the video source, it's time to add the video analysis component. Select "New" from the menu at the bottom left of the screen and add it as follows.

(Image: 5 構成要素の新規作成.png)
(Image: 6 ビデオ解析のダイアログ.png)

On the "Settings" tab of the video analysis element, configure it as follows:

  Input image: the video source added in the previous step
  Limit analysis cycle: Off
  Limit delivery cycle: Off
  Save analysis results: On

(Image: 7 ビデオ解析の設定.png)

Write code for video analysis

Select the "Code" tab inside the "Configuration" tab. In this exercise, a module called "pyzbar" is used to read barcodes and QR codes, so first write the following in the installation script "Install.ps1" to install the module:

        python -m pip install pyzbar

(Image: 8 Install.ps1.png)

Let's check that the installation script works properly. After pressing the "Save" button at the top of the screen, select "Run installation script" from the menu at the bottom left of the screen. The installation script execution dialog shown below is displayed, and if "Execution completed" appears at the bottom left of the dialog, everything is fine; click the "Close" button.

(Image: 9 インストールスクリプトの実行結果.png)

The Python code for video analysis begins with the following imports:

python


from pyzbar.pyzbar import decode
import cv2
import numpy as np

These lines import the module installed by the installation script (pyzbar) as well as the modules used to draw a line around each detected barcode (OpenCV and NumPy). In a video analysis component, the platform passes each image to be analyzed to the Python side through the function new_video_frame(self, video_frame, time_sec); the analysis is performed inside this function, and the result is notified back to the platform. The input image is stored in the argument video_frame as a numpy.ndarray, and the elapsed time (in seconds) since the application started is stored in time_sec as a float. Considering the case where multiple barcodes appear in the image, implement this function as follows.

python


    def new_video_frame(self, video_frame, time_sec):
        # Detect all barcodes and QR codes contained in the frame.
        data = decode(video_frame)

        for c in data:
            # Notify the platform of the decoded value and the barcode type.
            self._sys.set_value({'value': [c.data.decode('utf-8'), c.type]})

            # Draw the outline of the detected code on the frame.
            points = c.polygon
            pts = np.array(points, np.int32)
            pts = pts.reshape((-1, 1, 2))
            cv2.polylines(video_frame, [pts], True, (0, 255, 0), 2)

        # Return the annotated frame to the platform.
        self._sys.set_video_frame(video_frame, time_sec)

As shown in Exercise 1, the function set_value() notifies the platform of new data; here, for each detected barcode, the app sends a list containing the barcode value and the barcode type. The function set_video_frame() notifies the platform of the image resulting from the video analysis.

(Image: 10 ビデオ解析のソースコード.png)
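If you want to check the barcode-reading part on its own before running it on the platform, here is a minimal standalone sketch (this is not Remotte component code; it assumes a webcam is available at index 0 and that OpenCV can open it). It decodes whatever codes appear in a single captured frame and draws their outlines in the same way as the component above.

python

    from pyzbar.pyzbar import decode
    import cv2
    import numpy as np

    # Assumption: the default camera is at index 0.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()

    if ok:
        for c in decode(frame):
            # Each result carries the decoded bytes, the code type, and the outline polygon.
            print(c.data.decode('utf-8'), c.type)
            pts = np.array(c.polygon, np.int32).reshape((-1, 1, 2))
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
        # Save the annotated frame so the result can be inspected.
        cv2.imwrite('decoded.png', frame)

Running this outside the platform is an easy way to confirm that pyzbar is installed correctly and that your codes are readable by the camera.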

Edit usage page

Set the "Display Items" tab in the "Usage Page" tab as follows.

(Image: 11 表示項目.png)

On the "Layout" tab there are two display items; as an example, edit them as follows. First, "Frame image after analysis", which is an item of the video analysis component, can be left at its default display format of "Image"; just adjust the size and position of its rectangle. Next, for "History", select "Array history table" as the display format, and edit the title, header, and so on in the option settings on the right side of the screen.

(Image: 12 レイアウト.png)

Run!

Press the "Save" button at the top of the screen and then "Start" the app. When the camera reads the barcode or QR code, the contents are displayed on the "Layout" screen. If the camera is out of focus and cannot be read properly, you can adjust it using the application "Camera" that is installed as standard on Windows 10.

  1. Select "Camera" from the Windows menu. 13 カメラを選択.png
  2. Select "Record Video" at the right edge of the screen, and then press the setting button on the upper left to turn on "Pro Mode". 14 プロモード.png
  3. Press the "Manual Focus" button in the center left of the screen and set the knob to "Infinity" at the top. 15 無限遠.png
  4. Close the "Camera" app and try again on the Limotte management tool.

Video input attributes

By the way, how do you get the pixel dimensions and frame rate of the video to be analyzed? When programming components in Remotte, this information is passed in the argument opt of the function __init__(). For example, if you write

        print(self._opt)

at the beginning of the function new_video_frame() and run the app, the following is displayed in the "Console" window of the "Code" tab in the "Configuration" tab.

{'__video__': {'width': 640, 'height': 480, 'frame_rate': 5.0, 'scale': 100.0, 'auto_scale': False, 'scaled_width': 640, 'scaled_height': 480, 'rotation': 0}, '__audio__': None, '__guid__': None, '__remotte__': None}

You can see that the value of the key '__video__' contains the attributes of the video source.

Attention! As a platform specification, for video and audio analysis components, output from print() executed inside the function __init__() is not shown on the console.
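Since print() inside __init__() does not appear on the console, one way to inspect these attributes is to read self._opt inside new_video_frame(). Below is a minimal sketch under that assumption; the keys match the dictionary shown above.

python

    def new_video_frame(self, video_frame, time_sec):
        # Video source attributes passed to __init__() via the opt argument.
        video_attr = self._opt['__video__']
        width = video_attr['width']            # e.g. 640
        height = video_attr['height']          # e.g. 480
        frame_rate = video_attr['frame_rate']  # e.g. 5.0
        print(width, height, frame_rate)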

Summary

In this exercise, we experienced the components of a "video analysis" application: the function new_video_frame() receives an image from the platform, analyzes it, and the functions set_value() and [set_video_frame()](https://www.remotte.jp/ja/user_guide/program/functions) notify the platform of the analysis results. In the next exercise, we will deal with a voice analysis application and create a "level meter" for the audio captured from the microphone.
