OpenCV AR marker library ArUco: Python 3 sample scripts
I put the source code on GitHub: OpenCV_SamplesForMyself
- [OpenCV] OpenCV drawing functions
I used Windows 10 and Python 3.8.6.
For streaming, I used a Raspberry Pi 3 Model B with a camera module.
The Raspberry Pi streams video with MJPG-Streamer.
If Python 3 is available, you can install OpenCV (the contrib build, which includes the aruco module) with the following command.
$ pip install opencv-contrib-python
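To make sure the aruco module actually came with the install, a quick sanity check like the following works (this is just my own check, not one of the repo scripts):

```python
import cv2

print(cv2.__version__)        # version of the installed OpenCV
print(hasattr(cv2, "aruco"))  # True if the contrib aruco module is available
```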
Running the following script outputs an AR marker as ar.png.
You can also output other markers by running opcv_outputARmark02.py from the GitHub repo.
opcv_outputARmark01.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import cv2

aruco = cv2.aruco
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)

def arGenerator():
    fileName = "ar.png"
    # Generate marker ID 0 as a 150x150 pixel image
    generator = aruco.drawMarker(dictionary, 0, 150)
    cv2.imwrite(fileName, generator)
    img = cv2.imread(fileName)  # read back the saved marker

arGenerator()
Script execution command:
$ python opcv_outputARmark01.py
The output marker is ar.png.
Here is an overview of the files stored in the GitHub repo.
File name | Detection target | Drawing | Output |
---|---|---|---|
opcv_outputARmark01.py | - | AR marker | Image |
opcv_outputARmark02.py | - | AR marker | Image |
opcvCapImg01_drawID.py | Image | ID | - |
opcvCapImg02_drawText.py | Image | Text | - |
opcvCapImg03_drawLine.py | Image | Line | - |
opcvCapImg04_drawRectangle.py | Image | Rectangle | - |
opcvCapImg05_drawCircle.py | Image | Circle | - |
opcvCapImg06_drawPolylines.py | Image | Polygon | - |
opcvCapImg07_drawIMG.py | Image | Image | - |
opcvCapImg08_drawAxis.py | Image | 3D axis | - |
opcvCapVideo01_drawID.py | Video | ID | - |
opcvCapVideo02_drawID_outMP4.py | Video | ID | Video |
opcvCapVideo03_drawRectangle_outMP4.py | Video | Rectangle | Video |
opcvCapVideo04_drawText_outMP4.py | Video | Text | Video |
opcvCapVideo05_drawAxis_outMP4.py | Video | 3D axis | Video |
opcvCapVideo06_drawImg_outMP4.py | Video | Image | Video |
opcvCapVideo07_drawVideo_outMP4.py | Video | Video | Video |
(dummy)test-mapping.mp4____.txt | - | - | - |
(dummy)test-video.mp4____.txt | - | - | - |
test-img01.png | - | - | - |
test-img02.png | - | - | - |
test-img03.png | - | - | - |
test-mapping.png | - | - | - |
Column description:

- Detection target
  - The target in which AR markers are detected.
  - Image
    - A PNG image is used.
  - Video
    - Any of the following three sources:
      - An MP4 video file
      - Streaming (Raspberry Pi + camera)
      - A camera device (Windows PC + camera)
- Drawing
  - The kind of information drawn onto the detected AR marker (ID, text, image, video, etc.).
- Output
  - The format in which the result is written to a file (image or video).
- Dummy files
  - Dummy text files are stored in place of the MP4 files.
  - (dummy)test-mapping.mp4____.txt
    - Dummy file for test-mapping.mp4
    - https://youtu.be/S-h031SBLaQ
  - (dummy)test-video.mp4____.txt
    - Dummy file for test-video.mp4
    - https://youtu.be/qlqU_y5hu0k
  - The video data can be downloaded here:
    - https://note.com/agw/n/na2f22f876228
aruco.drawDetectedMarkers(img, corners, ids, (0, 255, 0)) draws, for every detected AR marker, its ID and a rectangle whose vertices are the marker's four corners.
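For reference, a minimal sketch of that flow, assuming the classic cv2.aruco API this article relies on (the opencv-contrib-python releases of this period; newer versions changed the interface), with a hypothetical output file name:

```python
import cv2

aruco = cv2.aruco
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)

img = cv2.imread("test-img01.png")                         # sample image from the repo
corners, ids, rejected = aruco.detectMarkers(img, dictionary)
aruco.drawDetectedMarkers(img, corners, ids, (0, 255, 0))  # green outline + ID per marker
cv2.imwrite("detected.png", img)                           # hypothetical output file name
```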
Draw text with cv2.putText(...). In the sample, the coordinates of the marker's four corners are drawn.
Draw a straight line with cv2.line(...).
Draw a rectangle with cv2.rectangle(...).
Draw a circle with cv2.circle(...).
Draw a polygon with cv2.polylines(...).
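As a rough sketch (my own illustration, not code taken verbatim from the repo) of how these functions can annotate one detected marker, where `marker_corners` is one element of the `corners` returned by aruco.detectMarkers:

```python
import cv2
import numpy as np

def annotate(img, marker_corners):
    pts = marker_corners.reshape(4, 2).astype(np.int32)  # the marker's four corners (x, y)
    # Text: write each corner's coordinates next to it
    for (x, y) in pts.tolist():
        cv2.putText(img, f"({x},{y})", (x, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 1)
    # Line: connect the first two corners
    cv2.line(img, tuple(map(int, pts[0])), tuple(map(int, pts[1])), (255, 0, 0), 2)
    # Rectangle: axis-aligned box around the marker
    x, y, w, h = cv2.boundingRect(pts)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Circle: centered on the marker
    cv2.circle(img, tuple(map(int, pts.mean(axis=0))), 20, (0, 255, 255), 2)
    # Polygon: the marker outline itself
    cv2.polylines(img, [pts.reshape(-1, 1, 2)], True, (255, 0, 255), 2)
    return img
```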
Using test-video.mp4, this detects the markers and draws their IDs. Unlike the other video samples, the result is not output as a video file. https://youtu.be/XVpjjGrzeAw
Similar to [VIDEO-01], this detects the markers and draws their IDs.
https://youtu.be/XVpjjGrzeAw
Result videos for the other video samples:
https://youtu.be/eyuDafmZQ28
https://youtu.be/zTJSGT5dQ_g
https://youtu.be/nyaPeCBZ2hQ
https://youtu.be/1aR9-r7BFjQ
Draws a video file (MP4) onto the detected marker. It seems that the video drawn onto the marker is not included in the output video (?), and the output video was not drawn correctly.
https://youtu.be/jqJlxl5lzkI
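The general technique behind drawing an image or a video frame onto a marker is a perspective warp from the overlay's corners to the marker's corners. Here is a sketch of that idea (my own illustration under that assumption, not the repo's exact code):

```python
import cv2
import numpy as np

def overlay_on_marker(frame, overlay, marker_corners):
    h, w = overlay.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])     # overlay corners (TL, TR, BR, BL)
    dst = marker_corners.reshape(4, 2).astype(np.float32)  # marker corners in the frame
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(overlay, M, (frame.shape[1], frame.shape[0]))
    # Paste the warped overlay into the marker region only
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, dst.astype(np.int32), 255)
    frame[mask > 0] = warped[mask > 0]
    return frame
```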
cv2.VideoCapture(...) also works with camera devices and with MJPG-Streamer.
Use cv2.VideoCapture(0) to read from the camera device, and cv2.VideoCapture("http://{IP Address}:8090/?action=stream") to read from the stream.
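Put together, a minimal capture-and-detect loop looks roughly like this (my own sketch with a placeholder stream address; the repo scripts do more, such as writing the result to MP4):

```python
import cv2

aruco = cv2.aruco
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)  # or cv2.VideoCapture("http://{IP Address}:8090/?action=stream")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    corners, ids, _ = aruco.detectMarkers(frame, dictionary)
    aruco.drawDetectedMarkers(frame, corners, ids, (0, 255, 0))
    cv2.imshow("aruco", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```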
You can then detect and draw markers from the stream, as shown in the figure below. I have confirmed that both the camera device and streaming work with the video sample scripts described here. For example, opcvCapVideo07_drawVideo_outMP4.py can be used with either source by changing the values of the variables targetVideo and outputVideo, excerpted below.
opcvCapVideo07_drawVideo_outMP4.py
# ---Excerpt--- #
#targetVideo = 0                                           # Camera device
#targetVideo = "test-video.mp4"                            # Video file
#targetVideo = "http://{IP Address}:8090/?action=stream"   # MJPG-Streamer
targetVideo = "test-video.mp4"

#mappingVideo = 0                                          # Camera device
mappingVideo = "test-mapping.mp4"                          # Video file
#mappingVideo = "http://{IP Address}:8090/?action=stream"  # MJPG-Streamer

outputVideo = "editV07.mp4"
# ---Excerpt--- #
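For reference, writing the result to outputVideo can be done with cv2.VideoWriter along these lines (a sketch under my own assumptions about the codec and frame size; the repo script may differ):

```python
import cv2

cap = cv2.VideoCapture("test-video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")            # assumed codec for .mp4 output
writer = cv2.VideoWriter("editV07.mp4", fourcc, fps, size)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ...detect markers and draw onto `frame` here...
    writer.write(frame)

writer.release()
cap.release()
```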
Example) Detecting a marker from the stream and drawing an image on it
"In 2021, the first work is the infinity mirror AR." #protoout #opencv pic.twitter.com/F1kZA6JtyT
— suo-takefumi (@zsipparu) January 1, 2021
Example) Detecting a marker from the stream and drawing a video (MP4) on it
"Video version of the infinity mirror AR." Cows are wandering around. #丑 #protoout #opencv #AR pic.twitter.com/FPn5ZWqvO1
— suo-takefumi (@zsipparu) January 1, 2021
For the installation of MJPG-Streamer, I referred to this article: https://qiita.com/suo-takefumi/items/2ae5527869dc13d038a9
I investigated how to use OpenCV and summarized what I found here. It seems OpenGL can handle 3D objects as well, so I will look into that next.