[PYTHON] Track objects in your video with OpenCV Tracker

When you want to do something based on markers or other objects that appear in a video, the usual approach is to detect and locate the object in each frame with an object detection / marker detection method. If object tracking is used in addition, however, it becomes possible to interpolate the positions of markers and objects in frames where object detection or marker detection has failed.

Let's try tracking objects in the video using OpenCV Tracker.

Since no environment setup is required, we will experiment on Google Colaboratory.

Mount Google Drive

I want to load the video to be analyzed from Google Drive, so mount Google Drive.

from google.colab import drive
drive.mount("/content/gdrive")

Specify the FPS

fps = 5

Setting a higher FPS generally improves tracking accuracy but also increases processing time, so start by analyzing at 5 FPS. Keep it in a variable so it can be changed later.

Extract frames from videos (convert to still images)

!ffmpeg -i "/content/gdrive/My Drive/TrackerExp/src.mp4" -vf fps=$fps "/content/gdrive/My Drive/TrackerExp/src/%06d.jpg"

Use ffmpeg to extract still images from the video. The extraction destination does not have to be on Google Drive, but saving there is convenient when checking the process later. If you do not need to refer to the frames afterwards, it is better to use a temporary directory instead of a directory on Google Drive, so that Google Drive's synchronization queue does not get clogged; a sketch of that variant follows.
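A minimal sketch of that alternative, assuming the same source path (the temporary directory is generated here and is not part of the original article):

import tempfile

# Assumed alternative: extract frames into a local temporary directory
# instead of Google Drive, so Drive's sync queue is not filled with images.
tmp_dir = tempfile.mkdtemp(prefix="frames_")
!ffmpeg -i "/content/gdrive/My Drive/TrackerExp/src.mp4" -vf fps=$fps "$tmp_dir/%06d.jpg"

The remaining steps stay the same; just point the glob pattern below at tmp_dir instead of the Drive folder.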

Get a list of extracted images

import glob
import os
import cv2

files = glob.glob("/content/gdrive/My Drive/TrackerExp/src/*.jpg")
files.sort()  # sort the extracted frames into frame-number order
print(len(files))

Get the list of images from the extraction destination folder. There is no guarantee that the acquired images will be sorted by frame number, so sort them.

Create a directory to save the analysis results

dst_dir = "/content/gdrive/My Drive/TrackerExp/dst"
if not os.path.exists(dst_dir):
  os.makedirs(dst_dir)

The analysis (tracking) result will be drawn onto each original frame and saved as an image, so create a destination directory for it.

Specify the range (time) you want to track

start_sec = 5.0  # tracking starts at 5 seconds
end_sec = 15.0  # tracking ends at 15 seconds

start_frame = int(fps * start_sec)
end_frame = int(fps * end_sec)

Specify the range (time) of the video you want to track. The actual frame numbers (from which frame to which frame) depend on the FPS, so they are calculated by multiplying the seconds by the FPS and converting to int so they can be used as list indices.

Tracker generation

tracker = cv2.TrackerMedianFlow_create()

Generate a tracker. OpenCV provides several tracking algorithms besides MedianFlow, so you can switch depending on the characteristics of the video to be tracked, e.g. cv2.TrackerKCF_create().

Posts such as this one give an easy-to-understand overview of the strengths and weaknesses of each tracking algorithm.
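As a rough sketch (not from the original article), the tracker can also be made switchable by name. Which constructors are available depends on your OpenCV build: some require opencv-contrib-python, and newer versions expose them under cv2.legacy (e.g. cv2.legacy.TrackerMedianFlow_create).

import cv2

# Sketch: pick a tracking algorithm by name. Adjust the factory table to
# whatever constructors your OpenCV build actually provides.
def create_tracker(name="MedianFlow"):
  factories = {
    "MedianFlow": cv2.TrackerMedianFlow_create,
    "KCF": cv2.TrackerKCF_create,
    "CSRT": cv2.TrackerCSRT_create,
  }
  return factories[name]()

tracker = create_tracker("MedianFlow")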

Set the initial tracking position

start_rect = (100, 200, 30, 30) # x: 100, y: 200, width: 30, height: 30
start_img = cv2.imread(files[start_frame])
tracker.init(start_img, start_rect)

Load the image of the first frame and specify the rectangle of the object you want to track in that frame.
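If you are working locally with a display (this does not apply on Colab, which has no GUI), the rectangle can also be picked interactively with cv2.selectROI; a minimal sketch:

# Sketch for a local environment with a display: drag a rectangle on the
# first frame, then press ENTER or SPACE to confirm (ESC to cancel).
start_img = cv2.imread(files[start_frame])
start_rect = cv2.selectROI("select object", start_img)  # returns (x, y, w, h)
cv2.destroyWindow("select object")
tracker.init(start_img, start_rect)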

Perform tracking

for i in range(start_frame, end_frame):
  frame = cv2.imread(files[i])
  located, bounds = tracker.update(frame)
  if located:
    x = bounds[0]
    y = bounds[1]
    w = bounds[2]
    h = bounds[3]
    cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (255, 255, 0), 2)
  cv2.imwrite(os.path.join(dst_dir, os.path.basename(files[i])), frame)

After setting the initial position, tracking is performed frame by frame: read each frame image with cv2.imread and call tracker.update on it.

located, bounds = tracker.update(frame)

The first return value of update indicates whether the target was found, as True / False. If it was found, the target's position (bounding rectangle) is set in the second return value in the form (x, y, width, height).

x = bounds[0]
y = bounds[1]
w = bounds[2]
h = bounds[3]
cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (255, 255, 0), 2)

If the target is found, draw a frame on the original image with cv2.rectangle; the coordinates returned by update are floats, so they are converted to int first. Note that in OpenCV's BGR channel order, (255, 255, 0) is cyan rather than yellow. The last argument, 2, is the line thickness.

cv2.imwrite(os.path.join(dst_dir, os.path.basename(files[i])), frame)

Finally, save the result in the output directory created above.
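If you also want the result as a single video rather than individual frames, the saved images can be re-encoded with ffmpeg at the same FPS. This is only a sketch: the output path is an assumption, and -pattern_type glob requires an ffmpeg build with glob support (usually available on Linux builds such as Colab's).

!ffmpeg -framerate $fps -pattern_type glob -i "/content/gdrive/My Drive/TrackerExp/dst/*.jpg" -c:v libx264 -pix_fmt yuv420p "/content/gdrive/My Drive/TrackerExp/dst.mp4"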
