OpenCV (Open Source Computer Vision Library) is a BSD-licensed library for image and video processing. It provides many algorithms for image filtering, template matching, object recognition, video analysis, machine learning, and more.
Example of motion tracking using OpenCV (OpenCV Google Summer of Code 2015) https://www.youtube.com/watch?v=OUbUFn71S4s
For installation and basic usage:
- Install OpenCV 3 (core + contrib) in a Python 3 environment
- Differences between OpenCV 2 and OpenCV 3, with a simple operation check

For still-image filtering:
- Try edge detection with OpenCV
- Apply various filters with OpenCV (gradient, high-pass, Laplacian, Gaussian)
- Extract feature points with OpenCV (AgastFeature, FAST, GFTT, MSER, AKAZE, BRISK, KAZE, ORB, SimpleBlob)

For processing video files:
- Try converting videos in real time with OpenCV
- Try converting webcam / camcorder video in real time with OpenCV
This time, I will try to draw optical flow in real time using OpenCV.
Optical flow represents the motion of feature points between frames of a video as vectors (positional differences). Since the frame rate is usually constant, it can also be read as the velocity of the feature points. There are two main approaches to computing it: template (block) matching and feature-point tracking. This time, I will draw optical flow using feature points: detect feature points in the first frame, then track them frame by frame with the Lucas-Kanade method and draw their trajectories.
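To make the idea behind the feature-point approach concrete, here is a minimal pure-NumPy sketch of the least-squares step at the heart of the Lucas-Kanade method. This is an illustration of the math only, not OpenCV's actual implementation; the synthetic Gaussian blob, window, and shift amounts are chosen just for the demonstration.

```python
import numpy as np

def lucas_kanade_step(prev, curr, window):
    """Estimate the motion (vx, vy) inside a window by solving the
    Lucas-Kanade normal equations Ix*vx + Iy*vy = -It in least squares."""
    # Spatial gradients of the previous frame (np.gradient: axis 0 = y, axis 1 = x)
    Iy, Ix = np.gradient(prev.astype(float))
    # Temporal gradient between the two frames
    It = curr.astype(float) - prev.astype(float)
    ys, xs = window  # window given as a pair of slices
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# Synthetic test: a smooth Gaussian blob shifted by a small subpixel amount
y, x = np.mgrid[0:64, 0:64].astype(float)
def blob(cx, cy):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)

prev = blob(32, 32)
curr = blob(32.4, 31.7)  # the blob moved +0.4 in x and -0.3 in y
vx, vy = lucas_kanade_step(prev, curr, (slice(22, 42), slice(22, 42)))
```

For this small subpixel motion the recovered (vx, vy) comes out close to (0.4, -0.3). OpenCV's cv2.calcOpticalFlowPyrLK does essentially this per feature point, with an image pyramid and iterative refinement on top so that larger motions can also be handled.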
Operating environment
Video data
I used the sample video that comes with OpenCV: OpenCV\opencv\sources\samples\data\768x576.avi
LucasKanade.py
import numpy as np
import cv2

cap = cv2.VideoCapture('768x576.avi')

# Shi-Tomasi corner detection parameters
feature_params = dict(maxCorners=100,
                      qualityLevel=0.3,
                      minDistance=7,
                      blockSize=7)

# Lucas-Kanade method parameters
lk_params = dict(winSize=(15, 15),
                 maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

# Randomly generate 100 colors (a random ndarray with 100 rows and 3 columns, values 0 to 255)
color = np.random.randint(0, 255, (100, 3))

# Process the first frame
end_flag, frame = cap.read()
gray_prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
feature_prev = cv2.goodFeaturesToTrack(gray_prev, mask=None, **feature_params)
mask = np.zeros_like(frame)

while end_flag:
    # Convert to grayscale
    gray_next = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Optical flow detection
    feature_next, status, err = cv2.calcOpticalFlowPyrLK(gray_prev, gray_next, feature_prev, None, **lk_params)

    # Keep only the feature points whose flow was found (status 0: lost, 1: tracked)
    good_prev = feature_prev[status == 1]
    good_next = feature_next[status == 1]

    # Draw the optical flow
    for i, (next_point, prev_point) in enumerate(zip(good_next, good_prev)):
        prev_x, prev_y = prev_point.ravel()
        next_x, next_y = next_point.ravel()
        # cv2.line / cv2.circle require integer pixel coordinates
        mask = cv2.line(mask, (int(next_x), int(next_y)), (int(prev_x), int(prev_y)), color[i].tolist(), 2)
        frame = cv2.circle(frame, (int(next_x), int(next_y)), 5, color[i].tolist(), -1)
    img = cv2.add(frame, mask)

    # Show in a window
    cv2.imshow('window', img)

    # Press the ESC key to quit
    if cv2.waitKey(30) & 0xff == 27:
        break

    # Prepare the next frame and points
    gray_prev = gray_next.copy()
    feature_prev = good_next.reshape(-1, 1, 2)
    end_flag, frame = cap.read()

# Clean up
cv2.destroyAllWindows()
cap.release()
With OpenCV, feature point extraction and optical flow calculation each take just a single method call. What a convenient world (^^;)
When executed, the optical flow is drawn in real time.