When using scikit-image, scikit-learn, or OpenCV-Python, processing time is often a concern. So, in this article, I'll show you an introduction to measuring processing time and profiling.
If import cv2 succeeds, the OpenCV binding is already available, and you can use cv2.getTickCount() and cv2.getTickFrequency() to measure processing time (see the article "Measuring performance with OpenCV" for an example). These functions work the same on both Windows and Linux, so you can measure processing time without compromising script portability.
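If you want a timing method that does not depend on OpenCV at all, the standard library's time.perf_counter() (Python 3.3+) is an equally portable alternative. A minimal sketch (the measure() helper and the sum() workload are just illustrations, not part of the original article):

```python
import time

def measure(func, *args):
    """Return (result, elapsed seconds) for a single call, using the
    standard library's high-resolution clock (no OpenCV required)."""
    t1 = time.perf_counter()
    result = func(*args)
    t2 = time.perf_counter()
    return result, t2 - t1

# Example: time a dummy workload in place of an image-processing call
result, dt = measure(sum, range(1000000))
print("elapsed: %f sec" % dt)
```

The pattern is the same as with cv2.getTickCount(): take a timestamp before and after, and subtract.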
Based on the examples that have been used repeatedly so far, let's measure how the processing time changes when the input image size is changed, using the [Normalized Cut](http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_ncut.html#example-segmentation-plot-ncut-py) example.
.py:ex_getTickCount.py
from skimage import data, segmentation, color
from matplotlib import pyplot as plt
import cv2
img0 = data.coffee()
x=[]
y=[]
a=1.2
for i in range(10):
    r = a ** i
    newSize = (int(img0.shape[1] / r), int(img0.shape[0] / r))
    img = cv2.resize(img0, newSize)
    t1 = cv2.getTickCount()
    labels1 = segmentation.slic(img, compactness=30, n_segments=400)
    out1 = color.label2rgb(labels1, img, kind='avg')
    t2 = cv2.getTickCount()
    dt = (t2 - t1) / cv2.getTickFrequency()
    print(newSize, dt)
    x.append(newSize[0])
    y.append(dt)
plt.figure(1)
plt.plot(x,y, "*-")
plt.xlabel("width")
plt.ylabel("time [sec]")
plt.grid(True)
plt.show()
Figure: Change in execution time depending on image size
If you look at the graph, you can see that when the width of the image is doubled, the processing time is roughly quadrupled; conversely, halving the width reduces the processing time to about 1/4. Achieving the same speedup while keeping the image at its original width would mean cutting the processing time by 75%. In general, that kind of speedup is not easy, so for processing where the image may be reduced, shrinking the input image is a standard way to shorten the processing time.
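The quadratic relationship follows from the pixel count: with the aspect ratio fixed, an image of width w has on the order of w² pixels, so per-pixel processing scales with the square of the width. A quick sanity check of the arithmetic (the scale factor 1.2 matches the reduction ratio used in the script above; expected_speedup() is just an illustration):

```python
def expected_speedup(scale):
    """Approximate speedup when every side of the image is divided by
    `scale`, assuming processing time is proportional to pixel count."""
    return scale ** 2

print(expected_speedup(2))    # halving the width -> roughly 4x faster
print(expected_speedup(1.2))  # one reduction step in the script -> about 1.44x
```

This is only an approximation; algorithms with per-segment or per-region overhead will not scale exactly quadratically.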
First, in "Tips for Python beginners to use the Scikit-image example for themselves 2: Process multiple files", I introduced a script that runs Normalized Cut on each frame. This time, based on that script, I measured the processing time of each frame and graphed the result.
.py:ex_plot_ncut2_time.py
from skimage import data, io, segmentation, color
from skimage.future import graph
import cv2
def plotNcut(img):
    labels1 = segmentation.slic(img, compactness=30, n_segments=200)
    out1 = color.label2rgb(labels1, img, kind='avg')
    g = graph.rag_mean_color(img, labels1, mode='similarity')
    labels2 = graph.cut_normalized(labels1, g)
    out2 = color.label2rgb(labels2, img, kind='avg')
    return out1, out2

name = "768x576.avi"
cap = cv2.VideoCapture(name)
i = -1
out = open("time_data.txt", "wt")
while cap.isOpened():
    i += 1
    ret, img = cap.read()
    if not ret:
        break
    if i > 100:
        break
    h, w = img.shape[:2]
    img = cv2.resize(img, (w // 2, h // 2))  # integer division: cv2.resize needs int sizes
    t1 = cv2.getTickCount()
    out1, out2 = plotNcut(img)
    t2 = cv2.getTickCount()
    dt = (t2 - t1) / cv2.getTickFrequency()
    out.write("%f\n" % dt)
    cv2.imshow("img", out1)
    cv2.waitKey(100)
    cv2.imwrite("org_%04d.png" % i, img)
    cv2.imwrite("img_%04d.png" % i, out1)
    print(i)
out.close()
out.close()
.py:view_data.py
import numpy as np
import pylab
name="time_data.txt"
data=np.loadtxt(name)
print(data)
pylab.figure(1)
pylab.subplot(1,2,1)
pylab.plot(data)
pylab.ylabel("time[s]")
pylab.grid(True)
pylab.subplot(1,2,2)
pylab.hist(data)
pylab.grid(True)
pylab.xlim([3, 4])
pylab.xlabel("time[s]")
pylab.show()
By displaying the time series and the histogram in this way, it becomes easier to evaluate how much the processing time varies from scene to scene, compared with measuring it only once. Trying this out for yourself is one way to deepen your understanding of the method.
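Beyond the plots, a few summary statistics quantify the variation directly. A sketch (here I generate dummy timing data so the snippet runs stand-alone; in practice you would load the measurements with np.loadtxt("time_data.txt") as in view_data.py):

```python
import numpy as np

# Dummy per-frame timings for illustration only; replace with
# np.loadtxt("time_data.txt") to analyze real measurements.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.5, scale=0.1, size=100)  # seconds per frame

print("mean   : %f sec" % data.mean())
print("std    : %f sec" % data.std())
print("min/max: %f / %f sec" % (data.min(), data.max()))
# A high percentile is often more informative than the mean when you
# must guarantee a frame rate under worst-case scenes.
print("95%%ile : %f sec" % np.percentile(data, 95))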
Python has a profiler in the standard library, so you can profile Python scripts regardless of OS or CPU type, whether on Windows or Linux. With C++, the profiling method differs depending on the OS, the compiler, and the tool; even tools with the same name may differ drastically in usage between versions, which can be inconvenient. In that respect, the Python profiler can be run in a simple way, as shown in the standard library documentation, "26.4. The Python Profilers".
The main entry point for profilers is the global function profile.run () (or cProfile.run ()).
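A minimal sketch of that entry point, profiling a toy function and sorting the report by cumulative time (slow_part is of course a hypothetical stand-in for whatever dominates your script):

```python
import cProfile
import pstats
import io

def slow_part(n):
    # Hypothetical workload standing in for e.g. an image-processing call
    return sum(i * i for i in range(n))

def main():
    for _ in range(5):
        slow_part(100000)

# Profile main() and print the report sorted by cumulative time
pr = cProfile.Profile()
pr.enable()
main()
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Sorting by "cumulative" surfaces the functions whose call trees dominate the run, which is usually the first thing to check.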
For usage examples, see blog articles such as "Python code profiling" and try imitating them.
In the output of profile.run(), I check which part accounts for the largest share of the processing time, and whether the number of function calls is as expected or whether some function is called more often than expected.
You can use the same profiler to compare the running time and its breakdown of the same Python script between the original environment (for example, Windows) and the environment you are porting to.
The processing time of OpenCV-Python functions (for example, face detection and person detection) varies significantly depending on how the cv2.pyd in use was built and on whether the CPU is multi-core. That should become apparent when you test with the Python profiler.
Based on the example of processing multiple files and multiple frames in "Tips for Python beginners to use the Scikit-image example for themselves 2: Process multiple files", let's measure the execution time.
matplotlib's hist() function makes it easy to create histograms. Let's actually measure how much the processing time varies and create a histogram.
Why detectMultiScale () is slow on Raspberry Pi
Micha Gorelick, Ian Ozsvald, translated by Aizo Aikawa, "High Performance Python"
"About the processing speed of SVM (SVC) of scikit-learn" is an example of a practical article about time measurement.
Note: When I realized that the part of the image processing that took too long could work on a reduced image, I tried shrinking the image, and the processing time dropped dramatically. Python has a simple profiler, so you can quickly identify which process is taking a long time; once it is identified, speed up only that part. Among the many lines of a script, adding image reduction to just one of them can increase the processing speed by a factor of 10 or more.