In this article, we'll look at how to use tracking.js for image and video tracking, setting up computer vision capabilities such as object and color detection.
Tracking.js is a JavaScript framework designed to let developers easily embed computer vision and object detection capabilities in web applications without implementing complex algorithms themselves. It is relatively simple and easy to use, and is similar to jQuery in many ways. It provides trackers, utility functions for various computational operations, and web components to make your life easier.
A tracker in tracking.js is a JS object that detects and tracks elements of images and videos, including faces, colors, and objects. Tracking.js also lets you create custom trackers to detect whatever you like. Let's take a look at a specific tracker.
Color Tracker

The color tracker can be used to track a particular set of colors in an image or video. By default, it supports magenta, cyan, and yellow, but you can add other custom colors. Let's take a look at it in action.
This is a snippet of the code available on GitHub.
Here is the code description.
- Line 6: includes tracking-min.js.
- Line 14: defines a video element.
- Line 21: creates an instance of ColorTracker:
var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);
- Line 23: defines what happens when a track event occurs. While the video is running, tracking.js analyzes it and fires events. The event object contains a data attribute, which we test to see whether its length differs from 0. If it does, there is data, and that data describes each color found in the video or image: where it was found (x and y coordinates), the width and height of the area containing it, and a color label. This is printed on line 29.
- Line 34: starts tracking the video element located in the DOM.

Now let's see how to add a custom color, using the registerColor function with a color label and a callback function as parameters.
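Before moving on, the event handling just described can be sketched end to end. This is a minimal sketch, not the GitHub snippet itself; the `#myVideo` selector is an assumption:

```javascript
// Format one detection rectangle: position (x, y), size, and color label.
function describeRect(rect) {
  return rect.color + ' at (' + rect.x + ', ' + rect.y + ') ' +
         rect.width + 'x' + rect.height;
}

// Browser wiring: needs tracking-min.js loaded and a <video id="myVideo"> element.
if (typeof tracking !== 'undefined') {
  var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);
  colors.on('track', function(event) {
    if (event.data.length !== 0) {        // data is empty when nothing was found
      event.data.forEach(function(rect) {
        console.log(describeRect(rect));
      });
    }
  });
  tracking.track('#myVideo', colors);     // start tracking the DOM video element
}
```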
Here is an excerpt of the code published on GitHub.
On lines 27 and 34, you can see that we used this function to register two new colors. Tracking.js uses RGB representation for colors; by default only magenta, cyan, and yellow are registered in the library, but if you want to add more colors, this function lets you tell the library how to detect them.
In our case, we added green and black using their RGB representations, (0, 255, 0) and (0, 0, 0). The second parameter is a callback that contains the logic to detect the color.
It's very simple: tracking.js passes the color being examined (in RGB format) to this callback. For example, for absolute green the RGB code is (0, 255, 0), so we verify whether the received values correspond to it. The callback just returns true or false, and tracking.js then knows whether that area should be considered.
tracking.ColorTracker.registerColor('green', function(r, g, b) {
  // Absolute green is (0, 255, 0).
  if (r === 0 && g === 255 && b === 0) {
    return true;
  }
  return false;
});
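The same pattern covers black, whose RGB representation is (0, 0, 0). A minimal sketch follows; the exact-match predicate mirrors the green example, though a real detector might tolerate near-black values:

```javascript
// Predicate for 'black': true only for the exact RGB triple (0, 0, 0).
function isBlack(r, g, b) {
  return r === 0 && g === 0 && b === 0;
}

// Register it with tracking.js (browser only; `tracking` is the library global).
if (typeof tracking !== 'undefined') {
  tracking.ColorTracker.registerColor('black', isBlack);
}
```

Remember to add 'black' to the tracker's color list as well.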
The last thing to do is add a color label to the color list when you initialize the tracker.
var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow','green']);
Object Tracker

This part is about tracking whatever you want in a series of images or a video. To do this, you provide trained data as input describing what the tracker should look for. Fortunately, tracking.js comes with three datasets: eyes, mouth, and face. Let's see how these datasets work.
Let's implement an example of object detection. The code is here.
Here we include, at the top of the file, three datasets trained to detect faces, eyes, and mouths. By default, these are the only ones available in this library. Follow this [link](https://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html) to see how to create a custom dataset to detect new objects.
- There is a `plot()` function, whose role is to draw a rectangle around the detected object.
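Since the code is linked rather than reproduced here, the core of such a snippet might look like the sketch below; the element id and the `plot()` signature are assumptions, and the tuning values follow the library's examples:

```javascript
// Keep only detections with a usable area.
function hasArea(rect) {
  return rect.width > 0 && rect.height > 0;
}

// Browser wiring: assumes tracking-min.js plus the face training data are
// loaded, and that an <img id="img"> element and a plot() helper exist.
if (typeof tracking !== 'undefined') {
  var tracker = new tracking.ObjectTracker(['face']); // also: 'eye', 'mouth'
  tracker.setInitialScale(4);
  tracker.setStepSize(2);
  tracker.setEdgesDensity(0.1);
  tracker.on('track', function(event) {
    event.data.filter(hasArea).forEach(function(rect) {
      plot(rect.x, rect.y, rect.width, rect.height); // draw a rectangle
    });
  });
  tracking.track('#img', tracker);
}
```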
The result is as follows.
Utilities

Creating a program that analyzes images on a computer is very difficult in many ways. For example, you have to think about how to open the user's camera, handle the available pixels, detect corners in an image, and perform other heavy computations. The utilities in tracking.js are lightweight, easy-to-use APIs for web developers. Some of the available utilities are: feature detection, convolution, grayscale, image blur, integral image, Sobel, and Viola-Jones.
Feature detection works by finding the corners of the objects in an image. Its API looks like this:
var corners = tracking.Fast.findCorners(pixels, width, height);
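In practice the pixels come from a canvas, and FAST operates on grayscale data. A minimal sketch, assuming a `<canvas id="canvas">` element and that the corners come back as a flat `[x0, y0, x1, y1, ...]` coordinate array:

```javascript
// Group a flat [x0, y0, x1, y1, ...] coordinate array into {x, y} points.
function toPoints(flat) {
  var points = [];
  for (var i = 0; i < flat.length; i += 2) {
    points.push({x: flat[i], y: flat[i + 1]});
  }
  return points;
}

// Browser wiring: assumes a <canvas id="canvas"> with the image already drawn.
if (typeof tracking !== 'undefined') {
  var canvas = document.getElementById('canvas');
  var ctx = canvas.getContext('2d');
  var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // FAST works on grayscale pixels, so convert first.
  var gray = tracking.Image.grayscale(imageData.data, canvas.width, canvas.height);
  var corners = tracking.Fast.findCorners(gray, canvas.width, canvas.height);
  console.log(toPoints(corners));
}
```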
Next is the convolution filter, which is useful for edge detection, blurring, sharpening, embossing, and more. There are three ways to convolve an image: horizontal, vertical, and separable convolution.
tracking.Image.horizontalConvolve(pixels, width, height, weightsVector, opaque);
tracking.Image.verticalConvolve(pixels, width, height, weightsVector, opaque);
tracking.Image.separableConvolve(pixels, width, height, horizWeights, vertWeights, opaque);
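The `weightsVector` parameters are 1-D kernels. As an illustration (the kernel values here are an assumption, not from the original article), a normalized 3-tap binomial kernel is the separable building block of a Gaussian-style blur:

```javascript
// Build a normalized 3-tap binomial kernel.
function binomialKernel() {
  var raw = [1, 2, 1];
  var sum = raw.reduce(function(a, b) { return a + b; }, 0); // 4
  return raw.map(function(w) { return w / sum; });           // [0.25, 0.5, 0.25]
}

var kernel = binomialKernel();
// In the browser, blur an image by convolving rows and columns with the same
// kernel (pixels, width, and height come from a canvas getImageData call):
//   tracking.Image.separableConvolve(pixels, width, height, kernel, kernel, true);
```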
Gray Scale

The grayscale utility converts an image's pixels to shades of gray based on their brightness values.
tracking.Image.grayscale(pixels, width, height, fillRGBA);
Tracking.js implements a blur algorithm called Gaussian blur. This particular blur effect is primarily used in multimedia software to reduce image noise and unwanted detail. The syntax for blurring an image with a single line of code is:
tracking.Image.blur(pixels, width, height, diameter);
Integral Image

This feature gives developers the sum of values within a rectangular subset of a grid, a structure also called a summed-area table. In the field of image processing it is known as an integral image. To calculate it with tracking.js:
tracking.Image.computeIntegralImage(
pixels, width, height,
opt_integralImage,opt_integralImageSquare,opt_tiltedIntegralImage,
opt_integralImageSobel
);
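To make the idea concrete, here is a tiny pure-JavaScript sketch of a summed-area table. It mirrors the structure tracking.js computes but is not the library's implementation:

```javascript
// sat[y][x] holds the sum of all grid values in the rectangle
// from (0, 0) to (x, y) inclusive.
function summedAreaTable(grid) {
  var h = grid.length, w = grid[0].length;
  var sat = [];
  for (var y = 0; y < h; y++) {
    sat.push([]);
    for (var x = 0; x < w; x++) {
      sat[y][x] = grid[y][x]
        + (x > 0 ? sat[y][x - 1] : 0)           // everything to the left
        + (y > 0 ? sat[y - 1][x] : 0)           // everything above
        - (x > 0 && y > 0 ? sat[y - 1][x - 1] : 0); // counted twice, remove once
    }
  }
  return sat;
}
```

Once the table exists, any rectangular sum needs only four lookups, which is what makes Viola-Jones-style detectors fast.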
Sobel

This feature calculates the vertical and horizontal gradients of an image and combines them to find the edges. The Sobel filter is implemented by first grayscaling the image, then calculating the horizontal and vertical gradients, and finally combining the gradient images to create the final image.
The API is as follows.
tracking.Image.sobel(pixels, width, height);
Viola-Jones
The Viola-Jones object detection framework was the first to provide competitive object detection rates in real time. This technique is used inside the tracking.ObjectTracker implementation.
To use Viola-Jones to detect objects in image pixels using Tracking.js:
tracking.ViolaJones.detect(pixels, width, height, initialScale, scaleFactor, stepSize, edgesDensity, classifier);
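As a sketch of how this might be wired up in the browser (the parameter values below are illustrative rather than tuned, the canvas id is an assumption, and the classifier assumes the face training data has been loaded):

```javascript
// Pick the largest detection, e.g. the most prominent face.
function biggestRect(rects) {
  return rects.reduce(function(best, r) {
    return r.width * r.height > best.width * best.height ? r : best;
  }, rects[0]);
}

// Browser wiring: assumes tracking-min.js and the face dataset are loaded,
// and a <canvas id="canvas"> holds the image.
if (typeof tracking !== 'undefined') {
  var canvas = document.getElementById('canvas');
  var ctx = canvas.getContext('2d');
  var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var rects = tracking.ViolaJones.detect(
    imageData.data, canvas.width, canvas.height,
    1.0,   // initialScale: starting size of the scan window
    1.25,  // scaleFactor: how much the window grows each pass
    1.5,   // stepSize: how far the window moves each step
    0.2,   // edgesDensity: skip low-detail regions
    tracking.ViolaJones.classifiers.face // trained classifier data
  );
  if (rects.length > 0) {
    console.log('biggest face:', biggestRect(rects));
  }
}
```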
Writing computer vision programs has often been difficult: even simple tasks required a lot of computation, and coding was very iterative. The main purpose of this great library is to provide complex processing on the web through an intuitive API, and [Web Components](https://developer.mozilla.org/en-US/docs/Web/Web_Components) are included.
Web components are in many ways a new concept in modern web development. Their purpose is to bring a simpler logic encapsulation model: they allow developers to encapsulate all the logic of a feature in an HTML element and use it as naturally as a DOM element. This builds on the same concepts behind libraries such as React and Angular, but unlike those, web components are fully integrated into the web browser.
Therefore, tracking.js provides an easy way to bring computer vision to the web by exposing parts of the API described above as web component elements. To use them, you must first install them via Bower. Simply enter the following command to install the tracking.js web components:
bower install tracking-elements --save
It provides some basic elements, such as the color element (used to detect colors) and the object element (used to detect objects). The main principle of the tracking.js web components is to extend native web elements (`<video />`, `<img />`, etc.) and add new functionality to them. This is done with the `is=""` attribute, and the elements to detect (colors, objects, etc.) are defined by the `target=""` attribute.
Let's detect some colors: magenta, cyan, yellow:
<!-- detect colors into an image -->
<img is="image-color-tracking" target="magenta cyan yellow" />
<canvas is="canvas-color-tracking" target="magenta cyan yellow"></canvas>
<video is="video-color-tracking" target="magenta cyan yellow"></video>
The above code uses `is="image-color-tracking"` to specify that it extends the native `<img>` element, and `target="magenta cyan yellow"` to let the tracker know the target colors.
Let's do it in practice.
Here is the Code Snippet (https://gist.github.com/dassiorleando/fb5c5f71ad76e3ec360530c2d0ac339e?spm=a2c65.11461447.0.0.2cc43091iShshP).
- Line 5: imports the polyfill to enable web component functionality.
- Line 7: imports the web component itself.
- Line 11: uses the web component by extending the native element with the `is` attribute and defining the object to track with the `target` attribute.
- Line 18: adds an event listener to the element to listen for track events, which occur when a face is detected.
- Line 20: calls the `plotRectangle()` function to draw a rectangle around the detected area.
In this article, we've seen how to install and apply computer vision on the web, using tracking.js to perform some object and color detection functions.