Motion tracking in the browser

Experimenting with WebRTC and Canvas

Gwen Vanhee
Little Miss Robot

--

Slowly but surely, WebRTC (Web Real-Time Communication) is finding its way to the web. It’s one of those new JavaScript APIs that gives the browser access to your webcam and microphone and lets you share that data peer-to-peer, making it possible to run Skype-like applications in the browser.

Although the final specification is not fully implemented yet and browser support is still somewhat experimental, we can already start playing with it. And so we did …

See the demo
Get the source-files

Motion detection

Motion detection is really a brute-force operation: we compare pixels from our webcam feed to those of a reference image (or the previous frame). In places where the pixel colors differ, we can assume something is moving.

Although the implementation is rather straightforward, there are some things to keep in mind. Using small frame sizes improves performance, as bigger frames have more pixels and take more time to traverse.

Environmental lighting has a huge impact on the output and is rather hard to control. The best results are achieved with a clear contrast between foreground and background. Since (natural) lighting conditions change every second, you might want to throw in some sliders to adjust parameters as you play.

Implementation

navigator.getUserMedia() is the magical keyword here: it allows us to fetch the output from our webcam and draw it to a canvas frame by frame. From there we can access the pixel data of the image via the canvas context’s getImageData() method, which provides us with a nice array of pixels to work with.
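A minimal capture loop might look like the sketch below. Note that today you would use the promise-based navigator.mediaDevices.getUserMedia rather than the prefixed navigator.getUserMedia of the time; the element ids and the rest of the wiring are assumptions for illustration.

```javascript
// Sketch: grab the webcam stream and draw it to a canvas every frame.
// The element ids ("video", "canvas") are assumptions.
function startCamera() {
  const video = document.getElementById('video');
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');

  navigator.mediaDevices.getUserMedia({ video: true }).then(stream => {
    video.srcObject = stream;
    video.play();
    requestAnimationFrame(function draw() {
      // Draw the current video frame, then read its pixels back.
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      // frame.data is a Uint8ClampedArray of RGBA values — this is
      // the pixel array we work with below.
      requestAnimationFrame(draw);
    });
  });
}
```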

We then create the reference image to compare our live feed against, either by copying the pixel array every frame or on demand — a key press, a button click, … Next we traverse all pixels, convert them to grayscale and subtract each one from its counterpart in the reference, comparing the result against a deviation threshold (remember those sliders?).
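The grayscale conversion and the reference copy could be sketched like this; the function names and the standard luminance weights are assumptions, not taken from the demo source.

```javascript
// Sketch: convert an RGBA pixel array to grayscale (one byte per pixel),
// using the common luminance weights.
function toGrayscale(data) {
  const gray = new Uint8ClampedArray(data.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const r = data[i * 4], g = data[i * 4 + 1], b = data[i * 4 + 2];
    gray[i] = 0.299 * r + 0.587 * g + 0.114 * b;
  }
  return gray;
}

// Capturing a reference is just copying the current grayscale frame,
// e.g. in a keydown or click handler:
let reference = null;
function captureReference(currentGray) {
  reference = currentGray.slice(); // copy, so later frames don't mutate it
}
```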

Live-feed, grayscale, reference image and resulting frame

In places where the outcome of this subtraction (which is a number) falls below our threshold, we color that pixel black; otherwise we color it white. We then draw our pixel array back onto the canvas with the context’s putImageData() method. This outputs a nice black-and-white image, clearly showing us what’s moving and what’s not.
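The comparison step above could be sketched as the function below, which works on plain grayscale arrays; the function name and the 0–255 threshold scale are assumptions.

```javascript
// Sketch: compare the current grayscale frame to the reference and write
// black/white back into an RGBA array. `threshold` is the slider-controlled
// deviation.
function diffFrames(currentGray, referenceGray, output, threshold) {
  for (let i = 0; i < currentGray.length; i++) {
    const moved = Math.abs(currentGray[i] - referenceGray[i]) >= threshold;
    const value = moved ? 255 : 0; // white = motion, black = static
    output[i * 4] = value;         // R
    output[i * 4 + 1] = value;     // G
    output[i * 4 + 2] = value;     // B
    output[i * 4 + 3] = 255;       // keep the pixel opaque
  }
}
// In the browser, `output` would be the .data of an ImageData object,
// drawn back with ctx.putImageData(imageData, 0, 0).
```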

Doing something with the output is now just a matter of traversing the array again and looking up the white pixels. Enjoy!
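As one possible example of "doing something" with the white pixels, you could compute the centroid of the motion — this helper is an illustration of the traversal, not part of the demo code.

```javascript
// Sketch: find the centroid of the white (moving) pixels in the
// black/white RGBA output produced by the diff step.
function motionCentroid(rgba, width, height) {
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (rgba[(y * width + x) * 4] === 255) { // red channel: white pixel
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  // null when nothing moved at all
  return count ? { x: sumX / count, y: sumY / count } : null;
}
```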

--


Gwen Vanhee
Little Miss Robot

Creative developer @littlemissrobot — frontend dude into generative coding, mobile & other stuff I don’t understand