Motion Exposures — visualizing movement patterns from public webcams

Yannick Brouwer
Sep 12 · 4 min read
A long exposure made from movement in a public webcam stream.

A few years ago I had an assignment to visualize security data. While working on the assignment I thought that it would be interesting to monitor activity and visualize movement in public spaces based on public webcams. I decided to experiment and make a prototype in my favorite tool for creative programming: Processing.

I found a library called IPCapture by Stefano Baldan, designed to grab MJPEG streams from public surveillance cameras. MJPEG is an older protocol and many types of webcams aren’t supported by the library, but it’s enough for a bit of experimentation.

The town of Monthey

On the website Opentopia, I found publicly available webcams of airports, city centers, parking lots, bars and even an indoor ski slope. Often streams were low quality, had a slow refresh rate or had little to no action. While looking for interesting streams I actually learned that there is a whole community that is very passionate about watching the world through public webcams.

I found a stream of the Place Central in a town called Monthey in the south of Switzerland. It features a roundabout, pedestrians, a bus stop and roads in several directions, so I thought it could result in some interesting patterns.

Monitoring Activity

I decided to start simple and created a little program that would compare the pixels that changed between frames to quantify the amount of movement in a frame. For example, if the pixel at x:460, y:500 had a brightness of 127 in one frame and 99 in the next, the difference in brightness would be 28. I would then sum the differences of every pixel to get a value for the whole frame and save it to a CSV file. By letting this program run for a few hours I could plot the amount of activity on the square over time.
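The activity metric above can be sketched in plain Java (outside Processing, which stores pixels differently). This is a minimal version that assumes each frame is a flat array of grayscale brightness values in the 0–255 range; the class and method names are my own, not from the original sketch.

```java
// Sketch of the frame-differencing activity metric: sum the absolute
// brightness differences between two consecutive grayscale frames.
public class ActivityMeter {

    // prev and curr are flat arrays of per-pixel brightness (0..255).
    public static long frameActivity(int[] prev, int[] curr) {
        long total = 0;
        for (int i = 0; i < prev.length; i++) {
            total += Math.abs(curr[i] - prev[i]);
        }
        return total;
    }

    public static void main(String[] args) {
        // One pixel goes from 127 to 99 (diff 28), one is static,
        // one goes from 200 to 210 (diff 10): total activity 38.
        int[] a = {127, 10, 200};
        int[] b = { 99, 10, 210};
        System.out.println(frameActivity(a, b)); // 38
    }
}
```

In a real run you would call `frameActivity` once per incoming webcam frame and append the timestamped result to the CSV file.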

This graph shows the amount of activity from the webcam in Monthey during several hours on a Tuesday

Visualizing Movement

The next step was to visualize where movement happens within the frame. After calculating the brightness difference between consecutive frames I would compare each pixel with a threshold value. If the brightness difference is larger than the threshold, the resulting pixel becomes white; the other pixels remain black. You can see the resulting frame on the left.
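The thresholding step could look like this, again as a plain-Java sketch over flat grayscale arrays (the names and the 0/255 encoding are my own assumptions):

```java
// Turn the per-pixel brightness difference between two frames into a
// binary motion mask: changed pixels become white (255), the rest black (0).
public class MotionMask {

    public static int[] mask(int[] prev, int[] curr, int threshold) {
        int[] out = new int[prev.length];
        for (int i = 0; i < prev.length; i++) {
            out[i] = Math.abs(curr[i] - prev[i]) > threshold ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        // First pixel changed by 40 (above threshold 30), second by 5.
        int[] m = mask(new int[]{100, 100}, new int[]{140, 105}, 30);
        System.out.println(m[0] + " " + m[1]); // 255 0
    }
}
```

The threshold filters out sensor noise and compression artifacts; in my experience it needs tuning per stream.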

The white pixels — the pixels that changed compared to the previous frame — are accumulated in a 2D array. For each new frame, the count at every white pixel’s X and Y position in that array is incremented. By letting the software run for a longer period of time, you get a smooth long exposure image like the ones you might know from pictures of headlights on a highway. In a normal long exposure the static objects are visible while the moving objects fade away. With a motion exposure it’s the exact opposite: static objects are black while moving objects are visible.
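The accumulation step can be sketched as a small class that counts, per pixel, how often it was flagged as moving, and maps that count to a brightness for the final image. This is my own minimal rendition of the idea, not the original code; the mask is assumed to be the flat 0/255 array from the thresholding step.

```java
// Accumulate binary motion masks over time into a "motion exposure":
// frequently moving pixels end up bright, static pixels stay black.
public class MotionExposure {
    private final long[][] counts; // counts[y][x] = times pixel (x,y) moved
    private final int w, h;

    public MotionExposure(int w, int h) {
        this.w = w;
        this.h = h;
        this.counts = new long[h][w];
    }

    // mask is a flat, row-major array of 0/255 values, one per pixel.
    public void addFrame(int[] mask) {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y * w + x] != 0) counts[y][x]++;
    }

    // Map a pixel's count to 0..255 brightness, scaled by the busiest pixel.
    public int brightness(int x, int y, long maxCount) {
        if (maxCount == 0) return 0;
        return (int) Math.min(255, counts[y][x] * 255 / maxCount);
    }
}
```

Scaling by the maximum count keeps the busiest routes at full white while rarely used areas fade toward black, which is what produces the headlight-trail look.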

The difference between a short exposure (on the left) and a long exposure (on the right)

I experimented with a few other locations. The image below is from an indoor ski center in the Netherlands. Pretty cool to see where people actually ski.

Indoor ski center in the Netherlands (webcam no longer online). On the left the webcam stream, in the middle the result of the frame-differencing algorithm, and on the right the resulting pattern after a few minutes of running the code.

Use it with GIFs and videos

I came across this video of the magic roundabout in Swindon, UK, that features five smaller roundabouts in a larger roundabout.

I decided to rewrite my code a bit so it could also handle GIFs and video files. In my experience video codecs aren’t well supported in Processing: what works on one computer gives endless errors on another. I therefore decided to work with image sequences, which can easily be exported from a video.

I imported the video as layers in Photoshop and used “auto-align layers” to stabilize the footage a bit before importing it into the Processing sketch. You could also import a video into Premiere, use the warp stabilizer and export the frames as PNGs.
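Working with an exported image sequence mostly comes down to generating the frame filenames in order. A small helper for that might look like this; the `frame-0001.png` naming scheme is just an assumption about how the exporter numbers its files, and varies between tools.

```java
// Build zero-padded filenames for an exported image sequence,
// e.g. frame-0001.png, frame-0002.png, ... so frames load in order.
public class FrameSequence {

    public static String frameName(String prefix, int index) {
        return String.format("%s-%04d.png", prefix, index);
    }

    public static void main(String[] args) {
        // A sketch would loop over these names and load each image in turn.
        for (int i = 1; i <= 3; i++) {
            System.out.println(frameName("frame", i));
        }
    }
}
```

Zero-padding matters because plain alphabetical sorting would otherwise put `frame-10.png` before `frame-2.png`.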

The resulting image clearly shows which routes are used the most. However, because it’s a short repeating video, the resulting image isn’t that smooth.


You can download the code for this little experiment on my GitHub account.

Let me know if you create any fun patterns with it!
