Motion Exposures — visualizing movement patterns from public webcams
A few years ago I had an assignment to visualize security data. While working on the assignment I thought that it would be interesting to monitor activity and visualize movement in public spaces based on public webcams. I decided to experiment and make a prototype in my favorite tool for creative programming: Processing.
I found a library called IPCapture by Stefano Baldan, designed to fetch MJPEG streams from public surveillance cameras. It’s an older protocol and many types of webcams aren’t supported by the library, but it’s enough for a bit of experimentation.
The town of Monthey
On the website Opentopia, I found publicly available webcams of airports, city centers, parking lots, bars and even an indoor ski slope. Often the streams were low quality, had a slow refresh rate or showed little to no action. While looking for interesting streams I actually learned that there is a whole community that is very passionate about watching the world through public webcams.
I found a stream of the Place Central in a town called Monthey in the south of Switzerland. It features a roundabout, pedestrians, a bus stop and roads in several directions, so I thought it could result in some interesting patterns.
I decided to start simple and created a little program that would compare the pixels that changed between frames to quantify the amount of movement in a frame. So for example, if a pixel at x:460 and y:500 had a brightness of 127 in one frame and 99 in the next frame, the difference in brightness would be 28. I would then sum the differences of every pixel to get a value for the whole frame and save it to a CSV file. If I let this program run for a few hours I could plot the amount of activity on the square over time.
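The core of that step can be sketched in a few lines of plain Java (not the actual Processing sketch; the class and method names here are illustrative). It sums the absolute brightness differences between two grayscale frames stored as flat arrays:

```java
// Minimal sketch of frame differencing: sum the absolute brightness
// differences between two consecutive grayscale frames.
public class FrameDiff {

    // Each array holds one brightness value (0-255) per pixel.
    static int totalDifference(int[] prev, int[] curr) {
        int total = 0;
        for (int i = 0; i < prev.length; i++) {
            total += Math.abs(curr[i] - prev[i]);
        }
        return total;
    }

    public static void main(String[] args) {
        int[] prev = {127, 200, 50};
        int[] curr = { 99, 200, 60};
        // |99-127| + |200-200| + |60-50| = 28 + 0 + 10
        System.out.println(totalDifference(prev, curr)); // prints 38
    }
}
```

One number per frame is all you need for the activity-over-time plot: append it to a CSV together with a timestamp.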
The next step was to visualize where movement happens within the frame. After calculating the brightness difference between consecutive frames, I compared each pixel with a threshold value. If the brightness difference is larger than the threshold, the resulting pixel becomes white; the other pixels remain black. You can see the resulting frame on the left.
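In plain Java, that thresholding step could look like this (again a sketch with illustrative names, not the original code): it turns a pair of frames into a binary mask where true means "changed enough to count as movement".

```java
// Threshold sketch: a pixel is white (true) when its brightness
// changed by more than the threshold between two frames.
public class MotionMask {

    static boolean[] mask(int[] prev, int[] curr, int threshold) {
        boolean[] white = new boolean[prev.length];
        for (int i = 0; i < prev.length; i++) {
            white[i] = Math.abs(curr[i] - prev[i]) > threshold;
        }
        return white;
    }
}
```

The threshold filters out sensor noise and compression artifacts; too low and the whole frame flickers white, too high and slow-moving pedestrians disappear.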
The white pixels, the ones that changed compared to the previous frame, are counted in a 2D array: for each new frame, every white pixel increments the value at the same position in the array. By letting the software run for a longer period of time, you get a smooth long exposure image like the ones you might know from pictures of headlights on a highway. In a normal long exposure the static objects are visible while the moving objects fade away. With a motion exposure it’s the exact opposite: static objects are black while moving objects are visible.
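Putting the thresholding and the 2D accumulation together might look like this (a plain-Java sketch under the same assumptions as above; frames are 2D brightness arrays indexed [y][x]):

```java
// Accumulate a "motion exposure": each cell counts how often that
// pixel changed by more than the threshold between consecutive frames.
public class MotionExposure {
    final int[][] counts;   // hits per pixel, built up over time
    final int threshold;

    MotionExposure(int width, int height, int threshold) {
        this.counts = new int[height][width];
        this.threshold = threshold;
    }

    void accumulate(int[][] prev, int[][] curr) {
        for (int y = 0; y < counts.length; y++) {
            for (int x = 0; x < counts[0].length; x++) {
                if (Math.abs(curr[y][x] - prev[y][x]) > threshold) {
                    counts[y][x]++;
                }
            }
        }
    }
}
```

To render the final image, you would map the counts to brightness (for instance by scaling against the maximum count), so frequently used routes come out bright while static areas stay black.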
I experimented with a few other locations. The image below is from an indoor ski center in the Netherlands. Pretty cool to see where people actually ski.
Use it with GIFs and videos
I came across this video of the magic roundabout in Swindon, UK, that features five smaller roundabouts in a larger roundabout.
I decided to rewrite my code a bit so it could also handle GIFs and video files. In my experience video codecs aren’t well supported in Processing: what works on one computer gives endless errors on another. I therefore decided to work with image sequences, which can easily be exported from a video.
I imported the video as layers in Photoshop and used “auto-align layers” to stabilize the footage a bit before importing it in the Processing sketch. You could also import a video in Premiere, use the warp stabilizer and export the frames as PNGs.
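Feeding an image sequence into the accumulation loop is straightforward. A hypothetical plain-Java version using the JDK's ImageIO (standing in for Processing's loadImage(); the frames/frame-0001.png naming is an assumption, not the original file layout) could look like:

```java
// Hypothetical loop over a numbered PNG sequence exported from a video.
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class Sequence {
    public static void main(String[] args) throws Exception {
        int frame = 1;
        while (true) {
            // Assumed naming scheme: frame-0001.png, frame-0002.png, ...
            File f = new File(String.format("frames/frame-%04d.png", frame));
            if (!f.exists()) break;          // end of the sequence
            BufferedImage img = ImageIO.read(f);
            // ...convert img to brightness values and accumulate...
            frame++;
        }
    }
}
```

Looping over files this way sidesteps the codec problems entirely, at the cost of disk space for the exported frames.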
The resulting image clearly shows which routes are used the most. However, because it’s a short repeating video, the resulting image isn’t that smooth.
You can download the code for this little experiment on my GitHub account.
Let me know if you create any fun patterns with it!