Open Data Cam
Open source tool that uses machine learning to quantify the world.
‘Open Data Cam’ is a tool that helps to quantify the world. The best thing about it: you can make it yourself! Using computer vision, ‘Open Data Cam’ understands and quantifies what it sees. The simple setup allows everybody to become an urban data miner.
Why do we need this?
‘Open Data Cam’ can help cities become smarter. One way of understanding cities is through the combination of urban space and emerging technology. The tool can help cities (via their citizens, institutions, scientists and decision makers) make more sense of their surroundings.
The idea behind the project was to create a free, easy-to-use platform for detecting objects in urban settings. Creating data through real-time detections can change the way we make decisions and perceive our urban surroundings. With this tool at hand, it’s up to you what you want to quantify. It might help you automatically turn traffic into data, or it might even turn mobility into a game.
Automated traffic survey
The obvious use case is counting vehicles at any given location. The gathered data can be used in traffic engineering, for example to validate traffic models, learn about traffic composition by vehicle type, or cross-check other counting methods. But we’re also more than curious about what you will do with your ‘Open Data Cam’, so have a look at the GitHub repository!
‘Beat the Traffic’ is our playful take on traffic counting. Based on the technology we use in ‘Open Data Cam’, it creates a mobility wonderland in cities around the world. Players around the globe can enchant traffic jams at iconic locations and turn them into nicer things like unicorns, rainbows and driving trees. Your high score is converted into the number of buses that would have been needed to transport the cars you clicked, addressing the mobility challenges of our time.
How does it work?
‘Open Data Cam’ is a video camera attached to a mini computer that runs an interface and counts detections in the video stream. The heart of the ‘Open Data Cam’ is a Jetson TX2 board with an on-board graphics processing unit (GPU). The GPU can process many parallel threads at once, which makes it well suited for image analysis and video processing.
While the heart processes the data, the brain of ‘Open Data Cam’ runs YOLO, an object detection library. YOLO is based on machine learning and is trained to detect objects in pictures and videos. The attached camera feeds YOLO with a video stream, and YOLO outputs the objects it detects in every frame.
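Conceptually, every frame yields a list of detections: a class label, a confidence score and a bounding box. The sketch below illustrates that shape in Python; the `Detection` structure, field names and mocked values are our own illustrative assumptions, not the actual YOLO output format.

```python
from dataclasses import dataclass

# Illustrative shape of one YOLO-style detection: class label,
# confidence score, and a bounding box in pixel coordinates.
@dataclass
class Detection:
    label: str
    confidence: float
    x: int   # top-left corner x
    y: int   # top-left corner y
    w: int   # box width
    h: int   # box height

def center(d: Detection) -> tuple:
    """Centroid of a detection's bounding box (useful for counting)."""
    return (d.x + d.w / 2, d.y + d.h / 2)

# One mocked frame of detections, standing in for real YOLO output.
frame = [
    Detection("car", 0.91, 100, 220, 80, 40),
    Detection("bicycle", 0.78, 300, 240, 40, 30),
]

for d in frame:
    print(d.label, center(d))
```

Working with centroids rather than whole boxes keeps the later counting logic simple: a single point per object per frame.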
By accessing the interface of the ‘Open Data Cam’, users can reach the data created by the system. It lets users specify which areas of the picture YOLO should count objects in. Finally, the export function allows users to access the detected data points and use them in any way they can think of.
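Restricting counting to an area of the picture boils down to a point-in-region test on each detection's centroid. Here is a minimal sketch using a rectangular zone; the actual interface may define areas differently, and the function and zone values are illustrative assumptions.

```python
def in_zone(point, zone):
    """True if point (x, y) lies inside a rectangular counting zone
    given as (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x, y = point
    x_min, y_min, x_max, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max

# Count only detections whose centroid falls inside the zone,
# e.g. the lower half of a 640x480 frame.
zone = (0, 200, 640, 480)
centroids = [(140, 240), (320, 100), (500, 410)]
counted = [p for p in centroids if in_zone(p, zone)]
print(len(counted))  # 2
```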
We’ve created an open source version of what we built to make the cam work. You can find the installation guide on GitHub.
The software running on the Jetson board lets you draw lines onto the video stream. As objects cross a line, they are counted, no matter in which direction. Add more lines to count at multiple spots. When you are finished, you can export the data by hitting the export button. All data processing happens locally on your Jetson board.
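The direction-agnostic line counting described above can be sketched as a side-of-line test: if a tracked object's centroid is on one side of the counting line in one frame and on the other side in the next, it has crossed. This is a minimal sketch under our own assumptions (for simplicity it treats the counting line as infinitely long), not the project's actual implementation.

```python
def _side(a, b, p):
    """Sign of the 2D cross product: tells which side of the
    line through a -> b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(line_a, line_b, prev_pos, curr_pos):
    """True if a centroid moved from one side of the counting line
    to the other between two frames, in either direction."""
    s1 = _side(line_a, line_b, prev_pos)
    s2 = _side(line_a, line_b, curr_pos)
    return s1 * s2 < 0  # opposite signs: the object crossed the line

# A horizontal counting line across the frame, and one tracked
# car's centroid positions over four consecutive frames.
a, b = (0, 300), (640, 300)
track = [(100, 280), (110, 295), (120, 310), (130, 330)]

count = 0
for prev, curr in zip(track, track[1:]):
    if crossed(a, b, prev, curr):
        count += 1
print(count)  # 1
```

Because the test only looks at a sign change, an object crossing from either direction increments the count exactly once, matching the "no matter in which direction" behavior.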
In the repository you’ll find everything you need to create a bootable Jetson board running the ‘Open Data Cam’ interface.
To get your Jetson up and running, you need to connect it to your computer and put it into developer mode using JetPack, which will install Linux and all dependencies. You can find a detailed guide on how this works in the README of our repository.
‘Open Data Cam’ is a project designed and developed by moovel lab and collaborators, and it is currently used in research projects by the University of Stuttgart, Carnegie Mellon University in Pittsburgh and other practitioners.