IoT car challenge, part 02

Hello, JarJarCar

Virginie Cornu
OpenHardw.re
4 min read · Apr 22, 2019

--

In the first episode, we introduced the projects we are working on for the Connected Car Challenge organized by La Fabrique des Mobilités.

Today, I focus on the project we named JarJarCar, which allows us to get an idea of how crowded or empty a bus is (you’ll find the explanation for the name later in this article 😁), and more specifically on its computer vision part, which I tenderly called JarJarCam.

~ little bus sunshine ~

📘 Context

The idea that first came to mind was to try to count everyone at once, through a camera located inside the bus and oriented towards the back.

After some research and a review of existing projects, it quickly became obvious that we would face several difficulties:

  • What if the bus is crowded, or even just half-crowded?
  • How could we count seated people hidden behind standing people?
  • What about shorter people standing behind taller ones? Even with a camera filming from above, the ceiling height of the bus wouldn’t let us catch everyone.

So I approached the problem from another angle: instead of trying to count static people, shouldn’t we count people coming in and out of the bus?

Okay, the problem seems a bit more manageable that way: JarJarCam now needs to detect people entering and leaving a given area.

🤔 Thought

For this project, we imagined that our device would be placed in a standard Parisian bus with two doors, one at the front and the other in the middle. A camera would be positioned above each door to detect people coming in or out of the bus, with the doors themselves delimiting the separation between areas:

The camera at the front of the bus is meant to count the incoming flow, and the one in the middle the people getting out, although our service detects both directions anyway.

It was at about this moment that the project was named JarJarCar, after the Star Wars creature with two big, widely separated eyes: Jar Jar Binks.

~ hello, Jar Jar Binks ~

💾 Implementation

Now, for the implementation, there are several possible approaches, but two of them seemed interesting:

  1. Detecting people based on a pre-trained model
  2. Doing a simple background subtraction
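
To give an idea of the second approach, here is a minimal sketch of background subtraction with OpenCV. This is illustrative, not our actual code: the camera index, the blob-area threshold, and the OpenCV 4 API are all assumptions on my part.

```python
import cv2

cap = cv2.VideoCapture(0)  # 0 = default camera; a video file path also works
# MOG2 learns a background model and flags pixels that deviate from it
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # moving pixels become white
    mask = cv2.erode(mask, None, iterations=2)   # remove speckle noise
    mask = cv2.dilate(mask, None, iterations=2)  # restore blob size
    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 1500:  # ignore small blobs; threshold is arbitrary
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("JarJarCam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```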

I read about several very interesting projects built on these two approaches and decided to give the excellent blog post written by Adrian Rosebrock a try, as it seemed to answer our needs exactly.

What is it about?

In a nutshell, the method described in that article is based on two phases: object detection and object tracking. Basically, once a person has been detected, the tracking of this person begins.
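
For the detection phase, the idea is to run a pre-trained detector on each frame (or every few frames) and keep only the “person” boxes. Here is a rough sketch using a MobileNet SSD loaded through OpenCV’s dnn module; the model file names and the confidence threshold are placeholders, not our final setup:

```python
import cv2

# Pre-trained MobileNet SSD in Caffe format; the file names are placeholders
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON_CLASS_ID = 15  # index of "person" in the MobileNet SSD (VOC) class list

def detect_people(frame, conf_threshold=0.4):
    """Return pixel bounding boxes of people detected in a BGR frame."""
    h, w = frame.shape[:2]
    # Resize to the network's 300x300 input and apply its normalization
    blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    boxes = []
    for i in range(detections.shape[2]):
        class_id = int(detections[0, 0, i, 1])
        confidence = detections[0, 0, i, 2]
        if class_id == PERSON_CLASS_ID and confidence > conf_threshold:
            # Coordinates are normalized to [0, 1]; scale back to pixels
            box = detections[0, 0, i, 3:7] * [w, h, w, h]
            boxes.append(box.astype(int))
    return boxes
```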

To follow each detected person from frame to frame, the system relies on a centroid tracking algorithm:

During the detection phase, a bounding box is placed around each object and its centroid (the center of the box) is computed. The Euclidean distance between each new centroid and the centroids of the objects already being tracked is then computed as well. If two centroids are very close (according to the parameter we set to define how close is “close”), the algorithm considers them the same object and keeps tracking it as one. If they are not close enough, still according to that parameter, the new centroid is treated as a separate, new object.

Each object is assigned a unique ID, and when an object is considered gone from the screen (it has been missing for a set number of frames), the object and its ID are deregistered.
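
Put together, the bookkeeping described above could look roughly like this. This is a stripped-down sketch of the idea, not Adrian’s actual implementation, and the parameter values are arbitrary:

```python
from collections import OrderedDict

import numpy as np
from scipy.spatial import distance as dist

class CentroidTracker:
    def __init__(self, max_disappeared=40, max_distance=50):
        self.next_id = 0
        self.objects = OrderedDict()      # object id -> last known centroid
        self.disappeared = OrderedDict()  # object id -> consecutive missed frames
        self.max_disappeared = max_disappeared
        self.max_distance = max_distance  # what counts as "close", in pixels

    def register(self, centroid):
        self.objects[self.next_id] = centroid
        self.disappeared[self.next_id] = 0
        self.next_id += 1

    def deregister(self, object_id):
        del self.objects[object_id]
        del self.disappeared[object_id]

    def update(self, input_centroids):
        # No detections this frame: every known object is missing one more frame
        if len(input_centroids) == 0:
            for object_id in list(self.disappeared):
                self.disappeared[object_id] += 1
                if self.disappeared[object_id] > self.max_disappeared:
                    self.deregister(object_id)
            return self.objects

        # Nothing tracked yet: register everything we see
        if len(self.objects) == 0:
            for centroid in input_centroids:
                self.register(centroid)
            return self.objects

        # Match existing objects to new centroids by smallest Euclidean distance
        object_ids = list(self.objects.keys())
        D = dist.cdist(np.array(list(self.objects.values())),
                       np.array(input_centroids))
        used_rows, used_cols = set(), set()
        for row in D.min(axis=1).argsort():
            col = D[row].argmin()
            if col in used_cols or D[row, col] > self.max_distance:
                continue  # too far apart: not the same object
            self.objects[object_ids[row]] = input_centroids[col]
            self.disappeared[object_ids[row]] = 0
            used_rows.add(row)
            used_cols.add(col)

        # Unmatched known objects edge closer to deregistration
        for row in set(range(D.shape[0])) - used_rows:
            object_id = object_ids[row]
            self.disappeared[object_id] += 1
            if self.disappeared[object_id] > self.max_disappeared:
                self.deregister(object_id)

        # Unmatched new centroids become new objects with fresh ids
        for col in set(range(D.shape[1])) - used_cols:
            self.register(input_centroids[col])

        return self.objects
```

The update method is then called once per frame with the centroids computed from the current detections.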

To detect people coming in and out of an area, we set up a line on the video feed that acts as the separation between the two areas (up vs. down). Using the object tracking, we detect whether a person crosses the line and in which direction. We then increment a counter for each person going in or out, and we build a JSON object that the mobile application retrieves, so it can surface this information to our end users.
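
The counting itself can then be a thin layer on top of the tracker. Here is a hedged sketch; the line position, the JSON field names, and how the payload is exposed to the app are all assumptions on my side:

```python
import json

LINE_Y = 240    # y position of the virtual line, in pixels (depends on the feed)
count_in = 0    # people who crossed the line downwards
count_out = 0   # people who crossed the line upwards
last_y = {}     # object id -> centroid y on the previous frame

def update_counts(objects):
    """Compare each tracked centroid with its previous position to spot crossings."""
    global count_in, count_out
    for object_id, (cx, cy) in objects.items():
        prev = last_y.get(object_id)
        if prev is not None:
            if prev < LINE_Y <= cy:      # crossed the line going down
                count_in += 1
            elif prev >= LINE_Y > cy:    # crossed the line going up
                count_out += 1
        last_y[object_id] = cy

def payload():
    # JSON object for the mobile app; field names are illustrative
    return json.dumps({"in": count_in, "out": count_out,
                       "on_board": max(count_in - count_out, 0)})
```

In the real service, the payload would of course be served to the mobile application rather than simply built in memory.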

In progress

So far so good. I just needed to make some adjustments to the program: in a bus, people are much closer to the camera, so the original parameters were not a good fit for our use case.
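
With the sketch above, that tuning comes down to the instantiation parameters (the values here are illustrative, not our calibrated ones):

```python
# People pass right under the camera, so matched centroids can jump farther
# between frames, and lost ids should be dropped faster (illustrative values)
tracker = CentroidTracker(max_disappeared=15, max_distance=120)
```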

The system now works quite well. It doesn’t succeed 100% of the time and some cases remain to be handled, but the main idea is there.

Our connected device will be based on a mangOH Red board and, since the computation has to happen in real time on a live camera feed, we can’t implement our project as SaaS (Software as a Service): it needs to run directly on the device.

⏰ Next steps

Well, we still have work to do, and here are the main steps:

  1. Port the algorithm to the board and have it run in real time
  2. Optimize the algorithm to cover more cases
  3. Finally, test it in real conditions!

To be continued… 🚌 😉

~ brainstorming at WeWork ~
