A while ago I tried to make my bike smarter with machine learning to help keep me safe whilst commuting through busy London streets.
Background
If you live in a big city and cycle, you've probably developed superhuman reflexes and awareness, especially in a city like London, where many of the narrow roads were originally designed for horse and cart.
After enough close calls (and some actual ones), I figured it was time to try to solve this problem with a modern approach (without mirrors).
Solution
The solution: a small computer (a Raspberry Pi) attached to the bike, running an object detection model on images from a rear-facing camera. The system detects the different potential dangers behind the bike and intuitively relays that information back to the rider (via a handlebar-mounted LED strip), so they can keep their focus on what's ahead.
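The core loop is simple: grab a frame, run detection, light the LEDs. Here's a rough sketch of that structure (the `capture_frame`, `detect_objects`, and `update_leds` callables are hypothetical placeholders standing in for the camera, the Edge TPU model, and the LED driver, not the actual project code):

```python
def run_cycle_safety_loop(capture_frame, detect_objects, update_leds, frames):
    """Process a fixed number of frames; each detection result drives the LEDs.

    capture_frame() -> an image from the rear-facing camera
    detect_objects(frame) -> e.g. [("car", 0.9, (x1, y1, x2, y2)), ...]
    update_leds(detections) -> relays the danger info to the rider
    """
    results = []
    for _ in range(frames):
        frame = capture_frame()            # rear-facing camera image
        detections = detect_objects(frame) # object detection model
        update_leds(detections)            # handlebar LED strip
        results.append(detections)
    return results
```

In the real system this loop runs continuously, so detection speed directly sets how fresh the rider's information is.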
Need for speed
With just the Raspberry Pi running the object detection model, it could make about 2 predictions per second, which for this use case is on the slow side. That's where the Coral Edge TPU comes in: it can run the same model at 30 to 60 predictions per second, giving the system the super low latency it needs. You can find out more about Coral Edge TPUs from this Google I/O session:
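If you want to compare throughput yourself, a tiny timing harness like this works for any prediction callable (a generic sketch, not the project's actual benchmark code):

```python
import time

def predictions_per_second(predict, frame, seconds=1.0):
    """Call predict(frame) repeatedly for roughly `seconds` and return the rate."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        predict(frame)
        count += 1
    return count / (time.perf_counter() - start)
```

Running this against the CPU-only interpreter versus the Edge TPU delegate is the quickest way to see the difference on your own hardware.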
Specifically, I used the Edge TPU USB Accelerator, with both it and the Raspberry Pi attached to the bike:
After the Raspberry Pi + Edge TPU made its predictions about what's behind the bike, it would send commands to an RGB LED strip on the bike's handlebars to indicate to the rider where the dangers behind are and how dangerous they are (the model can distinguish between different types of road user, e.g. car, bus, truck, bike, pedestrian).
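The detection-to-LED mapping might look something like this (the class-to-colour table, the strip length, and the left-to-right mapping are illustrative assumptions, not the exact values the project used):

```python
# Hypothetical danger colours per detected class (RGB).
DANGER_COLOURS = {
    "truck": (255, 0, 0),        # highest danger: red
    "bus": (255, 0, 0),
    "car": (255, 128, 0),        # medium danger: orange
    "motorcycle": (255, 255, 0),
    "bicycle": (0, 255, 0),      # lower danger: green
    "person": (0, 255, 0),
}

NUM_LEDS = 8  # assumed handlebar strip length

def detections_to_leds(detections, frame_width):
    """Map each detection's horizontal position to an LED index and colour.

    detections: iterable of (label, score, (x1, y1, x2, y2)) tuples.
    Returns a list of NUM_LEDS RGB tuples; (0, 0, 0) means off.
    Depending on camera orientation, the mapping may need mirroring so
    that a vehicle on the rider's right lights the right-hand LEDs.
    """
    leds = [(0, 0, 0)] * NUM_LEDS
    for label, score, (x1, y1, x2, y2) in detections:
        centre = (x1 + x2) / 2
        idx = min(int(centre / frame_width * NUM_LEDS), NUM_LEDS - 1)
        leds[idx] = DANGER_COLOURS.get(label, (255, 255, 255))
    return leds
```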
The specific model
Because this is a pretty generic prediction task (traffic object detection), there were already pretrained models I could download and use that detect different types of vehicles. In fact, I ended up using the exact same model that Bill used in his 2019 Google I/O demo:
You can explore different kinds of pretrained models on the Coral website:
You can also find all the code for this project on my github:
Interesting points
Australia has its own dangers
When I presented this idea in Sydney, there were discussions about whether a specific version could be made for their specific dangers. At the time I assumed they meant the cliche dangers you think of with Australia… kangaroos, snakes, drop bears. But for cyclists in Australia it's actually territorial magpies that are constantly attacking. So a custom version that included swooping birds in the model would be perfect for them.
Pretrained models not designed for toy cars
It's important to test exactly how you're going to use pretrained models. When I was demoing this idea on stage, I hadn't thoroughly tested how well the model worked with toy cars, and I got suboptimal results. This is where training custom models with tools like AutoML Object Detection would shine. It's also interesting to note that my toy version of the demo worked a lot better when I included a printed-out fake road for the toy cars to sit on, as seen in the previous gif above.
Future
I want to train a custom model specifically on London traffic (including the distinctive taxis), and then look into training other custom models for different regions, such as an Australian magpie edition. Oh, and actually build this on a real bike that I can cycle (not just a prototype).
Takeaway
It's good to be aware of technology like Edge TPUs in case you have a problem that requires super low latency and mobility. And always check whether there are pretrained models that could already solve your problem and save you a huge amount of development time.
Follow and ask me questions on Twitter: ZackAkil!