A Couple of Challenges for Self-Driving Cars
Although I’ve been a petrol-head since an early age, I can’t wait for self-driving cars to become mainstream.
With a mechanic father, I grew up around cars — more often than not, these cars were not all in one piece, and I loved seeing how the bits fitted together and what did what. In fact, as a kid I loved everything about cars. But somehow, after many hours spent in traffic jams or searching for that elusive parking space, I lost that feeling. Sure, I appreciate a nice car and travelling in comfort. It’s just that the actual act of driving is a lot less fun these days.
I’m a techy at heart too, so when Udacity launched their Self-Driving Car Nanodegree, I couldn’t resist putting in an application. With 11,500+ applicants for the first 250 places, I didn’t hold out much hope of getting a slot. My application must have stood out though, because I ended up in the 3rd cohort of students, starting the course a couple of months after that first bunch. As the course is partially self-paced and I can be single-minded sometimes, I ended up finishing with the first group, and in November this year I became the 103rd graduate of the Udacity Self-Driving Car Engineer Nanodegree.
I love this technology and I can’t wait for the day when I can jump into a self-driving car and do something more interesting than driving whilst it takes me to my destination. I’ll be using the time for something else, probably working, so jams won’t matter. Finding a parking space will be the machine’s problem, not mine.
So I was a little disappointed to read the recent Wired article arguing that self-driving cars have been over-hyped and are a long way from mainstream adoption. But I do have to agree…
Udacity’s course does a fantastic job of introducing the key concepts of self-driving cars, such as localisation (where am I?), perception (what else is around?), planning (where am I going?) and control (how am I going to get there?). All of this is introduced with practical exercises to show how systems are implemented in simulated environments.
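To make those four questions a bit more concrete, here’s a toy sketch of how they chain together in a single tick of a drive loop. All the function names and data shapes are my own invention for illustration — this is not the course’s actual code, and each stub stands in for a substantial subsystem:

```python
# A toy sketch of the four subsystems chained into one drive-loop tick.
# Every name and data shape here is invented for illustration.

def localise(sensor_data, known_map):
    """Where am I? Return an estimated pose on the map."""
    # A real system would run a particle or Kalman filter here
    return known_map["start_pose"]

def perceive(sensor_data):
    """What else is around? Return a list of detected objects."""
    # A real system would run detection/tracking over camera, radar, lidar
    return list(sensor_data.get("objects", []))

def plan(pose, objects, destination):
    """Where am I going? Return a short trajectory of waypoints."""
    # A real planner would generate and score candidate trajectories
    return [pose, destination]

def control(trajectory):
    """How am I going to get there? Return actuator commands."""
    # A real controller (e.g. PID or MPC) would track the trajectory
    return {"steer": 0.0, "throttle": 0.5, "brake": 0.0}

# One tick of the loop, with made-up inputs
sensor_data = {"objects": ["cyclist"]}
known_map = {"start_pose": (0.0, 0.0)}

pose = localise(sensor_data, known_map)
objects = perceive(sensor_data)
trajectory = plan(pose, objects, destination=(100.0, 0.0))
commands = control(trajectory)
print(commands)  # → {'steer': 0.0, 'throttle': 0.5, 'brake': 0.0}
```

The interesting engineering is, of course, inside each stub — but the overall shape of sense, localise, plan, act is common to most of the course projects.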
For obvious practical reasons, the course uses simulation and relatively limited scenarios. These are challenging, but I found myself comparing the simulator with my daily real-world driving, and this is where I ran into some issues.
It’s pretty easy to see how motorway driving can be automated. In fact, the Udacity course gives you everything you’d need to create that, bar the hardware (maybe I’d even trust my own solutions to do it … maybe). The catch is that this is a well-controlled environment — known layout, lanes and exits, reasonably predictable behaviour from other cars, and a limited set of actions (start, stop, accelerate, slow, change lanes).
Something like the following:
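As a rough caricature, that limited action set could be written down as a handful of rules. All the speeds, gaps and names here are made up (and note we overtake on the right in the UK); a real system does far more, but the point is how small the decision space is:

```python
# A caricature of motorway driving as a few rules over a limited
# action set. Speeds and gaps are invented for illustration only.

def motorway_action(ego_speed, lead_gap_m, right_lane_clear):
    """Pick one of the limited motorway actions (UK: overtake on the right)."""
    TARGET_SPEED = 31.0  # roughly 70 mph, in m/s
    SAFE_GAP = 50.0      # comfortable following distance in metres

    if lead_gap_m < SAFE_GAP:
        # Blocked by the car ahead: overtake if we can, else slow down
        return "change_lane_right" if right_lane_clear else "slow"
    if ego_speed < TARGET_SPEED:
        return "accelerate"
    return "keep_lane"

print(motorway_action(25.0, 100.0, False))  # → accelerate
print(motorway_action(31.0, 30.0, True))    # → change_lane_right
print(motorway_action(31.0, 30.0, False))   # → slow
```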
That’s all fine, and systems which will do precisely this are already available. But we don’t just drive in these controlled environments, and a great example of that happens 2 minutes into my daily commute.
Picture the scene — road curving gently to the left (the side on which we drive in the UK) past the local shop. Outside the shop at 7:15am is the delivery lorry dropping off replenishment stock, with a couple of parked cars either side for good measure. So, you have no visibility of what’s coming along the road, which is blocked on your side and curving away behind the obstruction. Assuming you’re not going to wait for the lorry to unload, you must pull out into the opposite lane to get past.
To deal with this situation, a person driving this route will gently ease out into the opposite lane in good time and make a judgement call as to whether the road is clear, all the while being prepared to stop. There’s also a wide junction to the right, and it’s not uncommon to get halfway past the assorted obstructions only for a car to appear coming the other way — I forgot to mention that this is all happening on the brow of a hill. So both cars slow to walking pace and squeeze past, often borrowing a bit of the junction on the opposite side of the road. It’s a safe process, barring the potential for some lunatic at high speed or on their phone.
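Just to show how much judgement is packed into that process, here’s a deliberately naive attempt to write the decision down as code. Every number and name is invented, and that’s rather the point: choosing thresholds that work for every blind bend, hill brow and parked lorry in the country is exactly the hard part.

```python
# A deliberately naive sketch of the "ease out and judge" decision.
# All numbers are invented; a real planner would need far more context
# (the junction, the hill brow, how long the lorry will be unloading...).

def blocked_lane_decision(oncoming_visible, visible_distance_m, ego_speed):
    """Decide how to get past a lorry blocking our lane on a blind bend."""
    BRAKE_DECEL = 6.0    # assumed hard-braking rate, m/s^2
    SAFETY_FACTOR = 2.0  # margin on top of the braking distance

    braking_distance = ego_speed ** 2 / (2 * BRAKE_DECEL)
    if oncoming_visible:
        return "stop_and_give_way"
    if visible_distance_m < braking_distance * SAFETY_FACTOR:
        # Can't see far enough ahead to commit: creep out, ready to stop
        return "ease_out_slowly"
    return "pull_into_opposite_lane"

print(blocked_lane_decision(False, 50.0, 8.0))  # → pull_into_opposite_lane
print(blocked_lane_decision(True, 50.0, 8.0))   # → stop_and_give_way
```

Even this toy version has to trade off assertiveness against safety — and, as the next paragraph says, the "right" answer involves deliberately using the wrong side of the road.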
But a lot of this manoeuvre at least bends (if not breaks) the rules of the road. Creating a self-driving car which can make this kind of decision — at the right time and not at any other time — is definitely not trivial and IMHO is a key unresolved problem.
That kind of environment is typical on UK roads and urban areas with multiple obstructions are going to be a challenge for widespread self-driving adoption.
A second problem is … us. Current driver-assist technology requires the driver to be ready to take over if the self-drive system encounters an issue. Safety systems check at intervals that the driver has their hands on the wheel. But we live in a world where people will cable-tie a drinks can to the steering wheel to fool the system into thinking someone is holding it.
OK, so some systems go beyond checking for hands on the wheel: Cadillac, for example, uses a facial-recognition camera to check that the driver is actually alert. Which is fine until someone comes up with a hack for that too (maybe a mask, as per the iPhone login hack recently demonstrated).
So we can be pretty sure that advanced driver assistance systems (ADAS) are going to be abused in some really creative ways.
I think this problem will only get worse as these systems get better. If and when we can rely on the automatic system for, say, 99% of the time, we as drivers will be far less capable of taking over in the 1% of the time when we’re needed. We’ll be less used to driving, yet we’ll be expected to snap into driving mode to handle exceptional circumstances. Presumably some of those circumstances will be highly dangerous and need a quick response.
It’s also easy to imagine someone who shouldn’t be driving at all (e.g. drunk) taking the risk that the car won’t need them on a given journey. I’m not sure I want to share the road with nearly autonomous cars which might hand over emergency control to someone who’s been drinking…
So it feels like the safest route here is to have 100% autonomous cars, with no steering wheel or need for anything other than a destination command. That’s “Level 5” autonomy, and it feels like a long way off.
This boils down to a couple of key issues (which is by no means a complete list):
- Taking the driver out of the loop — completely and all the time
- Autonomous vehicles being smart enough to break the rules — sometimes and safely
These don’t feel like easy problems to solve in the short term, although both are objectives for autonomous car research. This is why I agree with the Wired article quote from the CEO of Nutonomy: “the last 1% is harder than the first 99%”. Getting autonomous cars over the hill into mainstream use will be harder than we first thought.
I still think it’ll be worth the effort though. ;-)