5 Key Challenges Faced by Self-Driving Cars

Riti Dass
Sep 14, 2018 · 9 min read

It’s fun to ponder a future filled with self-driving cars, a world with breezy commutes where robot navigators have made deadly crashes a thing of the past. But how far off is that future, really?

Google has suggested that this driverless utopia may actually be much further away than many people realize. In a speech at SXSW in Austin, Google’s car project director Chris Urmson explained that the day when fully autonomous vehicles are widely available, going anywhere that regular cars can, might be as much as 30 years away. There are still serious technical and safety challenges to overcome. In the near term, self-driving cars may be limited to narrower situations and clearer weather.

As Lee Gomes pointed out at IEEE Spectrum, this was the most conservative roadmap yet offered by Google, which has been operating and tweaking autonomous cars for years on private and public roads. If they’re saying it’s hard, we ought to listen.

So what are the big hold-ups, anyway? Let’s dive into the obstacles that stand between us and our glorious self-driving future. None of these things are deal-breakers per se, and there are tons of smart people working on these problems. Instead, think of this as a big to-do list:

1) Creating (and maintaining) maps for self-driving cars is difficult work

First, a quick clarification: Lots of car companies, from GM to BMW to Tesla to Uber, are working on various species of autonomous technology. Some of this is partial autonomy, as with Honda’s Civic LX, a car now on the market that can stay within its lane. But for now, let’s focus on full autonomy — cars that don’t need drivers at all. And right now, Google seems to be the furthest along with that technology:

Google’s self-driving cars work by relying on a combination of detailed pre-made maps as well as sensors that “see” obstacles on the road in real time. Both systems are crucial and they work in tandem.

Before Google can test a self-driving car in any new city or town, its employees first manually drive the vehicles all over the streets and build a rich, detailed 3D map of the area using the rotating lidar sensor on the car’s roof. The sensor sends out laser pulses to gauge its surroundings, and the people on Google’s mapping team then pore over the data to categorize different features such as intersections, driveways, or fire hydrants.
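The core idea behind lidar ranging is simple to sketch: the sensor times how long each laser pulse takes to bounce back, and the round-trip time converts directly into distance. This is a toy illustration of that principle, not a representation of Google’s actual pipeline:

```python
# Toy illustration: lidar measures distance by timing a laser pulse's
# round trip. One-way distance = (speed of light * elapsed time) / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_distance(round_trip_seconds: float) -> float:
    """Convert a pulse's round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return after ~200 nanoseconds corresponds to an object about 30 m away.
print(round(pulse_distance(200e-9), 1))  # 30.0
```

Millions of such pulses per second, swept in a circle, yield the dense 3D point cloud that the mapping team then annotates by hand.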

This is a time-intensive process, but Google thinks it’s the best way forward. The idea is that building the map ahead of time can free up processing power for the car’s software to be “alert” while puttering around autonomously. The car uses the map as a reference and then deploys its sensors to look out for other vehicles, pedestrians, as well as any new objects that weren’t on the map, such as unexpected signs or construction.
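The map-as-reference idea boils down to a comparison: anything the live sensors detect that isn’t near a known map feature gets flagged for extra attention. Here’s a deliberately simplified sketch of that logic, with made-up coordinates and tolerances (the real systems work on dense point clouds, not a handful of labeled points):

```python
import math

# Hypothetical prior map: (x, y) positions in meters of known static features.
prior_map = {(10.0, 2.0): "fire hydrant", (25.0, -1.5): "stop sign"}

def flag_new_objects(detections, mapped=prior_map, tolerance=1.0):
    """Return live detections that don't match any feature in the prior map."""
    new = []
    for x, y in detections:
        if not any(math.hypot(x - mx, y - my) <= tolerance
                   for mx, my in mapped):
            new.append((x, y))
    return new

# A live scan sees the hydrant (already mapped) plus an unexpected obstacle.
scan = [(10.2, 2.1), (18.0, 0.0)]
print(flag_new_objects(scan))  # [(18.0, 0.0)]
```

The payoff is exactly what the paragraph above describes: the software spends its attention on the unmapped object at (18.0, 0.0) instead of re-deriving the whole scene from scratch.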

But relying on this mapping system poses some major challenges. Right now, Google has only built detailed 3D maps for a relatively limited number of test areas, like Mountain View. For self-driving cars to go mainstream, Google would have to build and maintain detailed maps all over the country — across 4 million miles of public roads — and update them constantly. After all, roads change a lot: Researchers at Oxford University recently tracked a single 6-mile stretch of road in England over the course of a year and found its features were constantly shifting. One rotary along the path was moved three times.

Google is confident it can pull this off — mapping, after all, is something the company is extremely good at. As more and more self-driving cars hit the road, they will constantly encounter new objects and obstacles, which they can relay to the mapping team so other cars can be updated. Still, it’s an incredibly daunting and potentially costly undertaking.


2) Driving requires many complex social interactions — which are still tough for robots

A far more difficult hurdle, meanwhile, is the fact that driving is an intensely social process that frequently involves intricate interactions with other drivers, cyclists, and pedestrians. In many of those situations, humans rely on generalized intelligence and common sense that robots still very much lack.

Much of the testing that Google has been doing over the years has involved “training” the cars’ software to recognize various thorny situations that pop up on the roads. For example, the company says its cars can now recognize cyclists and interpret their hand signals — slowing down, say, if the cyclist intends to turn.

So far, so nifty. But there are thousands and thousands of other challenges that pop up, many of them quite subtle and unpredictable. Just imagine, for instance, that you’re a driver coming up on a crosswalk and there’s a pedestrian standing on the curb looking down at his smartphone. A human driver will use her judgment to figure out whether that person is standing in place or absent-mindedly about to cross the street while absorbed in his phone. A computer can’t (yet) make that call.

Or think of all the different driving situations that involve eye contact and subtle communication, like navigating four-way intersections, or a cop waving cars around an accident scene. Easy for us. Still hard for a robot.

All this suggests that fully self-driving cars will ultimately need to be adept at four key tasks: 1) understanding the environment around them; 2) understanding why the people they encounter on the road are behaving the way they are; 3) deciding how to respond (it’s tough to come up with a rule of thumb for four-way stop signs that works every single time); and 4) communicating with other people.

There’s a long way to go in all of these areas. And reliability is the biggest challenge of all. Humans aren’t perfect, but we’re amazingly good drivers when you think about it, with 100 million miles driven for every fatality. A robot system has to perform at least at that level, and getting all these weird interactions right can make the difference between a fatality every 100 million miles and a fatality every 1 million miles.

Google’s cars are meant to be completely driverless, but more traditional car companies such as BMW or Audi are working on autonomous vehicles that can flip between computer and driver control, depending on the situation.

The huge drawback to the latter approach, as plenty of analysts have noted, is that shared control could potentially make self-driving cars much more dangerous. Imagine, say, that the human inside the car has been drifting off but then suddenly has to snap to attention to prevent a crash. (This has been a growing problem in the airline industry as autopilot becomes more prevalent.) Plus, it’s a bit of a high-wire act to hand over controls on a highway when the car is going 60 mph.


3) Bad weather makes everything trickier

Compounding these difficulties is the fact that weather still poses a major challenge for self-driving vehicles. Much like our eyes, car sensors don’t work as well in fog or rain or snow. What’s more, companies are currently testing cars in locations with benign climates, like Mountain View, California — and not, say, up in the Colorado Rockies.

This is a real, but lesser, hurdle. Weather adds to the difficulty, but it’s not a fundamental challenge. And even a car that only worked in fair weather would still be enormously valuable. It might take longer to overcome weather challenges, but this won’t derail the technology.


4) We may have to design regulations before we know how safe self-driving cars really are

Another big obstacle for self-driving cars isn’t technical — it’s political. Before self-driving cars can hit the roads, regulators are going to have to approve them for use. One thing they’re going to want to ask is: How safe are these things, anyway?

And here’s the tricky part: We probably won’t know!

Drivers in the United States currently get into fatal accidents at a rate of about one for every 100 million miles driven. Ideally, we’d want self-driving cars to be at least that safe. But it’s unlikely we’ll be able to prove that any time soon. Google only drove its cars 1.3 million miles total between 2009 and 2017 — not nearly enough to draw rigorous statistical conclusions about safety. It would take many decades to drive the hundreds and hundreds of millions of miles needed to prove safety.
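A quick back-of-the-envelope calculation shows why 1.3 million test miles can’t settle the safety question. At the human rate of roughly one fatal crash per 100 million miles, the expected number of fatal events over that mileage is a small fraction of one:

```python
# Why 1.3 million test miles can't establish human-level safety:
# at ~1 fatal crash per 100 million miles, the expected number of
# fatal events in Google's total test mileage is tiny.

HUMAN_FATALITY_RATE = 1 / 100_000_000  # fatal crashes per mile
test_miles = 1_300_000

expected_events = test_miles * HUMAN_FATALITY_RATE
print(round(expected_events, 3))  # 0.013
```

With an expected count of about 0.013 events, observing zero fatalities tells you almost nothing about whether the cars are safer or more dangerous than human drivers — hence the need for the alternative testing procedures discussed below.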

What might that look like? Regulators could come up with alternative testing procedures — such as modeling or simulations or even pilot programs in volunteer cities. We might also look to other technologies that get approved even when their safety is uncertain, such as personalized medicine. But this is going to be something to think hard about.

Apart from this, there are separate legal questions too, such as how these cars will be insured and who exactly will be liable — the driver or the manufacturer — in the event of a crash.


5) Cybersecurity will likely be an issue — though a surmountable one

Another issue is cybersecurity. How do we make sure these cars can’t be hacked? As vehicles get smarter and more connected, there are more ways to get into them and disrupt what they’re doing.

This shouldn’t be impossible to fix. Software companies have been dealing with this issue for a long time. It will likely require a culture change in the auto industry, which hasn’t traditionally worried much about cybersecurity issues.

Many car enthusiasts already modify their own vehicles to improve performance. What happens if they do this for self-driving cars and inadvertently compromise the computers’ decision-making ability? Suppose, for example, someone puts on oversized wheels that distort the car’s sense of how fast it’s going. It’s hard to stop anyone from doing that.

This could be a particular challenge if the auto industry tries to develop systems that enable different vehicles to talk to each other on the road (say, to make merging easier). The whole premise of using V2V [vehicle-to-vehicle communication] for safety is that if you get a message to slam on the brakes, you’d better be able to trust that message. But securing that system could be extremely difficult. Again, not fatal. But something to ponder.
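The "trust that message" requirement is, at bottom, a message-authentication problem. Real V2V security standards (such as IEEE 1609.2) rely on public-key certificates; the sketch below instead uses a shared-secret HMAC purely to illustrate the principle that a receiver must verify a brake alert before acting on it — every name and key here is made up:

```python
import hashlib
import hmac

# Simplified sketch of authenticating a V2V "brake" alert.
# A shared-secret HMAC stands in for the certificate-based
# signatures that real V2V systems would use.

SHARED_KEY = b"demo-key-not-for-production"

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Compute an authentication tag for an outgoing alert."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Accept the alert only if the tag checks out."""
    return hmac.compare_digest(sign(message, key), tag)

alert = b"BRAKE_HARD:vehicle=42"
tag = sign(alert)
print(verify(alert, tag))                  # True: authentic alert
print(verify(b"BRAKE_HARD:spoofed", tag))  # False: forged message rejected
```

The hard part isn’t the cryptography itself — it’s distributing and revoking keys across millions of vehicles from different manufacturers, which is why securing the system at scale could be so difficult.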


Self-driving cars are coming — but perhaps not all at once

So when might we overcome all these challenges? What most people envision when they picture autonomous vehicles probably won’t be the reality anytime soon. It’s doubtful that we’ll be able to buy a car in 2020 that we can just put our kid in and ship off to school. That kind of complete trust and autonomy is a ways off.

But limited forms of autonomy are very plausible. We already have the technology to do automatic parking in garage structures. Similarly, it wouldn’t be surprising to see self-driving buses along fixed routes or trucks that use autonomous technology to platoon and save fuel on highways. The technology is advancing rapidly, and it’s likely to become useful in all sorts of unexpected places.

The question here is, how quickly can we get this into people’s hands? If you read the papers, you see maybe it’s three years, maybe it’s 30 years. But speaking honestly, it’s a bit of both.

