Scene parsing algorithms that control self-driving cars have focused on fair-weather conditions, but UBC researchers aim to tackle rainy days. Photo: iStockphoto

Teaching KITT to drive in the rain

Self-driving cars are getting closer and closer to reality. UBC computer scientists are trying to ensure wet conditions don’t rain on the parade.

By Silvia Moreno-Garcia


In 1982, when David Hasselhoff jumped into KITT, a super-advanced Pontiac Trans Am that could drive itself, it was obvious Knight Rider was pure TV science fiction. But nowadays, with companies investing millions in autonomous vehicle research, could KITT be just around the corner?

KITT’s capabilities were astounding. But these days, David Hasselhoff doesn’t have to be the only one with a self-driving car.

The technology behind self-driving cars is advancing at an incredible pace, with companies like General Motors, Google, Tesla and Uber testing cars in San Francisco, Phoenix and Boston. And the idea of robo-cars is very appealing to younger consumers, with nearly two-thirds of Millennials willing to own a self-driving vehicle within the next decade.

But a University of British Columbia (UBC) study shows these cars may face a rather unexpected obstacle. No, it’s not KITT’s evil nemesis, KARR, but something decidedly more low-tech. Rain.

“We have computer vision algorithms which can be used by cars. But if you train a vehicle using the current algorithms, they don’t perform so well in adverse weather conditions,” says computer scientist Fred Tung, lead author of the ‘Raincouver’ study, who conducted the research at UBC.

UBC computer scientist Jim Little discusses self-driving technology.

Have you ever seen (in) the rain?

Self-driving cars use supervised learning, a type of machine learning algorithm. A human acts as a teacher, training a computer to produce the correct answer. In the case of self-driving vehicles, images manually labelled at a per-pixel level are fed into the machine so that it learns to recognize a person or a vehicle. This task of identifying and labelling every pixel in an image is known as scene parsing or semantic segmentation.
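To make that concrete, here is a minimal sketch of per-pixel supervised learning in PyTorch. It is purely illustrative: the toy network, the four label classes and the random "training" data are all stand-ins, not the model or data used in the study.

```python
# A toy illustration of per-pixel supervised learning (semantic segmentation).
# NOT the Raincouver study's model; a minimal sketch of the idea in PyTorch.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. background, road, vehicle, person (illustrative labels)

# A tiny fully convolutional network: image in, per-pixel class scores out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),  # one score per class, per pixel
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # compares predictions and labels pixel by pixel

# Stand-ins for one training example: an RGB frame and its hand-made label map.
image = torch.rand(1, 3, 64, 64)                     # the camera frame
labels = torch.randint(0, NUM_CLASSES, (1, 64, 64))  # the "teacher's" answers

optimizer.zero_grad()
scores = model(image)           # shape: (1, NUM_CLASSES, 64, 64)
loss = loss_fn(scores, labels)  # penalize every wrongly labelled pixel
loss.backward()                 # learn from the mistakes
optimizer.step()
print(f"per-pixel loss: {loss.item():.3f}")
```

In a real system, the random tensors would be replaced by dashboard-camera frames and their hand-labelled maps, and the network would be far deeper.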

Sample frame illustrating technical challenges present in video sequences. You can hardly see the person on the right, who appears as a red figure.

There are many scene parsing datasets available for self-driving cars, but they are shot under good conditions: fair weather and daytime lighting. Tung and colleagues were interested in seeing how machine learning algorithms would handle scenes shot in the rain. They attached a camera to the dashboard of a car and drove around Vancouver, collecting video footage. Then they labelled the vehicles, people and roads on screen, one frame every six seconds. By hand.
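For a sense of what that labelling cadence means in practice, a generic OpenCV sketch like the one below can pull one frame every six seconds out of a dashboard video for annotation. It is not the researchers’ actual pipeline, and the file name is hypothetical.

```python
# Illustrative only: save one frame every six seconds from a dashboard video,
# the cadence at which the Raincouver frames were hand-labelled. Generic
# OpenCV code, not the researchers' tooling; the file name is hypothetical.
import cv2

cap = cv2.VideoCapture("dashcam_vancouver.mp4")  # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30            # fall back to 30 fps if unknown
step = int(fps * 6)                              # frames in six seconds

frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:                    # keep one frame per six seconds
        cv2.imwrite(f"frame_{saved:04d}.png", frame)
        saved += 1
    frame_idx += 1
cap.release()
print(f"saved {saved} frames for annotation")
```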

The resulting Raincouver scene parsing benchmark is the first of its kind to focus on challenging rainy driving conditions during the day, at dusk, and at night.

And the algorithms didn’t handle the scenes very well.

Another frame from the Raincouver video shot by Tung and colleagues.

Streets look different in the rain and at night. There’s the glare of headlights or windshield wipers obscuring parts of an image. And people are hard to distinguish from the background when they’re standing in the dark, even to a human eye. More so for a machine.

“Can they perceive clutter? Can they see people who walk out between vehicles, or who are wearing dark clothing? Humans have trouble driving around in the rain, at night,” says Jim Little, a UBC vision systems expert and senior author of the study. “Autonomous driving systems will have the exact same difficulties.”

It’s not only that people may be hard to spot on camera at night and in the rain; the other concern is whether machines can recognize unusual situations. Computers learn to detect an object by being fed vast amounts of information. But at night, Little explains, there might not be enough data for a computer to learn from.

“People don’t necessarily walk out in front of a car a lot at night. There’s lots of vehicles, lots of roads, but not a lot of people,” he says.

I was dreaming while I drove

Considering that Vancouver gets some 161 rainy days per year, cars that can’t operate properly under these conditions would be a big issue. But Tung and Little have some ideas about how to tackle the Raincouver problem.

To help self-driving vehicles make their way around rainy cities, they propose expanding the available data sets. Filming footage in more cities, under different conditions, would give the systems more data to learn from. Synthetic images could also help computers learn.
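As a rough illustration of what “synthetic images” can mean, the sketch below fakes rain by blending thin, slanted streaks over a clear-weather frame. It is one simple idea among many, not a technique taken from the study.

```python
# A rough sketch of one kind of synthetic training data: overlaying random
# rain streaks on a fair-weather frame so an algorithm sees "rainy" examples
# it was never filmed in. Purely illustrative; not from the Raincouver study.
import numpy as np
import cv2

rng = np.random.default_rng(0)

def add_rain(image, num_streaks=400, length=12, slant_px=3):
    """Blend thin, slanted bright streaks over an image to fake rain."""
    rain_layer = np.zeros_like(image)
    h, w = image.shape[:2]
    for x, y in zip(rng.integers(0, w, num_streaks),
                    rng.integers(0, h, num_streaks)):
        cv2.line(rain_layer, (int(x), int(y)),
                 (int(x) + slant_px, int(y) + length),
                 (200, 200, 200), thickness=1)
    rain_layer = cv2.blur(rain_layer, (3, 3))        # soften the streaks
    rainy = cv2.addWeighted(image, 0.85, rain_layer, 0.35, 0)
    return cv2.blur(rainy, (2, 2))                   # mimic a wet, smeary lens

clear_frame = np.full((240, 320, 3), 120, dtype=np.uint8)  # stand-in image
cv2.imwrite("rainy_synthetic.png", add_rain(clear_frame))
```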

The researchers also suggest extending the number of categories labelled in their data set, adding buildings and trees to the mix. Other sensing modalities could help cars find their way as well, such as lasers and GPS systems used to figure out the shape of the road.

“We should also move to motion analysis. We are analyzing each frame independently, but our visual systems are very sensitive to slight movements,” says Little, who is exploring motion analysis in sports games. “These vision systems could learn to analyze the flow of images.”
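To give a flavour of what analyzing the “flow of images” looks like in code, here is a standard dense optical-flow sketch using OpenCV’s Farneback method. It is a generic, off-the-shelf technique, not the lab’s system, and the video file name is hypothetical.

```python
# A flavour of motion analysis: dense optical flow between two consecutive
# frames using OpenCV's Farneback method. A standard, generic technique --
# not the UBC lab's system; the video file name is hypothetical.
import cv2

cap = cv2.VideoCapture("dashcam_vancouver.mp4")  # hypothetical input file
ok, prev = cap.read()
ok2, curr = cap.read()
assert ok and ok2, "need at least two frames"

prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# flow[y, x] holds the (dx, dy) motion of each pixel between the two frames.
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print(f"pixels moving noticeably: {(magnitude > 1.0).sum()}")
cap.release()
```

A slight shimmer of motion between frames, invisible in any single image, is often what gives away a pedestrian stepping out from between parked cars.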

Street crossing in Taipei. Photo: Flickr, Phil Wong 黄飛立

Rain or shine, both Little and Tung believe self-driving cars will materialize. Tung says they might reduce congestion by using roads more efficiently, or help the visually impaired and the elderly get where they need to go. They could also be safer than regular cars, since computers won’t be distracted by text messages or the radio.

“You see what you see when you are driving. But a fleet of cars, all in contact with each other, can share information and learn together and improve,” says Little.

That’s something that is already happening. Tesla has so far collected more than 1 billion miles of real-world data from its customers’ cars.

However, before you start dreaming about reading a book while KITT drives you to work, researchers need to iron out these adverse weather issues. And a Vancouver downpour isn’t the only challenge.

“Of course, there’s snow, too,” says Little with a chuckle. “We haven’t driven in that!”


Learn more

F. Tung, J. Chen, L. Meng, and J. J. Little, “The Raincouver scene parsing benchmark for self-driving in adverse weather and at night,” IEEE Robotics and Automation Letters, vol. 2, no. 4, pp. 2188–2193, 2017.
