Self-driving cars

Why aren’t they a thing yet?

Self-driving cars will soon be a reality on our roads, and they will have a real impact on our society. Are we ready for the changes they will bring? Who will be the winners, and who the losers? Are we heading towards a more divided society?

Changes in laws and policy

For example, if autonomous vehicles prove safer than human drivers, then insurance will have to change.

Human error is a factor in roughly 90% of road accidents; take human drivers out of the loop and most of that risk disappears. Insurers are scrambling to prepare.

https://www.bloomberg.com/news/articles/2019-02-19/autonomous-vehicles-may-one-day-kill-car-insurance-as-we-know-it
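As a rough back-of-the-envelope illustration of why insurers are worried, consider how the expected claim cost per policy changes if autonomy removes the human-error share of accidents. All numbers below are invented for the example, not taken from the article.

```python
# Hypothetical premium arithmetic; every number here is an assumption.
# Expected claim cost per policy = accident probability * average claim size.

accident_prob = 0.05        # assumed annual accident probability per vehicle
avg_claim = 8_000           # assumed average claim size, in dollars
human_error_share = 0.90    # share of accidents attributed to human error

expected_cost_today = accident_prob * avg_claim
# If autonomy eliminated the accidents caused by human error:
expected_cost_av = accident_prob * (1 - human_error_share) * avg_claim

print(f"Expected annual claim cost per policy today: ${expected_cost_today:,.0f}")
print(f"With human error removed:                    ${expected_cost_av:,.0f}")
```

Under these made-up numbers the expected claim cost drops from $400 to $40 per policy per year, which is why premiums, and the business built on them, would shrink dramatically.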

Self-driving cars and inequality

Autonomous vehicles: Heaven or Hell?

Without intervention, autonomous vehicles in an unregulated market are likely to benefit only some, reinforce inequality, and potentially leave marginalized communities worse off.

http://greenlining.org/wp-content/uploads/2019/01/R4_AutonomousVehiclesReportSingle_2019_2.pdf

Where are we?

Some companies are getting ready to launch services. Waymo, which grew out of Google's self-driving car project, is already signing up commuters in Phoenix for an early ride-hailing service.

Machine Learning at Waymo
Waymo 360 experience
How Waymo vehicles appear to drive when followed on the road.

The Moral Machine

Is it OK to kill time? Machines used to find this question difficult to answer, but a study suggests that artificial intelligence can be trained to judge ‘right’ from ‘wrong’.

In a study published in Frontiers in Artificial Intelligence, scientists trained an AI algorithm on books and news articles to ‘teach’ a machine moral reasoning. “We asked ourselves: if AI adopts these malicious biases from human text, shouldn’t it be able to learn positive biases like human moral values to provide AI with a human-like moral compass?” says co-author Dr Cigdem Turan.

Turan explains, “You could think of it as learning a world map. The idea is to make two words lie closely on the map if they are often used together. So, while ‘kill’ and ‘murder’ would be two adjacent cities, ‘love’ would be a city far away. Extending this to sentences, if we ask, ‘Should I kill?’ we expect that ‘No, you shouldn’t.’ would be closer than ‘Yes, you should.’ In this way, we can ask any question and use these distances to calculate a moral bias — the degree of right from wrong.”
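To make the idea concrete, here is a minimal sketch of how such a ‘moral bias’ score could be computed with off-the-shelf sentence embeddings. This is not the study’s actual code; the model choice, the answer phrasings, and the scoring rule are assumptions made for illustration.

```python
# Minimal sketch of the "moral bias as embedding distance" idea described above.
# NOT the study's code: model, phrasings, and scoring are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def moral_bias(question: str) -> float:
    """Positive: the question sits closer to 'No, you shouldn't.' than to 'Yes, you should.'"""
    q, yes, no = model.encode([question, "Yes, you should.", "No, you shouldn't."])
    return cosine(q, no) - cosine(q, yes)

for question in ["Should I kill?", "Should I love?"]:
    print(question, round(moral_bias(question), 3))
```

A positive score means the question lies closer to the negative answer in embedding space, i.e. the text the model was trained on treats the action as something you shouldn’t do.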

Who should the self-driving car kill?

A road hazard appears in front of your autonomous taxi. Will it make a choice that saves you but kills others? Or will it decide to save others at the cost of its passenger?

https://www.bloomberg.com/opinion/articles/2020-02-06/morally-ethical-self-driving-cars-are-the-next-real-challenge

Who would you kill? MIT created the Moral Machine, a set of scenarios based on the trolley problem, to highlight the ethical decisions this technology will have to make and to gather human perspectives on those dilemmas. You can try it yourself and compare your views with other people’s.
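To show what ‘gathering human perspectives’ could look like in practice, here is a hypothetical sketch of one trolley-style scenario and a simple tally of crowd responses. The data structure, names, and numbers are all invented; the real Moral Machine collected millions of judgments across many scenario variations.

```python
# Hypothetical sketch of a trolley-style dilemma and a crowd-response tally.
# Structure, field names, and response data are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Dilemma:
    description: str
    option_a: str
    option_b: str

dilemma = Dilemma(
    description="Brakes fail; the car must swerve or stay on course.",
    option_a="Swerve: spare three pedestrians, sacrifice the passenger.",
    option_b="Stay: spare the passenger, hit three pedestrians.",
)

# Simulated survey responses; each entry is one person's choice.
responses = ["A", "A", "B", "A", "B", "A"]
tally = Counter(responses)
total = sum(tally.values())

print(dilemma.description)
for option in ("A", "B"):
    print(f"Option {option}: {tally[option] / total:.0%}")
```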

What do you think?

Are we heading towards a more divided society? Share your views and any relevant articles.
