Who Should a Driverless Car Save in a Crash?

Published in CognitionX AI Ethics · 3 min read · Dec 10, 2018

Research by MIT, published in the journal Nature, found that respondents most wanted an autonomous car heading for an inevitable crash to spare humans over animals, larger groups of people over smaller ones, and the young over the old. The university's Moral Machine experiment gathered ethical preferences from millions of respondents across 233 countries and territories.

Responses varied across cultural groups. For example, respondents from Asian countries such as Japan, Taiwan and Indonesia showed a much weaker preference for sparing the young over the old, while respondents from Latin American and Francophone countries showed a stronger preference for sparing women.

Check out this useful summary of the findings from the Telegraph, and this VentureBeat article on how these trends play into the broader autonomous vehicle space.

If you’re interested in the safety of self-driving cars, check out the self-driving car safety report Nvidia presented to the US government, and a new survey conducted by PSB Research and commissioned by Intel, which found that 43% of respondents don’t feel safe around autonomous vehicles.

Read on to learn about Tim Cook’s call for US federal privacy law, Honda’s work with universities on human-like AI, Amazon’s facial recognition tech, and more.

Data Privacy

Tim Cook calls for US federal privacy law to tackle ‘weaponised’ personal data

Apple CEO Tim Cook warned in a keynote speech that personal data was being “weaponised” against the public and endorsed tough privacy laws for both Europe and the US. The iPhone and Mac maker has stood out for its explicit commitment to protecting its customers’ personal data. “In many jurisdictions, regulators are asking tough questions. It is time for the rest of the world, including my home country, to follow your lead. We at Apple are in full support of a comprehensive federal privacy law in the United States,” he said. In related news, Google is bringing personal data controls directly into each of its products.

Roboethics

Honda partners with universities to investigate human-like AI

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in partnership with Penn’s School of Engineering and Applied Science and the University of Washington’s Paul G. Allen School of Computer Science & Engineering, will develop prototypes, working examples, and demonstrations of what Honda calls the “mechanisms of curiosity.” Specifically, MIT CSAIL will focus its efforts on systems capable of predicting future percepts (concepts developed as a consequence of perception) and the effects of future actions, while Penn’s engineering school and the Paul G. Allen School will develop perception models informed by biology and robots that can work safely in human environments.
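The article does not describe the labs’ actual methods, but one common way researchers formalise “curiosity” of this kind is as an intrinsic reward proportional to how badly a forward model predicts the next observation. The sketch below is a generic, hypothetical illustration of that idea (the linear model, dimensions, and variable names are all assumptions, not part of the Honda programme):

```python
import numpy as np

# A toy forward model: given the current observation and an action,
# predict the next observation. Curiosity-style agents often use the
# prediction error of such a model as an intrinsic reward signal.
rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 4, 2

# Hypothetical linear forward model with random (untrained) weights.
W = rng.normal(size=(OBS_DIM + ACT_DIM, OBS_DIM))

def predict_next(obs, action):
    """Predict the next observation from the current observation and action."""
    return np.concatenate([obs, action]) @ W

def curiosity_reward(obs, action, next_obs):
    """Intrinsic reward: mean squared error between predicted and actual next observation."""
    error = predict_next(obs, action) - next_obs
    return float(np.mean(error ** 2))

# Transitions the model predicts poorly yield higher intrinsic reward,
# nudging the agent to explore what it cannot yet predict.
obs = rng.normal(size=OBS_DIM)
action = rng.normal(size=ACT_DIM)
next_obs = rng.normal(size=OBS_DIM)
print("intrinsic reward:", curiosity_reward(obs, action, next_obs))
```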

Bias

Amazon pitches Rekognition, its facial recognition system, to ICE

Officials from U.S. Immigration and Customs Enforcement met with Amazon this summer, and the company pitched the agency on its controversial technology, which can identify people in real time by scanning faces in a video feed, according to documents obtained by the Project on Government Oversight.

AI Safety

AI won’t solve the fake news problem

Professors Gary Marcus and Ernest Davis write in the New York Times about Mark Zuckerberg’s vision for Facebook’s AI programs, which he expects will be able to detect fake news and distinguish it from more reliable information on the platform.

To get to where Mr. Zuckerberg wants to go will require the development of a fundamentally new A.I. paradigm, one in which the goal is not to detect statistical trends but to uncover ideas and the relations between them. Only then will such promises about A.I. become reality, rather than science fiction.

Public Engagement

Stanford’s new Human-Centred AI Initiative

Stanford University announced a major new initiative to create an institute dedicated to guiding the future of AI. It will support the necessary breadth of research across disciplines; foster a global dialogue among academia, industry, government, and civil society; and encourage responsible leadership in all sectors. Stanford calls this perspective Human-Centred AI.

Fairness

A new course to teach people about fairness in machine learning

To help practitioners build fairer machine learning systems, Google’s engineering education and ML fairness teams developed a 60-minute self-study training module on fairness, which is now publicly available as part of their popular Machine Learning Crash Course (MLCC).

The MLCC Fairness module explores how human biases affect data sets. For example, people asked to describe a photo of bananas may not remark on their colour (“yellow bananas”) unless they perceive it as atypical.
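This kind of skew, where typical attributes go unmentioned, is commonly called reporting bias. As a purely illustrative sketch (not part of the MLCC module), one way to surface it is to count how often an attribute is actually mentioned in a set of annotations; the captions and colour terms below are hypothetical:

```python
from collections import Counter

# Hypothetical image captions collected from annotators.
captions = [
    "a bunch of bananas on a table",
    "green bananas in a market stall",
    "a banana next to a coffee cup",
    "overripe brown bananas in a bowl",
    "bananas and apples in a fruit basket",
]

# Attribute values we care about (colour of the bananas).
colour_terms = ["yellow", "green", "brown"]

# Count how many captions mention each colour term.
mention_counts = Counter()
for caption in captions:
    for colour in colour_terms:
        if colour in caption.lower():
            mention_counts[colour] += 1

total = len(captions)
print(f"{total} captions in total")
for colour in colour_terms:
    share = mention_counts[colour] / total
    print(f"'{colour}' mentioned in {mention_counts[colour]} captions ({share:.0%})")

# If 'yellow' is almost never mentioned even though most photos show
# yellow bananas, a model trained on these captions may learn that
# 'banana' implies green or brown: an instance of reporting bias.
```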
