5 core principles to keep AI ethical

World Economic Forum
4 min read · Apr 20, 2018
The UK has proposed controlling AI with a code of ethics. Image: REUTERS/Michaela Rehle

Rob Smith, Formative Content

Science-fiction thrillers, like the 1980s classic film The Terminator, capture our imaginations, but they also stoke fears about autonomous, intelligent killer robots eradicating the human race.

And while this scenario might seem far-fetched, last year, over 100 robotics and artificial intelligence technology leaders, including Elon Musk and Google’s DeepMind co-founder Mustafa Suleyman, issued a warning about the risks posed by super-intelligent machines.

In an open letter to the UN Convention on Certain Conventional Weapons, the signatories said that once developed, killer robots — weapons designed to operate autonomously on the battlefield — “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

SpaceX and Tesla founder Elon Musk signed an open letter on AI ethics. Image: REUTERS/Aaron P. Bernstein

The letter states: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

AI must be a force for good — and diversity

This week, the United Kingdom's House of Lords Select Committee on Artificial Intelligence published a report drawing on evidence from over 200 industry experts. Central to the report are five core principles designed to guide and inform the ethical use of AI.

The first principle argues that AI should be developed for the common good and benefit of humanity.

The report’s authors argue the United Kingdom must actively shape the development and utilisation of AI, and call for “a shared ethical AI framework” that provides clarity about how this technology can best be used to benefit individuals and society.

They also say the prejudices of the past must not be unwittingly built into automated systems, and urge that such systems “be carefully designed from the beginning, with input from as diverse a group of people as possible.”

Intelligibility and fairness

The second principle demands that AI operates within parameters of intelligibility and fairness, and calls for companies and organisations to improve the intelligibility of their AI systems.

“Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society,” the report warns.

Can robots and humans live in harmony? Image: REUTERS/Francois Lenoir

Data protection

Third, the report says artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

It says the ways in which data is gathered and accessed need to be reconsidered, to ensure that companies have fair and reasonable access to data while citizens and consumers can also protect their privacy.

“Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the government … to review proactively the use and potential monopolisation of data by big technology companies operating in the UK.”

Flourishing alongside AI

The fourth principle stipulates all people should have the right to be educated as well as be enabled to flourish mentally, emotionally and economically alongside artificial intelligence.

For children, this means learning about using and working alongside AI from an early age. For adults, the report calls on the government to invest in skills and training to mitigate the disruption caused by AI in the jobs market.

Automation could eliminate millions of jobs globally. Image: Statista

Confronting the power to destroy

Fifth, and aligning with concerns around killer robots, the report says the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

“There is a significant risk that well-intended AI research will be misused in ways which harm people,” the report says. “AI researchers and developers must consider the ethical implications of their work.”

By establishing these principles, the UK can lead by example in the international community, the authors say.

“We recommend that the government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence.”


Originally published at www.weforum.org.
