Can we teach morality to machines? Three perspectives on ethics for artificial intelligence

Before giving machines a sense of morality, humans have to first define morality in a way computers can process. A difficult but not impossible task.

Slava Polonski, PhD
7 min read · Dec 19, 2017


Today, it is difficult to imagine a technology as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout its potential to become the biggest driver of positive change in business and society, the lingering question on everyone's mind is: "Well, what if it all goes terribly wrong?"

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems…
