Yes, We Can Create Ethical AI

Researchers say baking in privacy, transparency, and fairness will improve machine decision-making

By Paula Klein

Algorithmic decision-making has been criticized for its potential to lead to privacy invasion, information inequities, opacity, and discrimination. However, IDE co-lead and MIT Professor Alex Pentland and his collaborators say more human-centric Artificial Intelligence (AI) approaches can be incorporated swiftly in the areas of privacy and data ownership, accountability and transparency, and fairness, provided there is a collaborative and concerted effort.

In a recent research paper, Ethical Machines: The Human-centric Use of Artificial Intelligence, the authors urge multi-disciplinary teams of researchers, practitioners, policy makers, and citizens to prioritize, co-develop, and evaluate these processes to benefit all.

The use of machine learning algorithms to address social issues, from determining creditworthiness to evaluating job candidates or college applicants, is skyrocketing alongside the proliferation of human behavioral data and AI capabilities. The main motivation for using the technology in these scenarios is to overcome the shortcomings of human decision-making.

“Machine learning algorithms can perform tasks faster, process significantly larger amounts of data than humans can, they don’t get tired, hungry, or bored, and they are not susceptible to corruption or conflicts of interest,” according to the authors.

Focused Attention

At the same time, the speed and efficiency of machine decision-making have raised new worries. Awareness of these issues is a first step that should then lead to widespread adoption of “ethical machines.” The opportunity to significantly improve the processes behind decisions that affect millions of lives is enormous, Pentland writes.

“As researchers and citizens we believe that we should not miss this opportunity. However, we should focus our attention on existing risks related to the use of algorithmic decision-making processes.”

Specifically, privacy, transparency, and fairness must be baked into machine learning and AI design.

“If we honor these requirements, we would be able to move from the feared tyranny of AI and of algorithmic mass surveillance to a human-centric AI model of democratic governance for the people,” Pentland writes.
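
To make the idea of “baking in” fairness slightly more concrete, one common first step, not described in the paper itself, is a simple pre-deployment audit that compares a model’s positive-decision rates across demographic groups. The sketch below is a minimal, hypothetical Python illustration; the data, group labels, and threshold for concern are all invented for the example.

```python
# A minimal, hypothetical fairness audit: compare positive-decision rates
# across groups before a model is deployed. All data here is invented.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group rates), where gap is the largest difference
    in positive-decision rate between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approved) for two applicant groups.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'A': 0.6, 'B': 0.4}
print(gap)    # roughly 0.2; a large gap would flag the model for human review
```

A check like this is only one ingredient. The paper’s broader argument is that such safeguards, along with privacy and transparency measures, should be co-developed and evaluated by researchers, practitioners, policy makers, and citizens rather than bolted on afterward.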
