CODE 2016 Fireside Panel — “The Tyranny of Algorithms?”

In recent years, machine learning algorithms have come to guide many socio-technical systems that affect our lives and welfare. Such algorithms make recommendations and drive decisions about what we read, whom we befriend, which ads we see, our jobs and job prospects, college admissions, loan applications, and many other important life choices.

Recent research has shown that such algorithms can introduce and enable bias and discrimination in a number of ways. We also suspect that, depending on how they are designed, such algorithms could instead be used to combat bias and discrimination.

In this panel at the Oct. 15 CODE conference at the MIT IDE, panelists explored how machine learning and algorithmic decision-making can reinforce or overcome stereotypes, inequality, and discrimination. They also discussed possible solutions to this dilemma, including whether experimentation itself could offer a way to maximize the benefits of algorithmic thinking while minimizing its risks.

Moderator: Sinan Aral (MIT)

Panelists: David Parkes (Harvard), Alessandro Acquisti (CMU), Catherine Tucker (MIT), Sandy Pentland (MIT), Susan Athey (Stanford).

Originally published at
