Panel Examines the Humans Behind Machine Bias

Tough questions about the ubiquity of algorithms and how they can go awry

MIT IDE
MIT Initiative on the Digital Economy
4 min read · May 29, 2018


By Paula Klein

While machine learning technology has the potential to remove human bias from decision-making, it’s becoming increasingly clear that automated segmentation algorithms can also exacerbate the problem, especially in areas such as hiring, lending, and credit services.

At the recent conference, The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor in New York, the topic of bias was a common thread throughout the day. (See Where Humans Meet Machines: Intuition, Expertise and Learning and Is Technology Outpacing Organizations? for more coverage).

Moderator Renee Richardson Gosline

During a panel on The Biases of Humans and Machines, moderator Renee Richardson Gosline, Senior Lecturer and Research Scientist at MIT Sloan, led a frank discussion of the ubiquity of algorithms and how they can go awry. Panelists tackled the thorny issues of individual and societal responsibility for addressing the risks of algorithmic bias, and how to raise awareness of these new threats.

Cathy O’Neil, a data scientist and author, noted that “About five years ago I started to realize that every industry was using these formulas to determine who are the winners, who are the losers — and those labels were staying with us for life, but we didn’t even know it.” It was everywhere, she said, yet “there’s no appeal system if it is incorrect. I began to question the trustworthiness of these algorithms. And that’s one of the reasons I wrote the book, Weapons of Math Destruction.”

Gosline likened this type of labeling to a form of “branding,” where people don’t have any recourse to say, ‘Hey, hang on. This doesn’t really represent me.’

Arianna Huffington

Arianna Huffington, Founder and CEO, Thrive Global, said that people are realizing “that advances in technology are not all universally good. There can be unintended consequences.” From a societal perspective, she worries that “instead of us being in control of [technology], it is in control of us…There is nothing wrong with AI ultimately being more intelligent than we are, but we are not becoming any wiser or more empathetic in the process.” In her view, “We need to learn to set boundaries to our relationship with our technology.”

The panelists described patterns where “archetypes of success” seep into software and algorithm design, creating bias. And that’s what platforms like Blendoor are trying to correct.

Stephanie Lampkin, Blendoor Founder and CEO, said her company is “working to mitigate unconscious bias in hiring” because it remains “one of the clearest cases where our unconscious biases and our idea of who can be successful and why, comes to light.” Blendoor tracks how far different demographics make it into a search, and it also publishes a corporate equality, diversity, and inclusion index every year “measuring different ways in which you could be representing inclusion and equity. We hope that accountability will drive better behavior before lawsuits do.”

Blendoor CEO Stephanie Lampkin

Lampkin is also frustrated that “there really are no checks and balances” on these algorithms. “How do we know that a Stanford grad in computer science will be good for this role, on this team? Have you tested that? Have you tried a Penn State grad in that role and that team? Surprisingly, in an industry so driven by data and metrics, there isn’t sufficient analytics. I’m hoping to drive that. Let’s remember, humans are designing these tools.”

For her part, Huffington is aiming at human interactions on a different level. “At a time of maximum digital connectivity, loneliness is growing at an exponential rate, and people have never felt more disconnected,” she said. “The reason I remain an optimist is because we are having this conversation; we are asking questions.”

Cathy O’Neil

O’Neil is disturbed by the cumulative effects of “arbitrary profiling on society — whether we’re applying to college, applying for jobs, applying for loans, applying for mortgages — algorithms are not just predicting, but creating our future…”

“It’s not that we’re not good at all of this; we’re very good at this, and that’s a problem too.” As solutions, the panel recommended more and better testing of algorithms, a certification process, and more human awareness and control.

Watch the full video here. Further research on the topic can also be found here.
