Introducing AI Blindspot: A Call for Tech to Think Holistically and Spot Risks

AI Blindspot
Berkman Klein Center Collection
Oct 22, 2019

How can teams prevent structural inequalities and their unconscious biases from affecting artificial intelligence systems?

The AI Blindspot project offers a discovery process to help organizations combat biases and build better AI systems. The project was created during the 2019 Assembly program, organized by the Berkman Klein Center and MIT Media Lab.

AI Blindspot aims to stimulate discussion and deliberation about the pitfalls of AI systems. To do this, we created a website and a set of printed cards as both a provocation and a tool.

A discovery process for spotting unconscious biases and structural inequalities in AI systems

Mitigating Unconscious Biases and Structural Inequalities

Blindspots can arise from oversights in a team's workflow, from unconscious biases, or from structural inequalities embedded in society. For example, if a team does not research and think carefully about who will use a product from the beginning, it may build on unrepresentative data and cause harm once the product is in use. We've seen this happen in the healthcare sector. Researchers in Germany, the USA, and France developed an algorithm that detects skin cancer more accurately than dermatologists, but only on light skin tones. Because no demographically diverse skin cancer dataset existed, the algorithm was trained on unrepresentative data.

AI Blindspot card: Representative Data
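As a purely illustrative sketch (not part of the AI Blindspot materials), here is one way a team might check for this blindspot before training: tally a demographic attribute across the training set and flag any group that falls below an agreed minimum share. The dataset, the skin_type field, and the 25% threshold below are hypothetical placeholders.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.25):
    """Tally how often each value of a demographic attribute appears
    in the training data and flag groups below a minimum share.
    The 25% default threshold is arbitrary and only for illustration."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {
        value: {
            "count": n,
            "share": n / total,
            "underrepresented": n / total < min_share,
        }
        for value, n in counts.items()
    }

# Hypothetical skin-lesion dataset labeled by Fitzpatrick skin-type group.
training_data = [
    {"image": "img_001.png", "skin_type": "I-II"},
    {"image": "img_002.png", "skin_type": "I-II"},
    {"image": "img_003.png", "skin_type": "I-II"},
    {"image": "img_004.png", "skin_type": "I-II"},
    {"image": "img_005.png", "skin_type": "V-VI"},
]

for group, stats in representation_report(training_data, "skin_type").items():
    flag = "UNDERREPRESENTED" if stats["underrepresented"] else "ok"
    print(f"{group}: {stats['count']} samples ({stats['share']:.0%}) {flag}")
```

A check like this is only a starting point: deciding which attributes matter and what counts as adequate representation requires the kind of deliberation the cards are meant to prompt.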

Blindspots can occur at any point in a model's development, from the moment it is first conceptualized, through building, and even after deployment. No one is immune to blindspots: we can roughly point to where they lie, yet they remain hard to perceive directly. The same is true of how algorithmic technologies are developed. Even with the best of intentions, the things we never anticipated can end up causing great harm.

The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities. These harms can be mitigated if we all intentionally take action to guard against them.

With recent calls for the tech industry to take greater responsibility for its dual-use inventions, we need "systematic consideration of the harms that might occur" before products are designed and launched. We offer AI Blindspot as a provocation to help us build better systems.

Help Us Improve The “AI Blindspot” Project at MozFest 2019

We are engaging with stakeholders across the AI community, including policymakers, activists, journalists, product managers, and academic researchers, through user tests, surveys, and workshops, so we can gather feedback and iterate on the content, format, and presentation of this tool. And we need your help!

Two of our AI Blindspot team members, Ania Calderon and Hong Qu, will be running a workshop at MozFest on October 26. If you are attending MozFest, we encourage you to join us. We will be handing out sets of AI Blindspot cards to participants.

You can also find us on Twitter at @aiblindspot or email us at info@aiblindspot.com.
