Technology amplifies injustice. But we can change that.

Hanna Naima McCloskey
Fearless Futures
Apr 3, 2018

As technology's capabilities have advanced, including the deployment of artificial intelligence, its concerning impact on inequalities and exclusion, from criminal justice to recruitment to consumer finance, has become increasingly evident.

What is certain is that, given the growing significance of technology across our daily lives, it must be designed with an explicit focus on making its impact on people's lives inclusive. Too many people's lives, and their safety, depend on it.

So here are three principles that can inform design practice for technologists who want to design inclusion into their technology:

ONE: Technology isn’t inherently good, and it’s not neutral either

Given the profound impact that technology has had on the world, there is a tendency in some quarters to see technology as a social good (curing cancer, enabling the Arab Spring) or, at worst, as neutral. It is neither. Technology is a tool, and a tool wielded by humans. The harmful consequences of AI that we have seen are not the result of thoughtlessness or neutral decision-making. As a designer, not asking a question needs to be acknowledged as a decision. Not seeking out certain data is itself data, as Dr. Chanda Prescod-Weinstein notes: it is data on who matters and who doesn't. Not assessing data for injustice is a decision to leave that injustice in place. Are your teams asking themselves which groups they are not considering? Or which groups are hypervisible and targeted in the data, and what the implications are for their lives and realities?
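
To make that question concrete, here is a minimal, hypothetical sketch of the kind of check a team could run before training anything: which groups are absent or barely present in the data, and which are disproportionately represented? The column name, reference shares, and tolerance below are assumptions for illustration, not a prescribed method.

```python
# Hypothetical audit: compare group representation in training data
# against a reference population before any model is trained.
# The column name "ethnicity" and the reference shares are illustrative only.

import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          reference_shares: dict, tolerance: float = 0.5):
    """Flag groups that are absent, under-represented, or over-represented.

    tolerance=0.5 means: flag if a group's share of the data is less than
    half, or more than 1.5x, its share of the reference population.
    """
    data_shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, ref_share in reference_shares.items():
        share = float(data_shares.get(group, 0.0))
        if share == 0.0:
            status = "ABSENT"  # not seeking this data is itself data
        elif share < ref_share * (1 - tolerance):
            status = "UNDER-REPRESENTED"
        elif share > ref_share * (1 + tolerance):
            status = "OVER-REPRESENTED (hypervisible?)"
        else:
            status = "roughly proportional"
        rows.append((group, ref_share, share, status))
    return pd.DataFrame(rows, columns=[group_col, "reference_share",
                                       "data_share", "status"])

# Example usage with made-up numbers:
# print(representation_report(training_df, "ethnicity",
#                             {"Group A": 0.60, "Group B": 0.25, "Group C": 0.15}))
```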

TWO: Let’s stop talking about bias, and start talking about oppression

The mainstream conversation on technology's impact on inequality typically refers to algorithmic bias. Yet bias is too simple a concept to be useful here, and how we name a problem invariably informs the solutions we develop to challenge it. Instead, we need to get real about the ways in which systems of oppression operate and about the role of history in our present and future (in the historic data we train machines on, for example). A much more useful term is algorithmic oppression, coined by scholar Safiya Noble. If we are to design for inclusion, understanding systems of oppression, and the role we play in maintaining them, is as much a need for technologists as applied mathematics and distributed computing.
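
As a hedged illustration of how history travels through training data: if a model is fit on past decisions that were themselves exclusionary, it will faithfully reproduce that exclusion. Everything below is synthetic and invented; it is a sketch of the mechanism, not anyone's real data or system.

```python
# Illustration only: a model trained on historically exclusionary decisions
# learns to reproduce them. All data here is synthetic and invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two equally qualified groups; historic decision-makers selected group 1
# far less often at the same qualification level.
group = rng.integers(0, 2, size=n)          # 0 or 1
qualification = rng.normal(0, 1, size=n)    # same distribution for both groups
hired_prob = 1 / (1 + np.exp(-(qualification - 1.5 * group)))
hired = rng.random(n) < hired_prob          # the history encodes the exclusion

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Same qualification, different group: the model repeats the old pattern.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```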

THREE: Our intentions are irrelevant, prioritise impact

The bar for impact was set very low with Google's original "don't be evil"! Tech has a history of shrugging its shoulders and blaming its users when 'unintended consequences' emerge, rather than holding itself and the infrastructure it designed responsible. Yet if I run you over in my car and say 'I didn't intend it', that won't mend your broken leg, and we both know it. What if, in designing technology, our assumption was that its impact would be harmful? What if the starting assumption was that oppression (racism, sexism, classism, homophobia and so on) would be perpetuated by our technology unless we explicitly built in ways to challenge it? How might that change the way products are developed? How might that inform the urgency with which we tackle the lack of diversity in our teams? Our intentions are unverifiable, resting on self-certification alone, but our impacts can be measured and accounted for. So let's focus on what we can control in the real world.
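
One hedged way to act on "impacts can be accounted for" is to measure outcomes by group before release, rather than relying on stated intent. The sketch below checks a simple disparate-impact ratio; the group labels are placeholders and the 0.8 threshold echoes the common "four-fifths rule", an assumption here rather than a universal standard.

```python
# Hypothetical pre-release check: assume harm until the measured impact
# says otherwise. Group labels and the 0.8 threshold are assumptions.

from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected being True/False."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ok(decisions, threshold=0.8):
    """Return (ok, rates): ok is False unless every group's selection rate
    is at least `threshold` times the highest group's rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    ok = all(rate >= threshold * highest for rate in rates.values())
    return ok, rates

# Example with invented outcomes:
outcomes = [("Group A", True)] * 80 + [("Group A", False)] * 20 \
         + [("Group B", True)] * 50 + [("Group B", False)] * 50
print(disparate_impact_ok(outcomes))  # (False, {'Group A': 0.8, 'Group B': 0.5})
```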

These are complex issues. They aren't going away, and the three principles above aren't enough on their own. But if we begin with them, we will start to grow our capability to design for inclusion, both inside and outside our organisations.

Hanna Naima McCloskey
CEO @ Fearless Futures. Educator. Innovator. Design for Inclusion.