Intent, impact and harm: Why we need to think about design ethics

Kate Every
Hippo Digital
7 min read · Mar 4, 2022


First, do no harm

Considering the impact of our design decisions matters. Without due care, technology has the capacity to create harm at monumental scale and speed. We need to take responsibility for the part we play and consider outcomes as well as intention when we design services that are used by others.

Last week at the Academy

Last week in our Hippo Digital Academy we did a deep dive into ethical design with our new cohort of Trainee Hippos. We started by discussing the complexity of defining the concept and how some definitions of “design” are limited.

Design is the process and art of planning and making detailed drawings of something — Collins Dictionary

Design is… The intent behind an outcome — IBM

Some definitions, such as those given above, focus on the process of designing, or the intent of the designer. They don’t talk about the actual outcome, the “thing” that is designed — whatever that might be. As a group, we talked about whether we have a responsibility for the outcome of our designs or just the process of designing. We felt that, as people designing products and services for other people, we are responsible for those outcomes.

I’d be interested to hear whether you agree. Where does the responsibility of the product & design team begin and end? Are we responsible for the designs, products and services we put out into the world — or is that someone else’s responsibility, a stakeholder’s, maybe?

Does ‘intent’ matter?

Design is… the intent, and unintentional impact behind an outcome — Creative Reaction Lab

I previously discussed the concept of intent in relation to ethics when I was struggling to define ethical design: “Duty-based (or deontological) ethics asserts that motives matter more than outcomes. So if your intent was good, you behaved ethically, regardless of the outcome.”

There are countless examples of the ways in which designs cause harm as a result of unintended consequences. As such, I would argue that intent actually matters very little when it comes to our responsibility to our users. Good intentions do not necessarily equate to positive outcomes for users. What matters is impact.

Take the example of Google’s photo-tagging algorithm, which automatically groups users’ photos. We can assume this was designed to improve the usability of the Google Photos service and to make life easier for people. The unintended impact, however, was that a black woman was incorrectly categorised as a “Gorilla” multiple times by the algorithm. This was upsetting and degrading for the people involved. It also tied into racist narratives and stereotypes, reinforcing discriminatory language and views.

The friend who spotted the mistake, 22-year-old freelance web developer Jacky Alciné, said: “[It’s] a term that’s been used historically to describe black people in general. Like, ‘Oh, you look like an ape,’ or ‘you’ve been classified as a creature …[and] of all terms, of all derogatory terms to use, that one came up.”

And this isn’t the only instance of this kind of mistake originating from machine learning technology. Flickr has also received complaints about auto-tagging people in photos as “animals,” and concentration camps as “jungle gyms.”

The reason this is a design problem, and not just an error in the technology, is that the technology is itself designed. Machine learning and deep learning models work by being trained on huge datasets until they can start recognising patterns, objects or words. They can process millions of pieces of data (like images) to build up a model that enables them to make predictions, such as how to categorise a photo. The problem is that when the data is not controlled for human bias in the first place, the machine will learn our existing biases. If you feed it a dataset of images of white people, it will become incredibly good at identifying white faces. But what does that mean for people who aren’t white? The technology cannot be relied upon to identify them correctly.
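To make that mechanism concrete, here is a minimal sketch in Python (NumPy only, entirely synthetic made-up data — not the actual Google or Flickr systems). A toy nearest-neighbour classifier is trained on two groups, one of which is badly under-represented, and the error rates it produces are then measured per group:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings": two groups with overlapping feature clusters.
# Group A is heavily over-represented in the training data; group B is not.
train_a = rng.normal([0.0, 0.0], 1.0, size=(1000, 2))
train_b = rng.normal([2.0, 2.0], 1.0, size=(20, 2))
train_x = np.vstack([train_a, train_b])
train_y = np.array([0] * len(train_a) + [1] * len(train_b))  # 0 = A, 1 = B

def knn_predict(points, k=5):
    """Classify each point by majority vote of its k nearest training examples."""
    preds = []
    for p in points:
        dists = np.linalg.norm(train_x - p, axis=1)
        nearest_labels = train_y[np.argsort(dists)[:k]]
        preds.append(int(nearest_labels.sum() > k / 2))
    return np.array(preds)

# Evaluate on balanced test sets drawn from the same two distributions.
test_a = rng.normal([0.0, 0.0], 1.0, size=(500, 2))
test_b = rng.normal([2.0, 2.0], 1.0, size=(500, 2))

error_a = np.mean(knn_predict(test_a) != 0)
error_b = np.mean(knn_predict(test_b) != 1)
print(f"Misclassification rate, over-represented group A:  {error_a:.1%}")
print(f"Misclassification rate, under-represented group B: {error_b:.1%}")
```

There is no malice anywhere in that code; the model simply reflects what it was shown. That is why the choice of training data is a design decision, and why its consequences fall unevenly.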

This is exactly what happened in the case of Robert Williams, a 43-year-old father who lives in the Detroit suburbs. He was incorrectly identified by facial recognition technology as being the perpetrator of a robbery. He was arrested, interrogated and held in custody for 30 hours before his release. The police ran a dimly lit image from a surveillance camera through the facial recognition system. The system misidentified Williams as a possible match based on his old driving licence photo.

In this situation, racial bias was also at play. Williams is an African American man, and a federal study into facial recognition technology found that:

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

Women were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups, the study found. Middle-aged white men generally benefited from the highest accuracy rates.

Again, the issue is to do with the accuracy of the technology, because it has not been trained using diverse data. This means that traditionally marginalised groups are disproportionately affected by the negative outcomes of the technology.

Prosecutors dropped the case two weeks later due to insufficient evidence. Williams was humiliated, and his young daughters were traumatised at seeing their father hauled away, but he was free to go. The consequences, though, could have escalated from upsetting to deadly. Williams wrote of his ordeal: “As any other black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly — even though I knew I had done nothing wrong.” As we know from countless news stories, black men in America are particularly vulnerable in interactions with the police. There is a lot at stake when technology makes this kind of mistake.

For more on the danger of facial recognition technology, I would encourage you to watch the film Coded Bias which investigates algorithmic bias. It is a fascinating look into how the tech works — and it explains the issues a lot more eloquently than I have done here!

In these stories, we assume that the intent of the designers and technologists was good. They did not set out to cause harm. They may have followed the design thinking process and designed with empathy. Even so, the impact of the designs was harmful to people, so does their ‘good intent’ really matter?

A spectrum of harms

These are just two examples — there are countless other stories of the harmful impact of tech and ill-considered design. Impacts range from the merely frustrating, as in the case of dark patterns, to the tragic and exclusionary, reinforcing existing discrimination. Specific design choices can even lead to wrongful incarceration, death, or widespread disruption to democracy.

A spectrum of harms: Not all impacts are equal. Tech-created harms can range from the mildly annoying to the deadly.

This is not meant to demonise the design or product teams that shipped these products. Most of the time, the intent was good. Or at least not deliberately malicious (not you, Cambridge Analytica, you are firmly in the bad intent/bad impact box 😡). Many of us have been in high-pressure environments with tight delivery deadlines and decisions that are outside of our control. We could do some root cause analysis on these examples and probably find a range of reasons why the products ended up as they did. Maybe it came down to a specific decision. But it might just as well be that delivery timelines didn’t allow time to fully consider the potential impact.

This isn’t about blame. It’s about realising that our decisions have real consequences on people’s lives. It’s also about empowering ourselves to recognise opportunities where we can intervene if something doesn’t feel right.

We are gatekeepers

The work we produce can be responsible for creating harm at a significant scale. Facebook could choose to refuse to fix something that might be harmful to 0.1% of its user base. Surely that’s an edge case, right? That’s a TINY proportion of their users.

But that 0.1% is 3,000,000 people. THREE MILLION.

Those are real people, with lives and families, hopes and dreams. When our designs impact people at scale, we have a responsibility to try and mitigate potential harms.
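For the sake of the arithmetic, here is the back-of-the-envelope calculation, assuming roughly three billion monthly active users (an approximation of Facebook’s publicly reported scale around this time, not an official figure):

```python
# Assumed figure: ~3 billion monthly active users (approximation, not official).
monthly_active_users = 3_000_000_000
edge_case_share = 0.001  # "only" 0.1% of the user base

affected = monthly_active_users * edge_case_share
print(f"{affected:,.0f} people affected")  # -> 3,000,000
```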

Scale is what turns WMDs [Weapons of Math Destruction] from local nuisances into tsunami forces, ones that define and delimit our lives. — Weapons of Math Destruction, Cathy O’Neil

If I can leave you with one thing, it’s this. In our roles as designers of services, we are uniquely positioned to influence what gets built. We might not always feel like it, but we have a significant amount of power just by being the people in the room. We have far more power to influence outcomes than the thousands or millions of people who will end up using our services.

As designers, we need to see ourselves as gatekeepers of what we are bringing into the world, and what we choose not to bring into the world. — Mike Monteiro, Ruined by Design: How Designers Destroyed the World, and What We Can Do to Fix It

Start thinking of yourself as a gatekeeper. And remember that you have the power to ask questions and have challenging conversations.

Sometimes these things don’t get considered because we assume it’s someone else’s responsibility. Make it your responsibility.

What do you think?

Does ‘good intent’ matter when designing for people?

Who is responsible for the products and services we, as designers, create?

Share your thoughts! I would love to hear from you. I’m on LinkedIn.


Kate Every
Hippo Digital

Service Designer working on public services and committed to design ethics and trauma-informed practice