Designing for Unconscious Biases: An Interview with David Dylan Thomas

Content strategist David Dylan Thomas discusses various cognitive biases using real-world examples and explains how we can minimise their impact — or use them for good

Oliver Lindberg
UX and Front-End Interviews
8 min read · Oct 31, 2022


Photo by Ryan Lash / TED

David Dylan Thomas, a content strategist at Philadelphia-based experience design firm Think Company, first discovered his interest in cognitive biases when he saw Harvard professor Iris Bohnet give a talk called ‘Gender Equality by Design’ at South by Southwest in 2016.

“It was the first time I really saw the connection between hiring bias — around gender and race — and pattern recognition,” he says. “Iris explained that this bias doesn’t always come from someone who is overtly sexist or racist but from people who have seen the same pattern all their lives. They almost can’t help but keep thinking in this pattern when they decide to hire somebody. It’s just stuck in their head. When I learned that something as evil as racial or sexual bias could be boiled down to something as simple as pattern recognition, I decided I needed to learn everything I could about cognitive bias.”

David defines cognitive bias as a series of shortcuts our minds take to get through the day. “Even right now, I’m making decisions about the tone of my voice, how to sit or what I should look at,” he explains. “If I thought carefully about each one of those decisions, I would never get anything done. So most of the time our minds are on autopilot, which is a good thing, but every now and then these shortcuts can lead to errors we call cognitive biases.”

Confirmation bias…

One of the most common biases that affect developers, UXers and people who build digital products is confirmation bias. We think we’ve come up with a great solution to a problem and get married to it. Then, when we assess how valid that position is, we overvalue evidence that confirms what we already believe, while dismissing any that disproves it. “That bias is as old as time,” David points out. “It’s why the scientific method was invented and it’s why we always have to check ourselves.”

A lot of confirmation bias comes back to pattern recognition and seeing patterns that we are used to. It has a big impact on our workforce and means some groups find it harder to get ahead. “Typically, the picture that people have in their heads when they think of web developers or UX professionals tends to be of white males,” David warns. “Even if you don’t consciously believe in your heart that white men are better at web development than women, if your unconscious bias tells you that’s the pattern you’ve seen all your life and you’re looking at resumes, you’ll still prefer the man. They’ve done experiments with identical resumes that featured different names at the top. The female names would typically get left behind, while the men moved forward.”

… and how to avoid it

One way to remove bias from the hiring process is to employ blind hiring techniques. When the City of Philadelphia recently hired for a web developer position, they got an intern with no stake in the hiring process to print out the resumes and redact the names before they were passed on. Managers, however, then checked out applicants’ profiles on GitHub, which immediately revealed all the personal information. So they built a Chrome extension that blinded the GitHub page before it loaded so it wouldn’t spoil the process, and made it publicly available.
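The article doesn’t include the extension’s source, but the core idea is simple: hide identifying elements before the page paints. A minimal sketch of such a content script, assuming GitHub’s profile-page class names (which are not confirmed by the article and may change), could look like this:

```typescript
// Illustrative sketch only, not the City of Philadelphia's actual
// extension. Assumes a manifest.json registers this content script
// on github.com with "run_at": "document_start", so the styles apply
// before identifying elements are painted.
// The class names below are assumptions about GitHub's markup.
const blindingStyles = document.createElement("style");
blindingStyles.textContent = `
  .p-name,          /* profile display name */
  .p-nickname,      /* username */
  img.avatar,       /* profile photo */
  .vcard-details {  /* location, employer, links */
    visibility: hidden !important;
  }
`;
document.documentElement.appendChild(blindingStyles);
```

Hiding rather than removing the elements keeps the page layout intact; a real extension would also need to cover avatars and names wherever else they appear, such as in commit histories and READMEs.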

David came across another example of confirmation bias when a friend vetted tech-for-good projects funded by the UK National Lottery. The friend found that people sometimes designed products in ways that were well intended but could actually be very harmful. “There was this app that was going to help sex workers report customers who were violent,” David remembers. “The design, however, used a lot of bright colours. If you think about the scenario, maybe they’re in an alleyway or somewhere they don’t want to be detected. They turn on this app and it floods the room with light. It could be a really dangerous interaction. So my friend started implementing this solution called Red Team Blue Team, which is actually a very old approach used by the military and journalists.”

The idea is you have a blue team made up of designers and developers, who start to form an idea. When they’re just about ready to prototype, they stop and the red team comes in. It’s made up of a similar group of people but they’re just there for one day and their job is to go to war with the blue team. They identify any potentially harmful side effects that the blue team never thought of because they were in a blind spot.

Photos by Sean Tubridy / Confab Events (left) and Think Company (right)

Understanding déformation professionnelle

To learn more, David turned to the RationalWiki list of cognitive biases, which covers nearly 100 different types, broken down into sections. He then launched the Cognitive Bias Podcast and began to discuss one bias per episode.

Another important bias is called ‘déformation professionnelle’, which means you view the world through the lens of your job, no matter what’s going on. “An example might be the paparazzi who ran Princess Di off the road,” David explains. “They did something terrible but in their mindset, in that moment, they were doing an excellent job of being paparazzi. They were getting really difficult photographs that were going to fetch them a really high price. They weren’t doing a good job of being people, though, and so this bias can potentially be very dangerous.”

When we design systems or create algorithms that use data, it’s important to get multiple perspectives. David suggests we define our jobs more broadly and not just focus on the most elegant solution but think in terms of our society. “We can fight this bias by setting codes of conduct or rules for what we will or won’t work on together,” he says. “Design is getting to the point where people are starting to stand up. We’ve seen people at Microsoft and Google refuse to work on certain projects and essentially go on strike until the company drops them.”

Tackling bias in AI

We also need to be more careful about the data we feed artificial intelligence, David warns. As the data comes from humans, it can be biased and cause harm. “A few years ago Amazon used a hiring bot, which was going to look at a bunch of resumes and help people make decisions about who to hire,” David remembers. “The data was based on their last five years of resumes, which might seem reasonable until you realise that most of the applicants had been men. So naturally the AI started to believe that men were better at the job than women. It detected whether a college on someone’s resume was likely to be a women’s college and then discounted the resume based on that. It was actually being very clever but in a very biased way, because it was fed the wrong data.”
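Amazon’s system was never made public, but the underlying mechanism is easy to demonstrate. In this toy sketch (all data and names below are invented), a naive scorer built on skewed historical outcomes ends up discounting a proxy token purely because of who was hired in the past:

```typescript
// Toy demonstration with invented data: a scorer trained on biased
// historical hiring outcomes learns to punish a proxy feature.
interface PastResume {
  collegeTokens: string[];
  hired: boolean;
}

const history: PastResume[] = [
  { collegeTokens: ["state", "tech"], hired: true },
  { collegeTokens: ["state", "tech"], hired: true },
  { collegeTokens: ["womens", "college"], hired: false },
  { collegeTokens: ["womens", "college"], hired: false },
];

// Fraction of past resumes containing `token` that led to a hire.
function tokenHireRate(token: string): number {
  const seen = history.filter(r => r.collegeTokens.includes(token));
  return seen.filter(r => r.hired).length / seen.length;
}

console.log(tokenHireRate("tech"));   // 1 -> resume gets boosted
console.log(tokenHireRate("womens")); // 0 -> discounted, pure data bias
```

The model is doing exactly what it was asked to do; the bias lives entirely in the training data it was handed.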

Other biases that can get in the way of team or client work include self-centred bias (you remember your contributions to a project better than anyone else’s) and the framing effect. “If you have a touchy subject, the way you frame it can really influence the conversation,” David cautions. “If you show a photo of a senior citizen behind the wheel of a car to a group of people and ask ‘should this person drive this car?’, the discussion will be around policy. Some people will say that old people shouldn’t be allowed to drive and others will say that would be ageist. All that you’ll really learn from the discussion is who’s on what side. But if you ask ‘how might this person drive this car?’, you’ll get a design discussion. People might want to change the shape of the steering wheel or move the dashboard around. Designers like the phrase ‘how might we’ because it invites collaboration, whereas ‘should’ tends to shut down the conversation.”

Using biases for good

As this example demonstrates, biases are not always dark patterns. In fact, if you’re aware of them, they can also be used for good. “There’s a whole field of biases called cognitive fluency,” David explains. “They all revolve around how our minds will correlate easy-to-read with easy-to-do. If something uses large letters and is easy to read or if it rhymes, we’ll think it’s more true! It’s just a quirk of how our brains work. In the US, African Americans generally don’t believe health information that comes from the government, for example. But it’s vital, so if you put that information out in very clear large print, in plain language that’s easy to understand and that even includes rhymes, as silly as it sounds, it will make it more believable. I’d say exploit that bias, as it can help save lives.”

Content strategists, in particular, should also be aware of notational bias. If you create structured content in a way that reflects only your own view of the world, you can end up erasing or hurting a lot of people. “If you have a gender field on an intake form and your only options are male and female, all sorts of identities don’t get to participate. The New York Times for a very long time would not use the term Ms as an honorific; they just used Mrs or Miss. So, intentional or not, the message it sends out is that the most important information to know about a woman is whether or not she’s married. As content strategists we need to be very careful in how we structure our content to make sure that notational bias isn’t hurting anybody.”
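As a sketch of what less erasing structured content might look like, here is a hypothetical intake-form schema; the type and field names are invented for illustration, and the point is the structure rather than any particular wording:

```typescript
// Hypothetical schema, not from the article. Gender is not modelled
// as a two-value enum; respondents can self-describe or decline.
interface GenderField {
  options: string[];        // e.g. ["woman", "man", "non-binary"]
  selfDescribed?: string;   // free-text self-description
  preferNotToSay?: boolean;
}

interface IntakeForm {
  name: string;
  honorific?: "Mx" | "Ms" | "Mr" | "Mrs" | "Miss" | "Dr";
  gender?: GenderField;     // optional: not every form needs to ask
}
```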

To find out more about cognitive biases, David recommends checking out You’re Not So Smart, another podcast that dives a bit deeper than David’s own show. He also points to Daniel Kahneman’s book Thinking, Fast and Slow, the bible of cognitive biases, Dan Ariely’s Predictably Irrational, and the web app Pocket Biases, which covers one bias a day.

This article originally appeared in issue 324 of net magazine in 2019 and has been reviewed by David Dylan Thomas prior to republication. Also see David’s book Design for Cognitive Bias, published by A Book Apart. David now works independently and is available for speaking gigs and workshops. For more, see www.daviddylanthomas.com.


Oliver Lindberg
UX and Front-End Interviews

Independent editor and content consultant. Founder and captain of @pixelpioneers. Co-founder and curator of GenerateConf. Former editor of @netmag.