Persuasion or Coercion?

Sid Barcelona
Nov 25, 2019 · 12 min read


The promise and peril of psychology in UX design

I presented this talk at DCUX 2019 on Nov. 9th.

Good afternoon. My name is Sid Barcelona, and I am a creative director at Threespot.

My goal today is to get inside your head. To do that, I want to conduct a quick experiment.

Some of you may know about the frequency illusion — when a new idea is brought to your attention, and then you unexpectedly see it everywhere.

I’m going to try to drop a word, Inception-style, into your head, and you can let me know on Twitter if my experiment worked.

The word is dopamine. It’s a neurotransmitter that’s part of the brain’s reward system, involved in everything from emotion and motivation to addiction. It is also key to creating habit-forming products and driving persuasive technologies that change behaviors.

So let’s talk more about the critical role UX plays at this convergence of psychology, technology, and business.

How many of you have read headlines like these? All of them encourage us to use psychology to create more intuitive, human-centered products.

What’s alarming is that most of them say we can tap into our customers’ conscious and unconscious processes without any psychological training.

As a community, we need to draw clear ethical boundaries so we don’t design products that cross the line from persuasion to coercion. We need to make sure that our digital nudges don’t become shoves and ensure we are creating good habit-forming products and not addictions.

Applying behavioral techniques to our work has benefits, but we also need to recognize potential opportunities for abuse. Let’s look at the promise and peril of applying psychology to UX and talk about ways we can create products and services ethically and responsibly.

Thinking, Fast and Slow

A lot of what we know about behavior comes from the work of Daniel Kahneman and Amos Tversky. They identified two cognitive systems we use for decision-making.

System One is automatic, unconscious, and fast, and we use it for everyday decisions. System Two is more reflective and ideal for complex problem solving, but it requires more mental effort.

System Two is reliable at making sound judgments, but unfortunately it’s also lazy and defaults many decisions to System One. By some estimates, nearly half of our choices are unconscious decisions or daily habits.

To make quick decisions, System One also relies on mental shortcuts and cognitive biases that are prone to errors.

Researchers have identified 188 cognitive biases, which means there are 188 ways to influence or hijack a person’s decisions.

Let’s look at how these biases are used on a simple travel site.

We all recognize the power of images, not only to attract attention but also to fix an idea in memory. This is known as the picture superiority effect.

We overestimate how similar our opinions, values, and beliefs are to other people’s. That’s why false consensus bias and social proof work so well.

We value things that are rare and find them more desirable. This plays into the scarcity effect and our fear of missing out.

The anchoring effect reveals our tendency to make decisions based on the first piece of information we see. This trip was “originally” $1,100, but now at the new price, I feel like I’m getting a huge deal — although I can’t afford $996 a night in Kauai.

In general, this design stays on the right side of the line of persuasion.

Recently, a client asked us to change a default setting on a health app to get more email signups. We told them this was a dark UX pattern. Even though they might get a bump in email subscribers, they probably wouldn’t see the same engagement or conversion on future advocacy or donation asks, because people are turned off by being added to a random email list.
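
To make the pattern concrete, here is a minimal sketch of the two defaults. The form shape and field names are hypothetical, not the client’s actual app; the point is that a pre-checked box converts a user’s inattention into consent.

```typescript
// Hypothetical signup form state; names are illustrative, not the client's app.
interface SignupForm {
  email: string;
  subscribeToNewsletter: boolean; // the default the client wanted flipped
}

// Dark pattern: the box arrives pre-checked, so a System One user skimming
// the form is counted as having consented.
const darkPatternDefault: SignupForm = {
  email: "",
  subscribeToNewsletter: true,
};

// Honest default: the box starts unchecked. Subscribing is still one tap away,
// but it is an explicit choice the user actually makes.
const honestDefault: SignupForm = {
  email: "",
  subscribeToNewsletter: false,
};
```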

Nudge

Richard Thaler, a behavioral economist, introduced the idea of the “nudge”: small tweaks that can influence someone’s decision and encourage them to make a good choice. A great example is the school lunch line, where placing fruit right in front of the desserts might prompt kids to make a healthier choice.

I like my Apple Watch. It reminds me to stand every 50 minutes, and right now, it’s telling me to breathe. These are great digital nudges.

The New York Times has reported that Uber has used psychological tricks to nudge their drivers to work longer hours and even encourage them to take trips that aren’t lucrative for them. These digital shoves put a company’s business interest before the interests of its drivers.

Behavioral Design

BJ Fogg, Director of the Stanford Behavior Design Lab, says that any behavior change requires three elements: motivation, the ability to make the change, and a trigger.
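
Fogg’s formula is often written as B = MAT: behavior happens when motivation, ability, and a trigger converge. Here is a rough sketch of that relationship; the 0-to-1 scales, the multiplication, and the threshold are my own illustration, not Fogg’s numbers.

```typescript
// A rough sketch of Fogg's idea: a behavior occurs when motivation and
// ability are jointly high enough at the moment a trigger fires.
// The scales and threshold below are illustrative assumptions.
function behaviorOccurs(
  motivation: number,   // 0..1: how much the person wants the outcome
  ability: number,      // 0..1: how easy the action is right now
  triggerFired: boolean // did a prompt (notification, button, cue) appear?
): boolean {
  const activationThreshold = 0.5; // assumed for illustration
  return triggerFired && motivation * ability >= activationThreshold;
}

// High motivation can compensate for low ability and vice versa,
// but without a trigger nothing happens at all:
behaviorOccurs(0.9, 0.7, true);  // true: wants it, easy enough, prompted
behaviorOccurs(0.9, 0.7, false); // false: no trigger, no behavior
```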

Nir Eyal adapted that formula into a model optimized for building habit-forming products. How many of you have read his book Hooked?

Everything starts with a trigger. There are external and internal triggers, and we’re in the business of creating external ones: the popups, the badges, the bings and dings, the notifications, anything in the user experience that will drive an action.

Actions need to be simple, and they’re always in pursuit of some reward.

The rewards should be variable to encourage you to come back to the app time and time again.

The more time, money, or energy you invest in the app, the more you value it, which also makes it less likely you’ll give it up. These investments load the next trigger, like posting on Twitter and hoping for a response.

The more you successfully go through the Hook, the stronger the connection you build with the product. You form a habit over time and will start relying less on external triggers.
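
Put together, one pass through the loop looks something like this toy sketch for a hypothetical feed app. Every name and the 30 percent reward rate are illustrative assumptions; the point is how the variable reward and the investment chain into the next cycle.

```typescript
// A toy pass through the Hook loop (trigger -> action -> variable reward ->
// investment) for a hypothetical feed app. Names and probabilities are
// illustrative assumptions, not any real product's numbers.
interface AppUser {
  investment: number; // accumulated posts, follows, replies
}

function runHookCycle(user: AppUser): void {
  console.log("Trigger: a push notification arrives"); // external trigger
  console.log("Action: the user opens the feed");      // the simplest response
  // Variable reward: sometimes there's a payoff, sometimes nothing,
  // the same unpredictable schedule a slot machine uses.
  if (Math.random() < 0.3) {
    console.log("Reward: new likes and replies!");
  } else {
    console.log("Reward: nothing new this time");
  }
  // Investment: a post or reply that loads the next trigger
  // (someone may respond, producing a new notification).
  user.investment += 1;
}

runHookCycle({ investment: 0 });
```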

Internal triggers then take over; they’re associated with memories and driven by emotions. How many of you have felt bored or anxious and unconsciously pulled out your phone to ease that feeling? That’s a great example of an internal trigger.

The Hook model is effective at creating good habit-forming products, but are there scenarios that can lead to addiction?

Scientists say that dopamine levels spike not only when we get an unexpected reward but also in anticipation of that reward, whether we get it or not.

This is the biochemical hook that is similar to what a person experiences sitting in front of a slot machine. The thrill comes in anticipation of a reward. Designers of slot machines talk about the optimal gaming experience as solitary, continuous, and rapid. They also use “time on device” as a key metric.

So depending on the designer’s intent, the Hook model could be used to craft experiences that could fall on either side of the line.

How can we create products that don’t rely on continuous, habit-forming loops?

Can we slow things down and create experiences that allow System Two thinking to kick in, letting people be more reflective in making decisions?

Google is taking baby steps in this direction, and they’ve mandated that all Android partners install at least one digital wellbeing app on their devices. Here are three examples.

The black-and-white wallpaper is called Unlock Clock; it shows how many times you’ve unlocked your phone that day. Post Box holds all of your notifications until a time of day you specify. Desert Island lets you select up to five apps to use while locking you out of the rest.

These apps move in the right direction, but have we really reached the point where we need “attention retention” apps? That seems like we’re addressing symptoms instead of dealing with the root problem.

Ethical Design

As designers, I believe we all have good intentions and want to make positive changes in the world. But we need to think seriously about avoiding unintended consequences when we use these powerful psychological tools.

Do you think Chris Wetherell, designer of the retweet, could have imagined that it would be abused by Twitter mobs to promote Gamergate, Pizzagate, and even Russian interference in the 2016 election?

Do you think designers at Instagram could have foreseen that teens would use likes as social currency, and that NOT liking a person’s photo or NOT tagging someone in a post would become a sign of exclusion, even cyberbullying?

Shoshana Zuboff, in her book The Age of Surveillance Capitalism, paints a bleak picture of where we’re headed with persuasive technologies. We all know that companies are already extracting far more data than they need to improve their products and services.

Some of them use this information for features like personalization, but ultimately they’re taking behavioral data and monetizing it. They’re selling our data to machine learning companies to train algorithms, which in turn tune predictive systems to understand what we’ll buy, what we’ll watch, and even how we’ll vote.

It’s not a stretch, according to Zuboff, to imagine these same algorithms being modified to manipulate behavior at scale.

And it’s not just the Googles and Facebooks of the world in this business. Healthcare companies, insurance companies, and even traditional companies like Ford Motor Company are getting into it. Ford’s CEO announced that they are now a “data company,” with plans to monetize driver data collected in their vehicles to power systems for smart cities.

Work with Radical Intent

Is this the future that we want to help create? I believe that we can avoid the negative consequences of persuasive technologies if we work with radical intent.

Here are three ideas on how we can work with radical intent.

First, we need to draw clear ethical lines. As a community, we need to have frameworks and deep discussions to help us make informed design decisions.

Richard Thaler characterizes a good nudge as one that is easy to opt out of, doesn’t interfere with a person’s freedom of choice, and improves their wellbeing. Bad nudges manipulate a person’s choices and often put a business need ahead of a person’s need.

If you are not already, you should follow the work of Tristan Harris at the Center for Humane Technology. He’s created a Humane Design Guide that you can use to evaluate your product and service across six human-centered lenses.

Frameworks like these can help ensure we’re creating products that are in the best interest of people. I’m happy to see that there are a lot of sessions today about design ethics. I’d love to see other frameworks if you have them.

The second is to know your own biases. We talked about the cognitive biases of users, but we can also bring our own biases into the design process.

For example, if we’re conducting a user test for a new design pattern that we’ve fallen in love with, we need to make sure that we stay objective with the data to avoid falling into our own confirmation bias.

The way we frame data influences how we interpret it. The Nielsen Norman Group conducted a study asking whether a search function should be redesigned based on usability findings. The group presented with the negative findings first overwhelmingly believed it should be redesigned, compared with the group that saw the positive findings first.

We also need to recognize that we can bring our own unconscious biases into projects. It’s hard to see our own blind spots, which is why I encourage working in diverse teams, so that everyone brings their own experiences and perspectives into the products we build.

Finally, to work with radical intent, we need to create a system of checks and balances within our project teams and our community.

One idea is to adopt the “red team” approach used in journalism. This external team can provide an objective review and help evaluate whether a project is designed in the best interest of users.

Another idea is to conduct a premortem, where a team looks into the future and imagines that the project has failed. This exercise helps identify points of risk, potential privacy issues, and pitfalls in collecting personal data, and examining them ahead of time lets teams design mitigations long before a project launches.

We need formal processes for hearing dissenting voices that too often get shut down. Processes like these can help teams overcome optimism bias in situations where a CEO is rallying people around a hot new product while ignoring the downsides.

Let’s draw clear red lines.

Let’s understand and know our own biases and overcome them.

Let’s create a system of checks and balances to work with radical intent and build products and services that are truly human-centered.

One last thing … don’t forget to reach out to me if you run across the word dopamine.

Thank you.

Further Reading

Thinking, Fast and Slow

Nudge

Hooked

How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons

Decision Frames: How Cognitive Biases Affect UX Practitioners

The Man Who Built The Retweet: “We Handed A Loaded Weapon To 4-Year-Olds”

Google has released six free apps to break your shameful phone addiction

How To Stay Relevant With One Simple Idea

The Age of Surveillance Capitalism

Center for Humane Technology
