Human by Design — Part 1

BllocInc · Published in Blloc · Jun 17, 2021 · 8 min read

How technology figured out human vulnerability to design.

The change we are waiting for is not unilateral. The movement towards digital wellbeing starts with everyone: users, companies, designers, engineers, educators, investors, media, and policymakers. Each has a critical role to play, and each can influence the others. In this article we break down the roles of the designer and the company.

In Part 1 (you are here) we will look at the core design practices that get users addicted, as well as our biological susceptibility to that type of design.
In Part 2 we’ll delve into how we can design software (e.g. a product, an interface, an OS) for human values. We’ll use Ratio as an example.

How We Got Hooked

It’s not a secret: smartphone addiction is one of the universal human experiences in the West. Since the early 2000s, the Stanford Behaviour Design Lab has pioneered the most influential behaviour models, allowing hundreds of companies to develop captivating and addictive products. And now, less than two decades later, we are making a U-turn.
To better understand the dark patterns of design, let’s break down the essential steps needed to create an addictive product (based on the Hook Model and BJ Fogg’s Behaviour Model).

1. Trigger 🔥

A trigger can be either external or internal. External triggers are prompts like an email, a notification, or a red badge on an app icon; they nudge us to take action and bring us to the product in the first place.
Over time, most triggers become internal, i.e. they come from the users themselves: a slight feeling of boredom, restlessness, loneliness, or a random memory. Here the internal trigger is a pain point. It nudges a user to take action to alleviate the internal discomfort (however small it might be).
Our minds learn this equation:

trigger + action = alleviation

Repeated often enough, trigger + action can easily turn into a habit, since every time we take the action suggested by an app, we are rewarded by the alleviation of the pain point. And a “habit is when not doing an action causes a bit of discomfort”.
An example could be the trigger to check Instagram first thing in the morning. It manifests as a slight discomfort, the feeling that we’ve missed out on something while we were sleeping. The trigger nudges us to take action.
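To make the loop concrete, here is a minimal sketch of the habit mechanics described above. Everything in it (the probabilities, the reinforcement step, the names) is invented for illustration and is not taken from the Hook Model or Fogg’s work.

```python
import random

# Illustrative sketch of "trigger + action = alleviation" hardening into a habit.
# All numbers and names are invented for illustration.

habit_strength = 0.0  # how automatic "check the app" has become (0..1)

for day in range(30):
    discomfort = random.uniform(0.2, 0.6)               # internal trigger: mild morning FOMO
    acts = random.random() < 0.5 + habit_strength / 2   # a stronger habit makes acting more likely
    if acts:
        discomfort = 0.0                                 # the action alleviates the pain point...
        habit_strength = min(1.0, habit_strength + 0.05) # ...and reinforces the loop
    else:
        discomfort += habit_strength * 0.3               # skipping a formed habit adds discomfort

print(f"habit strength after a month: {habit_strength:.2f}")
```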

2. Action 👀

Action is the part where the user experiences a sense of relief from the trigger. At the same time, the action is engineered to keep the user using the product. That could be refreshing a news feed, posting a picture, sending an email, liking a tweet, etc.

For a user to take action, 3 aspects must be present in the Action equation:

  • Trigger — as mentioned previously.
  • Motivation. At their core, our motivations are: to seek pleasure and avoid pain (discomfort), to seek hope and avoid fear, to seek social acceptance and avoid rejection.
  • Ability. This is the ease with which a user can take and complete the action: how frictionless the interface is, how buying something is just one click away, how much mental or physical energy we have to expend, how much time or money it costs, how far we deviate from our social norms by taking the action, or how much it conflicts with our routines. All of these factors determine whether we will take action or not.

BJ Fogg argues that ability (frictionless action) can make up for motivation, as motivation is often unreliable; a simple, easy action does not require a lot of motivation. Continuing with the Instagram example, even if we want to stop checking the app first thing in the morning (motivation), the habit and the extreme ease of access are already set in motion (unless we have set app limits or screen-time restrictions). After all, we are rewarded with a pleasant rush of dopamine coming from the variable rewards.
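As a rough illustration of how ability can outweigh motivation, here is a sketch of the idea behind Fogg’s model: a behaviour happens when a prompt arrives while motivation and ability together sit above an “action line”. The 0..1 scales, the multiplication, and the threshold are simplifications made up for this example; Fogg’s actual model is a curve, not a single formula.

```python
# Hypothetical numbers: motivation and ability on a 0..1 scale, plus an invented
# "action line". A behaviour fires only when a prompt arrives above that line.

ACTION_LINE = 0.25

def behaviour_occurs(motivation: float, ability: float, prompt: bool) -> bool:
    return prompt and (motivation * ability) > ACTION_LINE

# Low motivation ("I want to stop checking the app"), but the action is nearly
# frictionless, so the morning prompt still wins.
print(behaviour_occurs(motivation=0.3, ability=0.95, prompt=True))  # True

# Adding friction (app limits, logging out) pushes ability below the line.
print(behaviour_occurs(motivation=0.3, ability=0.40, prompt=True))  # False
```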

3. Variable Reward 🎰

(even Medium has the endless scroll)

Variable reward might be one of the most famous parts of the model, well known for its slot-machine-like qualities. Let’s remember the deepest drives of a human being: to seek pleasure and avoid pain, to seek hope and avoid fear, to seek social acceptance and avoid rejection. The rewards a user experiences after taking action are designed with these core drives in mind.

Back to Instagram: as we check the app, let’s say we receive a DM, or someone has reacted to our story. The next morning this “reward” will be different. And the next one. The rewards and their types will always vary. This unpredictability of reward is one of the most addictive patterns for human beings (though the first behavioural experiments of this kind were performed on animals). Later in this article we will see why we are so susceptible to this kind of reward unpredictability.

Now, let’s look at how many potential variable rewards an app like Instagram has:

  • Bottomless and intermittent news feed
  • Other people’s stories
  • Who viewed your story
  • Someone Reacting to your story
  • Someone going live
  • Explore feed
  • Post likes and comments
  • Messaging
  • Filters
  • Someone tagging us
  • New followers
  • Shop
  • Reels
  • Not getting enough expected rewards (negative)
  • Feeling triggered by someone else’s content (negative)

And that’s not the full list. Receiving such a variety of stimuli builds patterns and habits that make us invest in the product on a deeper level. We create an unconscious dependency by repeatedly taking action and receiving a reward for it. The longer we do it, the more we invest, and the harder it is to let go.
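A small simulation can show what “variable” means in practice. The reward names below come from the list above; the probabilities are invented for illustration only.

```python
import random

# Sketch of a variable-reward schedule: every check of the feed pays out something
# different, with different odds, so the outcome is never predictable.
REWARDS = [
    ("new likes or comments", 0.30),
    ("a DM",                  0.15),
    ("a story reaction",      0.15),
    ("a new follower",        0.10),
    ("nothing new",           0.30),  # the "miss" that keeps us pulling the lever
]

def check_feed() -> str:
    roll, cumulative = random.random(), 0.0
    for reward, probability in REWARDS:
        cumulative += probability
        if roll < cumulative:
            return reward
    return REWARDS[-1][0]

# Ten checks, ten potentially different outcomes: the unpredictability is the hook.
for check in range(10):
    print(f"check {check + 1}: {check_feed()}")
```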

4. Investment 🔐

Investment is the long-term customer-retention strategy. It reflects how much content the user has created, data collected, followers gained, reputation built up, and skills acquired (depending on the product). The more we use the product and invest across these domains, the more value we assign to it. At this point it becomes harder for the user to leave or to switch to a competitor’s product.

The investment phase also resembles the way relationships build between humans. Through incremental investments, predictability, and trust, we form an expectation of a good future relationship. We have an innate tendency to reciprocate kindness, and the same rule applies between humans and machines. In this case, we experience a long streak of “kindness” coming from the product in the form of variable rewards (social approval, hope, pleasure, distraction) and a frictionless experience (ease of taking action). And so we are primed to reciprocate: we finish the app setup, we keep the streak going, we complete our profile information, create a playlist, etc. We are implicitly promised that the more we invest in the product, the better the experience will become in the long run. Investment ensures that we remain in the trigger + action = alleviation cycle.

Now, as promised, let’s look into why we are so vulnerable to intermittent reinforcement: to the bottomless scroll, to the slot machines, to the red-dot notifications, and to the constant juggling of apps.

We Can’t Help It

There’s a tendency to blame technologists for engineering addiction, but many products did not start out with such intentions. Under pressure from stakeholders, many consumer-app business models shifted towards exponential growth, engagement, maximising time spent in the app, and retaining visits. These metrics became the pinnacle of success, sometimes at any cost. Driven by this need, technology borrowed and adapted discoveries from other disciplines, such as psychology, neuroscience, biology, sociology, and design. Over countless iterations, technology caught up with the stimuli we respond to the most. Our vulnerability to dark design patterns was not created by chance (or by technology): it originates from quite primal survival strategies, recognised in humans as much as in animals.

The authors of The Distracted Mind provide an in-depth overview of this strategy.
An evolutionary theory (the Marginal Value Theorem, developed by Eric Charnov in 1976) suggests that certain animals have an extremely efficient way of foraging for food. Because food environments are unpredictable, some animals have learnt to continuously scavenge through them, understanding that the more time is spent feeding in one place, the fewer resources remain available. Decreasing food availability increases anxiety levels, as it threatens the basis of survival. And thus the animal has to move to the next food patch.

With that in mind, the animal also has to “take into consideration” how much time will pass until it finds a new place to feed. This is where the dopamine rewards kick in, driving the animal to look for a new, lush source of food in its environment. This mechanism ensures that the animal will abandon the old patch and move towards the next one. Increased anxiety makes the animal move on from the old patch, and dopamine allows it to keep looking for the new one. The survival cycle continues.
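For the mathematically inclined, Charnov’s result can be stated compactly: if g(t) is the (diminishing-returns) food gained after t seconds in a patch and T is the travel time to the next patch, the forager should leave at the residence time that maximises the long-run rate g(t) / (T + t), which happens exactly when the marginal gain drops to that average rate. The sketch below finds this time numerically for an invented gain curve; note how the optimal time to stay collapses as the next patch gets closer, and the next app is only ever a tap away.

```python
import math

# Numerical sketch of the Marginal Value Theorem with an invented gain curve.
# gain(t) = food gained after t seconds in a patch (diminishing returns).

def gain(t: float) -> float:
    return 10.0 * (1.0 - math.exp(-0.1 * t))

def optimal_stay(travel_time: float) -> float:
    """Residence time that maximises the long-run intake rate gain(t) / (travel + t)."""
    candidates = [i / 10 for i in range(1, 2000)]
    return max(candidates, key=lambda t: gain(t) / (travel_time + t))

for travel in (60.0, 10.0, 1.0):  # seconds to reach the next patch (or the next app)
    print(f"travel {travel:5.1f}s -> stay about {optimal_stay(travel):5.1f}s in each patch")
```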

Now, all that is left for us to do is replace the patches with apps (email, Instagram, Twitter, WhatsApp, etc.), the food with novel information (emails, notifications, messages, tweets), and the animal is us. In the wild, an old patch does not regrow that fast, but an app can supply you with new information 5 seconds after you’ve checked it.
And that’s how we end up in a closed loop.

In Part 2 we’ll show you how we can create multiple exits out of the Hook Model by changing our design incentives (from engagement maximisation to core human needs and values).

Ratio is designed by Blloc — based in San Francisco and Berlin.
People first, apps second.
