The Tamagotchi Trap
On the Dangers of Mindful Design, Well-being-based Design, Behavior Change, etc.
Designers are growing more concerned with the lives of our users, and with supporting meaningful interactions. We need to face tough questions: What is a good use of the users’ time? Why would a user have an interaction, if it’s not a meaningful one?
These are questions about human agency and dignity. Unless we tread carefully, we’ll be ensnared (or “hooked”) by a dangerous way of thinking.
On one hand, designers mustn’t decide for the user what’s good for them.
They mustn’t decide, for instance, that the user should maximize their “well-being” or “happiness” according to some external standard. This is because personal well-being and happiness aren’t the focus of life for most people.
Or: How often is going to grad school about maximizing happiness? Are new parents increasing their happiness? Their mindfulness? Is wing-suit base-jumping a good move for well-being? How about an obsessive drive to learn the robot? Wouldn’t a relaxing yoga class be better?
To “nudge” users towards well-being or happiness — if it’s based on “science” (see below) rather than on the users’ own sense of what’s important — is to treat them like a Tamagotchi, a digital pet. You’ve decided what’s good for them.
Let’s call this the Tamagotchi Trap: imagining the user as your pet, and deciding what’s good for them.
But there is another dangerous way of thinking about users, if you take their goals too seriously.
Imagine a user who comes to buy a sketchy health product, like Sensa — which claims to help you lose weight but is a total scam. Such a user might appreciate some inquiry into whether completing this purchase is a good goal for them.
Similarly with a user who seems to choose procrastination, social isolation, or something that feels to them like an addiction. This user might like help avoiding these “preferences” of theirs.
By taking goals and preferences at face value, you can create spirals of addiction and confusion and justify your place in them.
Let’s call this Selling Sensa: taking the user’s apparent preferences or stated goals at face value, when designers mustn’t take them too seriously.
So: is there any way to avoid the Tamagotchi Trap, without Selling Sensa?
The question that needs to be resolved is: Which of the agent’s ideas about how to live are to be trusted? Which are to be held as fundamental to their self-direction?
Or, in other words: What inside of us is to be honored? Where is the source of dignity and meaning in our lives?
Science Doesn’t Address This
Experimental sciences (psychology, neuroscience, medicine) study human beings from the outside. These sciences ask us questions and then correlate our answers with events, circumstances, or experimental factors.
But events, circumstances, or experimental factors are outside a person. In these fields, humans are the objects of study. The scientist ends up considering humans as objects, suggesting treatments for us: events or circumstances that would be good for us. This puts us in the Tamagotchi trap.
Instead, we can turn to philosophy. Philosophy works differently: It uses thought experiments, debate, and counterexamples to uncover what we mean by things.
What is meant when we say someone had a free choice, or that they didn’t? Or when we say someone was fully informed, or that we treated them right, or that such-and-such an arrangement preserves their dignity?
These are philosophical questions, not questions of experimental science. But there’s a deep literature about them, and results from philosophy can get us through the dangers posed above.
Results from Philosophy
A useful way to re-frame the problem is to ask: when we do a thing, what is important to us about doing it?
There are common answers to this: we do things to achieve certain goals, or to feel good. But philosophical debate has concluded these are wrong. It appears that achieving goals isn’t what’s truly important to us, nor is feeling good.
What do I mean by this?
The more pressing a goal feels, the more it’s an expression of something really valuable to us — like our survival, or the survival of the people we love.
Notice that we wouldn’t want to achieve the goal if it didn’t actually protect or express the valuable thing. This is one piece of evidence that life is not really about achieving goals.
For another piece of evidence, you can imagine achieving all your goals without trying at all. As fast as you can think of them.² For most people, this doesn’t get at the important part of having goals.
The same kind of analysis works for positive feelings: we don’t want the feelings without the valuable things they signify. When most people imagine a life of continuously stimulated positive feelings, they find the idea horrifying.
And we can also eliminate many other candidates: Is it important to us to get our beliefs right? To become high status? To express our character traits? To do a perfect job with our social roles? To free ourselves from oppression?
None of these get to the heart of what’s important to us in the things we do. Like goals and feelings, they are connected with what’s important to us, but they are not what’s important itself.
(Those who’ve followed my writing will know what I’m going to say.)
It’s our values that define what’s really important to us. We want to treat people in certain ways (honestly, generously), we want to act in certain ways (boldly, thoughtfully, with self-care), we want to approach things in certain ways (with reverence, with levity, with skepticism).³
Unlike goals and feelings, we don’t want to compromise on values. We don’t want the results of living by values, we want the process. If a person feels they’ve lived by their values, that’s equivalent to knowing they’ve lived a good life, as well as they understand how.⁴
It appears that we choose, pursue, and accomplish goals only for the values we live out and protect in the process.
With this information, we can better understand the Tamagotchi Trap and Selling Sensa:
- We treat the user like a Tamagotchi when we ignore their values, by assuming they have whatever values we picked for them, like well-being or happiness.
- But Selling Sensa (or selling social isolation) also ignores their values, while making a big show of embracing their goals or preferences.
In either case, ignoring someone’s values makes us untrustworthy. We aren’t really helping them if we ignore their values.
The answer, then, is to take users’ values into account. The user may not come with the right goals, but they know how they believe in living.
So… all we need to do is ask about their values and support them. Right?
Goals In Disguise
The story is complicated because some of our stated values aren’t really values. They are goals in disguise.
You can ask people how they aim to live, and encourage them to answer in the form of values rather than in goals, policies, feeling qualities, beliefs, etc. But their answers will include some ‘decoy’ values. These must be filtered out.
There are two types of decoy values:
Norms. Someone might answer that they aim to ‘be polite’. There are real values that relate to this — for instance ‘giving each person their own space’. But it could be that this person only wants to be polite because of a goal they have: the goal of being liked and accepted, or of fitting in. This is a reasonable goal! But as with all goals, it’s not meaningful in itself. It’s a means to an end.
If a person who aims to be polite discovered that they could be accepted and liked even when they weren’t polite, they may lose their interest in being polite. In this case, we know it was a goal for them, not a value.
Ideological Commitments. Here’s another decoy value: imagine someone says they aim to ‘not perpetuate the industrial slaughter of animals’. Again, there are real values that relate to this — for instance ‘taking responsibility for one’s distant effects on the world’. But it’s likely that what this person means is that they see a certain change as necessary for the world. They think that by publicly espousing a value of living in a certain way, they will bring about this change.
If this person were assured that this change to the world would happen anyway — or had already happened — then this value would no longer be important in guiding their life. This is how we know it’s a goal, not a value.
So: both norms and ideological commitments are goals disguised as values. We mustn’t take them too seriously. Instead, we can look for the real values which underlie these decoy values: values like building community or responding to the situation of the world. We can tell these are values because they are not about outcomes.
It is these values that, as designers, we must support. We do the user a disservice if we deliver on their goals but not on the values they spring from. We end up Selling Sensa.
Here are the questions I started with:
Which of the agent’s ideas about how to live are to be trusted? Which do we hold as fundamental to their self-direction? What inside of us is to be honored? Where is the source of dignity and meaning in our lives?
Hopefully I’ve shed some light on them, and how they relate to the Tamagotchi trap and Selling Sensa.
We mustn’t maximize the users’ “well-being” or “happiness” according to some external standard. We must instead honor their values (although not necessarily their goals, feelings, norm-compliance tendencies, or ideologies) and provide spaces where they can live in the manner they believe in, helping them to treat people, to act, and to approach things in whatever way they find meaningful. Only then are we responding to what’s really important to them.
To make my day: share this post with a fan of the science of well-being / happiness / positive psychology / behavioral economics. Ask them what they think.
To learn more: see my long post on designing for values, or take the online class:
See also the discussion of the human sciences in Nothing to be Done under “Why the Human Sciences aren’t Scientific”.
See Five Days with the Devil for a thought experiment that dives into this in more detail.
For more on the nature of values, see Human Values: A Primer.
For the academic pedigree of this view re goals, preferences, and values, see the notes in What’s Next.