
The Design Landscape & Values-Misaligned Systems

Joe Edelman
The School for Social Design
Aug 19, 2021


  • Why do bureaucracies of good people do bad things?
  • Why do apps with good intentions cause bad social outcomes?
  • Why do helpful innovations — like cars, single-family homes, and smartphones — lead people into isolated lives that don’t serve them?

I’ve spent much of my life learning about values-misalignment, which covers everything above.

This is a quick overview of the problem and why common design approaches (including mechanism design, experience design, design thinking, and red teaming) don’t address it well. Based on why those fail, I’ll outline an alternative.

Values-misalignment

Values-misalignment is ubiquitous in human life. It’s there in a shallow dinner conversation among people who’d rather go deep; there when an organization’s promotions process makes honest people withhold information; there when an “egalitarian” team structure gives rise to hidden power dynamics; there when attempts to make the world “more open and connected” give rise to political polarization on an unprecedented scale.

You can tell people want to make fewer values-misaligned things. There are popular essays, like Meditations on Moloch. There are talking heads like Eric Weinstein and Daniel Schmachtenberger. And academic fields like social choice, game theory, and mechanism design purport to deal with values-alignment, with a vast literature on market and coordination failures.

To make the problem clearer, let’s divide it in two:

  • The Hard Problem of Values-Misalignment is when there’s no agreement about values, or no resources to check designs against them. It’s a hard problem when an organization is run by sociopaths who claim certain values but pursue a different, secret agenda. It’s a hard problem when users and designers have different goals. It’s a hard problem when cutthroat competition means there’s no time to think ahead. Etc.
  • The Easy Problem of Values-Misalignment is when there’s rough agreement among stakeholders about what’s desired and what’d be bad, and there are resources to make something good.

Even with the Easy Problem, we’ve made very little progress. Usually, good people fail to anticipate the social consequences of what they build, even when they take time to think.

Existing Approaches

Imagine you’re working on something. Your team has rough agreement about what’d be good and bad, and can afford to design thoughtfully.

What kind of thinking should you do together?

I can’t cover all the existing approaches, but the most popular are plainly inadequate:

  • The most common I’ll call naive ideological optimism. That’s when you latch on to an abstract ideological vision (like “decentralized”, “anonymous”, or “inclusive”) and hope that by building things that way, the right social outcomes will result. This approach seldom works: when you design around an ideology without considering specific human impacts (such as whether people can be vulnerable, creative, or take charge of things), your design is unlikely to serve those impacts, precisely because you never thought about them.
  • Somewhat more sophisticated is red teaming, where to avoid bad outcomes, you split participants into “bad actors” and normal people, and guess how the bad actors will ruin things. Bad people do exist, and red teaming helps limit their influence. But, while most of us aren’t bad actors, we all struggle to live by our values online and at work. Many problems (like clickbait, political polarization, and office politics) aren’t mainly due to “bad actors”. Red teaming doesn’t help much to address them.

Next up are design subfields like mechanism design, experience design, and speculative futurism. Each is inadequate in its own way:

  • Mechanism design tries to incentivize good performance, or to drive an allocation of resources. Unfortunately, as I’ve covered elsewhere, mechanism design is based on an inaccurate model of human beings. For this reason, “incentivizing good things” usually creates a mess: badges, points, endless optimization, golden handcuffs, bureaucracies, and credit scores.
  • Experience design (and its cousin, service design) is about moving users through a designed experience, often one meant to create positive emotions or avoid negative ones. Unfortunately, “calm” or “delightful” experiences can still work against users’ long-term interests, leaving them “delightfully” isolated and disempowered.
  • Some other design approaches, like speculative futurism and design thinking, fail to include the anthropological tasks needed for values-alignment: the gathering of information on what’s meaningful for people, the investigation of whether what you’re making supports or undermines those meanings (as opposed to just being a good experience, or helping with a job to be done).

Even thoughtful people, using the methods above, often fail to make values-aligned systems.

So, What Would Help?

Based on the critiques above, what design approach would address the Easy Problem of Values-Misalignment?

All designers have practices of empathy (like user research, talking with customers, talking with friends), practices of imagination (like design sketches and prototypes), and practices of argumentation (to show how a design idea improves experience, fits a persona, or gets a job done).

Let’s consider new practices of empathy, imagination, and argumentation, to make values-alignment more likely.

Practices of Empathy

Other design methods focus on empathy with goals (Jobs to Be Done), feelings (“calm” design, “delightful” interactions), or life situation (personas). We’d want our new approach to recognize — not just goals and feelings — but people’s values: what they find meaningful; how they want to live and relate with others. New facts about the same people we already talk to.

Note: This is a specific sense of the word values. While people often use “values” for abstract ideological commitments (equality, freedom, inclusiveness, security, decentralization), we mean something different: the ways of being and relating which feel meaningful to a person (like being held, being vulnerable, being creative, taking stage, etc).

However people love to be — with their friends, family, living situation, or work — this approach should lead designers away from one-size-fits-all visions (flat organizations, agile, decentralized, etc.) and towards things that support exactly how specific people want to live, so designers can recognize and protect the specific kinds of meaning uncovered in their interviews.

Practices of Imagination

The new design approach should point away from visions of sameness (libertarian, consumerist, communist, etc), and towards diverse communities and social structures, each designed to support what’s meaningful for some people — without assuming the same thing is meaningful for everyone.

It should help designers envision utopias of shared meaning — not utopias of goal achievement, solo experience, or abstract ideological principle.

It should have hard-nosed ways to test social ideas — perhaps by making quick experiences, games, and structured interactions for a test population.

Practices of Argumentation

Finally, the approach should involve studying why good and bad social outcomes occur. It should help designers construct a theory of where personal values are easy to live by and where they are hard, and use it to make a detailed case against goals, metrics, and protocols that destroy meaning. When colleagues are excited about an org process, app, or community structure, the designer should be able to anticipate problems their colleagues didn’t think of, explain why the idea would backfire, and propose alternatives.

Surprise!!

I’ve just described Values-Based Social Design, my main project over the last six years. You can see the practices of empathy, imagination, and argumentation in the textbook at Practices and Methods.

Does that sound interesting? Go read the book, or join the School for Social Design.


Building economies of meaning, and leading the School for Social Design sfsd.io