Human Systems—Frequently Asked Questions

All about Human Systems programs and design methods

Basic Concepts
What is Human Systems?
Where is it practiced?
What kinds of design and metrics are relevant?
Why are traditional design methods inadequate?
What do you mean by values?

The Courses
What is the Fundamentals course?
What is High Impact?
Do people like the classes?
Do I need to apply?
How can I get my company to pay for High Impact?
Will you do an in-person workshop for my company or team?
Will you come to my event?
Who runs the classes?
Who developed the courses?

Comparisons
How is this different from other values-based design approaches?
How does this compare to other “design ethics” curricula?
How does this compare to other approaches to technology and social problems, which focus more on “bad actors” or using AI to detect bad content?

Relevance to Social Issues, AI, and Metrics
What do values have to do with social issues?
How can designers address social issues?
What does this mean for metrics?

Random
What if I just want to keep in touch?

Basic Concepts

Q. What is Human Systems?

Human Systems is a collection of team processes, analysis techniques, and approaches to design and metrics to help people fix broken social systems, change norms, and recover meaning.

  • Fix broken social systems. Policies and technologies often make it harder for people to live by their values—including values like honesty, open-mindedness, courage, care, etc. We consider this a major problem, and we help you learn to measure the problem and to fix it, as well as all the downstream social problems (like widespread depression, isolation, and the breakdown of democracy) that arise from bad social systems design.
  • Change norms. Hidden factors structure our social interactions in surprising ways. Any social environment—whether it’s an in-person standup meeting, a chatroom, a difficult chat with dad, etc.—makes some things easier, others harder. Learn to identify this hidden structure and change the games people play.
  • Recover meaning. Sometimes the hidden factors in social space support individuals’ sense of meaning and meaningful relationships within that space. Other times, they don’t. Trace where meaning comes from in your life and others’, and where it gets blocked by poorly-designed institutions, processes, and environments.

Human Systems is also the name of an online school that teaches these methods, and the collective community around the school.

Q. Where is it practiced?

Our graduates work together across many organizations, with alumni at Google, Khan Academy, Mission U, Adobe, Buzzfeed, Facebook, GitHub, the Sorbonne, and Amazon.

Q. What kinds of design and metrics are relevant?

The classes are useful whether you design or monitor:

  • technological spaces (social VR, messaging, team productivity, scheduling, marketplaces),
  • organizational processes (meeting styles, org and reporting structures, team operations),
  • social service environments (schools, employment centers), or
  • entire social structures (basic income, currencies, voting systems)

Our methods are less relevant if your product is a fixed-use, single-player tool (like a video editor) or a fixed-emotional-arc, single-player experience (like a film).

If you are unsure, you can ask us about your project here.

Q. Why are traditional design methods inadequate?

Designing social environments is different in an important way from designing tools or experiences.

Designing them requires a different kind of empathy. Most designers are already good at two types of empathy:

Goals empathy. Most of us have some experience talking with someone and getting an idea of what their goals are and how we can help them.

Feelings empathy. And most of us have some experience recognizing other people’s feelings and trying to design things to delight or bring joy to another person.

But many of us have little practice with a third type: understanding how someone wants to live and to relate with others (in our terminology, understanding their values) and helping them to live or relate in that way.

Let’s call that values-empathy.

  • If you’re designing tools, then goals-empathy is what you need.
  • If you’re designing an experience for an individual that’s supposed to affect them a certain way — like a film — then feelings-empathy is your bag.

But if you’re designing a social environment — say, an organizational structure, an event, a political system, or a social network — our claim is that you need values-empathy. And pretty much nobody has it.

For more on this topic, see Can Software Be Good For Us?

Q. What do you mean by values?

One key concept in our courses is human values: open-mindedness, agency, protecting one another, and belonging. Modern social systems haven’t been designed around values, and we haven’t learned to measure whether systems support us in living by our values. This is causing huge problems.

In general it’s hard to live by our values: it’s hard to be honest, hard to be open-minded, hard to be thoughtful about our choices, hard to be bold, etc. But designers of social systems often make it even harder. We can all think of times when being honest (or bold, or gentle) was hard for us. Knowing the “hard to dos” of living by different values allows us to design better systems, and to answer questions like:

  • How could we change Facebook’s News Feed to support the value of open-mindedness in political comment threads?
  • How would we design a school, if our concern was to give students agency over their own lives?
  • Could Twitter’s harassment problems be addressed by users directly — by letting them protect and monitor one another as we do in the real world?
  • How would Instagram look, if sharing photos were to be understood as being about belonging?
  • And what’s the right way to measure the impact of a social network on political polarization? What about the experience of agency within a school?

The Courses

Q. What is the Fundamentals course?

Everyone starts by taking Fundamentals—three 2-hour sessions (plus homework). Fundamentals is about a new way to think about human motivation, which we hope you’ll find relevant to how you approach your work (and in some cases, your personal life). In particular, you’ll learn to name values and norms as precisely as most people can name goals and feelings, and you’ll use this newfound articulacy to see where values and norms come from. This will help you see how to create, or re-envision, social spaces so they support people’s ability to have meaningful, purposeful experiences.

Norms. When we engage in a social space, our behavior is driven by different things. Sometimes we have a specific goal to achieve (I may want to convince you to invest in my company). Sometimes we follow the norms of the environment (I may want to act professional at work). And sometimes we want to create or spread a norm (I may want to model political activism because I think more people should be politically active).

Values. But sometimes we aren’t guided by norms or strategic goals. Even then, we still have ideas about how we want to be: not because we want to make a certain kind of impression, but simply because we think it’s a good way to be. You might want to be honest, or real; you might want to keep things light, and so on.

When social spaces enable people to live by their values, the people in them tend to have meaningful experiences. In particular, they don’t feel compelled to act in ways that don’t align with their values.

You will also learn to speak plainly about your own personal values and the values of others. You will come to understand when, where, and how you decided to be the way you are. For many people this means confronting unsettling questions. It can be tough.

Apply here.

Q. What is “High Impact”?

After going through Fundamentals, those lucky enough to work on large-scale systems are invited into a program where they can take à la carte sessions on (1) redesigning systems, (2) survey and data methods, and (3) metrics at scale, as well as get personal, behind-NDA attention from the trainers. Those in the High Impact program who are at the same company or who face similar social issues are also put into groups.

Apply here.

Q. Do people like the classes?

“This marks one of the most important shifts in my design career and I believe in our industry.” — Kate Pincott, Designer at Facebook
“This curriculum is the best way for designers of social systems to do their jobs responsibly. My hope is that this can lead more people to articulate and confront how their company’s tool or platform is letting people down — and inspire them to do better.” — Katherine McConachie, MIT Media Lab
“I found the ‘hard to dos’ concept really helpful. Translating from values to specific difficult actions seems to reliably generate insights.” — Andy Matuschak, Khan Academy

Q. Do I need to apply?

Yes. We have to make sure the courses are a good fit. In particular, students must already have experience with some form of introspection, as well as a prior background in designing social spaces (as defined above). Otherwise the classes are unlikely to work out. We also sometimes have to give priority to those who will be in the High Impact program — people who design social systems of consequence at rapidly growing startups, at big companies, in public policy, and so on.

Apply here.

Q. How can I get my company to pay for High Impact?

This will be easier if your product is already acknowledged to be causing social problems and if it is part of your job to address, monitor, or understand them.

  • Check if there are HR/professional development budgets. Our classes might count as professional development, depending on your role (and whether you choose to focus on design, metrics, data & survey methods, etc).
  • Check if the vertical has a budget. If you are a designer and your org has a VP Design, that person might have a budget for trainings involving design and social issues, cutting edge design approaches, etc. Similarly for other verticals.
  • Best of all—convince your team to budget for it and do it together. You can do this by (a) passing around some of Joe’s writing and having a discussion; (b) trying on your own to reframe some of what your team is struggling with in terms of values and norms and selling people on it; (c) bringing our starter games to a meeting.

Finally, if you work at one of the major tech platforms, we might be able to connect you with people who have experience on this, or even who can help directly with budget. Ask us.

Q. Will you do an in-person workshop for my company or team?

Ask us. In general we charge more for in-person workshops, and, surprisingly, we think they might be a worse experience. When you take the classes over the course of weeks, instead of crammed into a couple of days, they tend to sink in better.

Q. Will you come to my event?

Likely only if it is (a) local for one of our trainers or (b) you pay a lot. But go ahead, ask us.

Q. Who runs the classes?

The instructors come from different backgrounds: one is a designer at GitHub, one is a PhD sociologist, and one is currently at the MIT Media Lab. But that’s not really important — what matters is that the instructors have emotional and facilitation skills and are talented social designers and analysts. There are never more than five students in a session, so the instructors have time to mentor you as you work.

Q. Who developed the courses?

The material was originally collected by Joe Edelman, who oversees the classes and is one of the trainers. Joe coined the term “Time Well Spent” to spread an understanding of social harms and social good that transcends and incorporates notions of distraction, persuasion, freedom, privacy, and so on. It has been adopted as a guiding light in metrics by teams at Facebook, Google, and Apple, and has led to many product and ranking changes. It has also become a bellwether vision for the tech industry: for an internet that helps us relate to one another and take action in ways we find meaningful — the “livable cities” of the online world. Joe is always accessible on the Slack and often in the sessions.

The courses are being extended by a growing community of designers, academics, and this guy.

Comparisons

Q. How is this different from other values-based design approaches?

  1. Our design and metrics approaches work at scale, with the complex systems that characterize the modern world.
    Many design and measurement approaches, like participatory design and ethnography, can’t really work at the scale of the modern world. Human Systems can be used to evaluate and build things like legal codes, social policies, and social networks which touch millions or billions of people. Our processes for designing and surveying around values were developed for organizations that operate at the scale of Facebook or a national government.
  2. Our understanding of values is no-BS, detailed, and testable.
    Many approaches to human values involve a short list of supposedly “universal human values”, without a methodology for researching and fully specifying the values that participants in a given system are struggling to live by. These usually give broad names to values, like “autonomy” or “creativity”. At Human Systems, we learn to name values with the same precision we can name goals, and when we do this we find that there are a great many of them, and that new ones are always being invented. Values are sometimes unique to an individual, and they pass from person to person via mechanisms like admiration, inspiration, and mentorship.
  3. We don’t pretend values are our only motivation. We show how values, goals, fears, and norms all come into conflict.
    Our approach can explain why, in many environments, people don’t act according to their values. Values are contextualized within a broader notion of human motives, including goals, fears, norms, and ideologies. When these conflict, we face head-on the difficult discussions about what the right stance for a designer or measurer should be.

Q. How does this compare to other “design ethics” curricula?

  • Unlike philosophical approaches, we start with a practical understanding, detailed analysis of social issues, and proven processes—not with big questions.
    People who work in the applied ethics fields (bioethics, AI ethics, and so on) love certain kinds of ivory-tower problems, like whether a self-driving car should use Kantian reasoning or follow Bentham in an accident. But, even in these fields, this kind of thinking has little or nothing to do with what will make cars safer (which will involve developing many mundane rules, like making sure the car drives slower in fog). In general, the right group to worry about whether cars are safe is the people who study car accidents and learn from them (certain kinds of engineers and policy-makers). The right group to think about making cities livable is people who’ve studied good and bad cities. Regarding social problems due to technology, the right group would be people who study social problems and social benefit, including sociologists and economists, and who have practical techniques for making technology or policy safer. Unfortunately, this doesn’t describe most work in “design ethics.”
  • Human Systems isn’t an ad-hoc method or a grab-bag of lenses, but a paradigm shift.
    We believe the basic problem in understanding the social effects of technology and policy is one of seeing the social fabric. Unless designers can come to a common conception of what in society they might be damaging or supporting, it will be hard to anticipate or observe these effects. Everything in Human Systems is based on teaching designers and measurers to understand the social fabric, how it evolves on its own, and how it’s shaped by new systems.
  • We teach you concrete, powerful skills.
    It takes practice to get good at recognizing others’ values, being articulate about them, and doing new kinds of social analysis and design. Our classes focus on giving you the practice you need to be able to do these things in your day-to-day work life.

Q. How does this compare to other approaches to technology and social problems, which focus more on “bad actors” or using AI to detect bad content?

While some of the social problems we face do relate to “bad actors” (people with an intent to destroy social systems), most don’t:

  • Journalists who are pushed towards clickbait or outrage headlines mostly aren’t bad actors.
  • Neither are people hurling insults on Twitter.

Many of the systems built during the 20ᵗʰ and 21ˢᵗ centuries make it hard or inconvenient for participants to live by their values. People in both groups would behave differently in a different environment, one which made it easier to practice values like journalistic integrity and respect.

Values like these are what hold the social fabric together, and make our various institutions and practices work. Democracy requires values like journalistic integrity and respect. Other values are necessary for other parts of the social fabric: for people to take care of their friends, to form community, to be entrepreneurial or scientific, and so on.

When practicing values gets difficult, the values themselves get lost. Parts of the social fabric cease to function. Individuals become less connected and, if they cease living by their values, their lives and relationships feel meaningless. At larger scales, cooperation becomes impossible and groups can’t work together to face threats like climate change, war, or declining economies.

The social harms caused by social media are also often viewed as problems of AI/detection (e.g., we need to train AI to detect and remove terrifying children’s videos). However, we often come to faster, deeper, and better solutions when we consider them as design problems instead (e.g., mixed-age communities of kids should watch together, not as isolated individuals). The values-based redesign process taught in our High Impact Program gives students the tools to understand these design problems.

Relevance to Social Issues, AI, and Metrics

Q. What do values have to do with social issues (like online bullying, political polarization, or depression)?

Every social issue can be framed as the failure of social environments to support people in living by their values:

Bullying
  • being protected by friends & community ← victim unable to live by this value
  • kindness ← perpetrator unable to live by this value
  • protecting one’s friends ← community unable to live by this value

Political & media polarization
  • open-mindedness ← audience unable to live by this value
  • thoughtfulness ← journalists unable to live by this value

Depression
  • courage, agency, self-knowledge ← depressed person unable to live by these values
  • being socially supportive, loving, and inclusive ← others unable to live by these values

Many of those involved will say they want to live by these values. So something must be going wrong in these particular social environments that makes living by these values harder (or makes living by them conflict with other goals or interests cultivated in these environments).

A key thesis of our approach is that social breakdown happens because new social environments are making it harder for us to live by our values.

Q. How can designers address social issues?

Once we recognize a social issue as a failure to live by values, it becomes much easier to investigate what’s happening. The designers’ own experience of trying to be honest, courageous, etc. is often relevant. Designers can ask questions like “what’s hard about being honest?”, “what’s hard about being open-minded?”, and “which environments make this hard thing even harder?”. Such questions clarify what we must steer clear of in social designs, in order to avoid social breakdowns. When designers think this way, their designs are better for everyone.

Q. What does this mean for metrics?

Metrics often measure the wrong things. To measure the right things, we need to ask: what social good do we hope for from our social environments, and what social harms must we watch out for?

One way to understand the social impact of technologies and policies is to measure people’s ability to live by their values — their ability to be honest, open-minded, or courageous, to be protective of the people in their lives and to be protected and cared for by people (rather than e.g., by companies / AIs).

Measuring whether people can live and relate to each other by their values is powerful: it measures (1) users’ time well spent, (2) the meaningfulness of their interactions, and (3) whether social breakdown is occurring. Values-based metrics are the way forward for watchdogs and government agencies as well as for product companies themselves.

Random

Q. What if I just want to keep in touch?

  1. There’s an email list you can join.
  2. On twitter, you can follow Joe Edelman and Human Systems.
  3. You can follow this medium publication.
  4. And there’s the Facebook group.

More questions? Ask on our website.