Good by Design

How do we get from being a good person to being on a good team, to being in a good company?

Kit Oliynyk
17 min read · Apr 4, 2019

Hey folks. It’s been over a year since I wrote my first story on design ethics and started figuring this stuff out for real. Many things have changed since then. I’ve seen a few hundred more blood-boiling headlines about how tech companies continue to ruin our society; I’ve read a few books about philosophy and ethics, and I’ve started talking about it myself.

I had a rough awakening. I went through quite a journey of reevaluating my craft and my identity as a designer, so naturally, I assumed it was my responsibility to help my fellow designers wake up. So I put together a talk with a whole bunch of horrible case studies, woeful revelations, and sad stories, basically trying to scare people into a shared understanding. Or, as my friend and mentor Steph Hay puts it, I was going for the throat. And I’ve heard a lot of talks very similar to mine, with the same sad headlines collaged over a black background.

We talked about how our designs ruined the world instead of saving it. We’ve built tech that spies on us, tech that deprives us of sleep, and tech that turns us into dopamine addicts. We’ve enthralled people with our beautifully crafted apps, and made sure they’d never let go of their phones again. We’ve crafted biased algorithms that distort our perception of reality. We’ve created social networks that boost our fear and anxiety and make us feel more alone than ever.

We believed in our own version of the future, and we’ve been making new things to move fast and break this old world, regardless of consequences.

We believed the tech was as “neutral” as household utilities, and we kept making more of it. We’ve treated people merely as consumers of our craft — as users — and we’ve built our businesses around it. And in our naive and blind optimism, we were dead sure people were going to use our products in the exact same way we’d prescribed.

So yeah, I gave a talk about all of these horrors because I thought most of us still needed this wake-up call, but more and more people started saying, “Hey, enough preaching already; we get it, give us something practical to take back to work!” I was happy to find out that people are woke already. So I started sharing toolkits and frameworks for ethical decision-making. Then some people said, “Hey, wait a second, what’s this philosophical mumbo-jumbo? Philosophy is useless. This is a design conference; I want to learn something new and cool for my craft. Come on, Kit, you used to talk about UI animation — that was good stuff!”

The more I talked to different people, the more I realized something: people in our industry are in very different places in their ethical journeys and, more importantly, have different beliefs. Some people already believe in social good, social justice — all of that — and might’ve already considered the ethical implications of their work. And some people think all of this social stuff is someone else’s problem. And that’s ok.

It’s ok to have different beliefs. The problem is, if we’re unwilling to declare our own beliefs and learn what our colleagues believe in, it might be very hard to make ethically responsible products. In fact, it might be very hard even to define what we consider right or wrong as a group. So I started wondering: how do we get from being a good person to being on a good team, to being in a good company? How do we get from personal ethics to shared ethics, and spread them across our companies?

I against I

Alright, let’s start with some personal stuff. I studied Computer Science at the National Technical University of Ukraine, and back then, philosophy was a mandatory class. Well, in post-Soviet universities everything was mandatory — we couldn’t pick and choose our classes. But I remember hating philosophy in particular. It seemed so useless. So far-off and detached from my actual craft.

I had a different self-identity back then: I thought what made me a good designer was profound knowledge of my craft — all the tools I’d mastered and all the ways I could change existing things and make new things. I’d look at the world around me, say, at some slow and complicated website, and think to myself, “It ought to be simpler and faster. I can make it better.” I didn’t realize that “better” means “simpler and faster” only in my head, only according to my personal beliefs.

We all did it, right? Uber looked at public transportation in the U.S. and thought “we ought to invent a new kind of transportation system: faster and simpler.” And now we’re stuck in traffic among hundreds of other Uber cars, and it’s not as simple as just walking over to a bus stop.

Then I listened to one of Erika Hall’s talks and learned of David Hume, an 18th-century philosopher. He said that we can observe what is, the nature of our reality, with reason, science, and pragmatism. But whenever we think about what ought to be, it’s not objective reasoning; it’s our morals and beliefs talking.

We use our brains to observe the present, and we use our hearts to speculate about the future.

And design happens right in between those two. I used to believe that technology was neutral, but according to Hume, any decision, any design that changes WHAT IS into WHAT OUGHT TO BE is very much not neutral. It’s personal. It’s an ethical choice.

So I kept reading. I studied the good ol’ trolley problem, I learned about the five ways to approach ethics, and the difference between utilitarianism and consequentialism. This is where a lot of people start getting that terrible migraine and decide to give up on all of this theoretical stuff, right? Ok, I got stuck too, and here’s how I pulled it together.

Previously, I thought I should design to make human lives better. I thought my job was empathy for the user, for one human being at a time. I loved focusing on those “happy paths.” It felt so good, so self-righteous. It was so easy to believe that if I could make one person happy, it would make EVERYONE happy. And that would make me a great designer and a very good person.

In reality, I can build an algorithm that could make a couple of people happy now. Maybe. But it could make a million people suffer just a few years later. And it’s impossible to make everyone happy — it’s an ethical choice to exclude some people from benefitting from my design. So I guess I’ve learned that empathy is not enough. Human-centered design is not enough. Is there a way to be humanity-centered instead?

I used to be so proud of being creative. But I had to redefine my self-identity: what makes me successful as a designer in my own eyes. I’ve realized that we’re not a creative class anymore.

We’re no longer in the business of creative solutions; we’re in the business of disaster-proofing our collective future.

So I decided that if I want to be good at this whole disaster-proofing thing, I have to replace a bunch of beliefs in my head with new ones.

I started valuing communication over craft, “why” over “how,” questions over solutions. But hey, this is not a call to action. This is my personal journey, my personal beliefs. I cannot force anyone to believe this stuff. I’m only bringing this up as a reminder that before we can get into any kind of ethical design process at large scale, before we learn how to do social good as a team, we have to decide what’s good and what’s evil for ourselves. There’s no one else who can make that decision for us.

Last year, Fjord presented a framework for this kind of evaluation, and at first, it made sense. I looked at all of my case studies and tried to fit them into its cut-and-dried quadrants, but it didn’t work. I can only evaluate what’s a “use” case vs. a “misuse” case or a “stress” case based on my personal beliefs. But products are made by teams, and I don’t know the real story behind the teams that made those products. What did they believe in? What did their business leaders believe in? It’s so easy to point fingers and call a bunch of companies super-villains just because they’re doing bad shit in my personal system of beliefs.

Don’t be evil

As Alan Cooper said, exhorting people and organizations to “not be evil” clearly doesn’t work. It’s hard to fit human beings into a binary framework. We’re not monsters. There are a lot of good people in the tech industry. And most of us care about our society. So why do bad things happen regardless of our good intentions? Because when we make decisions alone, we’re likely to make mistakes.

We usually know what’s good and what’s bad for ourselves, but we don’t know what’s good at scale.

We don’t know how to talk with each other about ethics. We don’t have a system of shared beliefs.

We all make mistakes. Our judgment might be flawed, we’re ignorant, and we need more perspective. So what do we usually do? It’s hard enough to overcome our egos and admit to ourselves that we need help, so we seek answers from like-minded people to reaffirm our opinions rather than challenge them. When we ask questions to the same limited group of people, we get the same answers over and over. We get biased. And we build those biases into our products and services.

We build weight-tracking products that scorn women for getting pregnant. How could this happen? Perhaps this tech company didn’t have enough women on their team to remind them that gaining weight during pregnancy is perfectly normal.

We build soap dispensers that don’t recognize people with dark skin. This soap dispenser was meticulously designed and engineered by professionals, the tech specs of this sensor were discussed at length, and it went through some proper QA testing. How could THAT happen? Perhaps there were not enough people of color on the team — or simply no one to ask the right question at the right time. Because it shouldn’t take a person of color to make sure there are no racist soap dispensers in our workplaces. But too often we’re too afraid to speak up, to assert an opinion that’s different from our peers’.

Diversity is super-important to help overcome biases and bring broader perspectives into our product discussions. Our companies are making bold statements about it but rarely do anything real. And when they do, it’s still super-hard to share our diverse opinions with one another. It gets emotional. It gets political, so we keep avoiding it.

Our personal beliefs are mostly black and white. We’re so sure of what’s good and what’s bad for ourselves. Just look at how passionately opinionated people are on the internet. But when you put a bunch of people on a team and ask them to make ethical decisions as a group, it gets much harder and much less binary, because people are different and believe in different things.

Shared ethics are not binary; they are a product of endless conversations we ought to have with one another.

We need to shift from personal ethics and morals to politics. We need to get out of our comfort zones and stop avoiding these conversations about our beliefs.

Design is always political

When I first started talking about this ethical stuff at work, a whole bunch of people (much to my surprise) approached me in private, telling me, “You can’t talk about politics at work, it’s inappropriate.” I was like, “Wait a second, but I’m not even talking about politics all that much — I’m talking about beliefs and social justice.” And they told me, “It’s still your liberal agenda, so you can’t really talk about that stuff at work.” Wow. It was interesting to me that people instantly equated talking about beliefs with talking about politics, and that both felt uncomfortable to them.

I got curious, and I started reading a lot on the subject. I’m an immigrant, and I was surprised to learn there’s a sort of social taboo in the US around politics at work — it’s actively discouraged by companies. In Ukraine, where I’m from, and in many other countries, people talk politics all the time — at work, in pubs and restaurants, on the beach and in the church. People feel that asking complete strangers about their views on the society they share with each other makes perfect sense — how else would you get out of your bubble and learn something new?

Then I found something really interesting. One would think there are so many HR policies about politics at work because it makes people feel bad, right? Well, it turns out, most people are okay with it. According to a 2017 Clutch survey, only 12% of people actually felt uncomfortable about it. What’s even more fascinating is that the vast majority of them felt unhappy because their political views didn’t align with their coworkers’. Wow, so maybe it’s not the talking itself; it’s having different beliefs that scares people? Alright, so I dug deeper. Why is it so hard to talk to someone who has different beliefs?

The University of Illinois did a fascinating research study in 2005. They measured how far people sat from each other in a meeting room when they disagreed — specifically, when they believed the disagreement was a matter of morals. They found that if someone considers their position on an issue to be a question of right versus wrong or good versus evil, they’re less likely to want to interact with a person who disagrees on that issue.

When we have a moral conviction that our beliefs are “right,” we can’t handle other opinions because they challenge our identities.

We’re all inadvertently xenophobic. We’re afraid of anything different; we’re afraid to open ourselves up to new ideas and beliefs. What if those beliefs turn out to be better than mine? Does this mean I’m a terrible person? We alienate ourselves from people with different views, and we get into a bubble. We feel like if we don’t, we’d have to contest or challenge their beliefs.

What makes it even harder is that many people tend to equate political with partisan. Our beliefs are usually pre-packaged into a bunch of convenient shortcuts we’ve allowed into our language. Red. Blue. Republican. Libertarian. Progressive Liberal.

Do you know why it’s easier to talk about this stuff in Europe? Because there are many parties, not just two, and they change every few years. So it’s just harder to generalize and package all of our unique beliefs into a couple of shortcuts.

But why does it have to be this way? I think it’s next to impossible to change another adult’s system of beliefs over a short heated conversation. Imagine if we could un-package these shortcuts and just talk about individual beliefs — especially in the context of our design work. What if those people, whom we choose to see as opponents, happen to believe in the same things we do?

It is possible to express genuine and respectful curiosity to learn other people’s perspectives without trying to change their minds. It is possible to find common ground: things both you and your colleagues care about. And at the very least, it’s quite possible to simply declare your own beliefs — so that people understand where you’re coming from when you make design decisions or argue for a change in your product.

It feels like we’re at the inflection point. Can we evolve from personal craft and personal morals into shared beliefs and shared responsibility? Can we overcome our fear and start having all of these hard political conversations with our colleagues?

Values-driven design

I’ve brought up a shared beliefs framework by Alla Weinberg before, and some people asked me to explain how it works. Here’s an example: It starts with the willingness to change. What is our team experiencing right now that we want to be different? Say we’re building a chatbot, and most of our teammates agree that interacting with a “female assistant” feels bad. So as a group, we declare that we believe in gender equality. How do we behave as a group when we have this belief? We stop saying “hey guys,” and start saying “hey folks” instead. We start being respectful of each other’s pronouns. And we end up making a bot that’s completely gender-neutral so that we can avoid bias.

What happens when our team collectively builds a system of these shared beliefs? They could become our most important design tool. Remember how we used to create design principles for our projects and our teams? Those usually come in threes too. Say, these three: simplicity, ingenuity, delight. Sound familiar? We pre-package a bunch of really important stuff into these convenient, generalized labels. Shortcuts. Just like we do with politics. And what kind of decisions are driven by these basic labels?

You know the answer. We build oversimplified experiences that ignore the “edge cases” and do not account for nuance. We fall for genius new algorithms without realizing the consequences for society. We force people to experience “delight” to make them addicted to our products and endanger their health and well-being.

What if we use ethical principles instead of design principles? What if we use our shared beliefs as a design tool?

All of this ethical stuff is super-old, remember? The ancient Greeks talked about it. And all of this has happened before, in other industries. David Panarelli wrote an amazing article recently about the parallels between medicine and design ethics. Here’s a story that you might’ve heard before.

In 1932, the US Public Health Service started an experiment at the Tuskegee Institute (now Tuskegee University) in Alabama. About 400 African-American men with syphilis were told they were being treated for their condition. They weren’t. The goal of the study was to observe the effects of untreated syphilis until the time of death. Those men were told the study would last only six months. It lasted 40 years. The experiment was meant to end only when all participants had died and been autopsied.

Over the next few decades, this slowly unfolded into a major public scandal, until in 1974 a National Commission was formed to figure out the ethics of medical research. Five years later, they published a thing called the Belmont Report — basically, an updated version of the Nuremberg Code.

And here are its three ethical principles:

  • Respect people as individuals (respect for persons)
  • Do no harm while maximizing benefit (beneficence)
  • Fairly consider every affected population (justice)

See, doctors found it essential to set ethical principles even for their research — not to mention actually slicing people open, which is also guided by tons and tons of protocols, ethical studies, and constant iterative discussion in the industry. And we’re only starting to talk about it.

This year I went to the IA Conference (formerly known as IA Summit), and I was so happy to see exactly this conversation spreading like wildfire. What are our shared beliefs as designers? What are our values? Our ethical principles? How might we behave as an industry if we have those shared beliefs? How might we measure our progress based on those shared values?

The more people asked those beautiful and passionate questions, the more I heard the same answer, over and over. It’s capitalism, bro. What happens when our businesses stand in the way? What if our company makes money off something we believe is unethical? Alright, let’s look into that.

It’s capitalism, bro

Design is about the exchange of value — between individuals, business, and society. If we collectively believe in social good, but our business fails, we’re out of a job, and there won’t be any social good whatsoever. So how can we participate in defining the business model? How can we share our beliefs across the entire company?

Businesses are about numbers. Design is about stories. We can and we should tell stories about our work and our beliefs. But is there a way we can turn our beliefs into numbers? How can we quantify social good? There are some ways. We can measure how good or how bad people feel about our company. This is called NPS (Net Promoter Score), and it doesn’t really work, but it’s a start.

Alternatively, instead of measuring all the bad things people are saying about our company, we can directly measure the amount of money our company is spending on lawsuits, call center complaints, and government penalties. Many people in our companies could give us this data: lawyers, corporate strategists, PR, and HR folks. What if we start talking ethics with all of them? What if, together, we could change the way our companies do business?

One of my favorite examples is the LinkedIn class action settled in 2015. LinkedIn made a business decision to aggressively increase its market share. It resulted in a design decision to secretly spam people’s contacts using a misleading design pattern. And it ended up costing $13 million in a class-action settlement. On top of that, between 2015 and 2016 their stock price was cut in half. How’s that for shareholder value?

Speaking of shareholder value, it was popularized in 1970 by Milton Friedman. Friedman argued that only individuals can have a “social responsibility,” while corporate executives have a fiduciary duty to the company’s shareholders. A corporation has to increase profits at all costs — otherwise, it’s “pure and unadulterated socialism.” Notice how politically charged his doctrine was to begin with? There are still many people who believe this — and often these same people argue that politics are inappropriate at work.

We can and we should talk about our beliefs at work. If we disagree with our company policies, we should ask questions. This stuff works. Google folks did it. Microsoft folks did it. And many more of us are doing it every month.

We have a choice to be socially responsible, even if our companies aren’t.

Sometimes, our companies might be in a very different place on their ethical journeys, and we can’t stand it anymore. Well, being able to quit because of your ethical beliefs is a privilege. Lots of people cannot afford to lose a single paycheck. But if you can, remember that you always have that choice.

This stuff is hard. We’d have to learn how to be genuinely curious about other people’s beliefs without offending them. We’d have to overcome our corporate culture of not talking about it at work. We’d have to get direct support from our senior leaders and make sure our company’s values and beliefs are aligned with ours. And if we get to this point with our teams, we’re already pretty far along on our ethical journey and are far less likely to do some evil by design.

Starting Monday

Alright, here are some actionable items for all of us.

Join the industry conversation. Read a book, watch a talk — or better yet, give one yourself. Let us know what you believe in and why. Join a discussion group, come to your local meetup and help amplify our values and shared beliefs.

Talk about your beliefs with people around you — and beyond. Talk to lawyers, PR and HR folks, company strategists and security people. Seriously. Put it on their calendar, smile, and ask them how much it costs your company to be evil. Collect the numbers and make a business case out of it.

Normalize the conversation. If you’re an executive or a design leader, please make yourself accessible. Encourage people to start conversations with you, to share their beliefs and to ask about yours. Go where the people are. Show up on Slack every day. Organize meetings where people talk about beliefs with each other in a friendly, civil, and productive way. Capture what people say, capture your shared ethics, and keep bringing them back to your people so that, together, they become your company’s system of shared beliefs.

Try something new. Schedule a workshop with your partners and stakeholders and start asking some hard future-proofing questions to collectively understand what could go wrong with your product and what risks you might be facing. There are so many new tools out there — cards, frameworks, canvases, value maps — whatever you like. Step out of your comfort zones and try something. And as you keep doing it, these conversations will get more comfortable over time.

Never stop believing. When it gets really hard, when you start feeling desperate, remember there are many more around you who share your beliefs. Hug someone when it gets dark. Cry a little. And remember, it’s getting better every day. We’re having more and more of these conversations as an industry, and it’s great. We’re definitely on the right path.


Kit Oliynyk

Product design, design culture, and design ethics at large enterprise scale.