A User Researcher’s Guide to Getting Started With Ethics: How to Incorporate Ethics into your Design Process
People have many motivations for going into the field of User Experience, but a common one is the desire to manifest, in one’s work, a genuine care for people. This concern for all humans (not just users) has been the central motivation in my own story, which has taken me from customer service agent to non-profit technologist to UX Researcher.
People are at the heart of the work that we do. As Designers and Researchers, we are often the closest point of contact with the people using our products (next to customer service agents, who often don’t have as much of a voice in the product development process). We are trained in methods that build empathy and that help us understand human needs and motivations. And in the midst of coinciding health, social, and economic crises, working to improve people’s lives and protect them from harm is all the more pressing.
If you’ve read this blog before, you might have heard us say that we’re building tools to make it easy for our users to do the right thing, and difficult to do the wrong thing. Salesforce CEO Marc Benioff says, “Technology is not good or bad, it’s what you do with it that matters.” As designers and researchers, we know that the way we design technology matters. We have the opportunity and responsibility to use the power of design to influence the use of our products. At Salesforce we are committed to building products responsibly to make it easy for designers, developers, and admins to use them responsibly.
So what are some things that we, as designers and researchers, can start doing right now to ensure the well-being of our users and mitigate harm for the larger community? How do we reconcile what we know with what we don’t? How can we uncover insights that lead to experiences that are valuable to all stakeholders, beyond just our users?
What we’ve learned from incorporating Ethics-by-Design into the product development process at Salesforce is that it needs to be part of every step of the process. Designers should try to identify risks, but won’t be able to do so exhaustively on their own, given that we can’t imagine our way into understanding the experiences of others. Therefore, we need to be intentional about attempting to uncover risks during research with both users and non-users, and ensuring that we are led by our values in the design process.
“Our values are our guideposts…even if customers aren’t asking, or don’t know to ask, or don’t want this [ethical feature] but should have it. This is a substantive way to operationalize improving the state of the world in a way that’s consistent.”
— Justin Maguire, Chief Design Officer, Salesforce
Without further ado, here are a few tips:
Develop a community of ethical practice
You can begin building a collaborative community around ethical design by just having conversations. Start talking to your colleagues informally about the technologies you’re designing and whether the implications of that technology are fully understood by the team or the company. Bounce ideas off each other about ways to create space to raise concerns or explore potential risks, even if they’re not jumping out at you. Host a workshop to consider ethical concerns using some of the resources below. It’s been especially helpful for our UX teams to ground these conversations in external reading, as well as our company values and our Ethical and Humane Use Principles.
I started to find my own community of practice while studying technology design in an interdisciplinary Master’s program at UC Berkeley’s School of Information. In our projects, we grounded ourselves in the work of interdisciplinary social scientists (Burrell, Eubanks, boyd, Noble, Shilton, Ruha Benjamin, Sweeney, Crawford, Winner, Irani) and legal scholars (Mulligan, Nissenbaum, Citron) who study the social and legal context of technology use, in order to think critically about potential new problems we might be creating with our “solutions”. This led a former classmate, two colleagues, and me to ask questions that resulted in the creation of Salesforce’s Office of Ethical and Humane Use. A community of practice like this can also support collaborative product design and policy advocacy.
Get familiar with existing tools
You don’t have to make big changes to start considering the ethical implications of your work. Here are several simple and effective tools that can easily be integrated into workflows:
Consequence Scanning: This workshop format created by Doteveryone allows anyone to review their products and experiences for intended and unintended consequences. You can read our blog post on how to get started with Consequence Scanning and our lessons learned here, and access Doteveryone’s materials here.
Ethical OS: A framework that allows anyone involved in the process of building technology to discover and mitigate risks before they happen. Check out their toolkit here, and the Ethical Explorer cards that can be used to consider potential harm caused by technology.
However, product development teams may not be able to imagine all the risks that exist, particularly those that may fall outside of their experience, situated-ness, or privilege. As UX professionals, we know that we can’t imagine ourselves into understanding the experiences of our users — it is why user research is such a big part of our work.
Conduct research that uncovers ethical risks
The best time to uncover ethical risks is early on in the design process, during the user research phase. There are a few key ways to discover ethical risks as we design our studies, such as:
Include questions to elicit concerns and increase ethical awareness
Some questions that have worked well for us are:
- What is one headline you fear being published about your company’s use of this technology? OR What is one headline you fear being published about the use of this technology in your industry?
- Imagine you are a brand-new user of X product: what are some things you might unintentionally do with it that could cause harm to other people?
- To do your work responsibly, what are the principles that you aspire to uphold?
- What institutions or systems might your use of this tool impact (healthcare, finance, education, law enforcement, child welfare, etc)?
Create opportunities to observe
Users might not always articulate concerns without being prompted. In our research on Trusted AI, we’ve learned that many of our users haven’t had much practice identifying ethical concerns. In addition to asking directly about misgivings, build activities into the study that allow for observation. For example, consider including a potentially concerning use case in your user testing flow to prompt reactions and get your user talking about potential issues.
When we understand the context of use, we can identify the impact of our design decisions on both users and non-users. For example, AI analysis of call transcripts can be very helpful for employees, connecting them with relevant resources while on the job. However, the same technology can also be used to aggregate and evaluate employee performance, which creates risk. Those performance insights could be used to determine pay rates or continued employment, and therefore should include protective guardrails to prevent discrimination or misuse. Context of use helps us understand whether existing regulatory or process safeguards can protect against harmful use of a product, or whether we need to build those safeguards into the product itself. It can also help us understand how a product’s impacts differ across industries.
Include users with marginalized identities and non-users in research studies
When we collect data about the impact of design choices, the experience of marginalized or vulnerable users can’t be an afterthought, or weighed less than that of majority users; these users need to be actively prioritized. Beyond that, users aren’t the only ones affected by a product or tool. As designers, we should also consider who falls within our scope of impact. Our accountability stretches beyond the user, and we need perspectives from more people than just users to map, track, and mitigate impact. In our recent study on Responsible Marketing and Personalization Practices, we brought our users (marketers) together with their users (consumers) to explore the expectations and concerns of both groups.
Develop your business sense
In a perfect world, deciding that an action is “the right thing to do” would be enough. But we’ve found that the business case for Ethics-by-Design is also strong, and that relevant business analysis can lead to greater alignment across stakeholders. Familiarizing yourself with how ethical risks and Ethics-by-Design can impact your company in financial or reputational terms can help further your cause.
Some questions to think about include:
- Might the ethical risk you’re concerned about impact overall adoption of a product? (e.g. Does lack of explainability in AI products negatively impact adoption?)
- Could it erode customer or societal trust in your company?
- Are there financial implications that might come from that loss of trust?
- Are there potential PR implications for your company or customers?
Think about designing for friction that enables education and reflection
As designers and researchers at Salesforce, we’re working to design moments of purposeful and productive friction into the user experience. Rather than make everything as easy and frictionless as possible, we’re focused on identifying places where it’s important for users to pause, reflect, and potentially seek out additional resources or education (and we also make sure those resources and educational tools are easy to access). While this might seem contrary to much of what design has been focused on for the past couple decades, it’s these moments of friction that allow all of us, from designers to end-users, to become advocates for ethical business practices.
Think about ethics as an opportunity, not necessarily a constraint
Ethics-by-Design gives us an opportunity to explore and guide our impact on society. When we bring risks out into the open, we get to think creatively about how to mitigate them. Even when we decide not to build or release something, we create space for innovation and new frontiers for problem-solving, in pursuit of positive change for both our business and the world.
We at Salesforce won’t claim to have all the answers. We’re always learning from experts and communities how we can design more ethical products and services. We’re continually seeking ways to put our values into practice, and still discovering spaces where we can have impact.
This journey is ongoing. We will continue to share our approaches, our experience, our challenges, and our aspirations as we go.
Check out our Ethics & Privacy Principles to learn how we are building our COVID-19 solutions in line with our values and our commitment to protect people from harm.
Learn more about Salesforce Design at www.salesforce.com/design.
Follow us on Twitter at @SalesforceUX.
Want to work with us? Contact email@example.com
Check out the Salesforce Lightning Design System