If you’ve seen Black Mirror, you know that well-intentioned design can go horribly wrong. This happens in “Nosedive” (episode 1 of season 3), where everyone in the future rates each other from 1 to 5 for just about any reason throughout the day. In theory, each person’s ratings could build transparency and trust between people. Instead, they prove to be an unrelenting construct that causes people to be disingenuous, manipulative, and cruel to one another.
Artificial intelligence is no different. It is a powerful technology that can be used for good, but it can also negatively impact our society. This message rang loud and clear during a dynamic discussion at an SF Design Week panel hosted by Adobe called AI and Immersive Design, and it has echoed in what I have seen through my own work in this space. The Adobe panel featured a group of experts that included Ana Arriola, Saschka Unseld, and Sagar Patel. One of the most important topics discussed was the growing impact that AI will have on our lives, and the contention that “AI is what we feed it.” It led to an important question:
How can we avoid bad AI design?
Drawing on insights from experts in the field, as well as my own experience working on AI projects, here are four best practices that AI teams can follow to avoid the potential dark side of this technology.
1. Expand who assesses AI experiences
During the AI panel discussion, Ana Arriola shared that many companies, such as Facebook, are forming teams tasked with overseeing thoughtful AI design. These teams must look at the historical datasets that feed AI to ensure that our society’s past beliefs and actions about topics such as gender, race, and equality aren’t negatively impacting the algorithms that shape our future AI experiences.
Dustan Allison-Hope supports Arriola’s belief that we need teams to be responsible for assessing AI design. During his interview with Roya Pakzad in an article titled “Artificial Intelligence and Corporate Social Responsibility,” Allison-Hope warns that the right people are often missing from such groups. He suggests that “different sets of communities” should get involved, including “engineers, data scientists, and product development teams in general” to ensure more informed decision-making.
As key members of a product development team, experience designers bring a valuable lens to AI assessment teams. A designer’s role is to consider how new product or service experiences will fit into people’s lives, and there are many opportunities for designers and user researchers to explore and test how AI experiences will impact users. Designers understand user needs, build empathy, and advocate for users among product team members, particularly users who are marginalized or under-represented at large technology companies. They bring research-based knowledge and insights that enrich the envisioning of new concepts and help mitigate the risks of bad AI experiences.
2. Explore possible AI scenarios (good and bad)
Allison-Hope notes that because the technology is so new, there aren’t yet many examples of AI-based technology, good or bad. He argues that before we create standard guidelines, we still need “different use cases and real-life examples,” as we “might realize some principles need to change in practice.”
Given that our understanding of AI is still unfolding, experience designers need to stay abreast of the latest AI case studies. We also have to be comfortable exploring the unknown. In the absence of many examples today, we can stress-test our ethnographic understanding of user needs to consider the potential negative impacts of our designs. It’s easy to ask ourselves “What will go right?” but it’s just as important to ask “How can this go wrong?” in order to prevent design choices with damaging and unforeseen consequences. Imagining a Black Mirror episode resulting from your latest decision is a good way to get into this mindset.
3. Design for ongoing human training of AI
Another good way to prevent bad scenarios is to incorporate ways for humans to intervene in AI experiences. At EPAM, we worked with a team that aimed to do exactly that. We explored ways to present people in business operations roles with insights to help them better train AI that interacts with customers. A key challenge we faced was finding ways to visually communicate the massive amounts of data resulting from these conversations. Throughout the project, we asked ourselves the following key questions:
How might we help analysts…
- identify topic areas that the AI is struggling to successfully understand?
- investigate these problem areas through both qualitative and quantitative information?
- use their acquired understanding to train the AI and track their progress over time?
Through this project, we identified multiple ways design solutions can help humans assist AI technology to better understand nuanced information. This involvement of humans in the process not only improves the customer experience, but also adds an important level of trust and confidence.
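To make the first of those questions concrete, the core of such a tool can be sketched as a simple aggregation over conversation logs: count how often the AI failed to resolve a conversation within each topic, and flag the topics with the highest failure rates for analysts to investigate. The data shape (a `topic` label and a `resolved` flag per conversation) and the threshold are illustrative assumptions, not details of the actual project:

```python
from collections import defaultdict

def failure_rates_by_topic(conversations):
    """Aggregate per-topic failure rates from labeled conversation logs.

    Each conversation is assumed to be a dict with a 'topic' label and a
    'resolved' flag (False when the AI failed to understand the customer
    and a human had to take over).
    """
    totals = defaultdict(int)
    failures = defaultdict(int)
    for conv in conversations:
        totals[conv["topic"]] += 1
        if not conv["resolved"]:
            failures[conv["topic"]] += 1
    return {topic: failures[topic] / totals[topic] for topic in totals}

def struggling_topics(conversations, threshold=0.3):
    """Return topics whose failure rate exceeds the threshold, worst first."""
    rates = failure_rates_by_topic(conversations)
    return sorted(
        (t for t, r in rates.items() if r > threshold),
        key=lambda t: rates[t],
        reverse=True,
    )
```

Running the same aggregation over successive time windows and comparing the rates gives analysts the third piece: a way to track whether their retraining efforts are actually improving the AI over time.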
4. Be more transparent with the public
Another way to build user trust in AI is through increased transparency. During the AI panel discussion, Saschka Unseld compared AI experiences to the food we consume. People expect food companies to list all of the ingredients in the products they sell. In a similar vein, Unseld believes that companies should list all of the “ingredients” of AI experiences: the datasets and algorithms used to create them.
While full transparency would be ideal, it’s hard to imagine companies sharing information that they often see as proprietary. I contend that we need trusted third parties to perform this due diligence for us. It would speak volumes if companies were willing to open themselves up to evaluation of their AI practices from an ethical perspective. It would give consumers transparency and choice, and also generate the attention this topic needs and deserves within companies.
For experience designers, this would introduce new questions about how we can help consumers access and understand AI ethics ratings. These ratings may be relevant not only when consumers make a big purchase decision (e.g., buying an Amazon Alexa), but also when they use free services such as Facebook or Google. Designers are essential in exploring how AI ratings can and should appear throughout the multitude of micro-experiences that consumers have on a daily basis.
Artificial intelligence has tremendous potential to help humanity. It has countless applications, including making energy consumption more efficient, identifying early signs of illness, and even inspiring creativity (e.g., Watson BEAT produces song inspiration for composers). If designed well, AI can enhance our lives and support people in doing what they do best.
However, just like the technology featured on Black Mirror, AI can have negative side effects. It can surface misleading content, limit our thinking, and inundate us with unwanted messaging. It carries inherent bias that must be corrected on an ongoing basis to prevent damaging influence. The pervasive nature of AI and the near invisibility of its bias make it critical that we scrutinize its design to ensure a level playing field for all users.
Experience designers have the opportunity to apply best practices in this space by 1) being part of teams that regularly assess AI, 2) exploring multiple AI scenarios, 3) designing ways for humans to train AI, and 4) surfacing key information to users about the ethical quality of their experiences.
EPAM is a consultancy that is developing AI experiences. As an experience designer working alongside engineers and data scientists, I see it as my responsibility to help shape positive AI experiences for users and for society at large. It is great to see a growing awareness of AI and its implications, and I look forward to more great discussion within the design community on this topic as AI evolves over time.