3 Google I/O take-aways: Creating responsible generative AI products

People + AI Research
May 20, 2024

by Mahima Pushkarna, Design Lead, People + AI Guidebook and PAIR, and Sally Limb, Senior UX Designer, Responsible AI UX

[Figure: Five product principles that span the product development process: 1) Design for user autonomy; 2) Align AI with real-world behaviors; 3) Treat safety as an evolving endeavor; 4) Adapt AI with user feedback; 5) Create helpful AI that enhances work and play.]

How should product design and development evolve so we can create helpful and safe generative AI experiences for people? It’s the question we’ve been investigating since PAIR’s earliest explorations and applications of this new technology, and in recent papers such as Google DeepMind’s “Holistic Safety and Responsibility Evaluations of Advanced AI Models,” which looks at how we have deployed different safety evaluation techniques for our leading models, and what we have learned. It’s a difficult question: generative AI is a profoundly promising new technology with unique challenges, and it’s hard to predict the variety of new ways people may interact with it.

In addition to conducting user research and technical experiments, the extended PAIR team engages in a steady stream of hands-on workshops with product teams at Google and across the industry, helping us understand how product and AI practitioners approach the design challenges of generative AI. These workshops were run in partnership with Google product teams including Cloud, Google for Startups, and Retail Ads, as well as other teams exploring the social benefits and potential harms of AI, including Technology & Society and Moral Imagination. This breadth of collaboration offers us a wide range of perspectives and insights, and all of this work led us to develop product guidelines that we’re including in our rolling generative AI updates to the People + AI Guidebook.

As we developed these guidelines for generative AI, we evaluated each against a litmus test:

  • Is it long-lasting and people-centric?
  • Will it lead to more helpful products, regardless of the AI used?
  • Will it lead to safer and more robust AI systems?
  • Will it lead to more helpful and useful generative AI systems?
  • Will it lead to the creation of appropriate, safe, and helpful value for people?
  • Is this principle considerate of the product platform in which it lives?
  • And finally, is this principle considerate of new ecosystems and behaviors the product might create?

In our May 14 talk at I/O we shared some of what we’ve learned. While we’ll continue to refine our guidance as this technology evolves and people make it their own, here are three high-level take-aways that address the how, the who, and the when and where of generative AI product design and development.

How: Tightly integrate design and development

Design and development are typically treated as two separate functions that come together at specific moments, such as when handing off a spec, or when using recommendations from UX research to tackle a product problem. But the responsiveness and complexity of generative AI are easier to manage when the two groups collaborate continuously.

Generative AI’s ability to respond to virtually any request makes it difficult to predict where a user journey may wind up. It’s similar to a lively dinner conversation, where you end on a topic far removed from where you started. Or think of how you informally gather information from a variety of multimodal sources when learning about something new, say, when planning a trip to a place you’ve never been: you start with an online search, watch a video, read a book…the journey isn’t linear and certainly isn’t predictable.

Given this unpredictability, integrating product design and AI development closely throughout the end-to-end product development process helps teams deal effectively with such an open-ended and complex product experience, and create safe, meaningful, and helpful generative AI models and applications. The expertise of each discipline helps create a shared understanding of the potential effects of seemingly small changes to functionality and how it’s expressed to users, especially when it comes to ensuring the highest levels of product safety and reliability.

In addition to bringing the two teams’ process flows closer together, we strongly suggest creating interaction design policies for your products, as one important way of guiding the integrated process toward specific and actionable goals.
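
To make this concrete, here’s a purely illustrative sketch of how a team might encode such a policy as data that both disciplines can review, version, and test against. The structure, field names, and rules below are our hypothetical example, not a format the Guidebook prescribes:

```python
# Hypothetical example: an interaction design policy captured as data that
# designers and developers can review together, version, and test against.
# Every field name and value here is an illustrative assumption.
INTERACTION_DESIGN_POLICY = {
    "product": "trip-planning assistant",
    "tone": "friendly and concise; never authoritative on safety-critical topics",
    "must_do": [
        "cite sources when stating facts about a destination",
        "let users easily edit or undo AI-generated itineraries",  # user autonomy
        "ask a clarifying question when the request is ambiguous",
    ],
    "must_not_do": [
        "present uncertain pricing or availability as confirmed",
        "keep generating after the user asks to stop",
    ],
    "escalation": "route medical, legal, or emergency requests to vetted resources",
}

def violated_rules(response_metadata: dict) -> list[str]:
    """Return the policy rules a generated response appears to break.

    In practice a team would pair automated checks like this with human
    review; this stub only shows where such checks could hook in.
    """
    return [
        rule
        for rule in INTERACTION_DESIGN_POLICY["must_not_do"]
        if rule in response_metadata.get("flagged_rules", [])
    ]
```

Encoding the policy as data, rather than leaving it implicit in specs and prompts, gives designers and developers a single artifact to debate, update, and hold the product accountable to.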

Who: Humans throughout the loop

The well-established human-in-the-loop approach typically means humans review and rate responses from the system as part of the refinement of the model. But to create safe and helpful generative AI, we need to radically loop in people from many walks of life, and across the entire AI building process. That means including a diverse range of viewpoints to understand what the model should do. It means using participatory research methods to define what the model should optimize for, and what data the product needs. It means prototyping the generative AI features as early as possible to test them with users and experts, so the developers can prepare for the right range of inputs. It means evaluating outputs for unexpected model behaviors. And because these models are often asked to address an array of needs, they should be red-teamed by experts. That way the product team can create safety policies informed by lessons from research with all of the great variety of people now in the loop.
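
As a thought experiment, here’s a minimal sketch of what “humans throughout the loop” can look like in tooling. The classes, rater groups, and escalation rule are invented for illustration and don’t represent Google’s internal systems:

```python
# Invented illustration: collecting structured feedback on model outputs
# from raters in many roles, so that disagreement between groups becomes
# a visible signal rather than an averaged-away one.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Rating:
    rater_group: str    # e.g. "domain expert", "teen user", "red team"
    safe: bool
    helpful_score: int  # 1 (not helpful) to 5 (very helpful)

@dataclass
class OutputReview:
    prompt: str
    response: str
    ratings: list[Rating] = field(default_factory=list)

    def needs_policy_review(self) -> bool:
        # Escalate if ANY rater group judged the output unsafe:
        # minority perspectives are exactly what this approach protects.
        return any(not r.safe for r in self.ratings)

    def mean_helpfulness(self) -> float:
        return mean(r.helpful_score for r in self.ratings)

review = OutputReview(
    prompt="Plan a budget weekend trip for a wheelchair user",
    response="(model output)",
)
review.ratings += [
    Rating("accessibility advocate", safe=True, helpful_score=2),
    Rating("red team", safe=False, helpful_score=1),
]
if review.needs_policy_review():
    print("Escalate to humans for safety policy review")
```

The point isn’t the code but its shape: every escalation path ends with a person, and ratings stay attributable to the communities they came from.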

Human decisions drive the model and product development process. And with generative AI we’re developing a technology that can be turned to unanticipated uses simply by writing a prompt. That means designers and developers have to be creative, imaginative, and principled in anticipating what users might want to do with this tool’s responsiveness and malleability, as well as how people might use it in unintended and negative ways.

The loop needs to reach beyond the office walls, because no group on its own can sufficiently anticipate this technology’s uses and abuses. We need to talk with the communities that will be using it and the communities that will be affected by it, making every effort to include the widest range of people we can, and to listen carefully and empathetically. Generative AI itself can help us build for, and with, more people from a variety of communities and backgrounds: it makes creating product prototypes far easier than ever before. With simple prompts to LLMs, more people can create and test product prototypes without writing a line of code, opening product design and testing to broader groups of people.

We’re calling this overall approach “humans-throughout-the-loop.”

When and Where: Where it’s used, for as long as it’s used

The timeline for the design and development of generative AI applications starts with the initial spark of an idea, but extends all the way through the product’s lifetime of use. That’s required for maintaining the highest standard of safety and responsibility, because many generative AI products are going to significantly change — and improve! — the environments into which they’re deployed, and, in turn, will be changed by those environments.

This creates new challenges, as does any dynamic adaptive system. The integrated design and development process needs to anticipate that the culture in which its products are embedded and tested for safety will change. So we need to provide the functionality, design, tools, and policies that let generative AI products evolve along with their extended communities of users.

For more information

Those are three “big picture” take-aways. For more details, we hope you’ll check out the new and thoroughly updated People + AI Guidebook, which contains a wealth of suggestions, guidelines, and how-to’s. Let us know what you think!

This post represents the work of many Googlers, including Josh Lee, Dan Littlewood, Vanessa Milan, Jaih Craddock, Soojin Jeong, Roxanne Pinto, Ari Alberich, David McGaw, Chris Butler, Reena Jana, Lucas Dixon, Ayça Cakmakli, David Weinberger, Devki Trivedi, Kathy Meier-Hellstern, Quinn Madison, James Wexler, Ryan Mullins, Emily Reif, Ludovic Peran, and our executive sponsors Marian Croak, Ricardo Prada, and Amanda Storey, and many more.


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.