Meet the new co-leads of PAIR: Lucas Dixon and Michael Terry

People + AI Research @ Google
Mar 30, 2023
Portraits of Michael Terry (left) and Lucas Dixon (right) by Mahima Pushkarna, Senior UX Designer, PAIR

Back in 2017, we announced the launch of PAIR by stating, “We believe AI can go much further — and be more useful to all of us — if we build systems with people in mind at the start of the process.”

In the six years since, we’ve continued to bring together researchers across Google to study how people pair (every pun intended) with AI systems, and we’ve continuously created and released free technical tools, visualizations, and resources for other researchers, AI practitioners, policy leaders, and the curious public. We’ve also watched AI evolve quickly over those years.

Today, we’re announcing a new era for PAIR, and are excited to share that PAIR is now co-led by Lucas Dixon and Michael Terry. They will continue to adapt PAIR as AI itself evolves in the age of generative AI while building upon the work of PAIR’s co-founders, Fernanda Viegas, Martin Wattenberg, and Jess Holbrook, and of Meredith Ringel Morris, PAIR’s most recent lead, as well.

Recently, our editor, Reena Jana, chatted with Lucas and Michael not only about their shared vision for PAIR’s next chapter, but also about their thoughts on the next chapter of AI research itself, from a people-centric PAIR point of view.

Google founded PAIR to focus on “the relationship between users and technology, the new applications it enables, and how to make it broadly inclusive.” With recent advances in generative AI — and responsible AI — how is PAIR’s research focus evolving?

Michael Terry: Despite these advances in AI, the focus of PAIR remains the same: exploring the new applications AI enables, and making AI broadly inclusive and equitable. What has changed are the new possibilities afforded by the latest crop of generative AI models.

One of the things that excites us is how generative AI enables a whole new group of people to quickly customize AI for rapid prototyping of new forms of human-AI interaction (HAI). Generative AI now gives people the ability to control and customize AI using natural language alone. For example, a software developer can use a large language model (LLM) to help with software development by entering a request such as, “Write code in Python to open the file ‘readme.txt’ for writing.” Or they can create an image using a text-to-image (TTI) model by providing a description like, “A quaint, gingerbread house like one you’d find in a children’s book, rendered as a painting.”
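To make that first example concrete, here is one completion a model might plausibly return for the request (outputs will vary from model to model and run to run):

```python
# One plausible completion for the request
# "Write code in Python to open the file 'readme.txt' for writing."
with open("readme.txt", "w") as f:
    f.write("Hello, world!\n")
```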

These advances in AI are really noteworthy: People can now describe what they want AI to do in everyday language, and the AI can often understand them. It’s not perfect, but it’s a significant change in how easy it is to customize and control AI for our own individual goals.

These capabilities are completely transforming the way we think about interacting with AI, and they create lots of new opportunities. PAIR is really excited about how we can leverage these capabilities to make AI easier for more people to use. We also see lots of opportunities to use these new forms of AI to help educate people about AI.

Lucas Dixon: Something very unusual, perhaps unique in the history of the growing complexity of AI, is happening. It’s now getting easier for people to understand the key thing that controls modern AI’s behavior: a small bit of text or a small dataset, maybe just tens of examples. This is happening because, to get AI to pick up on a pattern, you need far less data than ever before. You can also now read every example in a dataset and edit the ones that seem wrong, and those edits will meaningfully change the model’s behavior when you tune it. AI systems are getting bigger and more complex, but in the process, surprisingly, they are becoming more scrutable.

To see why this is happening, we need to look closer at the way modern generative AI works. It has two stages of development: the first is creating a base model (a so-called “foundation” model); the second is controlling the base model to do something specific, like write a poem. The first stage is very computationally expensive and slow, so it is done by fewer, larger organizations, mostly companies with huge investments in AI. The second stage, however, is cheaper, easier, and more accessible than ever: natural language understanding has made enough of a breakthrough that natural language itself is now the key control medium for AI.

This means PAIR’s work on generative AI is increasingly about the alignment of model behavior with the people it affects. What’s new is that the boundary objects that connect different stakeholders are now more understandable and more interactive than ever: they are small datasets or fragments of text. Perhaps it will be possible for products to give people a simple, understandable textual representation of their preferences? What, then, are the tools we need to help align generative AI with human values? With social responsibility? With the goals of startups creating disruptive and amazing new products? Or with artists interested in exploring the boundaries and nature of the human experience?

Let’s dive into these topics in a bit more detail. Michael — can you say more about this notion of quickly customizing an AI? Customizing AI seems like it wouldn’t be easy for a non-technical person. What is really required?

Michael Terry: You can customize an AI model like LaMDA or PaLM using what is called a “text prompt.” At a fundamental level, these language-based AIs are like a supercharged autocomplete: given some initial text, they come up with text likely to follow it. A text prompt is simply that initial text you send to the AI.

For example, if I write, “Hello translated into French is”, the AI is likely to autocomplete it by outputting “Bonjour.” Or, I could write, “Foods that pair well with apples include.” In this latter case, it will suggest things like cheddar cheese and peanut butter. In both cases, I’m sending my text to the same AI — it is just responding differently based on the initial text.

These text prompts are akin to customizing the AI to perform a specific task. In the first example, the text “customized” it to translate English into French, and in the second case, the text “customized” it to recommend foods to pair with apples.
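Here is a minimal sketch of that idea in Python: the same call, two different prompts, two different behaviors. The `complete()` function is a hypothetical stand-in for any LLM text-completion API, stubbed with canned outputs so the sketch actually runs.

```python
# Minimal sketch of prompt-based "customization": one model, two prompts,
# two different behaviors. `complete()` is a hypothetical stand-in for a
# real LLM text-completion API; the canned outputs keep the sketch runnable.

def complete(prompt: str) -> str:
    canned = {
        "Hello translated into French is": "Bonjour.",
        "Foods that pair well with apples include": "cheddar cheese and peanut butter.",
    }
    return canned.get(prompt, "...")

print(complete("Hello translated into French is"))           # -> "Bonjour."
print(complete("Foods that pair well with apples include"))  # -> "cheddar cheese and peanut butter."
```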

With these new forms of AI, one of the biggest initial hurdles is realizing that this is all there is to “customizing” them: finding a text prompt that leads the AI to produce the result you want. It typically takes 5–10 minutes to pick up on this, and once it clicks, people start exploring their own ideas for how AI can help them solve their specific problems.

In this way, prompting generative AI can also be understood as a new way of “programming” AI, one that’s open to more people, even those who have never learned to code. How do you each envision PAIR playing a role in making AI more accessible to more people around the world?

Michael Terry: PAIR has a number of efforts in both research and education to help make AI easier to use and understand. In our research, we developed tools like PromptMaker and PromptChainer that make it easy for people to interact with these new types of AI. (PromptMaker has since evolved into its own external product, MakerSuite.) We also regularly publish educational materials, like AI Explorables and the PAIR Guidebook, which includes guidance for UX teams. Stay tuned for updates on these in the coming weeks, along with a Q&A with Ayça Çakmakli, the new UX lead in Responsible AI here at Google and a close collaborator with PAIR.

An early research prototype for an interface for PromptMaker, a tool for rapidly prototyping new ML models using prompt-based programming, presented at CHI 2023.

Lucas Dixon: One of the most interesting recent trends in generative AI is parameter-efficient tuning (also called modular learning): it changes the paradigm of how we control language-driven models. Instead of providing a text prompt, the user provides a small set of examples. So in a way, it’s a bit like few-shot prompting (where the prompt itself contains a small set of examples), but it avoids many of the ad-hoc characteristics of prompting, where the ordering of examples, the delimiters, and the specific phrasings all matter. Parameter-efficient methods generalize well from a small number of examples; they don’t overfit the way traditional fine-tuning methods do. We wrote a paper on this recently that highlights the potential of the method by showing how to build state-of-the-art classifiers for safety properties with as few as 80 examples. When you only need that many examples, and when a model can help you generate the data, we’ve gone from a world where you need months or years of work and a big ML team (that’s roughly what it took to build the Perspective API) to one where a single engineer, without needing to know much about ML, can do the same work in a day.
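As a rough, toy PyTorch sketch of the core idea (not the actual setup from the paper Lucas mentions): the large base model is frozen, and only a tiny new set of parameters, here a short “soft prompt,” is trained on a handful of labeled examples. The model, data, and sizes below are all illustrative stand-ins.

```python
# Toy sketch of parameter-efficient tuning: freeze a (pretend) base model and
# train only a small "soft prompt" from a handful of examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM, PROMPT_LEN, NUM_CLASSES = 32, 4, 2

class FrozenBase(nn.Module):
    """Stand-in for a large pretrained model; its weights are never updated."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, x):
        _, h = self.encoder(x)       # h: (num_layers, batch, EMBED_DIM)
        return self.head(h[-1])      # logits: (batch, NUM_CLASSES)

base = FrozenBase()
for p in base.parameters():
    p.requires_grad = False          # the expensive base model stays frozen

# The only trainable parameters: a short soft prompt of PROMPT_LEN vectors.
soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, EMBED_DIM) * 0.01)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)

# A handful of labeled examples (random stand-ins for embedded text).
examples = torch.randn(80, 6, EMBED_DIM)
labels = torch.randint(0, NUM_CLASSES, (80,))

for step in range(200):
    # Prepend the trainable soft prompt to every input sequence.
    batch_prompt = soft_prompt.unsqueeze(0).expand(examples.size(0), -1, -1)
    logits = base(torch.cat([batch_prompt, examples], dim=1))
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```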

As this is such a new way for everyday people to partner with AI, what implications do you see emerging in terms of responsibility and safety?

Michael Terry: Responsible AI (RAI) is an important area with a lot of ongoing research. One of the key responsibility advantages we see with these new forms of AI, and the ease with which you can customize them, is that you can start evaluating your ideas much, much faster than in the past. If it only takes 10–15 minutes to create an initial customization of the AI, you can start getting feedback that same day from stakeholders and potential users, which allows technologists to build with everyone in mind. This is a responsible practice and a really useful capability people can lean into: use the rapid prototyping these models enable to get feedback early and often, before investing too much time in developing an idea that may not actually meet people’s specific needs.

To make this more concrete, let’s consider again the example of suggesting food pairings. Maybe I have a product idea for an app that helps me write down a grocery list (like a to-do list, but for groceries). And for this app, I’d also like to have the AI suggest additional foods to consider for each item on the list.

With text prompts like the one previously shown, I can add a proof-of-concept of this idea to my prototype in relatively little time. And with that proof-of-concept, I can test it with users. Do they find this type of capability useful? Why or why not? What issues might arise when using an AI in this circumstance? For example, in testing, I might find that the AI only suggests particular types of foods (maybe it’s always suggesting vegetables). Seeing this, I can start to get a sense of how the AI should ideally perform, but also start to think about how to design for potential failure cases.
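A minimal sketch of what that proof-of-concept might look like, assuming a hypothetical `complete()` text-completion API (stubbed here so the example runs): one prompt template, filled in for each item on the grocery list.

```python
# Minimal sketch of the grocery-list prototype: one prompt template, filled in
# per item. `complete()` is a hypothetical stand-in for a real LLM API; the
# canned reply keeps the sketch runnable.

def complete(prompt: str) -> str:
    return "cheddar cheese, peanut butter, and walnuts"  # canned stand-in output

PAIRING_PROMPT = "Foods that pair well with {item} include"

def pairing_suggestions(item: str) -> str:
    return complete(PAIRING_PROMPT.format(item=item))

for item in ["apples", "bread", "yogurt"]:
    print(f"{item}: {pairing_suggestions(item)}")
```

Swapping the stub for a real model call is essentially the whole integration; the interesting work is then testing the suggestions with users, as described above.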

Lucas Dixon: For me, one of the most fascinating things is the flip side of these models being trained on data that includes horrible language: you can easily use them to train classifiers that detect that very same horrible language. The same AI that has unintended biases, and can output irresponsible things, is also a powerful tool for tackling AI’s responsibility challenges; it helps fix many of the problems it creates. This is really important for two reasons: first, we can’t undo the generative AI revolution; and second, today’s large generative models are a key new technology for tackling responsibility more broadly across the tech sector, whether in social networks or other technology-enabled problems related to human communication. For example, parameter-efficient methods just won an ML competition to detect and explain sexist language. So perhaps the most exciting area for me is helping people communicate more effectively, and using that to reduce what appears to be growing polarization.

Given the scale of the responsibility challenges AI developers face, human-centered work is perhaps more important than ever. Luckily, as Mike highlighted, modern generative AI also enables really fast prototyping, so it’s a very exciting time for responsible AI, with many new ways to approach it. We now have a clear responsibility to use generative AI to make more socially beneficial technology.


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.