Banner: “The AI-Augmented Life” and “#01” in light green on dark green shapes, against a light yellow background.

Building the AI-Augmented Life

People + AI Research @ Google


by Adam Connors

Hello, I’m Adam, and I’m here to introduce a new initiative within the People + AI Research (PAIR) team that we’re calling AI-Augmented Life.

So much has been written and said about how generative AI — like Gemini and ChatGPT — will transform our lives. It promises to enhance our productivity, help educate our children, search, plan and create with us, and has the potential to touch almost every aspect of our world. But the devil is in the details, and we know from experience that humans are notoriously bad at predicting the future. So how can we possibly get a better handle on how the next few years will play out?

The AI-Augmented Life project aims to use user studies and prototyping to explore the range of practical ways that AI might change our lives in the coming years. Think of it as a breadth-first search: a human-centered exploration rather than a product-centered one. We will endeavor to live the future now.

We want to form the habit of examining our day-to-day lives and asking ourselves:

How might AI change this moment?

To help us understand what’s possible, we’ll build functional prototypes and use them in user studies and in our day-to-day lives in order to better understand and reflect on how our habits and expectations could evolve as a result of these new technologies.

What would it be like to use AI to help us write these blog posts? Buy a book? Learn a new skill? Decide what to watch on television? Cook a meal? Play a board game? Have more rewarding conversations with our friends?

And throughout, we will ask an important question:

What might be gained, and what might be lost?

The possibilities are endless, and the only way to understand what the coming years might bring will be to try it.

As a team, we’ll be blogging regularly about our experiences, here on the People + AI Research channel on Medium. We’ll share our successes and failures, moments of clarity and frustrations. Our hope is that by sharing these experiences openly and being willing to learn as we go, we can create a dialogue in which we can more deeply explore the potential (and potential downsides) of AI.

It’s going to be a journey, and it’ll only be successful if it takes us to places we haven’t thought of yet. But here’s a taste of some of our thinking as we start to imagine our AI-augmented lives.

We want to build prototypes

This blog isn’t about trying every tool and gadget we can get our hands on, nor is it about imagining a far-flung future in which anything is possible. We intend to base our projects on user studies and functioning prototypes, which we’ll share in the form of an experiment log in our GitHub repo wherever we can.

AI is making a million new things possible, but there’s often a lag between what’s possible and what’s ready to be a commercial product. Prototyping in this space creates a fascinating opportunity where we think there is much to be learned.

We want to live with AI

We’re really interested in habits: those ubiquitous parts of our lives that we hardly even see and which AI will soon become an integral part of. What will our children or grandchildren take for granted when it comes to AI?

AI generated portrait of me created by Lucas Dixon using Imagen. Prompt: “colorful watercolor cyberpunk painting, a happy portrait of a person with messy hair looking directly at the camera, with a camera mounted on his shoulder, one headphone, and an eyeglass on one eye.”

User studies are a valuable way to challenge our assumptions about how people might interact with a particular experience, but we’ll gain a deeper perspective by immersing ourselves in these experiences over longer periods of time.

How will it feel to talk to my AI every day? What happens if I take it shopping, or if I take it to hang out with friends? Some of it might get a bit silly in places, but there’s a serious research question at its heart: If I have a personal agent I can talk to anywhere, in what circumstances will “chatting to my agent” displace “browsing on my phone”?

This will be a personal journey as well as a research project

We’re thinking of this project as a series of personal journeys into AI, and so it will be unashamedly anchored in our own lives and the directions we extract from user studies.

Photo: my family, seen from behind, sitting on a grassy hill looking out over a beautiful, green landscape.
My own English suburban life will shape my experiences and therefore my projects.

At a personal level, I am both a software engineer and a novelist: half the week I write software for Google DeepMind, and half the week I write young adult fiction as A. Connors. So I’m obviously fascinated by how generative AI and writing interact, and I think that by exploring how AI can augment creativity we can learn a lot about its capabilities and limitations.

But we also know that our own limited, personal perspectives won’t be enough. To be successful this blog has to include more voices than just our own. To that end, we hope to engage with people from all walks of life (both inside and outside of Google) so that they can bring their own perspectives.

So what does AI say?

Let’s start as we mean to go on, and see what Gemini thinks of this blog so far (minus this section):

Prompt:

Image of prompt that reads, “This is the first blog post for a new series where we’re going to augment as much of our lives as possible with AI. We’d like your feedback. Please answer the following questions: 1) What do you think of it? 2) What would improve it? 3) What is missing? 4) What doesn’t work well?”

Response: What do you think of it?

Image of response that reads, “The blog post is a strong introduction to an exciting and relevant project. It effectively communicates the core concept of the AI-Augmented Life initiative, highlighting its potential impact and the methodology behind it. The conversational tone and personal anecdotes make it engaging and relatable.”

Aww, shucks… I bet you say that to everyone. I do like that Gemini has been optimized to be nice to us. I’d be curious to see how poorly I’d have to write in order to get a less positive response. I’d like to make Gemini squirm.

Response: What would improve it?

Image of Gemini’s recommendation for improvements that reads, “More specific examples: While the blog post mentions several potential areas for AI augmentation, providing more concrete examples of prototypes or experiments would make the content more tangible and relatable.”

More examples? I tried to offer a few. Could I have offered more? Perhaps, but there are only so many words available and I hope I’ve given enough to tease the direction of this blog.

That said, in response to an earlier draft, a colleague asked for more concrete examples as well. Why was I more receptive when a colleague made that suggestion than when Gemini made it?

My colleagues carry more natural authority for me than Gemini does at this point in time. That’s a good thing. But I wonder what it would take for that to change, and what the unintended consequences of that change might be.

Gemini’s next recommendation for improvement that reads, “Inclusion and Diversity: Emphasize the importance of involving diverse perspectives in the research and development of AI-augmented experiences. This would ensure that the technology benefits a wider range of people and avoids perpetuating existing inequalities.”

Yes! Absolutely. Guest contributors and interviews, in short: more listening. This will definitely come into future posts.

What is missing?

Gemini’s response for what’s missing, which reads, “Ethical Considerations: While the blog post briefly mentions potential downsides, a more in-depth discussion of the ethical implications of AI augmentation would be valuable. This could include topics such as privacy, bias, and the potential for dependency on AI.”

This is a very fair point, and I suspect many humans will have the same response. We’ve given it some thought and our conclusion is that our goal in this project is to explore the possible and call out at each step what might be gained and what might be lost. This will likely touch on ethical questions, but there are better places for more general discussions of AI ethics, and so our focus will be specific to each case.

Finally

We hope that this will be an exciting, personal, insightful journey, but this project will only be worthwhile if we can be open and direct about the challenges AI has already encountered, the mistakes we’ve already made, and the broader debate around AI and ethics.

It needs to be a conversation. So please engage in the comments: challenge us; point out our mistakes; catch us when we fall; keep us honest, open, and authentic; point us towards the most interesting possibilities; and call out the pitfalls we haven’t yet thought of.

Subscribe to receive more posts in this series and be a part of the journey!

Best,

Adam


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.