People + AI Research @ Google, in People + AI Research

Pinned: 3 Google I/O take-aways: Creating responsible generative AI products
By Mahima Pushkarna, Design Lead, People + AI Guidebook and PAIR, and Sally Limb, Senior UX Designer, Responsible AI UX
May 20

LLM Comparator: A tool for human-driven LLM evaluation
By Minsuk Kahng, Ryan Mullins, and Ludovic Peran
May 14

Generative AI is reshaping our mental models of how products work. Product teams must adjust.
By Reena Jana, Mahima Pushkarna, and Soojin Jeong
Apr 26

Interaction Design Policies: Design for the opportunity, not just the task.
By Mahima Pushkarna, design lead of the People + AI Guidebook
Jan 20

Updating the People + AI Guidebook in the age of generative AI
By Reena Jana and Mahima Pushkarna
Nov 22, 2023

Generative AI was a helpful tool for an adventurous art exhibition.
By Emily Reif, with Camille Benech and Lucas Dixon (with special thanks to Shahryar Nashat, Sara Sadik, and Rachel Rose)
Oct 5, 2023

Generative AI: A Golden Opportunity for UX
By Ayça Çakmakli
Aug 9, 2023

The Generation, Evaluation, and Metrics (GEM) Project: Improving dataset transparency with the Data…
By Mahima Pushkarna and Andrew Zaldivar, with Sebastian Gehrmann
Aug 11, 2023

Build generative AI products responsibly with MakerSuite, using the PAIR Guidebook
With all of the excitement around generative AI, we have been working on new tools, like MakerSuite, to let people prototype with large…
Jun 2, 2023

Exploring how AI models can avoid “confident incorrectness”
Much of the conversation around AI models today is healthy skepticism and centers around raising proactive questions about risks and harms…
May 12, 2023