Meet Ayça Çakmakli, Google’s new Head of Responsible AI UX

People + AI Research @ Google
8 min read · Apr 21, 2023


Illustration for Google by Mahima Pushkarna

Because the People + AI Research (PAIR) team at Google takes a human-centered approach to the research and development of AI tools, educational resources, and design frameworks, we work closely with UX researchers, designers, and content strategists. And as AI itself is rapidly evolving with the emergence of generative AI, so too is PAIR’s approach, as our new co-leads Michael Terry and Lucas Dixon shared here on Medium recently.

We believe that for generative AI products to serve people well, they must be developed responsibly, guided by appropriate principles (like Google’s AI Principles). That means generative AI development needs to involve and reflect the global communities it affects, and be guided by a diverse set of UX researchers and designers, both internal and external. So our editor, Reena Jana, met up with one of PAIR’s newest internal collaborators, someone who brings broad industry experience in product design and user research along with a global, inclusive point of view: Ayça Çakmakli, Google’s new Head of Responsible AI UX. They discussed how Ayça brings her lived experience, UX thought leadership, and deep consumer product expertise to the emerging, and unique, UX design challenges posed by generative AI.

Q: First things first, welcome to Google and to the extended PAIR team! Before we dive in, can you give us a brief bio?

A: Thank you, I’m honored to have joined Google and to be part of this team at such a critical moment in tech, with all the advancements in generative AI, which I refer to as genAI for short. I’m Ayça, and I grew up in Istanbul, Turkey. After a couple of disparate career false starts in my 20s, one in political science and the other as an installation artist designing around themes of robotics, computer viruses, and the migrant Roma population in Istanbul, I found my way to Smart Design in NYC while pursuing my graduate degree in Product Design at Pratt Institute.

Along with a few other cutting-edge design consultancies at the time, Smart Design helped pioneer many of the essential empathy-building design research strategies that define the now-ubiquitous human-centered design process. It was at Smart that I first learned how to operationalize my sensitivity to the relationship between user and product, cultivate my ability to draw out design-actionable user insights, and guide the design process.

I find myself drawn to design challenges where I observe a clear mismatch between real user needs and product performance, where the needs of marginalized and/or vulnerable user groups are overlooked by current product experiences. This is where I’ve been able to focus my career efforts over the past several years. While at Instagram and Meta, I established guidelines for a more empathic approach to embedded UX work abroad, created country-specific design frameworks for Japanese users, and designed a toolkit for creating safer experiences for youth on social media.

Q: How might your lived experience, as well as your professional expertise, bring a new perspective to the RAI UX team, to help continuously improve our genAI research experiments and product development?

A: With genAI in its infancy, poised to reshape much of the internet as we know it today, we’ve been given an opportunity to prioritize a new range of users, establish a more acute connection between user needs and product benefits, and address more significant user and societal problems with technology. And today society faces no shortage of major problems: the disintegration of truth and trust in institutions, the viral spread of mis/disinformation, media bubbles and information silos, the ongoing climate crisis, struggling education systems, and challenges to mental health and well-being, among others.

Fortunately, thanks to great work in genAI here at Google and elsewhere, we have a technology powerful enough to help us address these challenges and more, but we need the right products to do so. Bringing these products to market in a way that will maximize benefits to the user while avoiding any unintentional harm will require a deeply empathic and personal approach to UX research & design. We may need to rethink some of the ways we develop, launch, and iterate on our digital products, and I think I’m well-positioned to help here.

Q: On a personal level, what does designing Responsible AI mean to you?

A: Honestly, even defining “Responsible AI” as an industry-wide term can be a challenge, let alone designing for it, but these are exactly the questions we should keep pushing ourselves to address as our knowledge in this area grows. The way I think about designing responsible genAI user experiences at this particular moment breaks down, at a high level, into two directives and corresponding areas of study: maximizing benefits to the user, society, and the environment, while minimizing risks and harms.

Ideally, “responsible” product design of any sort should be conducted this way: we want to be confident that the natural resources, energy, labor, and externalities introduced are far outweighed by the benefits the product brings to the user and society. And for tools as powerful as genAI, a thorough cost/benefit analysis in this vein is imperative. At the risk of sounding trite, let me briefly elaborate on a few points. From a user experience perspective, designing Responsible AI means:

A. Minimizing risks and harms to the user, society, environment.

The Responsible AI and Human Centered Technology (RAI-HCT) team in Google Research (which includes PAIR), its partner team Responsible Innovation in the Office of Compliance and Integrity, and several other teams at Google are doing great work to understand the abundant risks genAI poses to users and society at large. We have an impressive range of internal and external experts, spanning the social sciences, ethics, human rights, law, machine learning, philosophy, design, and innovation, helping Google establish best practices for genAI development and deployment. Google has invested substantial resources in avoiding risks and harms and acting responsibly, and we should be quite proud of this. We also can’t afford to underestimate the effect of ignorant or nefarious actors wielding this technology. Minimizing risk to the user means continuing this great work to better understand the potential risks of genAI, and ensuring that effort isn’t in vain by making our learnings actionable, so they can be seamlessly incorporated into an iterative model design and engineering process.

B. Maximizing benefits to the user and society.

Sometimes in the field of design, we discover an unmet user need and design a new product to address that need. Often in the tech industry, this happens in reverse: engineers develop amazing new technology, then we find a user need the new tech could address, then we design a product. Both approaches can result in great product design that users love, if and when the user needs identified are significant and the product and UX designed address those needs effectively. This is key.

Maximizing benefits to the user means, in the end, that we’ve aligned significant user needs with technology benefits via great product design. Our role in UX is to ensure that the product and engineering teams are equipped with this insight throughout the development process and are working toward building products that truly improve the lives of our users, not ones that simply attract their attention and generate usage. It is worth making the effort to work from big to small and focus this technology on our most significant problems.

C. Making research actionable for design & engineering.

One of our biggest challenges in building responsible genAI, from my point of view, will be establishing a workflow that continuously seeds the engineering and stakeholder teams charged with high-level decision-making with valuable, actionable user insight. User experience research insights and complex safety/ethics expert guidance need to be framed so that they can be incorporated into the design process, not applied as speed bumps after the fact.

To me, from a UX perspective, responsible genAI, at this time, means we’ve done our best to maximize the product benefits AI offers to the user while minimizing risks. We’ve done the due diligence to understand exactly what hangs in the balance, and we’ve made an informed decision on just how to proceed with the responsible deployment of our powerful genAI products.

Q: How has your research approach in genAI evolved over recent months?

A: Over the past several months, we’ve been fortunate to be able to observe the many genAI releases across the market, which has helped to affirm the AI Principles, safety/ethics guidelines, and conscientious approach we’ve been discussing internally. One relevant theme that has crystallized might best be summarized as Managing User Expectations.

Image: A list of Google’s 7 AI Principles, along with the 4 applications Google will not pursue.

When there is a chasm between what users expect your product to be able to do and what it is actually capable of doing, problems abound, and the user experience suffers. Every mistake, fabrication, or “hallucination” these genAI products make undermines the technology’s positive potential, diminishes user trust in the products and brand, and sometimes does direct harm to individuals, culture, and society. This is why carefully managing exactly who has access to these technologies is so important at this early ‘learning phase’ of development.

So, one theme my team has been stressing is that building more responsible genAI product experiences means better managing user expectations. I’m happy to elaborate more specifically on this important point, but perhaps I’ll leave it there for now.

Q: How might we better encourage responsible prompting for genAI models and systems?

A: If we’ve learned anything from the social media explosion of the past decade, it’s that unintentional harm to users and society can surface even when the engineers and UX teams who created these platforms had the best of intentions. Responsible prompting of genAI models should never be assumed, and underestimating nefarious or ignorant actors, enabled by such powerful technology, would be a huge mistake. User motivations are entirely out of our control. Fortunately, many aspects of these tools are still well within our control, and we need to understand clearly what they are and wield that control with responsibility and wisdom if we hope to protect our users and society from significant harm. We determine:

a. The capabilities of our models, i.e., what our models can do, what data they have access to, how that data is curated, how the models are trained, evaluated, and iterated upon, how interpretable (traceable and transparent) their outputs are, where they excel, what they should avoid, etc.

b. The access to our products that utilize generative AI, i.e., who has access to products with which capabilities, how/when/why user access should be granted, how we responsibly release new versions, how we test, learn, and gather meaningful feedback, and how we preserve the ability to increase or decrease user access when required, etc.

If we rely solely on the responsible prompting of users, we’ve missed several of our best opportunities to protect our more vulnerable users (i.e., anyone who might not be equipped to distinguish fact from fiction or opinion, who might fall prey to mis/disinformation, or who might misunderstand the context or validity of a given response) and provide them with a truly beneficial product experience. Fortunately, we are well aware of the need to establish responsibility protocols earlier in the development workflow. My team tries to reinforce the idea that responsible AI begins at bedrock.

As with the equalizer dials on a stereo, we should be able to fine-tune the myriad model characteristics that define a product’s capabilities, and delicately finesse our approach to user access, given a thorough understanding of users’ needs, their vulnerabilities, and what an ideal and safe user experience calls for.
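To make the equalizer metaphor a bit more concrete, here is a minimal, purely illustrative sketch in Python. The class and field names are hypothetical (not any Google API); the point is simply that a model’s tunable characteristics and access controls can be expressed as explicit, reviewable configuration rather than implicit defaults:

```python
from dataclasses import dataclass

@dataclass
class GenAIProductConfig:
    """Hypothetical 'equalizer dials' for a genAI-powered product.

    Each field is a deliberate, auditable product decision that can be
    tuned independently, rather than an implicit default.
    """
    temperature: float = 0.7               # creativity vs. predictability of outputs
    safety_filter_level: str = "strict"    # e.g. "strict" | "moderate" | "off"
    cite_sources: bool = True              # surface provenance to support user trust
    max_output_tokens: int = 512           # bound response length
    access_tier: str = "trusted_testers"   # staged rollout: internal -> testers -> public

    def validate(self) -> None:
        # Guard against configurations that were never reviewed.
        allowed_tiers = {"internal", "trusted_testers", "public"}
        if self.access_tier not in allowed_tiers:
            raise ValueError(f"Unknown access tier: {self.access_tier}")
        if self.access_tier == "public" and self.safety_filter_level == "off":
            raise ValueError("Public access requires an active safety filter.")

# A cautious configuration for the early 'learning phase' described above:
config = GenAIProductConfig(temperature=0.4, access_tier="trusted_testers")
config.validate()
```

The design choice this sketch gestures at is the one in the interview: capabilities (the model dials) and access (who gets which version, when) are separate levers, and both should be adjustable as our understanding of user needs and vulnerabilities grows.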
