Coaching AI technology teams to drive user-centered improvements

by Stephanie Houde (IBM Research, US), Qian Pan (IBM Research, US), and Jessica He (IBM Research, US)

Human-Centered AI · Jun 12, 2023 · 8 min read

Picture of one person who is teaching another person by pointing to some images

As advances in generative AI bring new possibilities to AI-infused user experiences, it is crucial to guide the development of AI systems in ways that are responsible and beneficial for users. The rapid progress of AI development and the risks that AI technologies pose to users amplify the need to address some familiar challenges:

  • Many AI research and development teams lack dedicated UX researchers and designers. This can lead to a pattern of building technology-driven solutions that fall short in addressing real user needs.
  • Even with user research experts available, deep technical domain expertise is needed to ask users informed questions. Involving AI experts is necessary for conducting effective user research and generating actionable insights.

As UX Designers who collaborate closely with AI technology research teams, we hypothesized that we could empower such teams to operate in a more human-centered fashion if we provided them with a small amount of targeted, timely guidance on how to independently gather user feedback to answer user requirement questions for their projects.

We conducted an experimental user-research coaching program within our AI research organization to explore this possibility. In some cases, our efforts resulted in more human-centered technology development as a direct result of insights gathered by AI technologists who had never before conducted their own user research. We received positive feedback from the teams we coached, and we learned important lessons from the teams we were unable to coach successfully. We hope our learnings inspire other human-centered AI researchers and practitioners to consider coaching as a means of bridging the UX skills gap and promoting a stronger user-centered culture for their AI teams.

What we did

Over the course of an eight-month period, we worked with seven AI technology research teams who required input from prospective users to inform next steps in their projects. Members of the teams we engaged with have backgrounds in artificial intelligence, data science, computer science, and software engineering. They conduct state-of-the-art research in machine learning, build early-stage technology prototypes, and regularly publish at AI conferences. Some team members had been exposed to the benefits of UX research conducted by others in earlier projects, but none had taken part in planning and conducting interviews with prospective users themselves.

We had full project schedules of our own, but made time to spend a few hours consulting with each team over the course of several weeks. We focused our efforts on coaching those teams to prepare for and conduct informal qualitative interviews with target users whose profiles matched those of potential future consumers of their AI systems.

Of the seven teams we worked with:

  • Four teams successfully planned and piloted user interviews.
  • Two of those four went on to interview multiple target users and incorporated their findings in determining next steps in their projects.
  • Three teams discussed their user feedback needs with us, but were not ready to conduct their own interviews yet.

Steps for coaching success

We learned many lessons from our coaching process, leading us to identify five steps to ensure a successful coaching experience.

Step 1: Ensure the team is ready

Identifying teams that were ready to benefit from coaching support was key to a successful engagement. We first talked with project owners to carefully assess their readiness. Each of the teams we ended up coaching knew that they needed feedback from target users to advance their projects at the current stage; they just didn’t know how. They were motivated and ready to make the most of any help we could provide.

Step 2: Set expectations on coaching and ownership

Our coaching effort started with a short kick-off meeting. The primary objectives were to learn about each team's specific project needs and skill sets, and to let them know what we could do for them as coaches. We set the expectation that we could provide guidance in the form of tailored advice, templates, and feedback, but the teams would own the research process and talk to target users themselves.

Step 3: Demystify user research

Speaking directly to target users was a completely new and challenging experience for many of our coaching participants. It was essential to provide our teams with introductory information and a supportive atmosphere where they felt comfortable practicing, learning, and asking questions.

In our kick-off meeting, we introduced the basics of user research, such as speaking with target users in a semi-structured fashion to elicit feedback on their ideas and prototypes.

Two slides briefly describe how to conduct qualitative interviews and emphasize learning and discovery as the goal.
Slides from our kick-off meeting that outlined how to conduct informal, qualitative user interviews in order to generate ideas for user stories. We sensitized AI researchers to be curious and aim for discovery rather than solving problems from the outset.

In subsequent coaching sessions, we communicated that everyone, not just user researchers, can learn the basics of how to conduct a fruitful user interview. These basics included:

  • Recognizing your own biases. The mantra of human-computer interaction is, “I am not the user,” and we communicated the importance of using active listening skills to identify real user needs and pain points.
  • Asking open-ended questions. Open-ended questions such as “what are your goals?” and “what is hard about that?” allow for spontaneous insights and discoveries that may not be revealed when asking closed-ended questions about the utility of specific features.
  • Using progressive disclosure. It’s useful to allow users to share what they see and think they can do before revealing how you intended a system to work. This allows you to identify problems users will have in a real usage scenario.
  • Less is more. The AI teams we worked with were accustomed to running research experiments with large sample sizes and felt skeptical that a small number of interviews would be enough to yield actionable insights. When introducing the basics of qualitative user interviews, we emphasized that even just 3 to 5 participants was enough and far better than none.
  • Lightweight prototyping with familiar tools. Another misconception AI teams held was that gathering feedback required using professional design tools to create prototypes to show prospective users. We clarified that teams can make effective mockups using familiar tools, such as slide-making applications and screen capture tools. One team had great success using PowerPoint to mock up new ideas on top of a screenshot of GitHub, where their new features would live.
A UX mockup depicting a future GitHub Issue with AI-generated code vulnerability comments is shown.
A PowerPoint mockup of Varangian, an AI-based tool for detecting code vulnerabilities. The team made this by overlaying a picture of Varangian output generated in GitLab on top of a GitHub screenshot to give target users a sense of what one possible integration would look like. They used this mockup to seed discussions about the idea and other possible approaches and improvements. Their paper describes the details of the underlying AI technology.

Step 4: Provide tailored materials

We provided each team with a customized, partially-written research plan template as an actionable starting point. All of our teams found this to be one of the most useful resources. The template consisted of sections with suggestions for teams to write in their own research goals, a recruiting plan, an interview schedule, and an interview script. Each section contained example text, such as how one might ask a user to think-out-loud, as well as tips and advice.

A torn document depicts an excerpt from the introduction and background portions of a sample qualitative study script.
An excerpt from the interview script section of our qualitative study plan template. The full study plan includes sections that guide teams to fill in their research goals, recruiting plan, and interview schedule in addition to the interview script.

We helped each team to tailor the template to match their own research goals. For example, one team had several ideas for AI-powered enhancements for their application and wanted to know which ones were most desirable to users, but they didn’t have the UX background to know how to elicit this feedback. We suggested an exercise in which target users are asked to spend a hypothetical $100 to fund various features. By matching the team with the right user-centered method, we were able to help them answer their research questions.

A mural with yellow sticky notes describing use cases and green sticky notes with dollar amounts are shown.
A Mural exercise used to prioritize enhancements for AI-powered research community applications. Target users allocated dollar amounts according to how they valued each feature.
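The $100 exercise reduces to a simple aggregation once the sticky notes are collected: sum each feature's allocated dollars across participants and rank by total. A minimal sketch of that tally is below; the feature names and dollar amounts are illustrative, not the team's actual data.

```python
from collections import defaultdict

# Hypothetical $100-test results: each participant splits a $100 budget
# across candidate AI enhancements (feature names are made up for illustration).
allocations = [
    {"summarize papers": 50, "recommend reviewers": 30, "auto-tagging": 20},
    {"summarize papers": 40, "recommend reviewers": 40, "auto-tagging": 20},
    {"summarize papers": 70, "recommend reviewers": 10, "auto-tagging": 20},
]

def rank_features(allocations):
    """Sum each feature's budget across participants and rank by total spend."""
    totals = defaultdict(int)
    for budget in allocations:
        for feature, dollars in budget.items():
            totals[feature] += dollars
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for feature, total in rank_features(allocations):
    print(f"{feature}: ${total}")
```

The ranked totals give the team a defensible prioritization to discuss, while the conversations around each allocation often surface the richer "why" behind the numbers.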

Once we provided the starting materials, we made ourselves available for questions in additional meetings and asynchronous communications via Slack. But, we made it clear that it was the team’s responsibility to own the user research plan from then on.

Step 5: Gather insights through hands-on experience

Interviewing users for the first time was a daunting prospect for most teams, but it was important for them to take ownership from the beginning of the process in order to establish rapport with their informants and build confidence that they could carry out this work themselves. To help them feel more comfortable, we started them off by having them conduct a pilot interview shadowed by a coach. The coach observed the pilot, then debriefed with the team immediately after to provide feedback to improve their interview technique.

Although the AI research teams were all initially uncomfortable with interviewing users, getting them to conduct their own interviews and analysis proved to be extremely valuable. It allowed them to ask users meaningful questions that were informed by their deep technical expertise, and it also gave them direct exposure to the insights they needed to evolve their projects. One team learned enough from just the pilot interview to make improvements to text labels in their application prototype, and two other teams learned that much greater revisions were needed in their technology before it would be worthwhile to pursue additional interviews. Two other teams went on to conduct 5–6 full interviews on their own, which revealed important insights that informed their next steps.

The empowerment that came from having hands-on experience equipped these teams with the skills and confidence to continue conducting lightweight user research on their own, and ultimately, approach their work with a more user-centered perspective.

“Your insights and guidance is extremely useful and valuable to our mission and you’re helping to enable us to perform some level of user studies by ourselves. Last but not least you make us feel comfortable in asking for your help.” — AI research team leader

Reflections

Coaching AI technologists to conduct their own informal user studies was a two-way learning experience. The teams we coached adopted a user-centered mindset and learned skills that expanded their repertoire of approaches to problem-solving and making informed decisions in technology development. We, as coaches, learned how to communicate and expand the impact of our work with AI technologists, and we built new connections with our colleagues.

We collected feedback from the AI research teams we coached and reflected on how our coaching program could be improved in the future. We identified two valuable additions:

  • Interview-skills workshops that cover how to ask open questions, how to apply progressive disclosure, and how to keep the discussion going when participants fall silent.
  • An annotated repository of user study plans and interview examples from real studies, providing additional interview strategies to follow.

Try it yourself

If your organization's need for user research outstrips the experts available to conduct it, or if AI domain expertise is required for speaking with highly technical users, you might consider following a coaching model such as ours. Democratizing user research by teaching others how to do it on their own can help usher in an organization-wide user-centered mindset.

Here are some materials to help you get started.

  1. Project kick-off presentation. Use this presentation to introduce the coaching program and set expectations.
  2. Study plan template. Provide this template to teams who are learning to conduct their own user research. It has embedded instructions, tips, and examples that can be customized for their own research purposes.

Do you face similar challenges in working with technical AI teams? Have you tried out our coaching methodology? Leave us a response and let us know about your experience!
