Reflection on user research for AI startups

lex fefegha
Published in The Comuzi Journal
4 min read · Apr 3, 2018

I have been reflecting a lot lately about AI and user research, and thought I'd share. This piece was inspired by a chat I had a week earlier with Anna Gát, who is building an AI startup, Ixy.

When I started my academic research into futures, AI and ethics, I approached AI startups, as I assumed it would be easier to talk with their leadership about certain things, e.g.:

How do you test your AI-powered product with users?

What ethical framework has your company developed internally in order to build a better product?

I was surprised by the lack of answers, and by the lack of regard for this area of focus altogether.

Tristan Harward of Appcues outlined in his piece for InVision that very few companies conduct qualitative user research. Stats from the 2018 Design in Tech Report indicate:

12% of early-stage startups engage in qualitative research, compared to 32% of mid-stage and 46% of late-stage startups.

To be honest, in my almost ten years of building digital products and services, it has always been difficult to sell user research as part of a project to an early-stage startup. Most times you are working with very modest budgets, every penny is a fight, and suggesting that 10% of the budget be spent on ‘just talking to people’ can earn you some cross-eyed stares in meeting rooms.

While future AI systems and products will be more sophisticated and intelligent than those currently on the market, I stop to think and wonder:

How do we make sure we are building products powered by AI that take human behaviour into consideration?

How do we build products and systems that facilitate responsible use and safety?

I do believe it would be so valuable for AI startups to consider putting their products into the lives of early users and observing them, either natively in their own environments via ethnographic research, or by asking them to demonstrate how they use these products in a cognitive walkthrough.

We would need to slow the process down a bit and take some extra steps before jumping right into product development and becoming so solution-focused. Just those extra steps would be so helpful for AI product development.

An example of what I'm proposing is the ‘Human-Centred Machine Learning’ approach by the People + AI Research group at Google. Employing a human-centred approach helps the team at Google explore how machine learning can stay grounded in human needs as they develop their AI-powered products.

A snippet from Jess Holbrook’s post on Human-Centred Machine Learning emphasises the importance of identifying human needs:

So our first point is that you still need to do all that hard work you’ve always done to find human needs. This is all the ethnography, contextual inquiries, interviews, deep hanging out, surveys, reading customer support tickets, logs analysis, and getting proximate to people to figure out if you’re solving a problem or addressing an unstated need people have. Machine learning won’t figure out what problems to solve.

The next question on my mind is where user research for AI is best conducted: in a usability lab or in the field. At Comuzi, our argument would be that the field is the best place. User research is only effective if it is authentic.

Context will help frame the way that participants provide inputs to the AI for further learning. For example, if an AI system is intended to help guide users through purchasing products at their local Tesco, then, if at all possible, the research should be done in a real store, not in a lab simulation, which would be really expensive to build.

The challenge of testing AI in the field is that while an interface, particularly a screen-based interface, can be prototyped pretty easily, prototyping the artificial intelligence itself is much harder, if not impossible, to achieve in the field.

One way around this, in the case of AI systems that assist in store environments, is applying a Wizard of Oz approach, where there is ‘a person behind the curtain’.

In order to test out approaches to how the AI system might behave, an additional researcher could observe the study and give responses that mimic the expected output of the intelligent agent. To keep this technically plausible, and to avoid research participants deciphering responses that sound clearly human, the additional researcher could type their responses and have a realistic speech synthesizer read them aloud, achieving that Wizard of Oz effect. A rough sketch of what this could look like follows.
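Purely as an illustration, here is a minimal sketch of such a wizard console in Python. It assumes the pyttsx3 text-to-speech library and a hypothetical log file name; none of this is a specific tool Comuzi uses. The hidden researcher types responses at a prompt, and the participant only ever hears the synthesized voice:

```python
# A minimal Wizard of Oz console sketch (assumption: pyttsx3 is installed).
# The hidden researcher (the "wizard") types responses; the participant
# only hears the synthesized voice, so the system reads as an autonomous agent.

import pyttsx3

def main():
    engine = pyttsx3.init()          # uses the platform's default TTS voice
    engine.setProperty("rate", 160)  # slightly slower speech sounds more deliberate

    print("Wizard console ready. Type a response and press Enter.")
    print("Type 'quit' to end the session.\n")

    while True:
        response = input("wizard> ").strip()
        if response.lower() == "quit":
            break
        if not response:
            continue
        # Log what was "said" so the session can be analysed afterwards
        # (hypothetical file name, for illustration only)
        with open("session_log.txt", "a") as log:
            log.write(response + "\n")
        engine.say(response)
        engine.runAndWait()  # blocks until the utterance finishes

if __name__ == "__main__":
    main()
```

The point of the logging step is that every utterance the ‘agent’ made can be reviewed alongside the field notes afterwards, which is where the real research value sits.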

To wrap up, these are some thoughts that I hope can spur and provoke more conversations about the best approaches to testing AI products and systems in real human environments. I would love to see more case study examples from startups working in this space. Happy to talk with anyone regarding this :)

ps — The team at Comuzi are working on a number of playful interactive tools that will engage with people on their thoughts and concerns regarding AI and other emerging technologies. Stay woke!
