UXinsight festival 2021
It’s UXinsight again! An event with a clear focus on user research, one that already showed last year, by adding an extra day dedicated to research ops, that it is more than just an industry meeting for exchanging information about the status quo.
This year the motto was “Learning through Failing”. At first glance, such a motto doesn’t seem to have much to do with user research. Isn’t minimizing failures exactly why we do research in the first place? A look at the agenda, however, quickly showed that even researchers can learn a lot when mistakes are handled well.
The keynote
The keynote, “Becoming a Master in Failure”, was given by Remko van der Drift (occupation: “Failure Expert”). Remko showed with striking examples (Adele at the Grammys) how important it is to admit mistakes quickly. I think we user researchers in particular are prone to wanting to make everything perfect and to create an environment where nothing can go wrong. That is not wrong in itself; after all, we work with sensitive data from real people.
Especially when planning and conducting studies, a certain level of perfection can be expected from us. However, this should not lead to mistakes being overlooked. To stay with the Adele example: she stopped a performance at the Grammys because she had started the song in the wrong key. Just like a song, a research session can start off wrong, or results can be misinterpreted. The important thing is not to carry on as if nothing had happened, but to admit the mistake and, if it makes sense, start again from the beginning so that everything really is done correctly.
Talking about “failure”
The other talks were all held under the motto of failure. The speakers showed examples of how we can achieve better research results by analyzing our mistakes. Stephanie Pratt told how she tried to set up research processes in a still quite young company. It was far too early; the company simply wasn’t ready for it yet. From these mistakes she developed a catalog of questions that will help her (and the audience) to better assess a company’s UX maturity level in the future.
For me, the talk was exciting precisely because the topic of UX maturity is sooner or later always linked to the topic of research repositories. Above all, how past research is handled is what distinguishes a company that merely takes UX seriously from a company that truly lives UX. My personal highlight, however, was a very classic session: “Global User Research: When shit hits the fan!” As a researcher, I love listening to war stories from user research. Hearing about all the little details that can and will go wrong during research projects is not only entertaining; it also helps me plan my own projects better.
Unsurprisingly, the number one reason for failure in these projects was either budget planning or problems with stakeholders.
Whether it’s small budget items (forgetting cab money in a foreign city) or big ones (miscalculating exchange rates), it is important to plan every step of your project properly. But even with everything planned out, there is always the weather. One of the panelists told a story of having a whole team of translators, stakeholders, and researchers ready to visit a remote village in South Africa, only to discover that the roads were underwater and inaccessible.
Friday: “Reflect”
The last day was themed “Reflect”. My highlight here was “Strategies to Avoid the Biases that Influence our Research” by Yael Gutman. Research bias is an important topic that I include in my own trainings and webinars as often as possible, and you can even find an article about biases here on our blog. While these biases can never be avoided completely, talks like this regularly remind us that they exist. Take confirmation bias: you already have an idea of how the research will probably go, and without noticing it you force everything to fit that interpretation. Yael shared several strategies for countering biases like this:
Inclusive Mindset: Always look at the bigger picture and include additional points of view and experiences.
Leverage partners’ experiences: Involve other parties in your research and ask them for feedback. Share your ideas and learnings with them.
Active Listening: This one especially helps with confirmation bias. Let the user talk. Try to hold yourself back, observe more, and listen actively.
Include other data sources: Qualitative research is just one part of the picture. By including other data sources (e.g. quantitative data) you get a more holistic view. A discrepancy between qualitative and quantitative data either hints at a possible bias or, even better, points to a field of interest you should look into.
One strategy that especially stuck out to me was peer-reviewing your interview scripts. By involving colleagues (maybe even from other companies) in reviewing your scripts, they can drastically improve them and bring in a new perspective. This strategy was later discussed in the UXinsight festival chat, where other participants spun the idea further and dreamt of a central platform to have your research scripts peer-reviewed by other researchers in the industry.
The closing talk
The closing talk, “Ethical Considerations in UX Research” by Victor Yocco, explored the ethics of UX research, a recurring theme at the UXinsight festival. And because the motto was learning through failure, Victor listed some older (and newer) research projects and showed us where they failed ethically. Some examples are well known and obviously problematic, like the Stanford Prison Experiment (no right to withdraw, no debriefing, no termination of the experiment once failure was obvious). But he also showed us newer experiments whose ethical problems are not as obvious.
One example was OkCupid, a dating service that compares user profiles and gives you a score for how good a match another profile is. The service showed some users significantly better (or worse) scores for certain profiles than the software had actually calculated. The goal was to prove that showing a higher score increases the probability that the two people actually end up in a relationship (regardless of the real score).
The problem: the users did not know that they were part of an experiment. Most of you will now say that this is obviously an ethical problem you would spot in your own research. But in a world of intelligent ads and A/B tests, where we don’t always tell participants what the research is really about, there are a lot of pitfalls. Victor formulated some questions you can ask yourself before starting your research:
How honest and open will I be with participants?
Will I share the timeline for creating the product? (If not, why?)
Will I share the company that is sponsoring the research? (If not, why?)
Will I debrief or follow up with participants in any way?
One of the solutions Victor mentioned was similar to Yael Gutman’s: peer reviewing.
By letting other researchers review your protocols, they can spot biases and ethical issues and help you improve your research.
And I think this is a great way to summarize this conference.
The best way to elevate our research is working together as a community and helping each other out!
This article was written by Dominic Staub, an experienced market researcher and user researcher with over 6 years of experience in qualitative interviews. As Head of Research at Usertimes Solutions GmbH, he is responsible for the internal user research of a web-based qualitative data analysis tool. Dominic also works as a user research consultant and helps companies establish their own research processes and embed user research within the company.