Five Takeaways from the UXPA 2018 Conference
When I attend a conference, I’m pretty happy if I come away with at least one practical takeaway per day. The UXPA International 2018 conference in Puerto Rico was three days long, so I’m very pleased with five takeaways. Here they are. (You can also see my photos from the conference on Flickr.)
1. The Sentence Completion Method for UX Research
The opening keynote at the conference was given by Dr. Carine Lallemand of the University of Luxembourg. Keynote talks are supposed to be inspiring, and hers was, but it was also informative, something many keynotes fall short on. Carine works on bridging the gap between academia and industry, particularly regarding UX methods, so she covered a number of methods in her talk, some of which I wasn’t familiar with. (She even provided a handout about the methods, stuck to the bottom of everyone’s chair!) One of the techniques she covered is the sentence completion method, which is surprisingly simple but powerful. You just ask users to complete sentences, such as her examples related to a keynote talk: “According to me, a keynote talk should be _________”, “I’d be positively surprised by a talk that _________”, or “I’d feel bored from attending a talk that _________”.
In a study of the eReading experience, the researchers compared a traditional 7-point rating scale against the sentence completion method for evaluating the experience. They found that with the rating scale, 80% of the ratings were positive (5, 6, or 7), but with the sentence completion method (“The experience reading on an eBook is ____”), only 64% of the entries were positive. If nothing else, this shows that the sentence completion method taps something different from a traditional rating scale. And the sentence completion method generates far richer insight into respondents’ reactions to an experience.
To learn more about the sentence completion method in UX research see the following:
- Sentence Completion for Understanding Users and Evaluating User Experience
- Sentence Completion for Evaluating Symbolic Meaning
- Sentence Completion : une méthode UX vraiment ______ ! (in French; roughly, “Sentence Completion: a truly ______ UX method!”)
2. The User Experience Questionnaire (UEQ)
I’m excited when I get one practical takeaway from a single talk, but I’m thrilled when I get two. This is another one from Carine Lallemand’s opening keynote. I’m familiar with a number of standard questionnaires for assessing perceived usability or user experience (e.g., SUS, QUIS, SUPR-Q, SUMI, AttrakDiff), but I wasn’t familiar with the User Experience Questionnaire (UEQ). It consists of 26 semantic differential scales (e.g., “annoying … enjoyable”, “creative … dull”, “clear … confusing”) and is available in 20 languages. The intent is to capture a comprehensive impression of the user experience. You get scores on six scales: Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty. The reliability and validity of the questionnaire have been evaluated, and benchmark data are available for comparison purposes.
The example below shows how the data from your study could be plotted against the benchmark data for the six scales. There are even easy-to-use Excel spreadsheets on the UEQ website to help with summarizing your data and creating charts like this one. In this example, you can see that the respondents thought what they were evaluating was very attractive but not very efficient or dependable; other scores were about average.
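If you prefer code to spreadsheets, the scoring itself is easy to reproduce. Below is a minimal Python sketch; note that the item-to-scale mapping shown is hypothetical (the real assignment, the reverse-coded items, and the benchmark values are defined in the materials on the UEQ website):

```python
import numpy as np

# Hypothetical item-to-scale mapping, for illustration only; the real
# assignment (and which items are reverse-coded) is defined in the UEQ
# handbook available on the UEQ website.
SCALES = {
    "Attractiveness": [1, 12, 14, 16, 24, 25],
    "Perspicuity":    [2, 4, 13, 21],
    "Efficiency":     [9, 20, 22, 23],
    "Dependability":  [8, 11, 17, 19],
    "Stimulation":    [5, 6, 7, 18],
    "Novelty":        [3, 10, 15, 26],
}

def ueq_scale_means(responses):
    """responses: one dict per participant, mapping item number (1-26)
    to a score already recoded to the -3..+3 range."""
    return {scale: float(np.mean([r[item] for r in responses for item in items]))
            for scale, items in SCALES.items()}

# Two made-up participants, just to show the shape of the data.
rng = np.random.default_rng(42)
fake_responses = [{item: int(rng.integers(-3, 4)) for item in range(1, 27)}
                  for _ in range(2)]

for scale, mean in ueq_scale_means(fake_responses).items():
    print(f"{scale:15s} {mean:+.2f}")  # compare these to the UEQ benchmark bands
```

The Excel tools on the UEQ website do all of this for you, including the benchmark chart, so this is only worth doing if you want the scoring inside your own analysis pipeline.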
3. Using Cognitive Interviews to Test Surveys
When I see that two different sessions at a UXPA conference are discussing a research method I’m not really familiar with, I take notice. That’s what happened here, in these sessions:
- Text fields and dropdowns and radios oh my! Usability testing online surveys (and forms): Eva Kaniasty
- Improving Your Surveys and Questionnaires with Cognitive Interviewing: Jean Fox, Jennifer Edgar, and Scott Fricker
Cognitive interviewing is a technique I had only seen used on TV shows like CSI, where the police interview a witness using verbal probes to try to improve recall of an event. But the technique can also be applied to UX research, and it’s not all that different from the think-aloud protocol in usability testing. These sessions discussed the use of cognitive interviews specifically in the design and pretesting of surveys. Like think-aloud, cognitive interviewing can be done either concurrently (while filling out the survey) or retrospectively (looking back at the survey after completion). It can also be done with verbal probes from the moderator or as a more traditional think-aloud.
The goal is to see whether the respondents’ interpretation of the questions is what the designers of the survey intended. The method focuses on several stages of the respondent’s processing of each question: comprehension, retrieval of information from memory, judging the relevance of the information, and answering the question. Having the respondents talk about their thought process for each of these stages allows you to see possible disconnects with the designer’s intention. For more information about cognitive interviewing to evaluate surveys, check out the following:
- What Do Our Respondents Think We’re Asking? Using Cognitive Interviewing to Improve Medical Education Surveys
- Cognitive Interviewing: A Tool for Improving Questionnaire Design (book)
- How cognitive interviewing can improve your questionnaire design
- Using Cognitive Interviews to Improve Survey Instruments
4. Cognitive Biases Are Important!
This one also comes from two different talks addressing a similar topic, namely cognitive biases. The talks were:
- Designing for Human Behavior and Cognitive Bias: Jasper Liu
- Know Thyself, and To Thine Users Be True: Understanding and Managing Biases that Can Influence UX Work: Karen Bachmann
I knew about cognitive biases in general (e.g., confirmation bias, where people tend to pay more attention to things that confirm their preconceptions), but I wasn’t aware of all the biases covered in these talks. I also hadn’t seen the graphic that both speakers used to illustrate the extremely wide range of cognitive biases that exist:
One of the more interesting cognitive biases, which Jasper covered in his talk, is the “IKEA effect”: people tend to value something more if they’ve played a role in creating it. (Thus the name, since you generally have to assemble things from IKEA yourself.) The main point Jasper made is that in designing a user experience you have to strike a balance between asking users to do too much and doing everything for them.
This reminded me of an online study I did a number of years ago comparing two different designs for deciding how the cash held in a brokerage account would be invested. In Design A we basically did all the work for the users: we said we were going to invest the money in an FDIC-insured account (the “safest” option), and they could click a link to change that to another option. In Design B we spelled out both options and presented them as radio buttons: an FDIC-insured account or a money-market CD. We then compared abandonment rates for the two designs in an online study (i.e., what percentage of the people who started the process with each design abandoned it before finishing). Design B (the one where users were presented with the two choices) resulted in a statistically significantly lower abandonment rate.
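If you want to run that kind of check on your own data, a simple test of two proportions does the job. Here’s a minimal Python sketch with made-up counts (not the actual numbers from our study):

```python
from scipy.stats import chi2_contingency

# Made-up counts, for illustration only: [abandoned, completed] per design.
design_a = [180, 820]   # 18.0% abandonment (hypothetical)
design_b = [130, 870]   # 13.0% abandonment (hypothetical)

chi2, p, dof, expected = chi2_contingency([design_a, design_b])
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# A small p-value (e.g., < .05) suggests the difference between the
# two designs is unlikely to be due to chance alone.
```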
5. Less is More
One of the other talks that I really enjoyed at the conference was by Nim Dvir and entitled “Less is More: An Empirical Investigation of the Relationship Between Amount of Digital Content and User Engagement”. He studied two different versions of a landing page in a live A/B test:
Using Google AdWords and Unbounce, people were randomly directed to either the long or the short version of the landing page. The call to action (to enter an email address) was at the top of both designs and presented in the same way. The longer version provided additional information about how the process works, some user testimonials, and assurances about how the email address would be used.
The total sample size for the study was n = 27,900! The difference in the percentage of people who signed up was dramatic: 29% for the long version vs. 42% for the short version. So users who were given less information were more inclined to provide their data (an email address). As Nim was quick to point out in his talk, this finding shouldn’t be over-generalized: the short version worked in this particular context, but it might not in others. Still, it was enough to convince me to try my own A/B test of two landing page designs along similar lines. (I’ll share the results of that study when it’s finished!)
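For the curious, it’s easy to sanity-check how decisive a difference like that is at this sample size. A quick Python sketch using a two-proportion z-test; since the exact split between versions wasn’t reported, it makes the simplifying assumption that visitors were divided evenly:

```python
from statsmodels.stats.proportion import proportions_ztest

# Simplifying assumption: the 27,900 visitors were split evenly between
# the two versions (the exact split wasn't reported in the talk).
n_long, n_short = 13950, 13950
signups = [round(0.29 * n_long), round(0.42 * n_short)]   # 29% vs. 42%
visitors = [n_long, n_short]

z, p = proportions_ztest(signups, visitors)
print(f"z = {z:.1f}, p = {p:.3g}")
# With samples this large, a 13-point difference is overwhelmingly significant.
```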
Next year the UXPA 2019 Conference will be held in Scottsdale, AZ. Perhaps I’ll see you there!