Five Takeaways from the UXPA 2018 Conference

UXPA International 2018 Conference, Puerto Rico, June 26–28, 2018.

When I attend a conference I’m pretty happy if I come away from it with at least one practical takeaway per day of the conference. The UXPA International 2018 conference in Puerto Rico was three days long, so I’m very pleased with five takeaways. Here they are. (You can also see my photos from the conference on Flickr.)

1. The Sentence Completion Method

The opening keynote at the conference was given by Dr. Carine Lallemand of the University of Luxembourg. Keynote talks are supposed to be inspiring, which hers was, but it was also informative, something many keynotes fall short on. Carine works on bridging the gap between academia and industry, particularly regarding UX methods, so she covered a number of methods in her talk, some of which I wasn’t really familiar with. (She even provided a handout about the methods that was stuck to the bottom of everyone’s chair!) One of the techniques she covered is the sentence completion method, which is surprisingly simple but powerful. You simply ask users to complete sentences, such as her examples related to a keynote talk: “According to me, a keynote talk should be _________”, “I’d be positively surprised by a talk that _________”, or “I’d feel bored from attending a talk that _________”.

In a study of the eReading experience, the researchers compared a traditional 7-point rating scale with the sentence completion method for evaluating the experience. Using the rating scale, 80% of the ratings were positive (5, 6, or 7). But using the sentence completion method (“The experience reading on an eBook is ____”), only 64% of the entries were positive. If nothing else, this shows that the sentence completion method taps something different from a traditional rating scale. And the sentence completion method generates far richer insight into the respondents’ reactions to an experience.

To learn more about the sentence completion method in UX research see the following:

2. The User Experience Questionnaire (UEQ)

I’m excited when I get one practical takeaway from a single talk, but I’m thrilled when I get two. This is another one from Carine Lallemand’s opening keynote. I’m familiar with a number of standard questionnaires for assessing perceived usability or user experience (e.g., SUS, QUIS, SUPR-Q, SUMI, AttrakDiff) but I wasn’t familiar with the User Experience Questionnaire (UEQ). It consists of 26 semantic differential scales (e.g., “annoying … enjoyable”, “creative … dull”, “clear … confusing”) and is available in 20 languages. The intent is to capture a comprehensive impression of the user experience. You get scores on six scales: Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty. The reliability and validity of the questionnaire have been evaluated, and benchmark data are available for comparison purposes.

The example below shows how the data from your study could be plotted against the benchmark data for the six scales. There are even easy-to-use Excel spreadsheets on the UEQ website to help with summarizing your data and creating charts like this one. In this example, you can see that the respondents thought what they were evaluating was very attractive but not very efficient or dependable; other scores were about average.

Data from a sample study (black line) compared to benchmark data for the six UEQ scales.
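The basic arithmetic behind those scale scores is simple enough to sketch: each 7-point item is shifted to a −3…+3 range, and the items belonging to a scale are averaged. Note that the item-to-scale grouping below is illustrative only; the official UEQ key assigns specific items to each scale (some reverse-keyed), and the spreadsheets on the UEQ website implement the real scoring for you.

```python
# Sketch of UEQ-style scoring: 7-point items are shifted to -3..+3 and
# averaged per scale. NOTE: this item-to-scale grouping is illustrative;
# the official UEQ key assigns specific items (some reverse-keyed), and
# the spreadsheets on the UEQ website implement the real scoring.
from statistics import mean

SCALES = {  # illustrative 0-based item indices; 6 + 5*4 = 26 items total
    "Attractiveness": [0, 1, 2, 3, 4, 5],
    "Perspicuity":    [6, 7, 8, 9],
    "Efficiency":     [10, 11, 12, 13],
    "Dependability":  [14, 15, 16, 17],
    "Stimulation":    [18, 19, 20, 21],
    "Novelty":        [22, 23, 24, 25],
}

def score_ueq(responses):
    """responses: one list of 26 ratings (1..7) per respondent."""
    shifted = [[r - 4 for r in row] for row in responses]  # 1..7 -> -3..+3
    return {scale: mean(row[i] for row in shifted for i in items)
            for scale, items in SCALES.items()}

# Example: two respondents rating everything 5 and 7 -> each scale = +2.0
print(score_ueq([[5] * 26, [7] * 26]))
```

Benchmark comparison then amounts to plotting these per-scale means against the published benchmark ranges, as in the chart above.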

3. Using Cognitive Interviews to Test Surveys

When I see that two different sessions at a UXPA conference are discussing a research method I’m not really familiar with, I take notice. That’s what happened here, in these sessions:

Cognitive interviewing is a technique that I had only seen used on TV shows like CSI where the police are interviewing a witness and using verbal probes to try to improve recall of an event. But the technique can also be applied to UX research. It’s not all that different from the think-aloud protocol in usability testing. These sessions discussed the use of cognitive interviews specifically in the design and pretesting of surveys. Like think-aloud, cognitive interviewing can be done either concurrently (while filling out the survey) or retrospectively (looking back at the survey after completion). It can also be done with verbal probes by the moderator or as a more traditional think-aloud.

The goal is to see whether the respondents’ interpretation of the questions is what the designers of the survey intended. The method focuses on several stages of the respondent’s processing of each question: comprehension, retrieval of information from memory, judging the relevance of the information, and answering the question. Having the respondents talk about their thought process for each of these stages allows you to see possible disconnects with the designer’s intention. For more information about cognitive interviewing to evaluate surveys, check out the following:

4. Cognitive Biases Are Important!

This one also falls in the category of two different talks addressing a similar topic — namely cognitive biases. The talks were:

I knew about cognitive biases in general (e.g., the confirmation bias, where people tend to pay more attention to things that confirm their preconceptions), but I wasn’t aware of all the biases that were covered in these talks. And I also hadn’t seen this graphic that they both used for illustrating the extremely wide range of cognitive biases that there are:

Cognitive biases codex, categorization by Buster Benson, design by John Manoogian III. See full-size version on Wikipedia.

One of the more interesting cognitive biases, which Jasper covered in his talk, is the “Ikea Effect”. The idea is that people tend to value something more if they’ve played a role in creating it. (Thus the name — since you generally have to assemble things from Ikea.) The main point that Jasper made is that in designing a user experience you have to strike a balance between asking the users to do too much vs. doing everything for them.

This reminded me of an online study I did a number of years ago comparing two different designs for deciding how the cash held in a brokerage account would be invested. In Design A we basically did all the work for the users: we said that we were going to invest the money in an FDIC-insured account (the “safest” option), and they could click a link to change that to another option. In Design B we spelled out both options and presented them as radio buttons: an FDIC-insured account or a Money-Market CD. We then compared abandonment rates for the two designs in an online study (i.e., what percentage of the people who started the process with each design simply abandoned before finishing). Design B (the one where the users were presented with the two choices) resulted in a significantly lower abandonment rate.

5. Less is More

One of the other talks that I really enjoyed at the conference was by Nim Dvir and entitled “Less is More: An Empirical Investigation of the Relationship Between Amount of Digital Content and User Engagement”. He studied two different versions of a landing page in a live A/B test:

Two versions of a landing page studied in a live A/B test. (Gafni & Dvir, 2018)

Using Google AdWords and Unbounce, people were randomly directed to either the long or the short version of the landing page. The call to action (to enter their email address) was at the top of both designs and presented in the same way. The longer version provided additional information about how the process works, some user testimonials, and assurances about how their email address would be used.

The total sample size for the study was n=27,900! The difference in the percentage of people who signed up in the two versions was dramatic: 29% for the long version vs. 42% for the short version. So users who were given less information were more inclined to provide their data (email address). As Nim was quick to point out in his talk, this finding shouldn’t be over-generalized. The short version worked in this particular context but it might not in other contexts. But it was enough to convince me to try my hand at my own A/B test of two different designs for a landing page along similar lines. (I’ll share the results of that study when it’s finished!)
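With a sample that large, a 13-point difference is far outside sampling noise, and a quick two-proportion z-test makes that concrete. The 29% and 42% sign-up rates and the n=27,900 total come from the talk; the even split between the two arms is my assumption, since the per-arm counts weren’t reported.

```python
# Two-proportion z-test on the landing-page result. The 29% vs 42%
# sign-up rates and n = 27,900 total come from the study; the 50/50
# split between arms is an assumption (per-arm counts not reported).
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

n_per_arm = 27_900 // 2  # assumed equal split
z, p = two_proportion_z(round(0.29 * n_per_arm), n_per_arm,
                        round(0.42 * n_per_arm), n_per_arm)
print(f"z = {z:.1f}, p = {p:.2g}")
```

Under those assumptions the z statistic comes out north of 20, so the short-vs-long difference would be statistically significant by any conventional threshold.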

Next year the UXPA 2019 Conference will be held in Scottsdale, AZ. Perhaps I’ll see you there!



Tom Tullis

I’m a User Experience Research Consultant, Author, and Speaker. I have over 40 years of experience in human factors, usability, and UX research.