“Usable” doesn’t mean “Beautiful”, nor does it mean “Useful”

Step-by-step Usability Testing pointers for people starting out

Angela Obias-Tuban
Design Research in the Philippines
7 min read · May 25, 2015

--

This is the second part of a UX research talk I gave for Communications Research university students.

TL;DR version:

  1. Know what you’re measuring (usability vs. aesthetic appeal vs. usefulness)
  2. Measure effectiveness: Ease/Speed, Success/Pain Points, Error-correction
  3. Focus on behavior, not claims
  4. Recruit for a range (not just one usage pattern)
  5. Balance natural and scripted testing

When you’re building a house, it doesn’t just matter that it’s pretty. It matters that you can actually live in it: that the doors open easily, that the windows let air in, and that the pipes work.

The same applies when designing websites and applications.

“Beautiful” designs aren’t automatically usable.

Like this bus stop.

“The Bus: Stop Project” initiative by Kultur Krumbach — Bränden bus stop by Sou Fujimoto

A town in Austria commissioned esteemed architects to build bus-stop art pieces. This one had to be closed off because users risked injuring themselves, despite the poetic idea of letting bus stop users climb to the top for a view of the town.

A website can be gorgeous and still fail to help you do, efficiently, what you came there to do.

Slides: Lecture on Basics of Design Research, for Randy Solis’ Communications and Management class

There are a number of guides about mounting a user research project. My favorites are Steve Portigal’s Spinning Data into Gold, A List Apart’s Interviewing Humans and IDEO’s classic Human-Centered Design Toolkit. This post (and the presentation I gave) focuses on a few guidelines I want to emphasize and remind people about.

1. Behavior trumps claim

Usability testing is about behavior, more than verbal answers.

What do I mean? Remember when Facebook came out with the “ticker” on the sidebar? You remember how annoyed everyone was? You know why Facebook kept it for so long?

Facebook Sidebar Ticker; Image from Torchlight Digital

Facebook invests in a lot of testing. Beyond its in-house research departments, it will even take a sample of live users, test features and designs on them, and track the impact on their usage.

Facebook has one main objective: keep you on Facebook.

Through their data, the ticker proved effective at keeping people on the site, because of the “Oh, ____ did that; ____ posted something; ____ commented on ____’s post” reactions it triggered. Without you even scrolling.

But everyone commenting on social media hated it.

In the world of design testing, however — behavior trumps claim.

It’s about effectiveness, not opinion.

2. Usable doesn’t mean useful either.

Usability isn’t everything. A design can be very efficient and people still won’t use it.

The greater question there (beyond usability) is: do they even want to do the action that you want them to do in the first place?

That is a different research objective and a different research method altogether, focusing on relevance and needs instead of efficiency: more market and user research than usability testing.

3. So what are you testing, when you’re testing usability?

a. Basic Elements

You can find many checklists and templates online for website usability. They’re typically spreadsheets of best-practice guidelines that help you do what is called a heuristic usability check on your own.
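
To make that concrete, here’s a minimal sketch of what a do-it-yourself check could look like, written as code instead of a spreadsheet. The checklist items are Jakob Nielsen’s ten usability heuristics; the severity scale and the reporting function are my own illustrative assumptions, not part of any particular template.

```python
# A DIY heuristics checklist sketched as code instead of a spreadsheet.
# The items are Jakob Nielsen's ten usability heuristics; the 0-4
# severity scale (0 = no issue, 4 = usability catastrophe) is assumed.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def heuristics_report(severities: dict) -> None:
    """Print each heuristic with its severity; flag anything worth fixing."""
    for heuristic in NIELSEN_HEURISTICS:
        severity = severities.get(heuristic, 0)
        flag = "FIX" if severity >= 3 else "ok"
        print(f"[{flag:3}] {severity} - {heuristic}")

# Hypothetical findings from walking through a sign-up flow:
heuristics_report({"Error prevention": 3, "Help and documentation": 2})
```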

When I do usability testing with other people as participants, I tend to go old-school: a moderated / contextual interview set-up, following parameters from people like Jakob Nielsen (I’d call him the godfather of usability testing).

Because it’s a face-to-face interview, I need to simplify, choosing at most 10 tasks to do on the website (to manage respondent energy and time).

For my standard usability tests, I measure four parameters:

  1. Comprehension
  2. Ease / Effectiveness (Speed x Successful accomplishment)
  3. Pain Points
  4. Error-correction

*This works for me because they’re a clear and understandable expression of my usability test’s “north star”. That comes in handy when facilitating the test, when wireframing, and when writing proposals for clients.

It’s better if business owners, researchers and designers have a clear understanding of what questions the research will be answering. I’m sure other people have discovered their own “styles” of doing their tests as well, so I hope the community ends up sharing techniques with each other.

For an external take, read Jakob Nielsen’s Usability Metrics article.

Comprehension:

Were the participants able to understand the design and what they needed to do?

Ease and effectiveness:

Were they able to successfully accomplish the tasks, and how quickly (or how slowly)?

Pain Points:

Were there any frustrations or mistakes that they made while using the design?

Error-correction:

When they did make mistakes, were they able to correct themselves eventually? How? What were their workarounds?

*I phrased them as questions, but I don’t mean for those to be asked out loud ☺

They’re the questions you ask yourself as a test facilitator while the participant performs and talks through his or her tasks.
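
For note-taking, it can help to give those four parameters a fixed shape, one record per task per participant. Here’s a hypothetical sketch: the field names, the target-time idea and the effectiveness formula (success × relative speed, echoing the “Speed x Successful accomplishment” shorthand above) are illustrative assumptions, not a standard metric.

```python
# A hypothetical note-taking structure: one record per task, per
# participant, covering the four parameters above. Field names and the
# effectiveness formula are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    task: str                      # e.g. "Book a one-way trip"
    understood: bool               # Comprehension
    completed: bool                # Successful accomplishment
    seconds_taken: float           # Speed
    pain_points: list = field(default_factory=list)
    self_corrected: bool = False   # Error-correction
    workarounds: list = field(default_factory=list)

def effectiveness(obs: TaskObservation, target_seconds: float) -> float:
    """Ease/effectiveness as success x relative speed: 0.0 for a failed
    task, up to 1.0 for one finished within the expected target time."""
    if not obs.completed:
        return 0.0
    return min(1.0, target_seconds / obs.seconds_taken)

obs = TaskObservation(
    task="Book a one-way trip",
    understood=True,
    completed=True,
    seconds_taken=95.0,
    pain_points=["Hesitated at the date picker"],
    self_corrected=True,
)
print(f"Effectiveness: {effectiveness(obs, target_seconds=60.0):.2f}")  # 0.63
```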

b. Designing the Research Study: Points to consider

Test Material: What you’ll be testing

Figuring out how to go about your test also depends on what stage the project is in: Are you building a completely new platform? Are you redesigning an old one? Is the design team done with the first prototype and you all want to see how it can work better?

*In the presentation embedded above, there’s one slide that shows a short list of research methods for various stages of product development. For this post, the focus is usability testing, which is typically done after a design has been made (even if it’s only a mock-up or prototype; the earlier you get feedback, the better). Prior to designing a product, you can also run usability tests on your product’s competitors or pegs, to learn from how they work.

Respondent specifications: Recruit for range

I like to follow a guideline from IDEO’s Human-Centered Design Toolkit: recruit for a range, not for a single profile.

For classic market research, you want to zoom your research in on a specific target market: are we listening to teens rather than adults, to female early adopters versus female conservative buyers?

For testing how a design works, you aren’t capturing their “taste”, but the effectiveness of the design. So it’s best to ensure that the design caters to most people who will access your product. How do you do that? By testing on the people who might have a hard time using it.

Game testing, with kids — Project for a Consumer Pharmaceutical brand campaign

This is why you recruit a range. My rule of thumb (since most projects have conservative budgets) is to at least split the participants between your ideal usage profile and a set from your non-ideal usage profile.

Testing on just your target market (with a small sample) isn’t very cost-effective. If you test an app with only tech-savvy teens, for example, you won’t see the errors that a less app-savvy participant might make. Remember, we’re trying to catch as many errors as possible, so we can resolve as many errors as possible.
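
In numbers, “recruit for a range” can be as simple as spreading your slots evenly across usage profiles so no single pattern dominates the sample. A rough sketch, with hypothetical profile labels (the even split is just one reasonable reading of the rule of thumb):

```python
# "Recruit for a range" in numbers: spread slots as evenly as possible
# across usage profiles so no single pattern dominates the sample.
# Profile labels are hypothetical.

def allocate_recruits(total_slots: int, profiles: list) -> dict:
    """Give each profile an even share; earlier profiles get any remainder."""
    base, extra = divmod(total_slots, len(profiles))
    return {p: base + (1 if i < extra else 0) for i, p in enumerate(profiles)}

print(allocate_recruits(6, [
    "tech-savvy teen",             # ideal usage profile
    "casual adult user",           # in-between
    "first-time smartphone user",  # non-ideal, likeliest to surface errors
]))
# {'tech-savvy teen': 2, 'casual adult user': 2, 'first-time smartphone user': 2}
```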

Flow: Natural vs. Scripted

From the first tests I ran, I realized that it isn’t best to jump right into scripted task questions.

If you do, you can’t capture the participant’s initial reactions to the design, or which design elements they would intuitively engage with. When designing a test flow, I would then consider:

2–4 minutes: Orientation (What this is for, How the test will work, Brief about “thinking out loud”)

1–2 minutes: Free usage (Allow them to navigate and explore the design on their own; tracking the sequence of their actions)

10–15 minutes: Asking the participant to do the 5 tasks most critical to the product objectives, then observing each.

10–15 minutes: Reviewing the experience: probing pain points, workarounds and comprehension issues. Also, potential use cases that they imagined (to gauge impression of the product).

I then base the test guide on this flow.
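
As an illustration, that flow can be laid out as a simple moderator-guide skeleton. The phase timings come from the list above (using the upper ends of the ranges); the structure, notes and code are an assumed sketch, not a fixed template.

```python
# The flow above as a moderator-guide skeleton. Phase timings use the
# upper ends of the ranges in the list; the notes are placeholders.

SESSION_PLAN = [
    ("Orientation", 4, "What this is for, how the test works, think-aloud briefing"),
    ("Free usage", 2, "Let them explore on their own; track the sequence of actions"),
    ("Critical tasks", 15, "The 5 tasks most critical to the product objectives"),
    ("Review", 15, "Probe pain points, workarounds, comprehension, imagined use cases"),
]

def print_guide(plan: list) -> None:
    """Print a running schedule so the moderator can keep the session on time."""
    minute = 0
    for phase, duration, notes in plan:
        print(f"{minute:02d}-{minute + duration:02d} min | {phase}: {notes}")
        minute += duration

print_guide(SESSION_PLAN)  # a 36-minute session, end to end
```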

Recap

These were just a handful of points to consider when planning and conducting usability testing:

  1. Know what you’re measuring (usability vs. aesthetic appeal vs. usefulness)
  2. Measure effectiveness: Ease/Speed, Success/Pain Points, Error-correction
  3. Focus on behavior, not claims
  4. Recruit for a range, not one usage pattern
  5. Balance natural and scripted testing

If you want to learn more…

There is so much information available online for free, for those interested in procedural guides. This is my way of contributing a (hopefully easy-to-remember) what-if-you-needed-to-do-it-on-your-own set of reminders to the voices that are already out there.

I do hope more product managers and brand owners get to try this on their innovation ideas (so you can see for yourself why Facebook does this so much). I’ll try to post more case studies in the coming weeks!

Follow me here on Medium, or on Twitter, for more straight-talking, practical stories about how to plan, execute and analyze design research. Message me if you ever want to work together.

Links for people who want to learn even more:

From Usability.gov

Michael Meikson on the key parameters of UX:
