This article is for designers, developers, QA, PMs, and other software folk who have thought about running user tests on their site or app. It will help you create online/remote usability tests using services like usertesting.com.
During my time at Evernote, I was fortunate to be encouraged to do online usability testing. We used usertesting.com, and through hundreds of tests, I learnt how to write practical, useful tests.
Sidenote: I’ve tended to use (and have heard) the expression ‘user testing’, but it seems ‘usability testing’ is the more accepted phrase. User testing is reserved for testing an idea with a set of people; usability testing tests the implementation.
What is remote usability testing?
Remote, or online, testing involves someone, somewhere in the world, loading your site or app onto their device. Their job is to follow your set of instructions as they use your site or app. Your job is to work out where and why things fall apart. You’ll see their mouse, or their hand, in the video. You’ll also hear them talk out loud. They may also submit written answers and thoughts at the end.
- Does not require you to own any hardware.
- Is not free. As of writing, usertesting.com costs $49 per test for small numbers of tests.
- Can miss subtle and nuanced feedback in the early stages of product development.
- Doesn’t let you test on your own hardware.
Why, and when to do usability testing
1. You think you’ve got something good.
Things are looking good. The design matches the spec. You’re feeling fine about the ‘big release’. At this point, you have a bunch of hidden assumptions born of your experience and familiarity with your product. When you put this out in the real world you’ll discover:
- Many people don’t read an icon, or other visual design, the same way you do.
- The navigation, or app purpose, might be misunderstood entirely.
(“oh, I get it, it’s just like Tinder!”)
- The way you use your device or phone differs from how many people use theirs.
(“Oh, I always force quit apps”)
Story 1: Scannable camera capture.
Scannable is an app that we developed at Evernote. It uses iPhone’s camera to take a picture of a physical paper document and turns that into a neat PDF.
It simply requires you to point your iPhone’s camera so that the entire document is visible.
Sounds simple? An astounding percentage of testers struggled with that core action. They were technically competent. They understood they needed to move the camera (phone) so that the piece of paper was in shot. But they didn’t know how to move their hand to accomplish that.
There was some small percentage who vaguely waved their phone at the piece of paper a few inches away, hoping to capture the entire page. We built some magic into that app, but not that much magic.
As a result, we added extra UI elements to guide users to ‘get the shot’ and to encourage them to hold still when they did. The capture area got some padding for a more forgiving experience.
Story 2: the Skitch ‘record’ button.
Skitch is an image annotation app—you scribble on pictures. When we designed the phone experience, we wanted to leave as much room for the user’s photo as possible. So we designed the color picker/swatch on the phone as a single circle swatch of color. Tapping it revealed the other colors.
This seemed very understandable to us: a core assumption. Usability testing revealed many users interpreting it as a record button. After all, it was a red circle. We added a new first-time experience that expanded all the colors.
Incorrect assumptions are everywhere.
2. When it’s not clear which direction is right.
Sometimes you don’t know if a particular design will work, or you want to A/B test. There are strong opinions from all sides of the product team. Now that I’m designer+developer+PM at teampurr.com, I can have arguments with myself. Move on by running some tests.
3. When you want to test in an environment you don’t have.
Online testing allows you to specify:
- Hardware. Want to see your site or app run on a low-spec Android device, an iPad Pro, or a PC with a German keyboard?
- Location. What’s the experience like for users in Europe when the servers are in the US?
- Experience. Besides the usual demographics, think about testing with the criteria ‘you must be a current user of [app]’.
Test other companies’ stuff!
A valuable, counter-intuitive approach is to test your competitors’ stuff, even before you build your own. Scannable was not the first document scanning app, so we tested the approaches that existing scanning apps took. With that knowledge we could choose and build on the real-world winning UI elements.
Don’t forget to test prototypes
These can be as simple as paper prototypes. Draw on some paper, photograph them, and use one of the many options to make them clickable. Or it could be as complex as a fully-functioning ‘snippet’ of an app built with real code. Caveats: your tester may get stuck in some yet-to-be-built corner. There are guided options when running online tests that can help. Saying more about prototyping is out of scope here, but tell me in the comments if you’d like to hear more.
Benefits of online usability testing
- Organizing an in-person usability test is work. Finding people, paying them in some way, many won’t turn up on the day, etc. Online testing saves you from that.
- Repeatable testing: Each tester has the same script.
- Watch at 2x! You don’t even have to watch the results in real-time. And if you’re not a pro, or don’t have time, you can opt to have an expert write up the results so you never have to watch the video.
- Access to hardware you don’t own. Being indie means you don’t have unlimited resources to buy a drawer of devices.
- Practically unlimited number of user testers.
- Quick turnaround.
This last one is important. It’s 4 pm and you’ve just got the new ‘join a team’ feature in. You’ve got lots of questions. Does the feature have a good UX? Does it work on multi-monitor setups? With online user testing, you can get the majority of your testing done overnight as testers pick up your tests. The very next day you can move ahead.
How to write tests for online usability testing.
OK, so you’ve decided to test online. Now to write the ‘script’. A usability script is a list of 1 or 2 sentence tasks. It’s simple and clear instructions for the tester to follow sequentially. In many ways it’s like computer programming. And like programming, things will go wrong! You’ll want strategies to stop your tester getting stuck in a loop, or wasting too much time on a single process.
Start with top-level tasks.
Let’s say I want to test out Tabby, an app I’m actually working on. It has account, team and other communication features. Say I want to test three things:
- Signing in
- Joining a team, and
- Sharing a win 🏆 emoji with the team.
I’ve found I get the most from asking users to try to accomplish something high-level like this. It’s closest to the real world. But if they get stuck, you’ll want to walk them through smaller steps. A typical online user test takes 10-20 minutes; getting stuck at an initial stage can mean most of that time is spent flailing around, and fallback steps avoid that.
Here’s an example script. My notes to you are marked TIP and NOTE.
- Hi, thanks for taking the test. Please make sure you have a Mac running macOS 10.12 ‘Sierra’ or later.
TIP: some testers ignore test requirements. Putting this here explicitly prevents them from continuing and wasting your time.
- Download Tabby from http://teampurr.com
- Launch the Tabby app by right-clicking on it and selecting ‘Open’.
TIP: Often you’re testing something that is difficult to install or access. Guide your testers through.
- Once the app is running, describe what you think the app does in 1 or 2 sentences.
Note how I limit the answer duration. Avoid 5-minute answers!
- Sign in using the following:
This is your test account.
- Imagine you’re a remote worker, and you want to get a sense of how busy your teammates are.
TIP: Keep these instructions simple, one or two sentences at most. Often testers are slow readers.
- Imagine your coworker has given you their team-code: FELINE.
- Try to sign in to the team using the code FELINE. Move to the next step when you’re signed in to the team, or if you get stuck.
NOTE: Start with a high-level to-do. The tester will only see one instruction on screen at a time, so I repeat the code FELINE. Tell the tester to stay on this instruction, but give them an out if they’re stuck.
- If you’re stuck at this point, look for the icon with overlapping faces. Click this.
- Now type in your code FELINE and click ‘Join’
- Great! Now that you’re signed in, spend 20 seconds describing what you see and what you think each part means.
NOTE: I like being encouraging.
- Your next task is to send a 🏆 to only your teammates, but nobody else. Try doing that now before you move to the next step. If you get stuck, move on.
- If you’re stuck, describe in a few sentences how you think it should work.
NOTE: Try to understand their mental model.
- If you’re stuck, look at the list of teams you’re on. Try clicking each one. Describe what you think is happening.
- Enable only your own team and then click the 🏆 button.
- —insert more tests here—
- Great! If you’ve still got time left try the following…
TIP: Here is where you add extra tests to get your full 20mins from your fastest testers.
- Thank you so much for your help. It means so much!
A note on sign-up and sign-on considerations
Online testers are usually more than happy to walk through any sign-up process. But once you’re confident in your sign-up process, it’s common to want to skip as much of it as possible.
Creating a dummy account with existing data saves valuable time. It also lets you pre-populate the account with typical data or contacts. When doing this, keep in mind that multiple testers will ‘pick up’ your test simultaneously. If you have a single test account, you might get weird results.
One workaround is simply to trigger single tests manually. Another is to create multiple test accounts with dummy data. Make a list of the login details in a Google Sheet. Instruct your tester to go to the Google Sheet (etc.), select the next available account, and mark it as ‘used’. In my experience, paid remote testers will happily switch contexts like this.
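If it helps, provisioning those accounts can be scripted. Here’s a minimal, hypothetical sketch in Python that generates a batch of dummy test accounts and writes them to a CSV you could paste into the Google Sheet. The email domain, password scheme, and ‘status’ column are my own inventions for illustration, not anything specific to Tabby or usertesting.com.

```python
# Hypothetical sketch: generate dummy test accounts for parallel testers.
# The email domain, password scheme, and column names are made-up examples.
import csv
import secrets

TEAM_CODE = "FELINE"  # the code testers will be asked to join with

def make_accounts(n):
    """Return n unique account rows: email, password, team code, status."""
    rows = []
    for i in range(1, n + 1):
        email = f"tester{i}@example.com"     # placeholder domain
        password = secrets.token_urlsafe(8)  # unique throwaway password
        rows.append({"email": email, "password": password,
                     "team_code": TEAM_CODE, "status": "available"})
    return rows

def write_sheet(rows, path="test_accounts.csv"):
    """Write the accounts to a CSV you can import into a Google Sheet."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["email", "password", "team_code", "status"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_sheet(make_accounts(5))  # one account per tester in a 5-test run
```

Testers then flip the ‘status’ cell to ‘used’ as they claim an account, which keeps two simultaneous testers out of the same data.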
Checklist for your own tests
- Have I run the test with a single person first? I’ll often find ‘bugs’ like broken links and instructions that lead down a dead end.
- Did I ask for general impressions?
- Does the test start at a high level, with fallback steps for every major task?
- Are there ‘bonus’ tasks at the end?
How many tests? TL;DR: five
You only need to run five tests. After five tests you’ll have enough to work on for the next sprint.
If you only take one thing away from this article: usability testing needs surprisingly few tests to be practically useful. And more tests don’t tend to show anything new.
Analyzing the results
You’ll cry. You’ll cringe, yell, and shout at your screen. Eventually, acceptance. Here’s what I try to take away from each test:
- Did the user complete the test? Pass/fail.
- Was the user able to complete the test without ‘helper’ steps?
- What was the average time to complete the test? Track this over many product versions.
- For each test, you’ll likely see a ton of small issues. For example, a weird flicker as an asset loads when it should have been pre-loaded, or weird layouts on less-popular device screens. Turn those into tickets.
- Share 15-second clips with other stakeholders/developers/QA. Any longer and I’ve found they tune out.
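To make the pass/fail and timing takeaways concrete, here’s a small sketch that tallies a five-test run into those metrics. The five result records are invented sample data, and logging results by hand into a structure like this is just one way to do it.

```python
# Sketch: tally usability-test results into pass rate, unaided pass rate,
# and average completion time. The records below are invented sample data.
results = [
    {"tester": 1, "completed": True,  "needed_help": False, "minutes": 12.0},
    {"tester": 2, "completed": True,  "needed_help": True,  "minutes": 18.5},
    {"tester": 3, "completed": False, "needed_help": True,  "minutes": 20.0},
    {"tester": 4, "completed": True,  "needed_help": False, "minutes": 9.5},
    {"tester": 5, "completed": True,  "needed_help": True,  "minutes": 15.0},
]

def summarize(results):
    """Pass rate, unaided pass rate, and average completion time (passed tests)."""
    passed = [r for r in results if r["completed"]]
    unaided = [r for r in passed if not r["needed_help"]]
    avg_minutes = sum(r["minutes"] for r in passed) / len(passed)
    return {
        "pass_rate": len(passed) / len(results),
        "unaided_rate": len(unaided) / len(results),
        "avg_minutes": round(avg_minutes, 1),
    }

print(summarize(results))
# With this sample data: 4/5 passed, 2/5 unaided, 13.8 minutes on average.
```

Tracked over many product versions, a falling average time or a rising unaided rate is a clear signal the design is getting better.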
To sum up:
- Remote testing is fast and efficient.
- Write a robust script that gets testers back on track if they get stuck.
- Run one test first. If all goes well, run the remaining four. Five is enough.
I love online user testing. It’s fast feedback, and very controlled. It’s great for remote work. I hope this article encourages you to try it.
Did I miss anything? Disagree with anything? Let me know in the comments.
And if you like this article, please 👏 to help others discover it. Thanks!