Setting up User Testing for First (mobile) & Second Screen (TV)
Early in the year, our design team decided to conduct a test with users to collect insights on usability and user experience for our VOD and Multiscreen platform.
About the product
Using a unique concept we call ‘contextual multiscreen’, screens automatically blend together when used in concert, switching their behaviour based on their context of use.
First things first…
User Testing Plan
The first step was to take into consideration questions like: how do we structure our iterative user testing program and how do we integrate these tests within our roadmap? Do we use certain cycles? Current product testing sessions? R&D testing? What budget do we need? Should we hire a recruitment agency?
To answer those questions, we first defined exactly what kind of information we wanted to get out of the test; then we wrote the scripts and test plans around those goals.
Why are we doing this?
Since we’re building a multiscreen platform, it’s very important for us to know how users react to the Second Screen. We needed to learn about user behaviours such as how they browse content (whether they look more at the TV or at their phone) and how they interact, to see whether they find the webapp easy to navigate, to clarify some usability questions we had, and to get general feedback about our product and how they felt about the multi-screen experience.
Defining the Test Structure
Who will be in charge of what?
We defined the testing team’s roles: the interviewer (tester A), the note-taker (tester B), and the observers.
Re-creating users’ natural environment
Since we are testing an entertainment VOD platform, we wanted to recreate a living room right inside our office, so users could engage with our product in a more realistic way. We decided to conduct the test in our office, which had an amazing living room with a cozy feel and a beautiful view.
We made a spreadsheet with all the furniture and equipment we needed, from tables, couches, and snacks, to devices, extra cameras, mics, etc.
Recruiting participants can be tricky
To help align strategy and goals to specific user groups, we used personas to define the type of person who would interact with the product.
Once we had our structure set, we chose the company that would recruit participants matching our personas.
All the participants were recruited through Testing Time, a Swiss start-up which procures test users within the Netherlands.
I wrote a script to guide us during the test as users went through the tasks and the app. The script doesn’t have to be followed word for word, but it helps if the tester gets lost in the middle of the process.
The script contains:
Task 1: Mobile + TV App
[Walk users through 2nd Screen]
Users’ experience with our Second Screen connected
[Walk users through the Website]
Users’ experience with the website (first screen only)
Questions to be evaluated after the session:
- What proportion of time did they spend looking at each screen?
- How did they feel about the movement of the second screen while they used their phone?
- Did the playback controls on the phone have all the features they expected?
- What would they expect/like to see on the second screen?
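The first question above (proportion of time spent looking at each screen) is the kind of thing we could only estimate from the session recordings. As a hypothetical sketch of how such estimates might be tallied, assuming the note-taker annotates gaze intervals as `(screen, seconds)` pairs while reviewing the footage:

```python
from collections import defaultdict

def screen_attention(gaze_log):
    """Sum annotated gaze intervals per screen and return each screen's share of total time."""
    totals = defaultdict(float)
    for screen, seconds in gaze_log:
        totals[screen] += seconds
    grand_total = sum(totals.values())
    return {screen: t / grand_total for screen, t in totals.items()}

# Hypothetical annotations from one session recording: (screen, seconds looked at)
log = [("phone", 40), ("tv", 20), ("phone", 30), ("tv", 10)]
print(screen_attention(log))  # {'phone': 0.7, 'tv': 0.3}
```

This is only an illustration of the bookkeeping; the actual annotations in our tests were made by hand from the recordings.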
5–10 minutes: Warm-up
Make time to get the user comfortable: small talk is a surprisingly important part of the user testing process, putting users at ease before the test to come.
+60 minutes: User feedback on a prototype
Show the user the product and allow them to use it. Give users a specific task to complete, and ask questions about why they did what they did, in order to gain an understanding of how they use the product.
10–15 minutes: Wrap up
Respond to questions or issues that popped up during the testing session. Point out features you’d like them to be aware of, or that they might have missed. Ask them if they’d use the product instead of their current solutions.
Tests were conducted at our office, using Look Back to record users’ faces and on-screen interactions. A camcorder also captured the entire session.
Look Back was a great tool to record our test. The UI is super easy to use, and besides recording two screens at a time, you can also move through the timeline and create markers on important moments during the test, add comments to them, or even edit the videos.
We used Survey Monkey to create a Q&A that tester B would fill in while users performed their tasks.
During the test, tester B noticed that the Survey Monkey Q&A was difficult to follow. Because we wanted to make the users as comfortable as possible, the journey they took through the application rarely followed the order of the questions and answers we had created, so it was difficult for tester B to keep up with the tasks and fill out the Survey Monkey form at the same time.
Once we realized this, we switched to a simple .doc for the following tests, writing down key test considerations as they came up.
After the test, we simply went through the doc, edited the main highlights, and classified them by category, for example, notes on Second Screen Player Page, Second Screen Search, Website Discovery, and so on…
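Sorting notes into categories like these is easy to script once the doc uses a consistent tag. As a minimal sketch, assuming each note line is prefixed with its category in square brackets (a convention invented here for illustration):

```python
from collections import defaultdict

def group_notes(lines):
    """Group note lines tagged like '[Category] note text' by their category."""
    grouped = defaultdict(list)
    for line in lines:
        if line.startswith("[") and "]" in line:
            category, note = line[1:].split("]", 1)
            grouped[category.strip()].append(note.strip())
        else:
            # Untagged lines are kept rather than dropped
            grouped["Uncategorised"].append(line.strip())
    return dict(grouped)

notes = [
    "[Second Screen Player Page] Pause icon was hard to find",
    "[Website Discovery] User scrolled past the genre rows",
    "[Second Screen Player Page] Volume control felt hidden",
]
for category, items in group_notes(notes).items():
    print(category, items)
```

In practice we did this grouping by hand in the doc, but the same tagging convention works either way.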
Documenting the results
A Google Slides doc was used to create the User Test Report, which included:
- Introduction to the Test
- Test Highlight Moments (videos)
- Memorable Quotes
- Top Level Findings (shown in a graph pyramid format, grouping the findings by importance levels)
Usability findings covered player pages, player icons, the remote control, live channels, and search.
We found out that:
- Users watching alone focused much more on their phones than on the TV;
- There might always be another person in the room watching, so we have to design a Second Screen (TV) UI that satisfies that second person’s needs.
Also, we learned that when conducting a User Testing with Second Screen, it’s always good to make sure:
- All the devices you provide are fully charged;
- All the mobile devices are unlocked;
- All your devices are already connected to the same Wi-Fi network;
- You have all the assets available, such as HDMI cables, etc.
The test was important not only for the feedback it gave us to improve and fix the current state of our product; we also carried the learnings into the months that followed, while designing or re-designing specific pages and flows of the product. Now we have more knowledge and user insights to remind us of the best design approach for specific scenarios; a great example of this was the auto-play flow re-design, which I’ll talk about in another post :)
Thanks for reading!
Andrea Pacheco is a Product Designer & Visual Artist currently living in Amsterdam and working with multi-screen interfaces.