PART 3: FIELD GUIDE FOR REMOTE TESTING & LOOKBACK


Practical advice to improve user testing in this new normal.

By Mathieu Dauner, Strategy & Research Lead

This is the final instalment of a three-part series. Part 1 (before testing) and Part 2 (during testing) can be found in earlier posts from the This Is Why We Can’t Have Nice Things publication.

Download the “post-testing” checklist for remote testing

AFTER TESTING

Congrats! You’ve navigated a successful remote testing session. You probably have a few more to run on the day, but here are some outcomes and considerations to plan for after each test or at the end of all your testing sessions.

Outcome 1: Playback immediately

As soon as the participant has left the session, run a quick retrospective with the group. Capture feedback and observations from everyone and build consensus around the most important observations. Insights and observations lose their clarity and sharpness over the course of the day, which is why you should retro after each session. It also allows people to jump in and out of different sessions throughout the day.

CONSIDERATIONS

Plan for retrospectives between each testing session.

As a rule of thumb, you’ll need a minimum of 20–25 minutes to review and discuss observations, plus a few extra minutes to prep for the next session. Let all observers know they will need to stay for those 20–25 minutes after each session to discuss and hear observations from the room. If they can’t stay, explain that the retro is just as valuable as the testing session itself. If they still insist on leaving early, encourage them to hand their notes to someone on the product team.

Retro after each session to ensure fidelity of observations.

It’s easy to confuse your observations and stories after several back-to-back sessions. Capture any feedback on how to improve the script or the testing session format at the end of each retro, particularly after the first few sessions.

Outcome 2: Clarity on how you will share

It’s critical to have a clear, well-articulated approach to how feedback will be played back to the group. Start coaching observers on how observations will be shared before the session starts. Use simple observation frameworks (categories of observations, learning goals) to focus which observations are captured. Timebox the playback and limit the number of items shared based on the size of the group and the time between sessions. Encourage people to play back stories, because stories are what people remember.

CONSIDERATIONS

Align on types of observations

Coach observers on the types of observations the team is looking to capture and on how this feedback will be shared at the end of the session. Make sure they capture observations in a document during the session so they can easily copy and paste them later.

Leverage collaborative tooling

Jump out of the testing platform and use a collaborative whiteboard tool like Miro so the whole group can see and hear each other’s observations in real time. Have observers copy and paste their observations into the shared document or board as they present.

Timebox and limit the number of observations people can share

This forces people to share what’s most relevant and avoid repeating what others have already mentioned, and it leaves enough time for everyone to speak.

Use voting to prioritise and align

If there is time, encourage voting around the observation areas that need the most alignment or prioritisation.

This is the final instalment of a three-part series. Part 1 (before testing) and Part 2 (on the day of testing) can be found in previous posts from the This Is Why We Can’t Have Nice Things publication.

Reach out to me in the comments section or message me on Twitter or LinkedIn if you have questions or would like to discuss this topic further.

Playbooks for collaborating with remote teams can be found here, along with toolkits for product development at the bottom of this page.

I work at Mentally Friendly, a product and service development consultancy in Sydney, Australia.

Mentally Friendly is a design and innovation studio that is trusted by Australia’s biggest and most complex organisations. We help teams and decision-makers build confidence and momentum together.

Follow Mentally Friendly on LinkedIn and Instagram.
