4 tips to integrate user testing into your product workflow

Ofer Senderey · Soluto by Asurion · Jun 22, 2021

A practical guide for making your next user testing effort more efficient and impactful

Getting user feedback before launching new designs and features is (or should be) an integral part of every product team’s workflow. Unfortunately, this is not always the case. Researchers who support several product teams often face challenges in defining the right research scope and the right type of collaboration with designers and PMs. These obstacles can hinder our ability to provide the right insights at the right moment and to succeed in our ultimate goal: impacting product decisions.

Here are four tips for overcoming common hurdles you’ll face as a researcher and making your user testing more aligned and effective, based on how we do it at Soluto (by Asurion).

1. Time it like a chef

Here’s a scenario: a team is working on a new version of a meaningful feature. Together, you define goals and research needs, design the test, and extract insightful findings. Yay! But wait: the team is launching next week, and your insights call for adjustments, not to mention another testing iteration to validate them. If this were a restaurant, you’d have just recommended changing the sauce and pulling ingredients out of the pots while the kitchen staff yells “Fire!”.

Deciding when to start user testing seems trivial, but research is too often timed around the product team’s availability and priorities rather than a planned process. That leads to one of two frequent mistakes:

(1) Testing too early — conducting user testing far in advance is tempting. While the product team works through its bi-monthly planning, there’s more time to discuss assumptions and build a testing prototype.

However, in the fast-paced environment of product development, testing designs too early typically translates into longer gaps in communication between research and product. That can mean a loss of focus on both sides and time wasted re-immersing ourselves in the topic each time we dive back in.

Moreover, since the wheels never stop spinning, tactical user testing planned six weeks in advance may be obsolete by launch: goals, designs, and wireframes can change on a weekly basis.

(2) Testing too late — testing while the new version is being launched has its downsides too. Beyond leaving less time and fewer resources to adjust, the team is inevitably less open to new ideas and suggested changes.

Guiding principle: schedule user testing 2–3 weeks before the expected launch. That keeps the testing process prioritized and top of mind while leaving enough time to iterate and refine.
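If it helps to make the arithmetic explicit, here is a minimal sketch (in Python, with a hypothetical launch date) of back-computing a testing window from a planned launch. It assumes testing starts three weeks out and wraps up about a week before launch; adjust the offsets to your own cadence.

```python
from datetime import date, timedelta

def testing_window(launch: date, weeks_before: int = 3) -> tuple[date, date]:
    """Back-compute a user-testing window from a planned launch date:
    start `weeks_before` weeks out and wrap up ~1 week before launch,
    so there is still time to adjust and run another iteration."""
    start = launch - timedelta(weeks=weeks_before)
    end = launch - timedelta(weeks=1)
    return start, end

# Hypothetical launch date, for illustration only
start, end = testing_window(date(2021, 7, 26))
print(f"Run user testing between {start} and {end}")
```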

2. Move forward in phases

When initiating research, it is often tempting to use it to answer as many questions as possible. But trying to capture everything in one or two tests can overwhelm our users, making it harder for them to give meaningful, thoughtful answers. And the overload doesn’t stop with the participants: for researchers, a packed, overloaded test means longer hours to analyze, synthesize, and extract meaningful data.

On top of that, trying to learn too many things at once hampers our ability to determine how specific parts of the design affected the results, which leaves us with less valid grounds for recommending the right course of action.

Here’s an example. Soluto’s primary product is ‘Home+’, a one-stop-shop service that covers all the electronics in our customers’ homes and also includes 24/7 support from tech experts, plus added-value features that improve our customers’ tech life. In one of our recent efforts, we looked to integrate a new feature into the Home+ Mainview: one that makes streaming data accessible, so users can decide which services they need and how best to spend their streaming budget.

This meant we wanted to learn:

  • What do users understand about this feature in the context of the product’s main screen?
  • How attractive is the feature, and how does it relate to the core service in users’ eyes?
  • What will make users engage with it?
  • Is the following screen aligned with what users expect to see after clicking the CTA?
  • Do users find the presentation of the feature clear and valuable?
  • What type of new information, if any, should we provide?

Instead of expecting users to take all of these questions in at once, we separated the testing process into three iterations, building each test on top of the results of the previous one. By doing so, we were able to focus on a few questions each time and get better, more in-depth results. This separation had another positive effect: it let us see, in higher resolution, how specific changes in design, flow, and copy played out, leading to more informed decisions and more confidence that we were fixing the right issues and improving our users’ experience.

Example of how we divided our research goals into several iterations, testing how a new feature integrates with Soluto’s product dashboard

Takeaways: divide the user testing process into a series of tests. Order your research goals by where they fall in the user’s flow, moving from higher-level questions to drill-down inquiries about specific elements.
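As a rough illustration, here is a minimal Python sketch of how a phased plan like this might be written down. The grouping of the questions above into iterations is hypothetical, not the exact split we used:

```python
from dataclasses import dataclass, field

@dataclass
class Iteration:
    """One round of testing, focused on a small bundle of questions."""
    focus: str
    questions: list[str] = field(default_factory=list)

# Hypothetical grouping: ordered by where each question falls in the
# user's flow, from high-level framing to drill-down detail.
plan = [
    Iteration("Main screen", [
        "What do users understand about the feature in context?",
        "How attractive is it, and how does it relate to the core service?",
    ]),
    Iteration("Entry point", [
        "What will make users engage with it?",
        "Does the next screen match what users expect after the CTA?",
    ]),
    Iteration("Feature detail", [
        "Is the presentation of the feature clear and valuable?",
        "What new information, if any, should we provide?",
    ]),
]

for number, iteration in enumerate(plan, start=1):
    print(f"Test {number}: {iteration.focus}")
    for question in iteration.questions:
        print(f"  - {question}")
```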

3. Create quick feedback loops

In many companies, researchers work on several projects at the same time, trying to fulfill the needs of multiple teams. As a result, focusing on a specific project for several days might sound like an unwanted scenario that will delay other work. However, tightening the gaps between one iteration and the next can actually be more efficient and demand less time overall.

Shifting between several projects consumes significant mental resources. We have to refocus on the research goals and previous results: what technical issues came up when fitting the designs to the testing platform, which insights we wanted to explore further, and which research topics were already established. All of that taxes our productivity and efficiency. Setting closer milestones, by contrast, keeps the context warm and can mean fewer meetings around each iteration.

When we tested how our streaming manager fit within the Home+ dashboard experience, we held 45-minute meetings to discuss the results, recommendations, and next steps, plus a quick alignment meeting just before launching each test. We could keep meeting times this short because the tight loops kept our minds fresh and focused on our goals, recent findings, and the context of each test.

Takeaways: set the testing milestones close to one another so that they encourage you to:

  1. Share the results immediately after analyzing the data. If you identify clear-cut answers for some of your goals, share them with the team even before you finish.
  2. Launch the next iteration quickly. Accomplish that by setting a quick alignment session for the next iteration as soon as you share the results. That will also help the team keep up with preparing the designs and prototypes for the next test (see the scheduling sketch below).
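To make those milestones concrete, here is a minimal sketch of one way to pin them to dates. The specific offsets (same-day share, next-day alignment, launch within three days) are assumptions for illustration, not a prescription:

```python
from datetime import date, timedelta

def next_iteration_milestones(analysis_done: date) -> dict[str, date]:
    """Illustrative tight-loop milestones, keyed off the day analysis ends."""
    return {
        "share results with the team": analysis_done,
        "alignment session for the next test": analysis_done + timedelta(days=1),
        "launch the next iteration": analysis_done + timedelta(days=3),
    }

# Hypothetical date, for illustration only
for step, when in next_iteration_milestones(date(2021, 6, 1)).items():
    print(f"{when}: {step}")
```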

4. Frame the user testing process

To establish this working process and benefit from it, it is critical to set expectations and advocate for the right mindset. Even though product teams are well aware of the complexity of the user experience, they often view user testing in an abstract way. ‘We want to see how users react to the new design’ or ‘let’s see if our solution works’ are the kinds of statements that reflect this simplistic view. As researchers, it is our responsibility to frame the user testing process properly: a series of tests built one on top of the other, each focusing on a small bundle of specific topics.

To promote this principle in the case of Soluto’s streaming manager, one of the first things we did was present the team with an example of how changes to micro-copy in the Dashboard affected users’ perceptions later in the flow. We showed them that when we described the app in a way that got users excited to receive immediate value, users were practically blind to a ‘Coming soon’ title we expected them to react to on the following screen.

When users read the copy in the tip box on the first screen, they immediately assumed they would be presented with their own streaming data. As a result, none of them noticed the ‘Coming soon’ title on the second screen, and they didn’t understand the CTA at the bottom.

Providing the team with a tangible example of how design, copy, and flow elements interact with one another had an invaluable effect on their ability to see user testing as a complex process that demands logic and a methodical approach.

Takeaways: use the early stages of user testing to present examples of how copy, UI, and flow decisions affect and interact with one another. Leverage these examples to reframe user testing as a multi-layered process that calls for iterations and an agile approach.

Quick recap

Here is a quick summary of how to apply the four tips to make the user testing process more efficient and impactful:

  1. Time user testing to start 2–3 weeks before the new design launches.
  2. Divide the tests into iterations that follow how users advance through the experience, so you get higher-resolution insights and can draw conclusions about causality.
  3. Set tight deadlines and close milestones to ensure that you and the team stay focused and in sync.
  4. Early in the process, give the team a tangible example of how a design or copy element affected the way users reacted to another element.

As researchers, we need to constantly reflect on our working processes and look for the patterns and factors that either led us to success or held us back. Only by doing so can we improve, both as individual practitioners and as a profession, and deliver better outcomes for our companies.

I would love to hear from you, researchers and product team members. What are your thoughts and conclusions? What other best practices do you use for integrating user testing into product work?
