Start small with iterative testing

Anna Paramita · Published in Bootcamp · 5 min read · Apr 25, 2024

Can the discovery process work iteratively alongside an agile delivery approach?

illustration from unDraw

Within a product trio (also known as the “discovery crew” in my team), there is a constant need to balance desirability, viability, and feasibility in pursuit of a desired outcome. Through continuous discovery and regular customer interviews, we learn a great deal about our users and start to find opportunities and ideas to solve their problems. These are commonly mapped to an opportunity tree, which then helps guide the discovery activities needed.

Research and testing

It’s important to gain confidence in your ideas through research and testing. Remember: you are not your user. No matter how much you think a user will love your solution, anything that has yet to be validated with evidence remains an assumption.

Break down the assumptions

You will likely have multiple assumptions around every idea. Be clear on what they are, and define what you need to test first (is there a riskiest assumption that could make or break your idea?). Keep the tests lean: just enough to learn what you need to move to the next step. Explore different ways to experiment and research, and see what works best for your scenario.

Start small

Why start small? As mentioned above, you want to decide which assumption you need to learn about first. Tackle the riskiest assumption before anything else. From a user perspective, test desirability before getting into usability details (after all, what’s the point of making something delightfully usable if no one wants it in the first place?). Once you’ve validated the first assumption, you can work through the next ones.

But what if the assumption is disproved? Great! You got your learning sooner rather than later, and you can quickly move on to something else. As long as you’re not over-investing effort and time in the tests, you can refocus that energy elsewhere. Breaking down your assumptions gives a level of focus and structure (which is often needed in a discovery space that is notoriously messy). Different assumptions may need different testing methods, and can be tackled incrementally.

An example of what this might look like:

  1. You need to gauge desirability for a new feature, so you conduct a fake door test. You collaborate with an Engineer to add a button and a simple “coming soon” page, to see if people are interested (a minimal sketch of this follows the list).
  2. If there’s not enough interest (low desirability), you try a different variation of the button placement and/or copy.
  3. If there’s still low interest, you conclude that there’s not enough desirability. Move on to a different idea to test.
  4. On the other hand, if there’s a lot of interest (high desirability), you proceed to the next assumption: users will understand how to use the new feature. So you run an unmoderated first-click test on a low-fidelity design mockup.
  5. Still going strong? You do a follow-up preference test on high fidelity designs to gain feedback on some variations.
  6. As you keep progressing, the effort and complexity of the tests increase as you get into more detail. You and your team move to an A/B test to gain more confidence through a larger sample size, and uncover any remaining risks before going live.
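To make the fake door in step 1 concrete, here is a minimal sketch of what the instrumentation could look like on a web front end. Everything in it (the endpoint, the element IDs, the event name) is made up for illustration; in practice your Engineer would wire the click into whatever analytics tooling your product already uses.

```typescript
// Minimal fake door sketch: the button exists, the feature does not.
// "/api/events", the element IDs and the event name are assumptions for illustration only.

const EXPORT_FEATURE_EVENT = "fake_door_export_clicked";

function trackInterest(eventName: string): void {
  // Fire-and-forget analytics call; swap in your own tracking library.
  void fetch("/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: eventName, timestamp: Date.now() }),
  });
}

function setUpFakeDoor(buttonId: string, comingSoonId: string): void {
  const button = document.getElementById(buttonId);
  const comingSoon = document.getElementById(comingSoonId);
  if (!button || !comingSoon) return;

  button.addEventListener("click", () => {
    trackInterest(EXPORT_FEATURE_EVENT); // count the click as a desirability signal
    comingSoon.hidden = false;           // reveal the simple "coming soon" message
  });
}

setUpFakeDoor("export-report-button", "export-coming-soon");
```

The point is that the desirability signal (how many people click) is collected before any of the real feature exists.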

When you might not test small

There may be times when it’s better to test multiple assumptions in one round of research. This may be the case if the assumptions are closely related and need to be tested together. Or you may be further along in discovery, with higher confidence in the solution and a need to test more complex designs. It might actually take less time to combine them well than to run multiple research iterations. Just make sure each learning goal is clearly defined upfront, so that the research results don’t get muddled.

Rapid Prototyping

One of the most common ways to test is through iterative design and rapid prototyping. A prototype could take the form of a paper sketch, a clickable prototype, or a code prototype. Whichever you choose, it comes down to representing the idea in the leanest way possible. It also depends on the resources and capabilities you have.

  1. Paper sketch. All you need is a pen and paper, and you can sketch things up in a relatively short time. It’s as “quick and dirty” as you can get, and can be good for conveying high-level ideas very early on.
  2. Low fidelity (Lo-fi) prototypes are handy at the early stages of design concepts. This is when you might need to show a wireframe with little or no interactivity, and don’t need to spend much time on UI or aesthetic details just yet. There are a number of Figma plugins for lo-fi components which I find handy for speeding up this process.
  3. High fidelity (Hi-fi) prototypes. If you have a well-established component library in Figma, it may be just as quick to do a hi-fi prototype. It may be an “add on” to a mockup if one already exists. Hi-fi prototypes are handy when you’re looking for feedback on more refined designs.
  4. Code prototypes can be an option for concepts that require more technical complexity to test. This can be achieved through close collaboration with the Engineers in the team. Again, the effort to create one should be just enough to validate the discovery need. For example, I’ve mocked up lo-fi designs and had my Engineer colleagues refer to them to produce a code prototype (as there were functionalities that couldn’t be replicated in a Figma mockup), without it looking too polished but just enough to get the appropriate user feedback (a sketch of this kind of prototype follows this list).
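As a rough illustration of that last point, here is a minimal sketch of the kind of behaviour that might justify a code prototype: search-as-you-type filtering with a simulated delay, which a static Figma mockup can’t replicate. The data, element IDs, and delay are all hard-coded assumptions; nothing touches a real backend.

```typescript
// Code prototype sketch: dynamic filtering behaviour for a usability session.
// All data is canned; the delay only makes the interaction feel realistic.

interface Transaction {
  description: string;
  amount: number;
}

// Hard-coded data stands in for the real API during the test.
const SAMPLE_TRANSACTIONS: Transaction[] = [
  { description: "Coffee at Ritual", amount: 4.5 },
  { description: "Monthly gym membership", amount: 49.0 },
  { description: "Grocery run", amount: 82.3 },
];

// Simulate a short network round trip before returning filtered results.
function searchTransactions(query: string): Promise<Transaction[]> {
  const results = SAMPLE_TRANSACTIONS.filter((t) =>
    t.description.toLowerCase().includes(query.toLowerCase())
  );
  return new Promise((resolve) => setTimeout(() => resolve(results), 300));
}

// Wire the fake search to a plain input and list; good enough for user feedback.
function setUpPrototypeSearch(inputId: string, listId: string): void {
  const input = document.getElementById(inputId) as HTMLInputElement | null;
  const list = document.getElementById(listId);
  if (!input || !list) return;

  input.addEventListener("input", async () => {
    const results = await searchTransactions(input.value);
    list.innerHTML = results
      .map((t) => `<li>${t.description}: $${t.amount.toFixed(2)}</li>`)
      .join("");
  });
}

setUpPrototypeSearch("transaction-search", "transaction-results");
```

Because the whole thing is throwaway, the only goal is to make the interaction feel real enough for participants to react to.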

A design prototype can also evolve incrementally throughout the discovery process, in line with the types of testing you need to do. You might start with a paper sketch to get feedback on a high-level idea, move to a lo-fi or hi-fi design to run some usability tests, and then go to a code prototype as the complexity of the tests increases and the solution gets closer to being validated for delivery.

Working with delivery

Discovery should connect regularly with the delivery process. As you gradually increase confidence by testing your prototypes, it should inform what you can iteratively deliver to users. With each validated assumption, is there something that can be released to users while the next set of assumptions is still being researched? You want users to gain the benefits as soon as possible. Not only does this mean you can deliver value early on, it also helps you collect real-world data and feedback that you can continue to learn from.

For example, perhaps your early test was just enough to validate a simple version of your new feature. Just as you started small with discovery, the development effort should also be iterative. As the team works on the initial delivery, you continue to research and test the next assumptions. This research, alongside the early feedback you gain, then feeds into the next iteration to deliver. In this way, the incremental and continuous approaches of discovery and delivery work hand in hand.
