Selecting method for testing product hypotheses: a practical guide

Anna Velcheva
12 min read · Mar 11, 2024

In my previous article Experiments and validating hypotheses in product Creation (digital bank case example), we discussed the importance of testing ideas when building a new product or improving an existing one. Testing hypotheses helps you avoid mistakes, make smart choices, and come up with even better product ideas.

Now, we’re moving on to the next big question: how do you select the most effective testing method for a specific product hypothesis?

Today we will cover the following topics:

  • Criteria for Selecting a Testing Method: We’ll look at what factors you should consider when choosing a testing method, such as your project’s stage, the resources you have, and what you need to learn about your product.
  • Navigating Testing Methods: We’ll dive into a detailed overview of various testing methods, highlighting when and why each method should be used. This section will merge the pros and cons of different approaches, along with tips on avoiding common pitfalls.
  • Examples: To bring it all to life, we’ll share examples of how different methods were successfully used in real product development scenarios.

By the end of our discussion, you’ll have a clearer understanding of how to choose and apply the right testing method for your product hypotheses, armed with practical insights and examples.

Criteria for Selecting a Testing Method

Choosing the right testing method is crucial for any successful hypothesis test. With so many options out there, finding the best one might seem overwhelming. But fear not! By focusing on three main factors, you can easily pick the method that will give you the most useful insights.

  • Stage of Product Development: Conceptualization and Design, Development Stage or Post-Launch
  • Testing goal: Qualitative vs. Quantitative and Exploratory vs. Validation
  • Available Resources: Time, Budget, User Accessibility, Skills

The research process often begins with qualitative, exploratory methods to gather broad insights and generate hypotheses. As the development process progresses and specific hypotheses are formed, the focus shifts towards quantitative, validation methods to rigorously test those hypotheses and make evidence-based decisions.

Conceptualization and Design

During the early development phase, the primary goal is usually exploratory. Teams are focused on understanding user needs, identifying problems, and brainstorming potential solutions. This stage is characterized by open-ended questions about what users want and need, and how they interact with existing products or services.

Qualitative Methods are naturally aligned with exploratory research objectives:

  • User Interviews: Talk directly with users to dive deep into their behaviors, preferences, and challenges, shaping the foundation of your future product.
  • Usability Testing: Observe users with early prototypes or designs to pinpoint usability issues. Use follow-up discussions to uncover their thoughts and experiences for further design refinements.
  • Focus Groups: Bring together diverse user groups to share and discuss their views and experiences, providing multiple perspectives for concept development.
  • Ethnographic / Participant Study: Observe people in different aspects of their lives to get the most comprehensive picture of their behavior and habits.
  • Diary Studies: Encourage users to document their interactions, thoughts, and feelings, giving insights into their behaviors and experiences to guide your product development.

Quantitative Methods are less commonly associated with exploratory research, but can still play a role.

  • Broad surveys or data analytics: These methods help identify patterns or behaviors across a broad user base, prompting further qualitative research into user motivations and potentially informing your product's conceptual direction.

Development Stage

As the product moves into the middle stage, the focus shifts towards refining and validating the concepts and designs established during the conceptualization phase. The goals become more defined, with an emphasis on ensuring the product meets user needs and expectations before it goes to market.

Mixed Methods: This stage often employs a combination of qualitative and quantitative methods. While qualitative insights continue to inform the refinement of the product, quantitative methods begin to play a crucial role in validating the effectiveness and usability of the product. Key methods include:

  • Usability Testing: Perform repeated rounds of usability testing on updated prototypes or a functional beta version to identify and resolve usability issues. Use participant feedback to incrementally refine the design, ensuring the product aligns with user needs and expectations.
  • Surveys: At the development stage, surveys can be useful for validating your ideas. You can use them to collect feedback on features, usability, and user preferences. Surveys can also help you prioritize your development efforts.

Remember, there's often a gap between what customers say and what they actually do. Begin with tests that capture verbal feedback, then move to observe their actions for stronger evidence.

Post-Launch Stage

After the product has been launched, the "Post-Launch" stage focuses on optimization and continuous improvement. The goals here are to validate that the product meets the market needs, understand how it's being used, and identify opportunities for further development.

Quantitative research becomes predominant in this stage, providing the measurable data needed to assess the product's performance and impact. Commonly used methods include:

  • A/B Testing: Once you have sufficient traffic, start A/B testing to compare feature variations. Identify which ones improve user engagement, conversion rates, and other key metrics, guiding you in enhancing the product effectively.
  • Surveys: Employing questionnaires to gather feedback directly from users about their experiences, preferences, and perceptions. This method is valuable for optimizing existing features and identifying areas for further improvement.
  • Analytics: Deploy analytics tools to analyze user behavior and satisfaction. This data helps pinpoint areas for improvement, ensuring your product evolves in line with user needs.

Innovation never really stops. Think of each new feature as a tiny product inside the bigger one, which means you need to keep exploring. Use a mix of qualitative and quantitative research over and over again: first, learn from people to come up with ideas to test, then use data to see if those ideas work. Often, this data will raise new questions and send you back to learn more from people, keeping the cycle of learning and improving going.

Available Resources

When choosing a testing method, it's essential to consider the resources at your disposal. Evaluate how much time and budget you can allocate, whether your users are readily accessible, and if you possess the necessary skills to conduct specific tests.

The goal is to test hypotheses as efficiently and cost-effectively as possible, with subsequent iterations for those that are critical.

It’s important to note that the time and budget required for a particular test can vary significantly depending on the nature of the test, the context in which it’s being conducted, tool availability, and the expertise of the team involved.

User interviews

Ideal for deep, qualitative insights into user needs, experiences, and motivations. User interviews are useful in the early stages of product development to help define user personas and pain points, or later to understand how users interact with your product.

But when you need quick, actionable data on a large scale, interviews are not a good fit. They are time-consuming and typically involve a small number of participants, making them unsuitable for gathering large datasets or making broad generalizations.

User interviews Pros & Cons

To avoid the common pitfalls of user interviews:

  1. Ensure Participants Match Your Target User Profiles: Use screening questionnaires to recruit participants who accurately represent your target audience, avoiding the recruitment of non-representative participants.
  2. Challenge Your Assumptions: Approach interviews with an open mind, ready to learn rather than confirm existing beliefs, to counter confirmation bias.
  3. Optimize Recall and Engagement: Encourage participants to share recent, concrete experiences, ideally grounded in their actual activity (so they give real answers, not just what they think you want to hear). Keep the conversation on track and make sure everyone stays involved to get clear and complete answers.
  4. Efficiently Organize Data: Adopt structured note-taking and use qualitative data analysis software (e.g., Dovetail, MonkeyLearn, ATLAS.ti) to systematically manage and analyze data. This helps prevent misinterpreting participants’ intentions and drawing incorrect conclusions.

Example

We’re in the conceptual stage of creating a new digital banking app. We’re considering adding a personalized financial advice feature. We believe this could significantly benefit our users by providing them with tailored support for their financial planning, potentially setting our app apart from the competition.

However, before moving forward with the design, we need to confirm whether our target audience truly values and desires this feature, understand how they’re currently managing their finances, and find out whether they’d be open to getting advice from our app.

Usability Testing

Best when you need to observe how real people use your product and identify areas where they might encounter difficulties. It helps you understand user behavior, see potential problems, and figure out why they occur. This method is suitable for any stage of product development, from prototypes to final versions.

However, usability testing is less suitable for obtaining quantitative metrics or analyzing broad market trends. This is because it focuses on the qualitative experience of users rather than collecting large amounts of numerical data.

Usability testing Pros & Cons

To avoid common issues with usability testing:

  1. Recruit Representative Users: Select participants who closely match your actual user base to get accurate insights into user experience.
  2. Make Testing Straightforward and Purposeful: Ensure your usability testing is focused and relevant by defining precise objectives and preparing a structured test plan. Outline specific scenarios and tasks that highlight critical product features to guide participants and ensure consistent results.
  3. Keep Observers in Check: Observers need to watch quietly to collect information without getting in the way or changing how participants act.
  4. Avoid Overgeneralization of Findings: Complement usability testing with quantitative research methods to validate insights across a larger population. Be cautious when drawing conclusions and acknowledge the limitations of your data set.
  5. Iterate and Validate Findings: Conduct follow-up tests to confirm that changes based on initial findings lead to improvements in usability.

Example

Our team aims to improve the loan application completion rates, recognizing they’re currently below our targets. We plan to conduct usability tests to identify where users face difficulties during the application process. Following this, we’ll develop an interactive prototype based on our findings. A second round of testing with this prototype will then verify if we’ve successfully made the experience easier for our users.

A/B Testing

Ideal for testing small changes to see which version performs better. Use A/B testing when you have a clear hypothesis that can be tested with two variations.

You can readily test: different call-to-action buttons, webpage designs, alternative features, copy text, pricing, discounts, etc.

A/B testing is not a good choice when you don’t have enough traffic to validate differences with statistical significance. It’s also less useful for exploring new, innovative ideas where there isn’t a clear hypothesis to test.

A/B test Pros & Cons

How to address A/B testing pitfalls:

  1. Optimize Test Duration and Sample Size: Use A/B testing calculators to determine the necessary test duration and number of participants. This will help you achieve statistical significance and obtain reliable results that are not skewed by time or lack of data.
  2. Isolate Changes: Limit your test to one change at a time or a minimal set of closely related changes. This strategy helps identify which specific element impacts user behavior, making your insights actionable.
  3. Account for External Influences: Document and monitor external factors like market trends, seasonal events, or marketing activities during your test period. Compare against historical data or use a control group to neutralize these effects, ensuring your test results reflect actual user response to the change.
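The first pitfall above can be made concrete with a minimal sketch of the calculation that typical A/B testing calculators perform, using the standard two-proportion sample-size formula. The function name and defaults (5% significance level, 80% power) are illustrative assumptions, not taken from any specific tool.

```python
from math import ceil
from statistics import NormalDist

def ab_sample_size(p_base, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed *per variant* to detect an absolute
    lift of `mde` over a baseline conversion rate `p_base`."""
    p_var = p_base + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_base + p_var) / 2                   # pooled rate under the null
    spread = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_beta * (p_base * (1 - p_base) + p_var * (1 - p_var)) ** 0.5)
    return ceil(spread ** 2 / mde ** 2)

# Detecting a lift from 5% to 6% conversion needs roughly 8,000+ visitors
# per variant at these defaults
print(ab_sample_size(0.05, 0.01))
```

Dividing the required sample size by your daily traffic per variant gives a realistic test duration; if that stretches into months, the change is probably too small to A/B test at your current scale.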

Example

Our product team has launched a referral program, and we’re eager to encourage our customers to participate. To achieve this, we’ve designed several informational banners. Our goal is to test these different options to identify which one most effectively motivates customers to share their referral link.

Surveys

Effective for gathering a large amount of data quickly from a broad audience. Surveys are useful for understanding general user attitudes, satisfaction, or collecting feedback on potential features. They’re versatile and can be used at any stage of development.

Surveys are less effective for complex issues that require understanding context, motivation or emotions, as they rely on self-reported data, which can be biased.

Surveys Pros & Cons

To navigate around frequent survey issues:

  1. Screen Participants with Qualifying Questions: Include initial screening or demographic questions to ensure that respondents are relevant to your research, enhancing the accuracy and relevance of your survey results.
  2. Craft Clear and Unbiased Questions: Ensure your survey questions are straightforward, avoiding leading language, double meanings, or technical jargon that might confuse or influence respondents. Crafting each question with clarity and neutrality will help gather more accurate and unbiased responses.
  3. Minimize Survey Length: Keeping surveys brief and engaging reduces the risk of participants rushing through or abandoning the survey. Break down long surveys into shorter, more focused sections if necessary, and consider using progress indicators to motivate participants to complete the survey.
  4. Pre-test Your Survey: Conduct a pilot test of your survey with a small segment of your target audience to identify and fix any confusing questions or technical issues before full deployment.
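As a rough guide to “how many responses are enough,” the classic margin-of-error formula for a reported proportion can be sketched as follows. The function name is illustrative; the worst-case assumption p = 0.5 maximizes the required sample, so the result is a conservative estimate.

```python
from math import ceil
from statistics import NormalDist

def survey_sample_size(margin=0.05, confidence=0.95, p=0.5):
    """Responses needed so a reported proportion lands within +/- margin
    of the true value at the given confidence level."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# The familiar "~385 responses" rule of thumb for +/-5% at 95% confidence
print(survey_sample_size())
```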

Example

Before deciding to add investment features, we sent out a quick survey to see how many of our users invest, what investment tools they know about, and which ones they use. Based on the data we gather, if there’s enough interest, we’ll dive deeper into the topic and continue working on designing this feature.

Analytics

Analytics serves as a vital tool in hypothesis testing rather than being a testing method on its own. It aids in gathering and analyzing data to evaluate product hypotheses, like the impact of adding a new feature on user engagement. By monitoring key metrics before and after changes, it helps determine if a hypothesis is supported or not. However, analytics relies on actual user data, limiting its use to post-launch scenarios rather than hypothetical or pre-launch evaluations.

Analytics Pros & Cons

Navigate common issues of using analytics for hypothesis testing with these strategies:

  1. Segment Your Data: Analyze different user segments separately to uncover nuanced behaviors and avoid misleading averages.
  2. Control External Variables: Account for external factors that could influence your data, such as seasonal effects or market changes, in your analysis.
  3. Ensure Statistical Significance: Validate that your findings are statistically significant to make decisions based on reliable data, considering sample size and appropriate statistical tests.
  4. Manage Data Overload: Prioritize data collection with clear objectives to avoid overwhelming decision-makers and to focus on actionable insights.
  5. Be Cautious of Bias and Misinterpretation: Be careful of biases that might affect how you see the data and remember that just because two things happen together doesn’t mean one causes the other. Trying to look at things from different viewpoints can help you avoid making wrong assumptions.
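Point 3 can be sketched with a pooled two-proportion z-test, one common way to check whether a before/after (or segment-vs-segment) difference in a conversion metric is statistically significant. The function name is illustrative, not from any specific analytics tool.

```python
from statistics import NormalDist

def two_proportion_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # rate under the null
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 4% vs. 6% conversion over 10,000 sessions each: clearly significant
print(two_proportion_pvalue(400, 10_000, 600, 10_000))
```

A p-value below your chosen threshold (commonly 0.05) suggests the metric moved for a real reason rather than random noise, though on its own it says nothing about what caused the change.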

Measure Against Objectives: Regularly evaluate the outcomes of your testing methods against your defined business objectives and user needs. Are the changes you’re making moving the needle in the right direction?

Conclusion

Selecting the right testing methods to validate your product hypotheses is a critical decision that hinges on the information you need and the resources available to you.

In the early stages, prioritize cost-effective, quick tests to navigate uncertainties efficiently. As your confidence increases, you can justify higher investments in more detailed testing methods. However, it’s crucial to be aware of each method’s limitations and common pitfalls to ensure the reliability of your data.

Remember, product development is an iterative process. Be prepared to adjust your approach based on what you learn along the way. Embracing flexibility and being ready to pivot based on feedback is key to refining your product and moving closer to success. The journey of validation is as much about learning and adapting as it is about confirming your initial hypotheses.
