Continuous Feedback with Lean Experiments

Chithra Ramadoss
FordLabs
May 18, 2021

Recently I read an article that linked cooking to human evolution. Biologists apparently agree that cooking could have had major effects on how the human body evolved; smaller teeth, for example, are linked to the softer texture of cooked food. Ancestral humans likely learned to control fire first and then started their journey toward creating all the cuisines we love today. Why am I saying all this? Because I think it is human nature to experiment, learn, and evolve.

What is applicable in nature is also applicable in product development. Most software development today happens within the Agile framework, but not all projects leverage Agile properly to deliver customer value. We seem to have adopted faster development cycles quite easily but left behind the continuous feedback loop, which is the core of Agile software development.

At FordLabs, we spend a lot of time understanding user problems. We use different data-driven approaches to collect user data and extract insights. Then, instead of spending all our effort developing a fully functional product, we often build small, iterative solutions to test their viability, desirability, and feasibility. Testing small solutions iteratively helps us incorporate a continuous feedback loop into our product lifecycle. We do this using Lean Startup and Lean Experiments.

FordLabs Methodology

Lean Experiments

Lean Startup and Lean Experiments are concepts championed by Eric Ries that many startups adopt to validate their product ideas. The approach recommends building small features and testing them with users to get feedback on the product’s viability, desirability, and feasibility. Using the build, measure, learn (BML) loop, the product team learns iteratively and is able to manage risk better.

The Lean approach revolves around answering two basic questions:

  • Should we build this product?
  • How can we increase our odds of success?

Below are the steps that we take to execute lean experiments.

Step 0: Discovery

We spend a lot of time conducting user research activities and discovering the problem space. My Data Driven Design blog post has more details about the different types of user research methods.

Step 1: Hypothesis

At the end of user research, once the problem space has been synthesized, we form the team’s hypothesis about the solution we believe will address the prioritized user problems.

Hypotheses are simple statements that can be validated or invalidated. An effective hypothesis is testable and measurable. Experiments reveal whether the hypothesis passed or failed and also indicate the solution’s effectiveness. They help convert our beliefs into validated learning.

A hypothesis can be written in multiple ways, but there are some common elements: users, an outcome, and an action/solution.

If we [assumption] for [persona] then [outcome].

We believe that the [persona] will [do this action/use this solution] for [this reason].

We believe that [creating this feature] for [these users] will achieve [this result].

For example,

How Might We (HMW) increase the number of Ford employees volunteering in the COVID-era virtual environment?

Hypothesis: We believe that [by providing role-based volunteering opportunities] for [Ford employees], we will see an increase in [volunteering sign-ups].
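
To make this concrete, here is a minimal sketch, in Python, of capturing a hypothesis as a structured, testable record. The class, field names, and example values are illustrative assumptions, not part of any FordLabs tooling:

    from dataclasses import dataclass

    # A hypothesis as a structured record: who we target, what we build,
    # and the measurable outcome we expect. All names here are hypothetical.
    @dataclass
    class Hypothesis:
        persona: str   # these users
        solution: str  # creating this feature
        outcome: str   # will achieve this result (must be measurable)

        def statement(self) -> str:
            return (f"We believe that {self.solution} for {self.persona} "
                    f"will achieve {self.outcome}.")

    volunteering = Hypothesis(
        persona="Ford employees",
        solution="providing role-based volunteering opportunities",
        outcome="an increase in volunteering sign-ups",
    )
    print(volunteering.statement())

Writing the hypothesis down in a form like this forces the outcome to be explicit, which makes the later pass/fail decision easier.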

Step 2: Minimum Viable Product

As the next step, we start building the Minimum Viable Product (MVP), the smallest possible product that allows us to validate the hypothesis. An MVP can be any solution that maximizes learning during the experiments with minimum effort. For example, a simple one-pager can be used to gauge user interest in a product before building the entire website.
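
As a sketch of what such a one-pager could look like, here is a minimal Flask app that pitches the product and records sign-ups so interest can be gauged before anything bigger is built. The routes, copy, and in-memory storage are assumptions for illustration only:

    from flask import Flask, request

    app = Flask(__name__)
    signups = []  # in-memory for a short experiment; a real MVP might persist this

    @app.route("/")
    def landing_page():
        # The entire "product": a pitch and a sign-up form.
        return (
            "<h1>Role-based volunteering for Ford employees</h1>"
            "<form action='/signup' method='post'>"
            "<input name='email' type='email' required>"
            "<button>Notify me</button></form>"
        )

    @app.route("/signup", methods=["POST"])
    def signup():
        # Each sign-up is one data point for the experiment.
        signups.append(request.form["email"])
        return "Thanks! We'll be in touch."

    if __name__ == "__main__":
        app.run()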

‘Minimum’ in MVP identifies the smallest set of features needed to generate learning. While developing a product, only the features that are essential to prove or disprove a hypothesis are included in the MVP. User feedback then drives any iterative product improvement.

A product is ‘viable’ if users find value in it. Running experiments with a minimal solution helps us learn whether users find the product useful and whether they are ready to invest more in it.

With the help of experiments, the product team is thus able to validate its product ideas while minimizing the resources spent building a robust product that may or may not succeed.

Step 3: Collect Metrics

The effectiveness of the solution can be measured using a number of quantitative methods, including surveys, usage analytics, etc.

For example, usage analytics from tools like Mouseflow or Matomo highlight how many users used the product in a given time period (per day, week, or month), how they behaved, what devices and browsers they used, and so on.
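
As an illustration, here is a minimal sketch of deriving one such metric, daily active users, from raw analytics events. The event shape is an assumption; tools like Mouseflow or Matomo expose similar data through their own exports and APIs:

    from collections import defaultdict
    from datetime import date

    # Raw events: one record per user interaction (shape is hypothetical).
    events = [
        {"user": "u1", "day": date(2021, 5, 1)},
        {"user": "u2", "day": date(2021, 5, 1)},
        {"user": "u1", "day": date(2021, 5, 2)},
    ]

    # Daily active users = number of distinct users seen each day.
    active_users = defaultdict(set)
    for event in events:
        active_users[event["day"]].add(event["user"])

    for day, users in sorted(active_users.items()):
        print(day, len(users))  # 2021-05-01 2 / 2021-05-02 1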

Metrics collected from such methods indicate what improvements are possible in the product. However, not all metrics are useful. Generally, there are two types of metrics: actionable metrics and vanity metrics.

Data that are indicative of user behaviors and patterns help teams make decisions; these are called actionable metrics. For example, an increase in sign-ups for a weekly newsletter may indicate product value. Similarly, a higher bounce rate on the homepage may hint that the website offers lower user value.

Vanity metrics, on the other hand, do not necessarily provide insights. They may look impressive but may not be useful for identifying improvements. For example, a pageview count showing that people visit a site may not mean much on its own. Without context, such as who is visiting the site and whether visits convert to sales, metrics like pageviews are not very useful.
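
The difference is easy to see in numbers. A rough sketch, with made-up figures, contrasting a vanity metric (raw pageviews) with an actionable one (conversion rate):

    pageviews = 10_000  # looks impressive, but says nothing by itself
    signups = 150       # visits that actually converted to newsletter sign-ups

    # The actionable metric ties visits to an outcome we care about.
    conversion_rate = signups / pageviews
    print(f"{conversion_rate:.1%}")  # 1.5% -- this, not the 10,000, drives decisions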

So, what next?

Analyzing the metrics helps teams decide whether to pass or fail the hypothesis. When the hypothesis passes, teams persevere and look for new feedback to iterate on the product.

Failing a hypothesis is also an important learning and is critical for the product. It is an opportunity to pivot to another solution.

Pivoting often leads to reviewing the hypothesis and going back to the drawing board. Experiments might indicate that the team is not focusing on the core user problem. This may warrant reviewing the outcomes of user research synthesis and identifying the right problem to solve. Once identified, the cycle continues with lean experiments.
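
A rough sketch of the pass/fail decision, with made-up baseline and experiment figures and an assumed threshold:

    baseline_signups = 50    # weekly sign-ups before the experiment
    experiment_signups = 48  # weekly sign-ups with the new feature
    min_lift = 0.20          # assumed: a 20% lift counts as a pass

    if experiment_signups >= baseline_signups * (1 + min_lift):
        print("Pass: persevere and iterate with fresh feedback")
    else:
        print("Fail: pivot -- revisit the hypothesis and the user problem")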
