Product Management: What I Learned About User Data, Research and Actionable Steps!

While researching product management disciplines and best practices to become a better PM and entrepreneur, I came across content from Laura Klein that got me hooked. This post collects lessons learned from her presentation on Quantitative vs. Qualitative Research.

User Data, Research and The Actionable Steps:

If you use data well you can build better products, but if you use data badly you can ruin them.

Quantitative data doesn't replace design or listening to users; it only gives us information about what we have already built.

Data can inform design by showing us what's working and what's not, and it can inform research by giving us feedback on product decisions.
Metrics help us understand whether our decisions helped or hurt user behavior. They tell us what we should be investigating…

Quantitative Data: WHAT (we should be investigating)
Qualitative Data: WHY (it happens)

For example, a checkout funnel (it's really more of a sieve), for software-as-a-service or e-commerce products.
The steps might be:
100% incoming (100 buyers)
Create account: -20% (-20 buyers, 80 remain)
Select plan: -10% (-8 buyers, 72 remain)
Payment info: -50% (-36 buyers, 36 remain)
Address: -33% (-12 buyers, 24 remain)
Confirm: -25% (-6 buyers, 18 remain)
Thanks & upsell = 18 buyers
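The funnel arithmetic above can be sketched in a few lines. The step names and per-step drop rates come from the example; the code itself is just an illustration.

```python
# Hypothetical checkout funnel: each step loses a fraction of the
# users who reached it. Rates and labels are from the example above.
steps = [
    ("Create account", 0.20),
    ("Select plan",    0.10),
    ("Payment info",   0.50),
    ("Address",        0.33),
    ("Confirm",        0.25),
]

remaining = 100
funnel = []
for name, drop_rate in steps:
    lost = round(remaining * drop_rate)   # buyers lost at this step
    remaining -= lost
    funnel.append((name, lost, remaining))
    print(f"{name}: lost {lost}, {remaining} remain")

# Of 100 users who entered with intent to buy, only 18 complete the purchase.
```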

Users had an intent to buy something: 100 came in and entered the checkout funnel (after we spent effort acquiring them, probably through Google Ads, Facebook, or referrals), then they started falling out, and only 18 purchased.

What happened?
Go back to the funnel and investigate the steps:
The step where we're losing the most people is Payment Info at -50% (-36 buyers). So we start asking ourselves: what are the reasons? Maybe the payment options aren't clear, maybe users don't feel secure, or maybe it's something else entirely.

Don't just make up reasons! If you really want to understand an inflection point, you need to do two things:
1. Find it. Quantitative research: the funnel numbers helped us find the inflection point.
2. Understand it: why is it happening? Qualitative research: interview users and run A/B tests.

Using Quan and Qual Together

One Way: Starting with Quan:

Step 1: Identify the biggest problem!
Use funnel metrics!
(Quan): To identify the biggest problem, look at the funnel metrics and find the largest hole in the usage flow.

Step 2: Understand why!
Observational usability testing!
(Qual): Watch people use your product and look for where they get stuck. Pay particular attention to the people who get stuck on the payment page.
Base your conclusions on observing 4–6 people (that's the right number to support this sort of observational user study).
You would come up with reasons. Some reasons that surfaced:
- Didn’t have payment info handy
- Confused by specific question/wording
- Looked for promo code (saw the promo code box, went to look for a code and never came back)

Step 3: Propose Solution!
Create solution hypotheses!
After you've observed the problems, you need to figure out: which of these problems, if solved, will actually fix your Payment Info -50% (-36 buyers) problem?

When we want to test whether a change was successful, we need to know what success looks like, and equally what failure looks like. List the possible negative consequences of the change as well as the positive ones, so that we can predict what's going to happen as we implement the solution.

We pick one of the reasons to test, the promo code:
Trigger: people looked for a promo code (they saw the box).
Solution: the easiest fix is to remove the promo code entirely.
- Solution hypothesis: if we remove the box, there will be an increase in conversion.
- What might go wrong? A decrease in acquisition or revenue (the promo code might exist to convert price-sensitive people, or to promote sales). That's why, when we change something, we keep an eye on other metrics our change might affect.
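One lightweight way to write such a hypothesis down is a small record that names the expected win alongside the "what might go wrong" metrics to watch. The field names and labels below are illustrative assumptions, not from the talk.

```python
# A minimal sketch of a written-down solution hypothesis.
# Field names and metric labels are made up for illustration.
hypothesis = {
    "change": "Remove the promo-code box from the payment page",
    "prediction": "Checkout conversion increases",
    "success_metric": "payment-step conversion rate",
    # the possible negative consequences we commit to watching:
    "guardrail_metrics": ["acquisition", "revenue per buyer"],
}

print(hypothesis["prediction"])
```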

Step 4: Learn & Iterate
Learn:
How would we know if we were right?
Run an A/B test of the checkout flow with and without the promo code!
This is a quantitative test: does simply removing the promo code fix things without hurting anything else?
We create two variants, one with the promo code and one without, and measure the funnel again.
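One common way to run that quantitative comparison is a two-proportion z-test on the conversion counts of the two variants. The sketch below uses hypothetical counts (not numbers from the article) for the with-promo (A) and without-promo (B) branches.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: A keeps the promo-code box, B removes it.
z, p = two_proportion_z(conv_a=300, n_a=2000, conv_b=360, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant if p < 0.05
```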

The quantitative test came out positive: we're losing fewer people and haven't seen much of a decrease in acquisition.

Now we need to run a qualitative test to check whether what's happening is happening for the reason we thought.

We keep testing, iterating and learning until we are happy with the results and until metrics look the way we want them to look!

Then when we’re done we can go back to the steps and identify a new problem, looking at our quantitative data again!

In brief, the steps are:
Step 1: Identify the biggest problem (use funnel metrics)
Step 2: Understand WHY (observational usability testing)
Step 3: Propose solutions (create solution hypotheses)
Step 4: Learn and iterate

Another Way: Starting with Qual:

Step 1: Identify the biggest problem
(Qual) Interview current users:
Call customers to discuss what issues they're having with your product, or what alternatives they're currently using.
Another way to generate hypotheses is to call former users and find out why they're former: what caused them to leave. You'll learn a tremendous amount about how to improve and why people left.

Step 2: Understand WHAT
Since we started with the why, now we have to understand the what.
(Quan) Study relevant metrics: the idea is to take what we learned in the qualitative research and predict how fixing it might affect the metrics.
Why bother predicting which metrics will change? Because you want to know: if I fix this problem for the user, which metric do I expect to improve?
For example: if we think fixing a problem will improve retention, that's great; but if we only have 3 users, retention is probably secondary and the acquisition/conversion funnel comes first.

To identify which part to examine, you can use Dave McClure's AARRR framework: Acquisition, Activation, Retention, Revenue, Referral.
If the fix you're considering is likely to affect revenue, check revenue metrics.
If it's something you saw among power users, it's very likely to affect retention.
If it's a problem you saw with brand-new users during onboarding, that's going to affect activation and engagement.
Or if you want to implement comments, for example: don't focus on how many people comment, but on whether the people who comment buy more stuff.
Think in terms of: if I count or measure this, will it help with a specific goal, such as an increase in purchases or in time spent in the app (engagement and retention)? What happens that matters and moves the needle?
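The AARRR mapping above can be sketched as a simple lookup from where a problem surfaced to which metric family to watch. The problem categories and the fallback choice are illustrative assumptions, not from the talk.

```python
# Illustrative mapping from problem area to the AARRR metric to watch.
aarrr = {
    "pricing/checkout issue": "Revenue",
    "power-user friction":    "Retention",
    "onboarding confusion":   "Activation",
    "share feature broken":   "Referral",
    "landing page unclear":   "Acquisition",
}

def metric_to_watch(problem_area):
    # Fall back to Acquisition (an assumption) when the area is unknown.
    return aarrr.get(problem_area, "Acquisition")

print(metric_to_watch("power-user friction"))  # Retention
```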

Step 3: Propose solutions
Create solution hypotheses: what you think will go right and what you think will go wrong.
Step 4: Learn and iterate.

The process works not only for products but for many other functions, like services, marketing, and the sales process: identify the steps where people fall out of your pipeline or funnel, and improve them.

For sales pipeline:

Step 1: Identify the biggest problem
Look at your sales pipeline
Step 2: Understand WHY
Interview people who said no
Step 3: Propose solutions
Create solution hypotheses
Step 4: Learn and Iterate

General practice:

When you're trying to identify features: people are not good at predicting the future, but they are good at telling stories from the past and describing the problems they have or had. Make sure to optimize your questions with that in mind.

One thing to ask is how they have attempted to solve the problem in the past. How are they solving it now? What products have they used to solve it?

Talk to enough people to find patterns in the problems… that's what you should design a solution for.

Statistics:

  • When you're doing an A/B test (a split test): if you get to about 300 people actually converted in each branch, then you're safe on statistical significance.
  • For a usability test, the number is 5 people. If you're not starting to see really strong patterns after 5, fix your recruiting, fix your personas, get new people, and do it again.
  • You want to be able to predict, from the next set of 5, what you're going to hear; keep testing with sets of 4–5 until the problem or behavior becomes predictable.
  • Write it down: what's your prediction? What do you think is going to happen? Confirm it with the data!
  • Focus groups: don't put 12 people in one room; interview 12 people separately. Observational testing will give you much better information, and focus groups are incredibly hard to run well. One-on-one interviews win!
  • Create a screener survey: it's a method of recruiting the right people for your studies. (Screener questions: keeping out the riffraff.)
    For example, say I want a left-handed dentist who lives in Boise. I write a survey to identify my testers:
    Q1: What do you do for a living? Dentist.
    Q2: With which hand do you write? Left.
    Q3: Where do you live? Boise.
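The screener logic can be sketched as a simple filter over survey responses. The field names and the response data below are made up to match the left-handed-Boise-dentist example.

```python
# Hypothetical screener responses; only exact matches on all three
# screener questions qualify as testers.
responses = [
    {"job": "Dentist", "hand": "Left",  "city": "Boise"},
    {"job": "Dentist", "hand": "Right", "city": "Boise"},
    {"job": "Plumber", "hand": "Left",  "city": "Boise"},
]

target = {"job": "Dentist", "hand": "Left", "city": "Boise"}
recruits = [r for r in responses if all(r[k] == v for k, v in target.items())]
print(len(recruits))  # 1
```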

Working on a product that doesn’t exist yet:

Understand the user journey: talk to a number of potential users to understand what the problem is.
Understand some key concepts: you can use quantitative research to make sure the educated guess is the right one.
In user-centered design methodology: do the needs finding, then design/wireframes, then prototyping.

What's missing (and what lean startup was designed to address) is some sort of evaluation test: you validated the problem, now you need to validate your solution idea before you build something.
You need to figure out the smallest possible thing you can build to validate whether your solution is headed in the right direction (you can run a concierge test or a Wizard of Oz test).
You have to validate that the solution direction you're going in is good. It's the MVP concept: the smallest thing you can build that your users can use as a solution.

Posted originally at helenapowell.com