
Practicing Product Analytics in a Market Full of Uncertainties

Gil Bouhnick
Published in Product School · 9 min read · Mar 23, 2019


Almost 4 years ago, I made a significant career shift from leading a large mobile B2B product to founding a B2C startup called Missbeez, a marketplace for lifestyle and beauty services on-demand. Two opposite poles.

Missbeez is a B2C product, but it’s also a marketplace with different types of users: customers on one side, and service providers from different verticals on the other. In this article, you’ll learn why we decided to collect data and make decisions based on real numbers rather than guesses, and how we did it.

Being a Data-Driven Company

From the early days of our startup, it was clear that we needed to invest a lot in our mobile analytics and BI infrastructure.

It was exactly as the cliché says: we moved fast, made a lot of experimental changes, and many of them led to changes in our users’ behavior.

It became pretty hard to predict the outcome of each change. We knew what we wanted to achieve, but we learned that the market doesn’t always react the way we expect.

More importantly, we learned that having a 2-sided marketplace made things more complicated because changes on one side of the marketplace could lead to unexpected reactions on the other side, creating a chain reaction.

Turning Data into Our Product Manager

Data became our passion.

We created and integrated a set of powerful monitoring tools: dashboards, business-friendly logs, and even a custom mobile app that gives a live snapshot of our field activities.

Next, we added a set of technical tools such as Crashlytics (to monitor issues) and server-side logs.

Then, we embedded some product and UI analytics tools such as MixPanel and Google Analytics, along with some marketing-related analytics such as AppsFlyer, Facebook Analytics, and others.

Strategic pieces of information were pulled back from each of these systems into our own system of record, to ensure we had only one “point of truth” and to make sure the important data was available not just for reporting purposes but also for our business algorithms and in-app logic. This approach is very important when so many systems are integrated with one another.
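
To make the single “point of truth” idea concrete, here is a minimal Python sketch of the pull-back step. The fetch_daily_summary function and the SQLite table are hypothetical placeholders; in a real setup each fetcher would wrap the export or reporting API of the corresponding tool (Mixpanel, AppsFlyer, and so on).

```python
import sqlite3
from datetime import date

def fetch_daily_summary(provider: str, day: date) -> dict:
    # Hypothetical fetcher: in practice this would call the provider's
    # export/reporting API and return the strategic metrics you care about.
    return {"installs": 0, "conversions": 0}

def sync_into_system_of_record(day: date, providers=("mixpanel", "appsflyer")):
    """Copy strategic metrics from every external tool into one internal table."""
    db = sqlite3.connect("system_of_record.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS daily_metrics ("
        "day TEXT, provider TEXT, metric TEXT, value REAL, "
        "PRIMARY KEY (day, provider, metric))"
    )
    for provider in providers:
        for metric, value in fetch_daily_summary(provider, day).items():
            db.execute(
                "INSERT OR REPLACE INTO daily_metrics VALUES (?, ?, ?, ?)",
                (day.isoformat(), provider, metric, value),
            )
    db.commit()
```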

We turned into a classic lean, data-driven startup: we established a culture of frequent shipping and it helped us navigate quickly through the uncertainties of our growing market, try out new things, measure, learn, and tune.

We collected a lot of data, created an impressive BI layer, and made decisions based on real numbers rather than guesses or pressure from our vocal minority.

Not a Sprint. Not a Marathon. Curling.

We acted like one of those weird-looking curling teams, where the curler throws the stone and the sweepers use their brooms to alter the path of the stone until it reaches the desired location. Instead of polishing the ice, we optimized our product, polished our onboarding funnel, and improved our retention rates.

The Problem

It took us more than a year to understand that the ship-measure-analyze practice was too simplistic and not sufficient for our product.

Throwing experiments in production is like throwing a paper airplane from a window.

You create the best paper airplane model you know and give it a throw.

As a thinking person, you analyze the course of the flight, and modify your airplane based on your analysis: you lift up the front, add a small curve to the wings, do whatever it takes to make it better.

You can repeat this experiment over and over again, but soon enough you’ll realize there’s no consistent correlation between your modifications and the performance of your airplane.

Soon enough you realize it’s not you, it’s the wind…

The wind is the market in which the product operates, and for many B2C products, the market is much stronger than most feature enhancements.

This may not be the case for every product, of course, but for us, it felt like market fluctuations and events impacted our customers’ behavior and messed up our experiments more than once:

  1. Changes to our user acquisition campaigns had a direct impact on our conversion rates. Good leads converted better, regardless of our in-app funnel optimization efforts.
  2. Seasonality effects: sunny days were flooded with strong activity on both sides of the marketplace, and that effect was stronger than many in-app promotions or monetization features.
  3. Marketing and PR activities often drove irrelevant traffic to the app, resulting in lower performance that was completely unrelated to the product itself.
  4. Configuration or technical hiccups: this one may sound a bit funny, but even a small bug could cause unexpected results (and not always bad ones).

And this list is just the tip of the iceberg.

Since our “experiments” were run serially, each one could suffer from a different kind of background noise, reducing the reliability of the results.

How Do You Neutralize Market Forces?

The simple answer is to create a sterile environment where you can compare apples to apples. This can be done by working with cohorts, segmentation, accurate attribution, and, most of all, A/B tests.

The only problem is, it’s not that simple…

Let’s dive deeper into the details, and please keep in mind this list is based on our own experience and might not fit every product:

Structure Your Data

The first and most important thing is to collect high-quality data: the kind of data that will allow you to apply smart filters and build accurate cohorts.

#1. Collect as Much Data as Possible

Forget about the database performance or efficiency considerations your tech team might raise. At the early stages of your product, data is king. Collect as much data as possible, because you cannot predict what will be required at later stages.

You’ll have plenty of time to worry about performance and optimization later.

#2. Build a Strong Cohort Infrastructure

At the end of the day, you will want to analyze the data based on cohorts: users “born” on a certain date, in a certain week or month, and so on.
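
As a minimal illustration, here is a small Python helper (hypothetical, not a description of our actual stack) that maps a user’s signup date to a daily, weekly, or monthly cohort key:

```python
from datetime import datetime

def cohort_key(signup_date: datetime, interval: str = "week") -> str:
    """Return the cohort a user belongs to, based on when they were 'born'."""
    if interval == "day":
        return signup_date.strftime("%Y-%m-%d")
    if interval == "week":
        year, week, _ = signup_date.isocalendar()
        return f"{year}-W{week:02d}"
    if interval == "month":
        return signup_date.strftime("%Y-%m")
    raise ValueError(f"unsupported interval: {interval}")

# Example: a user who signed up on March 23, 2019 falls into cohort "2019-W12".
print(cohort_key(datetime(2019, 3, 23)))
```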

#3. Store the Origin of Each User

Different users come from different sources and often behave differently.

Paid acquisition vs. organic users, Pinterest vs. Instagram vs. Twitter, television ads vs. a Facebook share.

Users need to be tagged with where they came from, using attribution techniques.
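
Here is one possible way to store that origin on the user record. The field names and the attribution payload keys below are illustrative, not a real provider’s schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserAcquisition:
    """Attribution fields stored on the user record at signup time."""
    user_id: str
    media_source: Optional[str] = None   # e.g. "instagram_ads", "organic"
    campaign: Optional[str] = None       # e.g. "spring_promo"
    channel: Optional[str] = None        # e.g. "paid", "organic", "referral"

def tag_user_origin(user_id: str, attribution_payload: dict) -> UserAcquisition:
    # attribution_payload is whatever your attribution tool reports on install;
    # the keys used here are made up for the example.
    return UserAcquisition(
        user_id=user_id,
        media_source=attribution_payload.get("media_source"),
        campaign=attribution_payload.get("campaign"),
        channel="organic" if attribution_payload.get("is_organic") else "paid",
    )
```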

#4. Track Important Actions Made by Each User

Significant milestones should become part of the data collected for each user.

If adding an email or selecting a favorite service is considered to be a significant action — mark it down.

Drop all your Boolean fields, because timing is too important to neglect. Replace your Booleans with dates so that, when the time comes, you will be able to tell when each action happened.
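
For example, a user record sketched this way keeps the “when” of every milestone while still making the Boolean view trivial to derive (the field names are just examples):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class UserMilestones:
    # Instead of Boolean flags (e.g. added_email: bool), store the timestamp of
    # each significant action; a None value simply means "hasn't happened yet".
    signed_up_at: datetime
    email_added_at: Optional[datetime] = None
    favorite_service_selected_at: Optional[datetime] = None
    first_order_at: Optional[datetime] = None

    @property
    def added_email(self) -> bool:
        # The Boolean view is still available whenever you need it.
        return self.email_added_at is not None
```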

#5. Take Care of User Segmentation

Divide your users into segments based on their characteristics: location, language, and behavior. Keeping track of their important actions will allow you to create segments based on both their characteristics and their activities.

For example, you can segment your users based on their stage in the conversion funnel (registered, engaged, triers, payers, repeat payers, addicted users, etc.).
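
Building on the timestamped milestones from the previous sketch, a funnel segment can be derived directly from what a user has already done. The stage names follow the example above, and the thresholds are made up for illustration:

```python
def funnel_stage(user: "UserMilestones", order_count: int) -> str:
    """Derive a behavioral segment from milestone timestamps and activity."""
    if order_count >= 5:          # illustrative threshold for "addicted"
        return "addicted"
    if order_count >= 2:
        return "repeat payer"
    if user.first_order_at is not None:
        return "payer"
    if user.favorite_service_selected_at is not None:
        return "engaged"
    return "registered"
```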

At the end of the day, you, as a product manager, need a way to get your insights on each group of users, based on when they joined, where they came from, what they did, and their overall status.

The five principles above enable that, but as I mentioned earlier, this is not always enough due to market trends and ongoing events.

Conduct Frequent Product Experiments

This is where the fun begins:

#6. Define a Time Interval

Define a time currency: the optimal time interval that will be used in your experiments and reports. It can be a week, a month, a quarter, whatever best suits your business.

#7. Make 1 Change per Time Interval

The market is going to create a lot of background noise anyway, so make sure not to add more: limit yourself to one (and only one) change per time interval; otherwise, it will be impossible to know which of the changes made the impact.

#8. Maintain a Detailed Captain’s Log

Log every product enhancement and every configuration change you make in your product. Don’t assume you know what’s important and what’s not — log everything including the ‘what’ and the ‘when’.

#9. Develop “On/Off” Switches

Frequent product changes and experiments may sometimes lead to mistakes or bad decisions. As a best practice, it’s great to have the option to turn off a new feature that is performing badly.
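
A remote kill switch can be as simple as a JSON file of flags that the app checks before exposing a new code path. The URL and flag name below are placeholders; a hosted remote-config service would play the same role:

```python
import json
import urllib.request

# Placeholder endpoint: in practice this would be your remote-config service
# or a small JSON file you control.
FLAGS_URL = "https://example.com/config/feature_flags.json"

def feature_enabled(flag_name: str, default: bool = False) -> bool:
    """Check a remotely hosted on/off switch before exposing a new feature."""
    try:
        with urllib.request.urlopen(FLAGS_URL, timeout=2) as response:
            flags = json.load(response)
        return bool(flags.get(flag_name, default))
    except Exception:
        # If the config can't be fetched, fall back to a safe default.
        return default

# Usage: wrap the risky code path so it can be switched off without a release.
if feature_enabled("new_booking_flow"):
    ...  # show the new flow
else:
    ...  # keep the old, proven flow
```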

Here Come the Big Guns

While all of the above will help you manage your product enhancements and analyze the results, you might still end up with mixed results due to market changes that are beyond your control. This is where A/B tests come to the rescue.

#10. Start with A/B Tests Yesterday

We tried to avoid it at the beginning and we failed. At the end of the day, A/B tests are the most efficient method to analyze your product enhancements while neutralizing market noise. You want to try something new, and the only way to know how it performed is by neutralizing seasonal or other market events. A/B tests allow that.

Some A/B options will require additional front-end development, some will require server-side development. Both options will need a mechanism that assigns the users to each group (group A, group B and sometimes group C). Note that you may need to balance each group with users from different segments (old vs. new, paying customers vs. non-payers).

This means some extra development effort for every feature you wish to split test.
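
The assignment mechanism itself doesn’t have to be complicated. Here is a minimal sketch of deterministic bucketing by hashing the user ID, so the same user always sees the same variant; balancing the groups across segments (old vs. new, payers vs. non-payers) would be an extra stratification step on top of this:

```python
import hashlib

def assign_group(user_id: str, experiment: str,
                 weights=(("A", 0.5), ("B", 0.5))) -> str:
    """Deterministically assign a user to an experiment group.

    The same user always lands in the same group for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    cumulative = 0.0
    for group, weight in weights:
        cumulative += weight
        if bucket <= cumulative:
            return group
    return weights[-1][0]

# Example: the assignment is stable across sessions and devices.
print(assign_group("user-42", "new_onboarding"))
```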

#11. Measure Your A/B Results Along with Your Cohorts and Segments

This is where you want to ensure you’re comparing apples to apples and all previous capabilities (user cohorts, segmentation, sources) are involved.

When comparing the A’s with the B’s, make sure to break the groups down into cohorts and segments.

For example, you may find that addicted users respond better to option A, while first-timers respond better to option B. Or, you may learn that users coming from Instagram ads outperformed users who came through viral campaigns.

This knowledge will allow you to make educated decisions, based on business goals and priorities.
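
In practice this kind of breakdown is just a group-by over the experiment data pulled from your system of record. A rough pandas sketch, with made-up column names and rows:

```python
import pandas as pd

# Hypothetical export: one row per user with their experiment group, cohort,
# segment, acquisition source, and whether they converted during the test.
users = pd.DataFrame([
    {"user_id": 1, "group": "A", "cohort": "2019-W12",
     "segment": "first-timer", "source": "instagram_ads", "converted": True},
    {"user_id": 2, "group": "B", "cohort": "2019-W12",
     "segment": "addicted", "source": "organic", "converted": False},
    # ...the real table would come from your system of record
])

# Conversion rate per group, broken down by segment and acquisition source.
breakdown = (
    users.groupby(["group", "segment", "source"])["converted"]
         .agg(conversion_rate="mean", users="count")
         .reset_index()
)
print(breakdown)
```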

Alternatives to A/B Tests

We all have constraints.

Sometimes there will not be a cost-effective way to develop A/B test functionality for a certain feature. In such cases, there are a few alternatives to a full-fledged A/B mechanism that might come in handy.

#12. Publish a New Feature on One Platform Only (iOS/Android)

This technique assumes that your Android and iOS users behave the same. If this assumption holds for your mobile app, then publishing a new feature on only one platform might provide the answers you are seeking.

Measure your users’ behavior over a selected time interval, and analyze the results of one platform vs. the other.

#13. Staged Rollout on Both Android and iOS

Google calls it a ‘staged rollout’; in Apple’s App Store it’s treated as a phased release (7 days by default, but it can be extended up to 30 days).

While it’s not a perfect A/B test alternative, it will help you split your users into two groups: those who have upgraded to the latest version of the app and those who are still using one of the old versions. This option assumes you have the ability to “mark” each user and action with the version number (see the next section).

#14. Tracking the App Version on Each and Every API Call

If you decide to implement one of the above alternatives, you’ll need a way to distinguish between users who used the new functionality vs. the old one.

As a best practice (and for backward-compatibility support), I recommend including the device type, the app version, and the user ID in each API call.

This will allow you to tag the data with the actual product release and to measure different options (A/B and more) pretty easily.
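
The mobile client would do this in Swift or Kotlin, but the principle fits in a few lines of any language. A Python sketch with made-up header names:

```python
import urllib.request

APP_VERSION = "3.2.1"      # injected at build time in a real app
DEVICE_TYPE = "android"    # or "ios"

def api_request(url: str, user_id: str) -> bytes:
    """Attach device type, app version, and user ID to every API call so the
    server can tag each action with the exact product release."""
    request = urllib.request.Request(url, headers={
        "X-App-Version": APP_VERSION,   # illustrative header names
        "X-Device-Type": DEVICE_TYPE,
        "X-User-Id": user_id,
    })
    with urllib.request.urlopen(request) as response:
        return response.read()
```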

Originally published at www.productschool.com on March 23, 2019.

Written by Gil Bouhnick

CoFounder and CTO at Missbeez. Playing at the intersection of technology, design and users. Creating products for 20 years. Owner of www.mobilespoon.net.
