Fast & slow metrics: Conversion Rate

Carlos Oliveira
Building Fast and Slow
7 min read · Apr 16, 2020

This is part II of a series from Carlos Oliveira at Building Fast and Slow on how to think about your key metrics when building consumer products. In this second post, we look at one of the primary indicators companies use to increase Average Revenue Per User, build audiences and grow orders, and the one with a whole field of optimization dedicated to it: Conversion Rate (CR).

Talking about Conversion Rate

The basic formula is dead simple:

Conversion Rate = Completions / Users

So if you run a commerce operation and your ultimate completion is an order, say you had 1000 users and those users made 10 orders, your conversion rate is 1%. Easy, right?

I agree. It’s easy to monitor and easy to compare, but over time, as you grow, comparing all your users against all your orders becomes unhelpful when it comes to making product decisions.

Breaking it down

The most common and useful way to break Conversion Rate down is to use time. Say you’re looking at users who convert within the first day you’ve seen them on the website. We’re now analyzing something like:

Conversion Rate (1d) = Completions (1d) / Users (1d)
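To make this concrete, here’s a minimal Python sketch of both versions of the formula, assuming you have a first-seen timestamp per user and a list of orders with a user ID and timestamp. The data and field names are purely illustrative, and for simplicity I’m using all users as the denominator rather than restricting to users who have had a full day in which to convert:

```python
from datetime import datetime, timedelta

# Hypothetical data: when we first saw each user, and who ordered when.
first_seen = {
    "u1": datetime(2020, 4, 1, 10, 0),
    "u2": datetime(2020, 4, 1, 12, 0),
    "u3": datetime(2020, 4, 2, 9, 0),
}
orders = [
    ("u1", datetime(2020, 4, 1, 10, 30)),  # converted within the first day
    ("u3", datetime(2020, 4, 5, 18, 0)),   # converted, but only after 3 days
]

# Overall CR: all completions over all users.
overall_cr = len(orders) / len(first_seen)

# 1-day CR: only completions within a day of first seeing the user.
window = timedelta(days=1)
completions_1d = sum(1 for user, ts in orders if ts - first_seen[user] <= window)
cr_1d = completions_1d / len(first_seen)

print(f"Overall CR: {overall_cr:.0%}, 1-day CR: {cr_1d:.0%}")
```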

But… sessions

If you’ve ever used Google Analytics (GA), for example, you’re now wondering how this is comparable to their definition. In GA, conversion rate is calculated using completions and sessions.

Conversion Rate (GA) = Completions / Sessions

A session in GA is a variable time interval that begins when the user first arrives at the site and expires 30 minutes after their last interaction. One obvious consequence is that one user can have multiple sessions.

The point of the session definition is to try to capture a single visit to your website, from the moment the user arrives until they’ve left. If within 1000 sessions there have been 10 orders, your conversion rate is 1%.
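If you’re computing this yourself from raw events rather than reading it out of GA, the core idea is to split each user’s activity into sessions using a 30-minute inactivity timeout. The sketch below is not GA’s actual implementation, just a simplified Python illustration over hypothetical clickstream data:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

# Hypothetical clickstream: (user_id, timestamp, is_purchase)
events = [
    ("u1", datetime(2020, 4, 1, 10, 0), False),
    ("u1", datetime(2020, 4, 1, 10, 10), True),   # same session as the event above
    ("u1", datetime(2020, 4, 1, 18, 0), False),   # gap > 30 min, so a new session
    ("u2", datetime(2020, 4, 1, 11, 0), False),
]

def session_conversion_rate(events):
    """Split each user's events into sessions on a 30-minute inactivity gap,
    then count how many sessions contained at least one purchase."""
    sessions = 0
    converted_sessions = 0
    last_seen = {}       # user -> timestamp of their previous event
    session_bought = {}  # user -> has the current session converted already?

    for user, ts, is_purchase in sorted(events, key=lambda e: (e[0], e[1])):
        if user not in last_seen or ts - last_seen[user] > SESSION_TIMEOUT:
            sessions += 1
            session_bought[user] = False
        if is_purchase and not session_bought[user]:
            converted_sessions += 1
            session_bought[user] = True
        last_seen[user] = ts
    return converted_sessions / sessions

print(f"Session CR: {session_conversion_rate(events):.0%}")  # completions / sessions
```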

For the standard definition, GA doesn’t actually care if those sessions came from 1 or 1000 users. If your whole 10-member family is rooting for you and visiting your website 10 times a day for 10 days, that can actually account for your 1000 sessions. If your mom bought one item a day to boost morale, that could account for your 10 orders. Google only cares whether those visits (or sessions) turned into a purchase or not.

GA does tell you how many users were behind your session count, and it allows you to change your definition, so it’s worth keeping an eye on both.

That definition, however, as you’ve probably gathered by now, changes everything. If you look at your 10-day conversion rate, it would vary between:

CR = 100% (10 orders / 10 users)

CR = 1% (10 orders / 1000 sessions)

Which, if you want to make actionable decisions, might actually change everything.

Wait, there’s more. The question you were probably trying to ask by looking at your 10-day conversion rate, before you started drilling down, was “how many users have ordered something from me?”

That’s really 10%, right? All the orders came from your mom, and only 10 family members actually account for all the visits!

CR = Unique purchasers / Users

And that’s before we start worrying about double-counting users (if they’ve visited on different devices, for example, and we can’t figure out that that’s actually the same person).
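To put numbers on the family scenario, here’s the same toy data read three different ways (this assumes we can correctly tie all 1000 sessions back to the same 10 people, which, as noted above, is not a given):

```python
# Toy numbers from the family example above.
users = 10        # 10 family members
sessions = 1000   # 10 visits a day, each, for 10 days
orders = 10       # mom buys once a day
purchasers = 1    # only mom ever ordered anything

orders_per_user = orders / users   # 10 / 10   -> 100%
session_cr = orders / sessions     # 10 / 1000 -> 1%
buyer_share = purchasers / users   # 1 / 10    -> 10%

print(f"Orders / users:        {orders_per_user:.0%}")
print(f"Orders / sessions:     {session_cr:.0%}")
print(f"Users with >= 1 order: {buyer_share:.0%}")
```

Same ten days, same site, and depending on the denominator your “conversion rate” reads as 100%, 1% or 10%.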

So what SHOULD we care about?

Whatever way of measuring CR you choose, you want it to be actionable: it should give you a way to measure progress through time and to make useful comparisons between groups.

So let’s look at how different ways of looking at the metric can affect the decisions you make with it.

Imagine that applying different timeframes makes your conversion graph look something like the below:

CR over longer and longer timespans for the same user groups

With a chart like the above, it may be relatively easy for you to identify a website/app change that affects your 30-min CR (a 10% improvement in session CR would take you from 5% to 5.5%).

That might barely, if at all, register on your 7-day or 30-day CR, since the improvement might either be offset by negative effects over that period, or simply be too small to show up at all on that longer timescale and larger user base.

Sometimes, you might even see the opposite effect, with that specific change decelerating the CR progression curve and causing it to stall sooner since there was an unforeseen (negative) effect downstream.

Here are a couple of examples.

Example 1:

Session-1 (30min) CR increases by 10%

  • This happened because you’re offering a 30% coupon for users who buy on their first visit, which led to

a decrease in Session-2+ purchases

  • Which happened because word got around about the campaign and new users flooded your site with impulse purchases, which then led to

30-day CR going down

  • Which in turn was because many of those customers who didn’t end up buying in their first session actually returned to the website less often than your typical new customer. To add to that, your returning customers felt disappointed they were excluded from the promotional cohort and ended up purchasing less.
  • In this scenario, to measure the net success of the campaign, you’d need to compare and contrast behavior across both cohorts (new vs returning) and understand whether the end result was a net positive.
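One way to do that comparison is to weigh each cohort’s change in conversions by its size, instead of looking at either CR in isolation. The numbers below are made up purely for illustration:

```python
# Hypothetical cohort sizes and 30-day CRs before/after the first-visit coupon.
cohorts = {
    #             (users, cr_before, cr_after)
    "new":        ( 5000,     0.040,    0.046),  # the session-1 coupon helped here
    "returning":  (20000,     0.080,    0.074),  # but returning users bought less
}

net_extra_orders = 0.0
for name, (users, cr_before, cr_after) in cohorts.items():
    delta = users * (cr_after - cr_before)  # extra (or lost) orders vs. baseline
    net_extra_orders += delta
    print(f"{name:>9}: {delta:+.0f} orders")

print(f"Net effect: {net_extra_orders:+.0f} orders")
# A per-cohort CR win can still be a net loss once the larger cohort's drop is weighed in.
```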

Example 2:

7d CR up

You’ve launched an aggressive basket abandonment campaign that targets users 3 days after they’ve left the website with something in their basket. This led to some of those customers coming back and purchasing on their 3rd day, increasing conversion rates on the 7-day timespan, but it also led to

fewer returning visitors in weeks 2–4

What you’ve noticed is that there is now a reduction in visitors coming to the website in weeks 2–4. A lot of people were annoyed by your insistent emails and have either unsubscribed or just not visited the website at all, let alone your basket page. This led to

30d CR going down

As the conversion rate improvement from your email click-through cohort was completely offset by those who visited less and unsubscribed from your communications.

There are multiple causal reasons for phenomena like these (and others) to occur, so it’s important not to take the numbers at face value and to dig deeper into users’ behavior, to understand which behaviors we have introduced through this change as well as which ones we were actually trying to incentivize.

It is also important to have an evaluation criterion that reflects not just what could happen through a positive result, but also what types of scenarios could occur if the experiment went wrong (and to choose monitoring metrics that account for that).

Heuristics for decision-making

Your brain will be primed to start thinking about this as you develop a sense for the types of results experiments may produce, but there are a few questions/heuristics that might come in handy as precautionary measures when trying to move the needle on CR:

  1. Is CR the ultimate metric we want to affect? (i.e. is this a funnel efficiency problem)
  2. What secondary metrics can we measure that would let us assess the trade-offs of our decision? (7-day retention, % order returns/complaints per order, visit frequency, % users with a 2nd order)
  3. What is our typical purchase window and how do we capture the behavior we’re looking to drive? (e.g. do our target users shop once a week or more, do they browse a lot before purchasing and only buy a couple of times a year, is there a bimodal distribution where some purchase on a whim and others take a long time to decide? — this might change our key metric time window as well as our secondaries/monitoring; see the sketch below)
Two distinct conversion behaviors. One where customers typically complete their task within the first three days of initiating their shopping, with a long tail of longer-horizon purchases (7 and 30d), and a second where purchases either happen on a whim (within their first session or their first day of shopping) or wait a long time to make their purchase decisions (7d). Selecting different conversion indicators as your key metric could yield significantly different results for your optimization efforts.
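On question 3 specifically, a simple way to ground the choice of window is to look at the distribution of time-to-purchase and check what share of eventual conversions each candidate window would actually capture. A rough sketch, with invented numbers, assuming you can compute the days between first visit and first order for each converting user:

```python
# Hypothetical days from first visit to first order, one value per converting user.
days_to_purchase = [0, 0, 0, 1, 1, 2, 3, 3, 7, 12, 25, 28]

for window in (1, 3, 7, 30):
    captured = sum(1 for d in days_to_purchase if d <= window)
    share = captured / len(days_to_purchase)
    print(f"{window:>2}-day window captures {share:.0%} of eventual conversions")
```

If most conversions land within the first few days, you probably don’t need a 30-day key metric; a clearly bimodal distribution is a hint you may need more than one.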

Almost done: Don’t forget to define what a user is

Finally, this may look like a no-brainer, a non-issue, but more often than not it isn’t. How are you tracking your user count?

If it’s a web app, do you keep an ID that’s set on the user’s cookies on their first visit? Are you double counting mobile web and desktop users?

If you do, when do they expire? How and when are they set?

Do you use that ID to allocate them to your experiments?

When the user logs in and you can identify them, do you reconcile and backfill your data or is there a chance you’re double-counting in places?
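That reconciliation step is worth sketching. One common, simplified approach is to keep a mapping from anonymous cookie IDs to the logged-in user ID and collapse them before counting people; the structure below is purely illustrative and not any particular analytics tool’s data model:

```python
# Visits keyed by anonymous cookie ID; some later turn out to be the same person.
visits_by_cookie = {
    "cookie_a": 12,  # desktop browser
    "cookie_b": 3,   # mobile web
    "cookie_c": 5,   # someone who never logged in
}

# Learned at login time: which cookies belong to which known user.
cookie_to_user = {
    "cookie_a": "user_42",
    "cookie_b": "user_42",  # same person on another device
}

# Collapse cookies into people before computing any per-user metric.
visits_per_person = {}
for cookie, visits in visits_by_cookie.items():
    person = cookie_to_user.get(cookie, cookie)  # fall back to the cookie itself
    visits_per_person[person] = visits_per_person.get(person, 0) + visits

print(visits_per_person)  # {'user_42': 15, 'cookie_c': 5} -> 2 people, not 3
```

Whether you also backfill historical events under the merged ID, and whether experiment allocation follows the cookie or the person, are exactly the kinds of decisions that quietly change your denominator.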

This may sound basic, but there is a lot of confusion around what a user is, how to handle corner cases, and how that affects test samples and experimental groups. Knowing and understanding how it affects your comparisons and your analysis is key to making good decisions and optimizing your conversion rate.

That’s it! If you want to know more about key product metrics, Part I is here and you can read more about product management techniques on my profile.


Carlos Oliveira
Building Fast and Slow

Product Manager building something new. Previously building stuff at Skyscanner, Farfetch. Thinks he can make people’s lives suck a little less.