20 Rules for High-Metabolism Conversion Optimization Teams

Howard Yeh
HowardYeh.com
May 2, 2017
Our internal conversion optimization practice focuses on constant motion and continuous testing.

Conversion rate optimization (“CRO”) is an important component of what we do at HealthCare.com. CRO is one of the key legs that supports our user acquisition economics. It increases our overall efficiency. And it centers us around our key performance metrics. Below, I’ve shared a few principles we’ve applied internally that might help you get CRO off the ground at your company.

We use Optimizely as our core platform, although the platform itself isn’t the key determinant. What matters most when building a CRO practice is creating a data-driven, experiment-driven culture and staffing it with the right mix of design, analysis, development, and product management functions (whether that’s one person or several).

Conversion rate optimization has been an important part of several companies I’ve been part of or founded, and I believe it has been essential to their success. At HealthCare.com, I’ve also been fortunate to have as an advisor and investor Arthur Kosten, who built and managed one of the most disciplined and efficient in-house conversion optimization practices in the world for one of the largest (and arguably best-run) ecommerce businesses, Booking.com. Their practice emphasizes speed, with continuous releases from an optimization team that numbers in the hundreds, is composed of multi-disciplinary teams of designers, product managers, and data analysts, and is structured to focus on different parts of the conversion funnel. The scale of their practice is orders of magnitude beyond what we, and most companies for that matter, are capable of. However, the mentality still holds. Arthur talks about building your metabolism and getting into the rhythm of running experiments all the time; they aim to be in constant motion. He also talks about empowering team members to make decisions and make mistakes, as long as those mistakes are contained, the potential wins are high, and the mistakes can be reverted quickly.

Combining my own experience, the master class I’ve gotten from my advisors, and the wealth of great content available online, here’s a view of our conversion optimization culture, in the hope that it might help you build yours.

Conversion Optimization Rules

  1. Keep the pace of new experiments going. We aim to be in constant motion.
  2. Before an experiment gets queued, know what metrics you are measuring success against. The default metric should be overall monetization, but specific experiments might target specific metrics.
  3. Experiments must have a hypothesis and a rationale for that hypothesis. Let’s not throw darts into the dark.
  4. Quicker-to-execute experiments get prioritized higher. It helps keep us in motion.
  5. Take a systematic approach to optimize the different steps of our different flows. Every step in every flow is an opportunity to experiment.
  6. If something is broken or obviously inefficient in our flow, use judgment on whether to run an experiment. If the experiment can be run quickly, run it to track early results before making a permanent change.
  7. Not every experiment will yield a winner. In fact, fewer than 50% of experiments will end up with a statistically significant winner (and one CRO expert shared with me that their success rate is 1 out of 6, and they are OK with that). For a concrete sense of what “statistically significant” means here, see the significance-check sketch after this list.
  8. Every experiment is an opportunity to learn. Even an experiment where the control version wins is still a data point. We learn even when we don’t win, and we give ourselves a better chance to win with a larger number of thoughtful experiments.
  9. If the result of an experiment is not statistically significant, use your judgment to pick the best version and move on to the next test. Don’t let an inconclusive result get in the way of starting the next experiment. Remember the rule above: constant motion.
  10. Think about the perspective of the user. What are they expecting to see, and what can we change in our flows to match those expectations? Work backwards all the way to the acquisition channel and where the user started their journey (even before landing on your site).
  11. In weighing MESSAGING vs. DESIGN, consider that experiments varying copy (i.e., text) are easier to execute than experiments varying design. (see Claire’s blog post from Experiment Engine.)
  12. Maintain symmetric messaging within the flow, even with experiments. Consistency matters as we are taking a user from point A to point B.
  13. Make sure we have the right data to run an experiment. If we don’t, figure out how we get it.
  14. Find ways to run concurrent experiments that do not conflict with each other. (For example, segment via Audiences; a simple bucketing sketch after this list illustrates the idea.)
  15. Use whatever data is available (e.g., Google Analytics or other sources) to research potential pain points in our user flows. This will help us come up with experiments.
  16. Experiment ideas don’t need to originate from the CRO team. (Although it’ll be the CRO team’s job to prioritize and schedule them.)
  17. Build your intuition. While it’s sexy to say that we’re 100% data-driven, recognize that intuition still matters a whole lot, particularly in conceiving experiments, prioritizing experiments, reading inconclusive results, and planning follow-on experiments. Learn from others’ learnings, build your own first-hand knowledge base, log it, and apply it.
  18. When considering potential improvements, borrow heavily from other sites that have great conversion funnels. (This is particularly true for sites starting out.)
  19. Prioritize experiments on higher-volume pages. Related point: it is OK for us to extrapolate learnings from an experiment on a high-volume page and apply them to a lower-volume page. Remember, we don’t hold ourselves to the same standards as scientists running a lab experiment. In practice, we’re trying to generate and accelerate wins.
  20. Balance getting smaller wins (2–5%) with opportunities to get 10%+ wins. We should be going for both, but we shouldn’t spend a lot of technical resources on 2–5% wins. Those should be quick-fire experiments, so we can save technical resources for the potential 10%+ wins. (The sample-size sketch after this list shows why small lifts are so expensive to verify.)
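To make rules 7 and 9 concrete, here’s a minimal sketch of the kind of significance check behind calling a winner, using a plain two-proportion z-test in Python. The visitor and conversion counts are hypothetical, and in practice your experimentation platform (Optimizely, in our case) does this math for you; the point is just to show what a statistically significant winner, or an inconclusive result, looks like.

```python
# Minimal two-proportion z-test: is the variant's lift statistically significant?
# The visitor and conversion counts below are made up for illustration; a platform
# like Optimizely runs this kind of math for you.
from math import erf, sqrt

def z_test(conv_a, n_a, conv_b, n_b):
    """Relative lift and two-sided p-value for variant B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))     # two-sided normal tail
    return (p_b - p_a) / p_a, p_value

lift, p = z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"relative lift: {lift:.1%}, p-value: {p:.3f}")
# Here the observed lift is about 12.5% but p comes out just above 0.05 -- an
# inconclusive result, which is exactly the rule 9 situation: pick a version and move on.
```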
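On rule 14, Optimizely’s Audiences feature handles segmentation for us, but here’s a hedged sketch (not Optimizely’s implementation, and the experiment names are hypothetical) of the underlying idea: deterministically hash each visitor into exactly one experiment “lane” so that concurrent tests on the same flow never overlap for a given user.

```python
# Hedged sketch of mutually exclusive experiment lanes via deterministic hashing.
# This is NOT how Optimizely implements Audiences -- it only illustrates how
# concurrent experiments can be kept from conflicting for any single visitor.
import hashlib

EXPERIMENTS = ["headline_test", "form_layout_test", "cta_color_test"]  # hypothetical names

def assign_lane(visitor_id: str, experiments=EXPERIMENTS) -> str:
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    lane = int(digest, 16) % len(experiments)   # stable: same visitor, same lane
    return experiments[lane]

print(assign_lane("visitor-123"))  # always the same experiment for this visitor
```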
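And on rule 20, a back-of-the-envelope sample-size calculation shows why small lifts are expensive to verify. Using the common rule of thumb of roughly 16 · p(1−p) / δ² visitors per variation (about 80% power at a 5% significance level), and assuming a hypothetical 5% baseline conversion rate, detecting a 2% relative lift takes roughly 25x the traffic of detecting a 10% lift.

```python
# Back-of-the-envelope sample size per variation to detect a relative lift,
# using the rule of thumb n ~= 16 * p * (1 - p) / delta^2 (~80% power, 5% alpha).
# The 5% baseline conversion rate is hypothetical, not an actual HealthCare.com figure.
def sample_size_per_arm(baseline_rate: float, relative_lift: float) -> float:
    delta = baseline_rate * relative_lift          # absolute difference to detect
    return 16 * baseline_rate * (1 - baseline_rate) / delta ** 2

baseline = 0.05
for lift in (0.02, 0.05, 0.10):
    print(f"{lift:.0%} relative lift -> ~{sample_size_per_arm(baseline, lift):,.0f} visitors per variation")
# Roughly 760,000 visitors per variation for a 2% lift, 122,000 for 5%, and 30,000 for 10%.
```

The exact figures depend on your baseline and test design, but the quadratic relationship is the point: halving the lift you want to detect quadruples the traffic you need, which is why the small wins should come from cheap, quick-fire experiments.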

--

Howard Yeh
HowardYeh.com

CEO/co-founder of HealthCare.com. 2x entrepreneur. 2x baby daddy. Husband. New Yorker. Startup junkie. Former VC. Former investment banker.