The Optimizer — ep.1 — series intro


Here’s what The Optimizer series is intended to be: a series of short case studies showcasing real optimization dilemmas and their solutions, drawn from real-life cases. Having done quite a few of these over the past 15 years, and having been asked to talk about them on various platforms, I chose to share my experience here. I’m hoping that PMs and entrepreneurs will find something to relate to and implement on their own ‘turf’.

For obvious reasons I won’t be sharing the clients’ names, and the data might be altered a bit so as not to reveal sensitive information. But everything is real. Nothing made up.

When it comes to optimization, mistakes are pretty consistent. Actually, the solutions are as well.

Having gone through quite a few optimization processes with organizations in various fields, I find that many dilemmas repeat themselves in very similar forms across markets. Most solutions are relevant in many different domains. Thus, there’s a good chance the optimizations I did with my clients are great cases for you to learn from.

As always — strategy first

A few words about strategy and my philosophy on optimization and testing. I believe that each product can only be in one of two stages of life: discovery or optimization.

  • Discovery — Product discovery is all about finding product-market fit. I’ll be very blunt — if you’re still in this process, there’s no point in optimizing the product’s features and elements. You are wasting your time and effort. At this stage you must think in multipliers (changes that would make your product X times better — double-digit percentage improvements simply aren’t enough). Optimizing onboarding/registration even by an amazing 50%, pre-PMF, is still not the impact you need.
    There’s no point in optimizing* pre-PMF!
  • Optimization — Once you’re past PMF and you’re absolutely positive there’s market demand for whatever value you’re providing, it’s time to be more efficient by maximizing the user’s value and the product’s key metric. Whether it’s engagement, retention or monetization, double-digit percentage changes can be worth a lot in revenue and/or valuation.

* You can and should optimize your testing, to validate or discard your assumptions much more quickly and efficiently (which is very important on its own). But that differs from optimizing a product.

I believe that although optimization is built around tactical ‘plays’, the guiding strategy is crucial. When it comes to optimization, experience (and many failures) has shown me that intuition is more often misleading than helpful.

We think we know what our user thinks/wants, but in reality we don’t — at least not enough

The moment we accept that, and don’t try to force or prove our intuition, we find that a strategic, ‘cold’ approach to testing works better 9 times out of 10. This means that many times (not all) we don’t need to make assumptions, but rather to choose the range of our test scenarios.

For example: I can assume that a low-priced subscription would work best for my product, but that should not hold me back from testing a $50 annual plan as part of a range of 10 different pricing models.
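To make this concrete, here is a minimal sketch in Python of how a range of pricing variants could be assigned. The price points, bucket count and function name are invented placeholders for illustration, not figures from any actual client project:

```python
import hashlib

# Hypothetical annual price points to test: a deliberately wide range,
# including options our intuition might rule out.
PRICING_VARIANTS = [30, 40, 50, 60, 75, 90, 110, 130, 160, 200]

def assign_pricing_variant(user_id: str) -> int:
    """Deterministically bucket a user into one of the price variants.

    Hashing the user ID keeps the assignment stable across sessions
    without storing any extra state.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return PRICING_VARIANTS[int(digest, 16) % len(PRICING_VARIANTS)]

# The same user always sees the same annual price.
print(assign_pricing_variant("user-1234"))
```

The mechanics don’t matter much; what matters is that the test covers the whole range, including the plans intuition would have ruled out.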

Strategy also plays a crucial role in choosing what to test next, and more importantly, what not to. We need to view our product, in its optimization phase, as a testing lab. When we do so, our testing strategy and priorities should rely not on the expected results, but rather on the expected impact of the actions we will take based on those test results.

Instead of a roadmap I suggest writing a test-plan, which basically lays out the tests we want to run, prioritized by their impact value. The test-plan (like a roadmap, BTW) should not be rigid; it should allow adjustments and changes. If a certain test raises a new question that might have more potential impact than the next test in line, then it should take precedence.
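For illustration, a test-plan can be as simple as a ranked list. Here’s a minimal sketch in Python; the tests, impact estimates and the impact-per-effort score are invented placeholders, one possible prioritization rule rather than a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class PlannedTest:
    name: str
    expected_impact: float  # estimated lift on the key metric (0.10 = +10%)
    effort_weeks: float     # rough cost of designing and running the test

    @property
    def priority(self) -> float:
        # Impact per unit of effort; a real plan might also weigh confidence.
        return self.expected_impact / self.effort_weeks

test_plan = [
    PlannedTest("Pricing range (10 annual plans)", 0.25, 3),
    PlannedTest("Onboarding copy rewrite", 0.05, 1),
    PlannedTest("Paywall placement", 0.15, 2),
]

# The plan is just a re-sortable list: when a finished test raises a new,
# higher-impact question, add it and re-rank, instead of following a rigid roadmap.
for test in sorted(test_plan, key=lambda t: t.priority, reverse=True):
    print(f"{test.priority:.2f}  {test.name}")
```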

Bottom line: optimization works. It will consistently improve engagement, retention and, most impactfully, monetization, by tens of percentage points. It works not because we’re smart, but because we acknowledge the fact that we’re lousy at predicting user behaviour. But through methodical and strategic testing we will constantly keep on improving.

It’s almost impossible not to improve if we follow the right strategy

So what am I going to write about in the upcoming episodes? Monetization (especially pricing models), retention, driving ratings, and whatever else comes up. If you’d like me to share my experience on something more specific, just ask.

About me:

My name is Yoav Yechiam. I come from an entrepreneurial background, with a few startups under my belt: some more successful, some less, one sold. Throughout the years I’ve found myself focusing on product management. Not because it’s something I aspired to, but mainly because it seemed to be the most important role I needed to take on wherever I’ve been.

Three years ago I ventured out to become a consultant, focusing mainly on product strategy and analytics. This led me to take on many projects aimed at improving a certain ‘stubborn’ metric. The strategies and tactics I’ve used, along with the lessons learned from these experiences, are what we’ll discuss in this series.

Six months ago I joined forces with a few other (very) experienced product consultants to form the Product Alliance, a boutique consulting firm whose multi-perspective approach is unique: every project is analysed by more than one product consultant.
