Here at giffgaff, we recently did an intensive 5-day Lean UX workshop, run by Jeff Gothelf (and, partly, by myself — no pressure then, following Jeff…), for the whole company, where we delved deep into the Build — Measure — Learn cycle and the whole hypothesis-driven product discovery/development idea. So, lots of canvases, risk/value matrices and a forest of post-its, culminating in a Science Fair where every team showed off their amazing progress over a short space of time.
In the midst of all that, however, one concept struck an immediate chord with me, and I believe this concept to be truly at the heart of Lean UX. It’s an invaluable tool to help you always build the right thing, or, at the very least, not expensively build something nobody wants.
This concept is the Truth Curve. And I love it — here it is in all its glory…
Lean UX is mainly about learning, as quickly as possible, what your customers' problems are and how you can best solve them. This can be achieved in a number of ways — you can simply ask them. You can show them a paper prototype of what you’re thinking of building. You can do an A/B test. You can even build the entire solution to production quality, make it live, and see what happens. Obviously, building and deploying a solution is far more expensive and time-consuming than just asking someone in the local high street, but it gives us much more accurate and reliable data. What this simple graph shows us is the relationship between the cost, effort and time of doing these things versus the strength of the evidence you get.
And that is incredibly powerful. Particularly if you can quickly discover you shouldn’t build something, and save all that time, cost and effort.
The other interesting thing about the curve is the pink area on the left. This is fantasy land, where you sit around discussing what to build. Unless you are interacting with your end users or prospects, you are learning nothing. Simply getting out of the building and chatting with random people is better than that, but the strength of evidence you are getting is quite low. So, it is incredibly quick to gather low-quality evidence, but even that evidence may be extremely illuminating. As an example, if you ask 10 people in the street and they all say your idea is terrible, that may have saved you a fortune…
Once you’ve gathered low-quality data, what then? Well, if it’s positive, you can move up the truth curve — investing a bit more time in exchange for stronger evidence. By the time you get to “let’s build this in production”, your idea is fully validated and you know your customers want it; otherwise, you would have discarded it in favour of something else further down the curve. And that is incredibly powerful, isn’t it?
Now, what does all this mean in practice? All the activities described above are referred to as experiments. If you plot the number of experiments against the strength of evidence, then, if everybody is working in a Lean UX fashion, you would expect the frequency to be the mirror image of the curve — the further down the curve you are, the more experiments you would expect. Why? Because the left-hand side of the curve is where the cheap experiments are, and we can often kill off ideas that have no legs with cheap experiments. So, if we plotted the first few weeks of Lean UX at giffgaff, was this the case?
Well, not quite…
As you can see, experiments were heavily biased towards the top, expensive end of the curve. The big spike at the right is A/B tests, which are time-consuming to set up and take a long time to run before they achieve statistical significance. But they do give highly accurate results.
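To make “take a long time to run” concrete, here is a back-of-envelope sample-size calculation using the standard two-proportion power formula. The numbers in the example are purely illustrative assumptions, not giffgaff’s actual conversion rates or traffic:

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(p_baseline, p_variant, alpha=0.05, power=0.8):
    """Approximate visitors needed per arm to detect the difference
    between two conversion rates with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_variant * (1 - p_variant))) ** 2
    return numerator / (p_variant - p_baseline) ** 2

# Hypothetical example: 4% baseline conversion, hoping to detect a
# 10% relative lift (4.0% -> 4.4%).
n = sample_size_per_arm(0.04, 0.044)
print(f"~{n:,.0f} visitors per arm")
```

With those (made-up) numbers you need roughly 40,000 visitors per arm; split a few thousand eligible visitors a day across two arms and you are looking at weeks of runtime — which is why the expensive end of the truth curve is slow as well as costly.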
So what’s the problem? Well, let’s look at the results of those tests…
As you can see, the vast majority of these tests (~90%) weren’t positive, i.e. didn’t have the impact we thought they would. If you take into account that these tests run for a couple of weeks, during which time you cannot update the code, you can see this is not great — how much better would it be to learn something was going to fail earlier, i.e. further down the truth curve? Then we could just build the right things, and if we did do A/B tests, it would just be to confirm what we already knew. A good indicator that we’re following Lean UX would be a success rate close to 100% with A/B tests.
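A quick expected-cost sketch shows why killing ideas further down the curve matters. The costs and the screening effectiveness below are entirely hypothetical assumptions for illustration — the only figure taken from the data above is that roughly 9 in 10 ideas fail their A/B test:

```python
# Hypothetical experiment costs — illustrative assumptions only.
COST_CHEAP = 200    # e.g. street interviews or a paper prototype round
COST_AB = 5_000     # building and running a production A/B test

ideas = 10          # ideas in the pipeline; ~9 of 10 will fail (per the data above)

# Strategy 1: send every idea straight to an A/B test.
straight_to_ab = ideas * COST_AB

# Strategy 2: screen every idea cheaply first; assume (hypothetically)
# the cheap experiment kills 7 of the 9 eventual failures early.
killed_early = 7
screened = ideas * COST_CHEAP + (ideas - killed_early) * COST_AB

print(f"Straight to A/B: £{straight_to_ab:,}")
print(f"Screen first:    £{screened:,}")
```

Under those made-up numbers, pre-screening cuts spend from £50,000 to £17,000 for the same ten ideas — and that is before counting the weeks of calendar time the doomed A/B tests would have occupied.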
So, moving forward, we want fewer, more positive A/B tests and more experiments further down the curve. This will show we are being more “Lean UX”. We realise, however, that a high percentage of successful A/B tests and a differently shaped experiment-frequency graph are merely leading indicators of success, but we believe that following the approach will result in better business outcomes. That would be what we would measure next (or in parallel): are we hitting our KPIs/KRs more quickly than we did before we started using Lean UX?
The truth curve; I love it…