Data, decisions and product management
Data has been central to my decisions in product management. I gave a talk recently about it. Here’s the transcript in full, so it’s a long-ish read.
Data from the outset
This slightly blurry typescript is from 1931. It’s a memo that a chap called Neil McElroy wrote to his bosses at Procter & Gamble — P&G. He was responsible for the soap brand Camay (which still exists today), and he was really frustrated because his product was always playing second fiddle to another P&G soap brand, Ivory.
So he decided that the best way to get more of an advantage with his product was to propose the creation of a new kind of role, what he called a “brand manager”. The reason I mention this — bearing in mind this is back in 1931 — is because this is pretty much a prototype for what we call “product managers” today. What he proposed was a role that would work and coordinate with other departments and, more importantly, he advocated a data-driven approach.
As McElroy says, the brand manager needed “to make whatever field studies are necessary to determine whether the plan [for the product or brand] has produced the expected results.”
So even at the very beginning, product management relied on data and evidence.
The product manager does not specify the product
Now there are plenty of examples of failed products that illustrate the value of having data and evidence.
We still get situations where companies expect their product managers to come up with the ‘requirements’ for their product — even though those product managers have never spoken to the actual people who will be using it.
Then there’s the other aspect: unless you’re building a software product that will be used by product managers, the product manager isn’t representative of the users either. So they’re really not the right person to be determining what the product should be in the absence of any evidence.
There’s so much written on this topic already that it would be redundant for me to reproduce it all here. To cut a potentially long story short: a product manager and their team must get out there and meet the actual users of the product — no exceptions.
If you need more on that, there’s plenty from Marty Cagan, who wrote a book called Inspired and blogs regularly at SVPG, and also Steve Blank, who has dozens of great videos on his site and wrote The Four Steps to the Epiphany. They and many others make the case for why it’s so important for product managers and their teams to get out of the building to actually meet their users and gather some evidence.
Assumptions and risks
As I said, there are plenty of examples of products that illustrate the value of having data and evidence. One particular failure I quite like using is this one.
Back in 2001, right in the middle of the dot-com bubble[1], there was a brilliant engineer called Dean Kamen.
By then, he’d already had a decent string of successes under his belt, including an innovative and successful insulin pump, and a motorised wheelchair that could, amongst other things, climb up and down stairs — which was amazing in itself — and also allow the occupant to raise themselves to standing height so they could see people eye-to-eye.
Now, unlike his previous successes, his next project was much more ambitious. This time, in his own words, he wanted to reinvent personal transportation for everyone. He said it was going to be as big a leap forward from the car as the car had been from the horse and buggy.
Before he’d even launched this world-changing product, the patent application got leaked, and this sent the dot-com world into a frenzy of speculation about this thing called Project Ginger. People knew it was about transport, but they didn’t know what it was. Some thought it was something crazy like a jet-propelled scooter; others thought maybe it was a Star Trek-style transporter. In reality it was much more down-to-earth:
Yes, Dean Kamen invented the Segway, which — if you’re not familiar with it — is a self-balancing, two-wheeled electric scooter. So not quite the Star Trek style of transporter people were expecting.
Now, the thing is, whilst it was a bit of an anticlimax, as a piece of engineering the Segway was pretty marvellous. If you think about the processors and computing power available back then, it did the job of being a self-balancing electric scooter very well. So, given all of that, given his track record, and given that the actual build of the product was great, why was it not the world-changer he’d expected it to be? Why are we all not riding our Segways today?
Of course, no product could have lived up to the amount of hype the Segway received before it was launched. So let’s look at the assumptions that Dean and his team had made before they built the thing.
- Product: top speed 12 kph
- Process: time between charges
- Market: everyone will want one
- Pricing: $4,995
- User: won’t feel stupid on one
- Regulatory: legal to ride
They made assumptions about:
- the product — were they the right features?
- how the product would be used — would it be convenient?
- the market demand and the price people would be happy to pay for it
- how the users of the product would feel about it; and
- most crucially, they made assumptions about the regulations that surrounded the product.
When they launched the Segway, it was illegal to ride in 32 US states and the District of Columbia. That’s quite a sizeable barrier to market.
And with over $100 million already invested in the Segway, the company had to spend even more money to lobby each state to change its laws to allow the Segway to be used.
Now the reason I mention all that is: don’t you think it would have been much better to check — particularly that last point about whether it was legal to ride — before they’d got to the expensive business of mass-producing it?
Assumptions = risks
So really, the first takeaway I have for you is that your assumptions — the things you think you know about your product, about your users, all these assumptions you’re making without any evidence — translate directly into risks for your product.
Two systems of thinking
This chap is called Daniel Kahneman, and he is a recipient of the Nobel prize in economics. He tells us in his book, Thinking, Fast and Slow, that our brains actually have two systems of thinking: a fast, intuitive system that works from minimal information, minimal data; and a much slower, analytical system that does much more of the heavy lifting.
The fast system is designed to jump to conclusions very quickly. Back in ancient history, if we thought there was a tiger hiding in the grass, we’d probably want to run first and analyse later. So that’s our fast system of thinking.
However, it’s also very prone to errors. I certainly make them — I occasionally mistake a tomato stalk for a spider and jump out of my skin. That’s our fast system jumping to conclusions.
We engage our slower system when we undertake a more complex task like counting the number of letter ‘e’s on a page. It’s not something we can just jump to a conclusion about, it takes more effort, and so this slower, analytical system does the job properly.
So here’s the thing: because this slower system takes more effort to get going, our fast system always keeps jumping in first. It causes us to create a plausible narrative based on very little data — like my tomato stalk / spider mix-up.
Assuming without evidence
Now, in product management terms, this makes it very easy and tempting for us to convince ourselves that we understand what our users need, even though we have very little evidence to support that. Off our fast system goes and says, oh yes, I’ve got relatively little evidence, but I can make a plausible narrative for what users need. And it’s that kind of assumption, without any evidence, that leads us astray.
And yet I’m sure we’ve all experienced those lightbulb moments of realisation when we do actually go out and talk to our real users. Suddenly, when we get one of those lightbulb moments, all of our assumptions are turned on their heads — all it takes is just a little bit of extra data or evidence to flip around what we thought to be the case, and then we suddenly realise we had it all backwards.
Don’t drive blindfolded
It’s a little bit like driving blindfolded. When you want to drive somewhere, if you were to jump in the car and drive off with a blindfold on, there would be quite a high likelihood of coming to grief. You wouldn’t be able to see where you were going, you wouldn’t be able to react to things happening around you, you wouldn’t be able to see the pedestrians or the other cars around you as you drive along.
So why would you take the same approach when you’re plotting the course for your product? Without taking in the information around you about your product and reacting to it, you’re effectively increasing your risk and likelihood of failure.
So it’s a really good idea to open your eyes and use the information around you when you’re deciding what to do with your product — why would you want to do it any other way?
Reduce risk — check your assumptions
Another way of looking at this: checking your assumptions reduces your risk. One of the ways to do this is to eliminate as much uncertainty as you can as you go along, by learning from your users as quickly and as frequently as possible. When you’re at your most uncertain, right at the very beginning, your main job should be to learn as much as possible, apply that learning to your product, and challenge all those assumptions you have.
This graph, by Roman Pichler — another great product manager, who blogs and teaches on the subject — illustrates what I’m trying to get at here.
A new approach in UK government
I recently spent about eight months as head of product for the UK’s Ministry of Justice — so, in government — and then a further three months as head of the product community for UK government as a whole, at what’s called the Government Digital Service, or GDS.
And bit by bit since 2012, the whole of government has been moving to a very different approach for creating and managing the products — or services — that it offers people in the UK. And if you think about it, this is everything from applying for a driving licence all the way through to things like booking to visit a friend or relative in prison, or pretty much everything else, everything that government interacts with the public about.
The major revolution in thinking for them was that services exist to serve the needs of people first, not government. I know, it seems obvious, but it really wasn’t until relatively recently.
The (bad) old way
So they had a particular way before, the old way of doing things — and I know this happens a whole bunch in the private sector also — and it goes something like this:
A bunch of senior managers would get together and say “we have a problem”. They would generally decide the problem will be solved by a new CRM or ERP system — or sometimes both. Needless to say, they were usually wrong.
They would then task several middle-ranking managers to spend several weeks or months collating a whole bunch of assumptions, guesses and outright lies into a massive document they call a business case. This would then be used to retrospectively justify the conclusion the senior managers had already reached. And they’d still be wrong.
This hypothetical system would need a laundry list of specifications or requirements to flesh out what it needed to do so that the development team could get building. Again, this would be largely based on guesswork, and would result in an even larger set of documents than the business case.
Then some development would happen, which would take several times longer than everyone expected, not least because sweeping changes would be needed midway through the build. And because the allocated budget had already been exceeded, whole sets of features would be cut out again.
So the resulting product would end up less capable than the thing it replaced, and would largely make life impossible for the people who actually had to use it — people who only got to see it the week before launch, in what would laughably be called user acceptance testing.
And so the users would point out that — guess what — the thing didn’t solve their problem; that they in fact had a very different set of problems to solve, which the CRM or ERP system did nothing to address; and that the senior managers had completely missed the point in the first place.
Now I hope that doesn’t sound familiar, but I’m sure we’ve heard of places where that is certainly the case, and certainly in government that was very much how these large IT projects would play out. I’ve seen it a whole bunch of times, not just in government, but in private companies as well.
A better way
There is, however, a much better way. Instead — and this is the way that government tends to work now — the process starts with user needs, not government needs. We’re putting user needs right at the very forefront of our thinking. Whether from direct observation, from data and analytics, or from user feedback, we go in thinking that users have a particular problem that we can solve.
Then we go into a process called discovery, which lasts a few weeks. This is a combination of desk research and field research with real users, to challenge our assumptions and really understand the size and shape of the problem, the people who have it, whether it’s possible to solve, and indeed whether it’s worth solving. There’s no point in spending a million pounds or dollars to solve a ten-thousand-dollar problem.
The discovery team usually consists of a product manager, user researcher, designer and sometimes a developer or a business analyst if we need their particular skills to understand the problem.
It’s a perfectly sensible result for the discovery phase to end with the conclusion that what we thought was a problem isn’t actually a problem, or that it isn’t valuable or technically possible to solve. In the bad old way of doing things, project teams would only find this out much, much later in the process.
So that’s the discovery phase. The alpha phase is then all about checking our understanding of the problem by running iterative tests and experiments — again with real users — that demonstrate we can solve aspects of the problem. By doing this, we learn more about the problem, and we start to learn how we might solve it.
At the end of alpha, we should have a pretty clear understanding of the users, their problems and the likely ways we’re going to solve it.
So then we move into beta. All of the prototypes and experiments we’ve created up until now, we put to one side, because now we start building the product for real. We want to build it to be as scalable, as robust and as secure as we need it to be — in the case of government, potentially to be used by several million people.
The big difference here is that throughout beta, even though we’re not finished building the product yet, we’re still using the product out in the wild — we’re putting it in front of real users, and real users are using that product to solve their problems. They could be using it to apply for their driving licence or to renew their passport or things like that. And the reason why we do this is because it gives us this wealth of analytics and feedback from user testing and from people actually using the product, that helps us adjust and tweak the product to keep us on the right track.
When we’re able to demonstrate through this process that users can solve their underlying problem — whether it’s “I want to be able to drive a car, so I need a driving licence” or “I want to be able to travel internationally, so I need a passport” — then we stop adding new features. We shift our focus from building new stuff to continuous improvement — small tweaks as needed to squash bugs or improve usability.
And then the majority of the team moves off onto the next major problem to solve.
The thing is that throughout the entire process, we’re running experiments, we’re gathering data, we’re doing analytics with real users, the actual people who will actually be using the product or service.
And it’s by doing this that we force ourselves to put aside our assumptions and engage our analytical part of the brain — evidence trumps opinion every single time.
Experiments aren’t scary
Experiments don’t have to be daunting or scary, and they can be very quick — here’s a simple template you can use. One example: at the Ministry of Justice, one of my product managers and his lead developer were having a pretty heated argument about whether users would understand what a particular feature did.
So rather than listen to them arguing for the rest of the afternoon, I packed them both off with paper prototypes to a nearby cafe and told them not to come back until they’d spoken to 20 people. And when they came back about two hours later, the product manager grudgingly reported back that 18 of the 20 had proven him wrong. And this was a great thing, because now he and his lead developer were working with evidence, not opinion.
Any experiment you run follows this template. You’ve got some user research or evidence that suggests something you believe to be the case — your hypothesis, your guess. So if we try running a particular experiment or test — in this case, going to a cafe and asking people if they understood what this particular feature did — and measure the number of people who did or didn’t understand it, then we should be able to see whether or not people really do understand that feature. In this case, we had the overwhelming result that 18 of the 20 didn’t understand the feature, so the product manager was wrong.
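As an aside, even a sample of 20 can be surprisingly conclusive. Here’s a minimal sketch (my own illustration, not something from the talk) of checking how unlikely that 18-out-of-20 result would be if users were genuinely split 50/50 on understanding the feature:

```python
from math import comb

def binomial_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of a result at
    least this extreme if each person were really just a coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 18 of the 20 people in the cafe didn't understand the feature.
print(f"{binomial_tail(18, 20):.5f}")  # ~0.00020 — very unlikely to be chance
```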
So you can be doing this kind of thing all the time. It doesn’t have to be a big, elaborate experiment with hundreds or thousands of people — you can do it relatively quickly for any question you need answered.
Put users first
The next key learning for you is: put your users first. Don’t put the needs of your organisation before those of your users. Absolutely take them [your organisation’s needs] into account, but put your users first.
The 4 main KPIs in UK Government
Next, let’s get into the analytics used in government. Every digital service created by the UK government now has to measure at least four key performance indicators (KPIs):
- cost per transaction;
- user satisfaction rating;
- completion rate (how many people are actually able to achieve their goal or solve their problem); and
- digital take-up — that is, whether users are preferring to use the online web service instead of phoning someone up or filling in a paper form, because that’s really the whole point of what government is trying to do there.
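To make those four concrete, here’s a minimal sketch of how they might be derived for a single service — the figures and variable names are invented for illustration, not real GDS data:

```python
# Hypothetical raw numbers for one online government service.
transactions_started   = 120_000    # users who began the transaction online
transactions_completed =  96_000    # users who achieved their goal online
offline_transactions   =  24_000    # completed by phone or paper instead
total_operating_cost   = 480_000.0  # running cost across all channels (£)
satisfaction_scores    = [5, 4, 4, 5, 3]  # post-transaction survey, 1-5

cost_per_transaction = total_operating_cost / (transactions_completed + offline_transactions)
completion_rate      = transactions_completed / transactions_started
digital_take_up      = transactions_completed / (transactions_completed + offline_transactions)
user_satisfaction    = sum(satisfaction_scores) / len(satisfaction_scores)

print(f"Cost per transaction: £{cost_per_transaction:.2f}")  # £4.00
print(f"Completion rate:      {completion_rate:.0%}")        # 80%
print(f"Digital take-up:      {digital_take_up:.0%}")        # 80%
print(f"User satisfaction:    {user_satisfaction:.1f}/5")    # 4.2/5
```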
Because every service must publish this data online, completely transparently — I think the dashboards are all at gov.uk/performance if you want to take a look — it keeps everyone honest, but it also changes the conversation when things go wrong, from blame to “what can be done to improve this?”
These four main KPIs make sense for the UK government because its broader goals are to encourage people to interact with government more online and to make things easier for people to do. Other organisations with different goals would probably want to measure different things, aligned with their own particular goals.
Measuring the right things
The important thing here is that, in the context of the organisation we’re talking about, we’re measuring the right things.
We’re only really measuring things that would prompt us to take action if we saw the metrics going the wrong way. If (for example) we saw a low completion rate, we could do some funnel analysis to see where people are dropping out, then run some experiments or user interviews to delve a bit deeper.
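A first-pass funnel analysis can be as simple as comparing how many users reach each step of the service. A sketch, with invented step names and counts:

```python
# Hypothetical funnel for an online application form.
funnel = [
    ("Start page",       10_000),
    ("Personal details",  8_200),
    ("Upload photo",      4_100),
    ("Payment",           3_900),
    ("Confirmation",      3_800),
]

# Compare each step's count with the previous step's to find the drop-off.
for (step, users), (_, prev) in zip(funnel[1:], funnel):
    print(f"{step:16} {users:>6}  ({1 - users / prev:.0%} dropped at this step)")
```

In this made-up example, the photo-upload step loses half of everyone who reaches it — so that’s where you’d aim your experiments and user interviews next.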
We really want this marriage of quantitative and qualitative data: the quantitative tells us what is happening; the qualitative tells us why it’s happening. So when you see patterns in your quantitative data — your web analytics or your other sources of data — and something looks a bit weird or piques your interest in some way, your first question should always be “why is that happening?” and your second question should be “how can I test that to find out what’s going on?”
Vanity metrics are pointless
Measuring vanity metrics like page views is basically pointless — they could go up or down for a variety of reasons completely outside of your control. Your page views might go up because an email campaign has just gone out, or perhaps because a search engine is indexing your site.
So the important thing here is that we’re measuring outcomes, not outputs.
We don’t really care how many people visited the driving licence website, we’re far more bothered about whether the visitors, the users, got what they came for.
Whether they succeeded in getting their licence first time of asking, or whether they were able to update their photo on their licence, or even just find some particular information. Did they get what they came for?
We’re really bothered about whether they succeeded in what they were trying to achieve and how easily they did it.
Measure your user outcomes
So your next takeaway is that you need to focus on measuring your user outcomes — what it is the users are actually trying to do, and what matters to them — not necessarily the outputs like page views or widgets created that tend to matter to senior management.
What is your roadmap focused on?
I bet that most of your roadmaps — these are the plans you have for your product, what you’re going to do in the next quarter or next few quarters — I bet that at the moment your roadmaps probably have items along the lines of “we’re going to build this feature” or “we’re going to add this capability”.
Guess what? If you’re doing that, you’re still probably focused on outputs, not the user outcomes.
When you put anything on your product roadmap, or in your backlog of user stories, you — and product managers in particular — should always be able to say why it’s in there, what purpose it serves and how you’ll know it’s been successful. Most importantly, you should be able to point to the evidence that drove the decision to put that feature or product in.
Each roadmap item is an experiment
So when you’re thinking about roadmaps, you really need to be thinking about what the user is trying to achieve (based on your user research, of course). Treat each roadmap item like an experiment. Remember:
- data or user research suggests that this is the case …
- so if we try this …
- and measure that …
- then we should see the following change …
Every roadmap item should follow that pattern. You should know what success looks like from the user’s perspective, as well as what ‘finished’ looks like.
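If it helps to make that pattern concrete, each roadmap item could be captured as a simple record like this — a sketch of one possible format, not a prescribed one, with a hypothetical example item:

```python
from dataclasses import dataclass

@dataclass
class RoadmapExperiment:
    evidence: str  # data or user research suggests that this is the case ...
    change:   str  # so if we try this ...
    measure:  str  # and measure that ...
    expected: str  # then we should see the following change ...

item = RoadmapExperiment(
    evidence="funnel analysis shows half of applicants abandon at photo upload",
    change="accept photos taken directly with a phone camera",
    measure="drop-off rate at the photo-upload step",
    expected="drop-off falls towards the ~5% seen at neighbouring steps",
)
```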
This is great for a few reasons.
Zero-benefit roadmap items
If you’re the person looking after this product roadmap, it helps you remove items from your roadmap that don’t benefit anyone at all. If you can’t point to an item and say why it’s there, why it’s important, how it benefits people, take it off your roadmap — don’t do it.
There’s no problem with putting something in to make life easier for your product support teams, or to make it easier for you to instrument and provide analytics in your product. Those internal users are perfectly valid users as well. But the point is, whether it’s an external user, your customers, or internal users within your own organisation, everything on your roadmap needs to be there for a reason.
More learning and iteration
The second reason for running your roadmap in this more analytical way is that it sets you up for learning and iteration. Every time you run an experiment and something comes back with a different result from the one you were expecting, the next thing you should do is find out why — and then use what you learn to do better next time.
Avoid being sidetracked
And thirdly — and this is a really useful one, particularly for product managers, but also for people up and down the organisation — if you’re trying to align your team towards a common goal, it’s very easy for teams to get sidetracked and pull in different directions if all you’re bothered about is outputs: building widgets and features.
But if the discussion is centred around helping users to achieve their goals, then it’s much easier to judge whether something is worth doing.
Ask yourself: “will this roadmap item get us closer to, or further away from, what our users are trying to achieve?”
So on that basis, if you’re focusing on building things that provide a user outcome, your user stories should align with your roadmap, your roadmap should align with your team’s objectives for the quarter or year, and your team’s objectives should align with your organisation’s overall goals.
What that means is that from the very granular bit of what we’re going to build today, tomorrow, next week, it’s always aligned all the way up through to what the organisation is trying to achieve this year, next year, in five years’ time. And in particular, product managers are responsible for ensuring that alignment, from those user stories all the way up to the company objectives, is happening within that team.
Teams align to user goals
So my last takeaway is that measuring user goals will help you to align your team much more easily, because they’ll care about what the users are trying to achieve. It’s a human thing rather than a feature or widget thing.
Summary of the talk
Okay, so just to summarise the main takeaways again:
we talked at the beginning about how assumptions translate into risks for your product — usually because you think you know something that in fact you don’t actually know for real;
and we talked about the fast system of thinking that jumps to conclusions, and the slower, analytical system we need to engage;
we talked about government — the old way and the new way of doing things — and about putting user needs first, before government or organisational needs;
and we talked about focusing on human outcomes, what it is that users are trying to achieve, not outputs like building widgets or features or that kind of thing;
and lastly we talked about how measuring these outcomes, these user goals, can really help to align your development teams, your product teams and your broader organisation — all the way from the granular bits and pieces you’re creating today up to your organisation’s longer-term goals.
So those are really the five key things I hope you’ll take away from this talk about data and analytics in product management.
That’s it! Thank you for listening.
[1. Okay, in my own book, I actually have a footnote which says “March 10, 2000, is considered the date on which the dot-com bubble burst”, so best go with that.]