The Doctor Care Anywhere product team created a set of team values a while back. It’s a set of statements that we use to assist with everything from design decisions to hiring. It’s a really handy tool, and something we might discuss in another post. One of the values is ‘Data Obsessed — we measure our results so that we can learn from failure and celebrate success’. We review our values every so often to reassess — are they still accurate? Do they need tweaking? And this particular value is one that popped up — how obsessed are we? And is it a healthy obsession? We’re obviously in general agreement about the importance of data in what we do, but to what extent do we think it should dictate it? Should it inform, or should it drive? We thought it’d be interesting to ask a range of people from different disciplines in the team where they think we sit (or should sit) on the scale.
My early experiences of working in product were not especially data-driven (or even obsessed). Perhaps this makes sense; the company I was working for was a start-up — there is only so much data to work with when you are completely new to an industry and have a team of 3. However, I still thought of what we were doing as an experiment, the hypothesis being “will the market and contractors accept our service?”. We were building a two-sided, managed marketplace; the data, then, was in our ability to attract and retain customers as well as suppliers. Because we were at low volume, we were able to talk to both sides of the marketplace regularly (I’m a strong believer in startups doing things that don’t scale). As such, qualitative, rather than quantitative, data informed a lot of my early decisions in product management.
Since then I have experienced the other end of the spectrum; working in an incredibly optimization-focused environment gave me insight into purely data-based decision making. When the focus on conversion is so high, particularly with a high-volume eCommerce site, things like multivariate testing and funnel analysis inevitably play a stronger role. This was a great learning period for me; data, at scale and as part of a discovery track, can be incredibly useful in de-risking and validating a given solution. We can use it to help identify whether an idea is usable and valuable.
My current view: when it comes to determining what to work on, problems to solve can come from anywhere, and shouldn’t require a great deal of data to be brought to the table (in other words, trust the various experts you have hired to know their business). However, in order to actually prioritise and commit to that work, we need a strong understanding of the problem, which includes data that is relevant, meaningful and actionable.
Given that, I would say that product development should be data informed — prioritisation, problem framing and solution validation obviously require strong data. There are also cases where the source of the work in the first place can (and should) be data, but I would argue that it should not be at the expense of all other sources; you are limited by what you are already measuring, and by extension, limited to making incremental improvements, while potentially ignoring some other valuable inputs.
Data, to me, is incredibly important in answering the questions of “What problem should we next solve?”, “How should we solve it?” and “How successful were we in solving our problems?”. It shouldn’t, however, be viewed in isolation, limit our thinking in what problems we consider worth solving, or replace a product manager’s empathy with their users.
Before answering the question, it’s probably worth a quick detour of what we mean by ‘data’ because it’s a word that means a lot of different things to a lot of different people. In my mind, when I see it in articles and blogs, I think people specifically mean metrics and numbers — conversion and retention rates, averages and medians — but data is more than just numbers. When I was younger, data came in the form of ‘Horrible Histories’, autobiographies and a sprawling selection of ‘A Very Short Introduction to…’. As I’ve moved into the world of product, it has become much more metric driven — a good thing in my opinion. We know that a lot of decision making in organisations can be heavily influenced by our own, narrow experiences or the loudest voices in the room. Metrics allow you to cut through some of that and that makes metrics useful. They can’t, however, tell you the full story and so we should always look for them to inform rather than to drive.
Where metrics and quantitative data really come into their own is when you are able to collect data quickly, accurately and with enough scale. It’s real, tangible feedback and helps you validate the work that you do. This helps to de-risk your ideas on how to solve a problem. Even when you do something that has a negative impact, you’ve learned something new and it will help to inform the next decision.
What is even better is when you know what a ‘good’ metric is. A place where this is more obvious is your signup flow, which has an obvious ‘good’ outcome. A conversion funnel — where you can see when people drop out and how long it takes them to move through it — is exactly where you would dive into your metrics and work out how to optimise them. Long-term retention rates, by contrast, are much harder to judge as good or bad. It’s difficult to collect data over long time frames because the feedback loop is too long for it to be of practical use.
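To make the funnel idea above concrete, here’s a minimal sketch of how you might compute step-by-step conversion rates and spot where people drop out. The step names and counts are invented for illustration — they aren’t from any real product.

```python
# Hypothetical signup funnel: step names and counts are made up for illustration.
funnel = [
    ("landing", 10000),
    ("signup_form", 4200),
    ("email_confirmed", 3100),
    ("first_booking", 900),
]

def step_conversions(steps):
    """Conversion rate between each adjacent pair of funnel steps."""
    rates = {}
    for (name_a, n_a), (name_b, n_b) in zip(steps, steps[1:]):
        rates[f"{name_a} -> {name_b}"] = n_b / n_a
    return rates

rates = step_conversions(funnel)
# The step with the lowest rate is where the funnel leaks most —
# a natural candidate for the next optimisation experiment.
worst_step = min(rates, key=rates.get)
```

In this made-up example, the confirmed-email-to-first-booking step converts worst, so that’s where you’d dig in first.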
It is important to remember that, even if you understand what a good metric is, your metrics reflect your current knowledge, based on your current users and their current behaviour in your current product. Instagram definitely wasn’t measuring how many people engaged with Stories before they had built it — they took a strategic bet. Clearly, it was one that paid off, as they overtook Snapchat in 8 months and Stories now has over half a billion daily active users. The same is true for all products — there are a lot of other potential users out there in the world, and your current metrics aren’t going to tell you how to get them.
With that said, I firmly place myself in the informed camp rather than the driven. Data forms a key part of the process of building great products, but it has its limitations. Ultimately, it is one part of a larger discussion.
I’ll always remember the day someone from the design team of a big UK retailer (that I won’t name) came to do a talk at the very traditional design agency I worked at in the late 2000s. He took us through the processes they went through when making updates to their product description pages. They were big on A/B testing. Well, it was more like ABCDEFGHIJK… testing. They’d recently amended the “Add to basket” button — it actually changed fairly often. This particular change involved the button’s size. 10 different size variations, each a pixel or so different, were distributed to 10 user groups and the data was analysed. Size 7 won; its particular depth evidently hit the sweet spot, and some marginal gains were made in encouraging customers to add products to their baskets. Next, they ran tests on its position. Then colour. And so on and so forth.
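The multi-variant button test described above boils down to comparing conversion rates across groups and checking whether the winner’s lead is more than noise. Here’s a hedged sketch of that analysis — the variant names and numbers are invented, and I’m using a simple two-proportion z-test as one plausible way such a result might be checked, not necessarily what that retailer actually did.

```python
# Hypothetical data for a multi-variant button-size test (all numbers invented).
from math import sqrt, erf

impressions = {"size_1": 10000, "size_7": 10000, "size_10": 10000}
adds = {"size_1": 310, "size_7": 362, "size_10": 325}  # add-to-basket clicks

def conversion(variant):
    return adds[variant] / impressions[variant]

def two_proportion_p_value(a, b):
    """Two-sided p-value: is variant a's conversion rate really different from b's?"""
    pooled = (adds[a] + adds[b]) / (impressions[a] + impressions[b])
    se = sqrt(pooled * (1 - pooled) * (1 / impressions[a] + 1 / impressions[b]))
    z = (conversion(a) - conversion(b)) / se
    # Normal-approximation CDF via erf
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

winner = max(impressions, key=conversion)
```

With these made-up numbers, “size 7” wins on raw rate — but the p-value tells you whether the marginal gain is worth believing before you ship it.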
I was initially quite conflicted — isn’t this absolutely soul destroying for someone to work on? But something about it really started to excite me. As I say, we were traditional. We essentially created digital brochure-ware. Very beautiful websites with a short shelf life, perhaps to accompany the release of an annual report. We’d launch them with a tonne of fanfare, then they’d quickly be forgotten. And we’d do the whole thing again the following year, literally starting from scratch. We never revisited anything, and it had really started to bother me — I felt like we weren’t making things that were useful. I surveyed the room and most of the attendees looked stony-faced. “But, isn’t the page… designing itself? Shouldn’t the designer be making those decisions?” someone challenged. “Yeah, they do. But the user is then deciding what works best”. The talk pretty much bombed, but I couldn’t stop thinking about it. Making beautiful-looking stuff was easy. But making stuff that was beautiful and something that people wanted to use? Now that’s an exciting challenge. This was before ‘product’ was as much of a ‘thing’ — working for an in-house design team was considered a cop-out by agency people. But I realised it was exactly what I wanted to do — not just designing nice-looking stuff and foisting it onto our clients before it started gathering digital dust. I wanted users to be happy, and here was a subset of design that would enable me to point at data and know for sure whether or not they were. And if they weren’t, I could change it. I know for some designers, working on something that is never finished is really unappealing, but I love the idea of constantly refining, optimising, and rectifying bad calls.
Do I think design should be driven by data alone? Of course not. If you’re working on the first iteration of a digital product then you may not even have any user data to use. And do I think the retailer’s testing was too extreme? Possibly. I think it’s easy to end up with something that feels completely devoid of personality if you’re solely seeking to create something that doesn’t offend groups A through K. As fence-sittingy as this sounds, I think it’s about getting that balance right and knowing when to go with instinct. For example — do I think data should inform the overall tone of voice you use in your product? No — I struggle to see how you’d end up with something authentic or original. Should it help inform the wording you use on an “Add to basket” button? Most definitely.