As a young Product Manager, it is important to be data-driven, but there are traps along the way: being fooled by data, or collecting too much data, too often. Here I discuss how to become data-driven without getting drunk on data.
I recently completed the Data Product Manager course offered by Udacity. Through my professional experience and this course, one thing was clear from the start: if you can't measure it, you can't improve it. Data-driven product management is core to the foundation of successful products used by millions of users around the world.
In today's digital age, technology is more dominant than ever before. Users constantly adopt new habits and change their mindsets, so data is integral to understanding the problem and delivering data-driven solutions. But on this journey, teams are sometimes not blessed with data scientists or UX researchers, and it can be hard to make sense of the data at hand. This leads to risks like tracking the wrong metrics, building something that offers no value to the customer, and ultimately misusing data or being manipulated by it.
1. Measuring the right things
Whatever stage your product or organization is at, the Product Manager needs to align on metrics that reflect the organization's top goals and business results. The metrics you choose to measure should tie back to customer success and business outcomes.
Product teams are on a ship in open water, and metrics are the compass that steers them toward the desired outcome. At times, a group of metrics will focus on one particular area of the product, leaving other important areas unmeasured. Hence, it is important to constantly check that the measured metrics line up with overall business goals. Otherwise, product teams risk sailing in the wrong direction.
2. Being patient and avoiding “vanity” metrics
At times we enter a phase where we are tracking too much data. For example, knowing exactly when a user stopped the video during the onboarding process is not that useful. It is not something you would spend your time optimizing, yet it is still being tracked.
We need to keep an eye out for “vanity” metrics. These are metrics that make us look good but do not correlate with overall success or tell us anything about the customer experience. They are simply not actionable.
Being patient is critical. Product improvements often take time to have any visible effect. When we tie a KPI to a recently released product feature, we shouldn't start looking at it every hour, and we shouldn't change the KPI on the fly just because it doesn't immediately move in a positive direction. Humans are complex and behaviors are unique. Small things like weather, holidays, website outages, and life in general all affect a user's need to hire a product to solve a problem. So give users time to understand and adjust to the features.
If conversion drops on the first day after releasing a new feature, adding more variables will not “fix” it. Be patient and give it some time. Otherwise, all the effort and time spent will go to waste before the team sees any measurable result, and you might be killing a change that would have made a positive impact later on.
3. Running Tests
Many teams begin their quantitative research with A/B tests. An A/B test, at its most basic, is a way to compare two versions of something to figure out which performs better. Engagement with each version is measured, and the option that performs better is chosen. This can mean comparing a current feature against a potential new variant, or comparing two variants of a new feature.
The A/B test can be considered the most basic kind of randomized controlled experiment — Kaiser Fung
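To make this concrete, here is a minimal sketch of how a two-variant test might be evaluated with a standard two-proportion z-test. The conversion numbers and the 95% confidence threshold are illustrative assumptions, not real results:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # z statistic

# Hypothetical results: 500 of 5,000 users converted on A, 580 of 5,000 on B.
z = two_proportion_z(500, 5000, 580, 5000)
significant = abs(z) > 1.96  # 95% confidence, two-sided
print(f"z = {z:.2f}, significant at 95%: {significant}")
```

If `abs(z)` stays below the threshold, the difference you are seeing may just be the background noise discussed below.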
As part of these randomized experiments, we need to be sure that the sample size is not too small, because we need to reach statistical significance. This helps verify that the result you are seeing is not just background noise, and it lets you quantify that randomness through statistical confidence.
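How small is “too small”? A rough sketch using the textbook sample-size formula for comparing two proportions. The baseline and target conversion rates are made-up examples, and the z-values correspond to the common defaults of 95% confidence and 80% power:

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate users needed per variant to detect a change
    from rate p1 to rate p2 at 95% confidence and 80% power."""
    effect = abs(p2 - p1)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2) * variance / effect ** 2)

# Detecting a lift from 10% to 12% conversion:
n = sample_size_per_variant(0.10, 0.12)
print(f"~{n} users needed per variant")
```

Note how quickly the requirement grows as the effect you want to detect shrinks, which is exactly why small samples produce noisy, inconclusive tests.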
Have you ever run an A/B test and ended up with more questions than answers? In my experience, the more variables you test, the more data you need for a statistically significant result. You need to be extremely precise about what you are testing and the impacts you can foresee.

Keeping it simple and testing one variable at a time is the best way forward. If the test you want to run requires too many variables, you are probably better off doing some qualitative testing first and setting aside the excess variables before setting up an A/B test.
In this process, it is important to measure not only the metrics you are aiming to move but also the other metrics that will be affected.
4. Qualitative and Quantitative Data
It's all about data these days. At times, we undervalue qualitative data compared to quantitative data. Just because it is not number-based, it is seen as a less reliable source. Yet it is highly effective to look at qualitative data from focus groups, user interviews, guerrilla testing, and the like.
We want to create products that don't just solve the user's pains but that the user loves. Cognitive behavior and emotional connection play a deep role here and are hard to measure.

That's where qualitative research plays a big role, giving product managers and researchers the means to understand the complexity behind that emotional connection.
5. Product Analytics tools
There are many product analytics tools out there for data-driven product managers to track metrics and understand how users are using their products. At times, too many tools are being used at once to measure a single metric. Sometimes the metric doesn't even provide any actionable data and just exists for its own sake.
But at the end of the day, product analytics is not about the tool; it is about you. At times, only one team is “tracking” the metrics, and no one else has access or visibility.
This creates a dependency on that one team and fills up its backlog. It is a perfect recipe for resentment and a poor understanding of business issues. Instead of one dedicated team, ownership should be given to each feature team, which becomes responsible for its own tracking, autonomously. This shift in ownership of metric tracking leads to a similar shift in culture: it allows different team members to learn the tools, dig into the data being collected, and ask questions.
These were just some thoughts I wanted to write down. Of course, there are still many things about data and its pitfalls to understand. But the idea is to use data to track and validate hypotheses, not to expect some “secret sauce” that will immediately improve all aspects of your product and answer every question.
Understanding when to use data, and how much data to use, begins with determining the importance of the decision at hand. Using data as the only strict answer can be pretty dangerous. But again, we are always learning. There is no perfect recipe, but understanding that quality matters even more than quantity is a good start.
Thanks for reading!