It’s tempting to lump all analysts together — a homogenous group who love SQL, spreadsheets and working with their headphones in. In reality, there are multiple disciplines within the field, which tend to appeal to different skillsets and personality types. Hopefully in this post I can shed some light on the relatively new area of Product Analytics, how to have the most impact within it, and what sets it apart from traditional BI.
What is Product Analytics?
Product Analytics concerns the ‘digital product’ — essentially your website and/or app. Your digital product might be your only product — perhaps you run a blog, or have created an app allowing users to track habits or make to-do lists. Alternatively, your digital product might be the first touchpoint for a user to buy services or physical goods. In many cases, there’ll be a mixture of these — perhaps a user can buy a physical product on your site, and then can self-serve any complaints or issues using a help feature.
Ultimately everything a customer sees or does on your digital product falls under the remit of Product Analytics; filling out a form, searching, adding something to a basket, reading content. There is some debate as to whether to call functionality within your website or app a feature, or whether it counts as another digital product. I believe it doesn’t matter — if you’re looking at web or app events data, chances are you’re a Product Analyst.
Is Product Analytics the same as Digital or Web Analytics?
Historically, the most common job title in this area was ‘Web Analyst’. This was gradually replaced by ‘Digital Analyst’, and now it’s becoming more common to see roles advertised as ‘Product Analyst’.
A decade ago, a Web Analyst role was often primarily focused on trading, with less emphasis on onsite user behaviour. While there may have been the odd foray into web data, such as top search terms or product pages with the most views, the lens was very much a commercial one. Often, websites were just another ‘shop’ for a bricks-and-mortar retailer, so the approach taken to analytics was cut-and-pasted from in-store POS.
In a physical store, the period of time between a user entering a shop and checking out is a black box. With online activity constantly growing, apps becoming commonplace, and the advent of purely digital products like Monzo and Spotify, we have more and more of that valuable pre-conversion data available to us. Product Analytics grew out of the need for businesses to optimise their digital products and get as many users making successful journeys on their product as possible.
Product Analysts, while needing solid business acumen and an appreciation for and understanding of commercials, are likely to be working closely with UX and user research teams, as well as product managers and developers. The vast majority of their analysis will be on frontend event data and focused on user behaviour, optimising micro-conversions across the site or app. A micro-conversion could be anything we define, from scrolling to a certain depth on a page, to clicking on a call-to-action, to signing up to the newsletter. Of course, particularly in ecommerce, there will be a macro-goal of increasing overall conversion rate, but in general a Product Analyst is focused on the entire user experience, even outside of the primary conversion funnel.
What makes a good Product Analyst?
1. ‘Why?’ thinking and an interest in user psychology
Just having great technical knowledge and being able to create a fancy dashboard won’t get you far in Product Analytics. To really excel, an understanding of your users is key. This comes with experience, but when starting out, it’s important to challenge yourself to hypothesise at every opportunity — ask yourself why you’re seeing what you’re seeing. Putting yourself into the shoes of users with various missions, not just thinking about how you would do things, will allow you to truly optimise the digital product.
For example, you might have multiple routes to conversion for a subscription product on your website or app.
One is accessed by users clicking on a picture of your product with a call-to-action saying ‘Let’s go’, and another is accessed by clicking a link which says ‘See our prices’. The latter journey takes users through a funnel that shows prices first, while the former doesn’t show prices until closer to the end. You notice that the users who click on ‘Let’s go’ convert much better than the users who click on ‘See our prices’, so you run an experiment, matching the worse-performing journey to the better-performing journey. The variant loses, so you conclude it doesn’t work and move on to the next experiment, right?
This is the point at which a Product Analyst can really add value — understanding your customers and their various mindsets is what will allow you to figure out the ‘why’ behind the ‘what’. In this scenario, you might hypothesise that users with different mindsets and levels of intent will choose to start their journey through the different routes and come up with some ideas to iterate on your experiment.
- Users clicking on ‘See our prices’ likely want to see the pricing before making any kind of decision, so making customers jump through more hoops to see it causes frustration, resulting in lower conversion in the variant of your experiment.
- Users clicking on the ‘Let’s go’ CTA are more likely to be on a purchase mission and less likely to be on an exploratory mission, which means this group will probably always have higher conversion than the ‘See our prices’ group.
- This doesn’t mean we can’t maximise the conversion of the prices group, even if it won’t reach the heights of the ‘Let’s go’ group (a common mistake is just accepting that some users or routes don’t convert as well, and only focusing on optimising your star journeys).
- Users who go through the ‘Let’s go’ flow get to see the product first and are more likely to understand its benefits and be more invested by the point they see pricing.
- We could iterate on the experiment by designing a page for the ‘See our prices’ users which shows pricing but focuses primarily on the benefits of the product, trying to get them emotionally invested.
- While there will always be a group of users with a set budget who were never going to convert having seen pricing, there is likely to also be a group of users who are on the fence. Just seeing pricing in its own step would put these users off, but seeing beautiful imagery, social proof, or benefits of the product alongside the pricing may just be enough to tip them over the edge.
This train of thought could result in a new design for this part of the journey, perhaps iterated on to home in on what elements really speak to this group of customers. Do price-sensitive users respond more to social proof, or a solid list of benefits to your product? Or does a combination of both perform even better?
Everything you learn from analysis and experiments will deepen your understanding of your users and allow you to provide even better insight in future.
This deep thinking, tapping into customer psychology and questioning every interaction, is what allows you to create a great user experience for as many of your users as possible, while also maximising commercial KPIs.
2. Stakeholder relationship building
In most analytical fields, you’re advising on business decisions which drive fairly immediate commercial benefit: ‘switch off this campaign’, ‘buy more stock this month’, ‘increase the price of this product’. Most of the time your recommendations will be accepted and acted upon without too much questioning.
As a Product Analyst, many of your recommendations will be focusing on improving the experience for a subset of your users, not always resulting in immediate or observable commercial benefit. Additionally, you’ll often be providing recommendations to colleagues with different backgrounds to you — advising UX & UI designers, brand teams, and even copywriters.
This makes the relationship you have with your stakeholders crucial for driving change. Leadership might be reluctant to greenlight a project or feature without seeing a figure first. Creative and brand experts won’t always take kindly to a ‘number cruncher’ telling them they need to change the layout of a page, the design of a feature, or the copy on a CTA.
Depending on the appetite for change, you may also need to educate senior stakeholders on the benefits of optimising for UX, even where it might seem ‘pointless’. Being able to explain the effects of a ‘sum of small parts’ approach (over time, lots of little changes will add up), and the importance of a great user experience in not only increasing onsite conversion, but also brand warmth, peer-to-peer promotion and ultimately retention, will help you get ‘the business’ onside.
On the other hand, your more creative stakeholders may be reluctant to make changes that don’t fit with certain design principles or a brand vision, even if you have data to show that these changes will provide a commercial benefit. You could be an analytical genius, but if you always defer to the numbers, and find it difficult to understand why anyone wouldn’t, you might come unstuck in a Product Analytics function. Understanding that maintaining a certain brand identity or adhering to an overarching business vision will sometimes win out over UX, or even short-term commercial benefit, will allow you to have collaborative and productive conversations about sensitive issues and ultimately drive positive change on your digital product.
Over time, the relationships you build with your stakeholders will allow you to work together to find solutions that not only benefit the customer and fit the brand & business goals, but also provide the commercial impact that as analysts we are trained to try and maximise.
It’s important that as a Product Analyst you take the time to understand not only your stakeholders and what excites them and frustrates them (not dissimilar to what you do with your product users), but also the brand, tone-of-voice, and business mission.
3. Experimentation mindset and processes
You can’t write an article on Product Analytics without a section on experimentation (a catch-all term for A/B, A/B/n and multivariate testing).
In an ideal world, as a Product Analyst you will spend time performing deep-dive analysis on different areas of your digital product. These periodic analyses will help you to uncover areas of opportunity — is there a step in a funnel that only has a 30% success rate while all other steps have over 60%? Is there an important call-to-action that nobody clicks on? Or a page that’s supposed to be engaging but nobody scrolls past 20%? These are your areas of opportunity. Sometimes they jump out at you, but other times, particularly once you’ve taken all of the ‘low-hanging fruit’, you will need to look a bit harder or cut the data a bit differently. The main thing is to keep comparing — look at trends over time, compare different user groups, different pages, different buttons. These comparisons are what make those areas of opportunity present themselves. Once you’ve uncovered what you believe to be an area of opportunity, come up with as many hypotheses as you can as to why it doesn’t perform as you might expect. These can be as obvious or as outlandish as you wish — they are just hypothetical, of course.
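The step-by-step funnel comparison described above can be sketched in a few lines of code. This is a minimal illustration, assuming you can export rows of (user_id, step_reached) events from your analytics tool; the step names and data shape are hypothetical, not a specific vendor’s schema.

```python
# Minimal funnel audit sketch: for each step, what share of users who
# reached it also reached the next step? Steps and shapes are illustrative.
FUNNEL = ["landing", "product", "basket", "checkout", "confirmation"]

def step_success_rates(events):
    """events: iterable of (user_id, step) tuples.
    Returns {step: share of users reaching it who also reached the next step}."""
    reached = {step: set() for step in FUNNEL}
    for user, step in events:
        if step in reached:
            reached[step].add(user)
    rates = {}
    for prev, nxt in zip(FUNNEL, FUNNEL[1:]):
        if reached[prev]:  # skip steps no user reached
            rates[prev] = len(reached[nxt] & reached[prev]) / len(reached[prev])
    return rates
```

A step whose rate sits far below its neighbours (the 30% vs. 60% situation above) is your candidate area of opportunity to dig into.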
These hypotheses then become the basis for your experiment ideas. Of course, in reality, we are all users of apps and websites so many ideas may also come from stakeholders across the business, based on a hunch or a competitor’s product. For this reason it’s important to have a way to document and prioritise any ideas that come in.
You can use RICE (reach, impact, confidence, effort), or create your own calculations depending on the business priorities. Often, there’ll be ideas everyone is very excited about, and your prioritisation numbers won’t mean anything because people just want to run them ASAP. To account for this, I like to include a column allowing for a subjective rating of ‘Curious’, ‘Interested’ and ‘Excited!’ — with a higher weighting given to ideas that people will continue to talk about until you set them live. This is another example of when you might have to go against your analyst instinct and not rely purely on the numbers — although of course when it comes to calculating sample sizes and validating the results, you’ll need to put your statistics hat on.
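To make the scoring concrete, here is a hedged sketch of RICE with the subjective excitement column folded in as a multiplier. The weightings and field names are my own illustration, not a standard formula.

```python
# RICE prioritisation sketch: reach * impact * confidence / effort,
# nudged by a subjective excitement multiplier. Weights are illustrative.
EXCITEMENT_WEIGHT = {"Curious": 1.0, "Interested": 1.1, "Excited!": 1.25}

def rice_score(reach, impact, confidence, effort, excitement="Curious"):
    """Classic RICE score, scaled up for ideas the business is loudly
    excited about so they rise up the prioritised backlog."""
    base = reach * impact * confidence / effort
    return base * EXCITEMENT_WEIGHT[excitement]
```

Sorting your backlog by this score gives you a defensible ordering, while still letting the ‘Excited!’ ideas that people won’t stop talking about float towards the top.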
Make sure you have a cross-functional working group that meets weekly, containing at the very least a designer, a developer and an analyst (you). Ideally, the group should be empowered to make decisions and to drive the experiments roadmap. Having to get every experiment signed off will slow you down and minimise your impact. This may require a culture-shift, so be prepared to evangelise about the benefits of experimentation.
While we all get excited about certain tests, it’s important in your role as a Product Analyst to remain unbiased. If a test you had strong conviction in fails, you’ve just learnt something about your users, perhaps something you weren’t expecting. That is valuable knowledge, even if it’s not the outcome you hoped for. Document your learnings, hypothesise off the back of them, and so the cycle continues.
4. Knowing what to track and what to leave
A short but important one. It is tempting to add events everywhere — you might find your stakeholders give you wishlists of things they want to know (or think they want to know). As a Product Analyst it’s your job to determine where there might be value, and what is just data for data’s sake.
For example, in most cases it is not necessary to track when a drop-down is opened, selected, and closed. At most you’ll want an event that passes when a user selects an option, with a property containing what the selection was. You can determine if users are chopping and changing by finding out how many times that event fires in that user’s session.
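The ‘chopping and changing’ check above can be done with a simple per-session count of the select event. This is a minimal sketch assuming events are exported as dicts; the field names (`session_id`, `event_name`, `selection`) and the event name `dropdown_select` are hypothetical.

```python
# Count how many times the (hypothetical) dropdown_select event fired per
# session: a count above one means the user changed their selection.
from collections import Counter

def selection_changes(events, event_name="dropdown_select"):
    """events: iterable of event dicts. Returns {session_id: fire count}."""
    return Counter(
        e["session_id"] for e in events if e["event_name"] == event_name
    )
```

One selection event with a descriptive property answers the same question that three separate open/select/close events would, with a fraction of the tracking overhead.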
If you don’t own and implement event tracking, make sure to build relationships with product teams and make yourself a part of the development process. Ensure naming conventions are consistent and descriptive, make proper use of event properties to avoid having too many events, and perform regular audits. You’ll be a power user of events, so make sure you have a say!
5. Persuasive storytelling
This is relevant for all kinds of analysts, but I have found that in Product Analytics this skill comes in particularly useful. With the digital product being so outward facing, it is often an area that many colleagues and leaders will have opinions on. There can also be fear around rocking the boat and making any large changes because of its visibility, so having conviction in your analysis, and being able to present it persuasively so that your recommendations seem like the obvious choice, is key.
As an analyst, you will have no problems reading a table of data, or a complex chart, and knowing exactly what it means. However, this won’t come as naturally to all stakeholders — either they’re not exposed to lots of data, or they’re just after an easy-to-digest summary and don’t have the time or inclination to read a four paragraph email interspersed with Excel screenshots.
Decks are your friend. Tell a story using the titles, keep charts simple, and always include recommendations (otherwise, what was the point of the work you just did?). The recommendations should make sense to anyone who has just read your short, engaging story. At a minimum I recommend the following slides:
A couple of top-line bullet points and the punchline (your recommendations) — many people won’t read past this.
Analysis purpose and notes
Why did this analysis come about? A simple chart here can help to explain. Include any caveats here if there are any.
1–12 story slides
Each slide title should flow from the previous, taking the reader through the important parts of your discovery — the charts and tables are just visual aids. For example — an extremely simple version of this method might read:
Story slide 1
Title: “We noticed that click-through rate on the beetroots page is lower than it is on the carrots page”
Visual: A line chart showing the click-through of each page, perhaps with two scorecards with the overall click-through rate for the last x number of days
Story slide 2
Title: “This appears to be driven primarily by users who have not ordered before”
Visual: The same chart but for new users only (chart annotations must be very clear)
Story slide 3
Title: “The beetroots review score is 3.9 vs. 4.1 on the carrots page, which may explain the lower click-through rate”
Visual: A screenshot of each page with the review score circled
Findings and summary
A few bullet points with your main findings and any hypotheses you have e.g. “It could be that review scores under 4 put new customers off”
One or more recommendations for experiments or changes e.g. “change the review score from a number rating to a star icon rating, to see if seeing the number ‘3’ puts new customers off.” Depending on the stakeholders, putting numbers or revenue in here can help with persuasion — “If we increase beetroots click-through rate to the same level as carrots, we could see a weekly revenue uplift of ~£100k”.
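The revenue framing at the end of the example is just back-of-envelope arithmetic, and it can help to show your working. Here is a sketch of that calculation; all inputs are invented for illustration, and it assumes downstream conversion and order value hold constant, which is a simplification.

```python
# Back-of-envelope uplift estimate: extra clicks from a higher click-through
# rate, carried through an assumed downstream conversion and order value.
def weekly_uplift(weekly_visitors, current_ctr, target_ctr,
                  downstream_conversion, avg_order_value):
    """Estimated extra weekly revenue if click-through reaches target_ctr."""
    extra_clicks = weekly_visitors * (target_ctr - current_ctr)
    return extra_clicks * downstream_conversion * avg_order_value
```

With, say, 500k weekly visitors, a one-point click-through improvement, 50% downstream conversion and a £40 order value, the estimate lands at roughly £100k a week — the kind of figure that helps a recommendation get prioritised.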
The above example is very simple and probably isn’t an analysis worthy of a deck, but when you’re performing deep dives into long or convoluted user journeys, a persuasive story can mean the difference between your work being ignored, or having some really impactful site changes prioritised on the backlog.
It can be tempting to add everything to a deck — every slice of the data, the methodology you used, any statistical tests — but you should avoid this. The stakeholders who want to get into the detail will ask you about it, but ultimately persuasive storytelling is almost always a case of less-is-more. Ideally you want to be presenting in person to the decision-makers, but inevitably this isn’t always possible, so make sure your deck is foolproof and can be easily understood by somebody who doesn’t have you there to walk them through it.
The Product Analysts of the future
Many more experienced Product Analysts may have found, like me, that they fell into the profession. I started my career as a jack-of-all-trades in ecommerce: copywriting, artworking, merchandising, reporting. But it wasn’t enough for me to just report on performance; I wanted to know why users were doing what they were doing — why didn’t the homepage image get many clicks this week? What was stopping them from using the search bar? Is the navigation intuitive enough? Luckily for me, as my career developed, so did the field of Product Analytics.
With online life now being an extension of real life, we may start to see graduates expressing an interest as soon as they begin their careers — people assume (mostly correctly) that their every move is tracked online, so it stands to reason that there is a job which involves looking at that data. Unlike in Data Science or BI, a technical (and to some extent statistical or mathematical) background is secondary to an aptitude for understanding and interpreting user behaviour.
We might find the most impactful Product Analysts of the future by writing job descriptions to appeal to a mindset rather than a skillset.