Game analysis and building the right data infrastructure for free-to-play games

Andrew Waag
Published in ironSource LevelUp
Feb 6, 2020 · 10 min read

Andrew Waag, Manager of Data Analytics at N3twork, and Victor Wang, Product Lead at N3twork, join forces to break down how to choose the right KPIs for your game, best practices for analyzing your game's health, and how to build the right data infrastructure. This article was written while Andrew and Victor worked at N3twork, and is based on a previously recorded LevelUp podcast.

Most successful games companies have robust dashboards and data infrastructures in place that enable them to measure the performance of their games. Any studio with ambitions for long-term success should know what metrics to measure for specific titles and have a plan for putting those metrics to use.

KPIs are especially helpful for understanding four major aspects of your game: acquisition, retention, engagement, and monetization. Generally, the KPIs a gaming company chooses to focus on vary according to a variety of factors, including genre and level of maturity. For example, a hyper-casual games company would focus more on D1 retention and views of ad units per DAU, while a match 3-focused studio would focus more on D30 retention, LTV, and IAP-related KPIs.

Four approaches to KPIs

We measure things in order to help us understand them, and games are no exception. When starting out on a project, having a world of possibilities can be daunting. For a game that’s already live, refining what to monitor can prove challenging. You manage what you measure, so picking the right KPIs from the outset will have a long-lasting and pervasive impact on how you make decisions for your game.

The approaches to selecting KPIs are as numerous as the games that have been created. There’s no one-size-fits-all approach to thinking about game metrics, but there are a handful of viable archetypes that might serve as starting points while you consider what might serve you best.

Approach 1 — The Industry Standard Set

The most essential set of KPIs, which includes ARPDAU, D1/7/30 retention, ARPPU, and about 5–10 others, serves to measure a game's business performance and is usually of most interest to executive-level management. Games in soft launch will usually adopt this approach: because these metrics are commonly used to determine product health and are easily benchmarked against industry standards, they tell you whether your product has market viability. This approach is also adopted by games that do not have complex in-game economies or varied modes of gameplay, such as those in the hyper-casual category.

Consider using this approach when you’re just starting out, and want to see how you stack up against market comps.
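As a rough sketch of how a few of these standard KPIs fall out of raw activity data, here's a minimal Python example over a hypothetical table of per-player daily records. The field names and numbers are made up for illustration, not a prescribed schema:

```python
from datetime import date

# Hypothetical per-player-per-day activity records: (player_id, active_date, revenue).
rows = [
    ("p1", date(2020, 2, 1), 0.00),
    ("p2", date(2020, 2, 1), 4.99),
    ("p3", date(2020, 2, 1), 0.00),
    ("p1", date(2020, 2, 2), 0.99),
    ("p2", date(2020, 2, 2), 0.00),
]

def dau(rows, day):
    """Daily active users: distinct players seen on a given day."""
    return len({pid for pid, d, _ in rows if d == day})

def arpdau(rows, day):
    """Average revenue per daily active user."""
    day_rows = [(pid, rev) for pid, d, rev in rows if d == day]
    users = {pid for pid, _ in day_rows}
    return sum(rev for _, rev in day_rows) / len(users) if users else 0.0

def d1_retention(rows, cohort_day, next_day):
    """Share of players active on cohort_day who return the next day."""
    cohort = {pid for pid, d, _ in rows if d == cohort_day}
    returned = {pid for pid, d, _ in rows if d == next_day} & cohort
    return len(returned) / len(cohort) if cohort else 0.0

print(dau(rows, date(2020, 2, 1)))                              # 3
print(round(arpdau(rows, date(2020, 2, 1)), 2))                 # 1.66
print(d1_retention(rows, date(2020, 2, 1), date(2020, 2, 2)))   # 2/3 ≈ 0.667
```

In practice these would be SQL aggregations over a warehouse table rather than in-memory Python, but the definitions are the same.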

Approach 2 — The Broader Set

The next approach broadens the standard set of KPIs and includes just about every metric. While it's beneficial to have a lot of different metrics on hand, not all of them should be classified and treated as KPIs. It's a semantic distinction, but putting too many things front and center makes it harder to build strong data habits and distracts from the things that are truly crucial. If a metric doesn't make or break your game, don't be shy about putting it somewhere on your dashboard where people are free to look at it situationally.

Consider using this approach when you’re exploring your game’s character, generating metrics for internal customers with a wide variety of needs, or taking a nuanced and in-depth approach to monitoring a large-scale game.

Approach 3 — Custom KPIs

Mature gaming companies will create custom KPIs that are game dependent. Here, even metrics like DAU or ARPDAU may not be on the main dashboard. Instead, it may be metrics such as Regular DAU or DAC (daily active customers). This approach is great if you have a mature product that has found its market fit and you have enough data to justify focusing on these metrics to drive growth for your games. Ask yourself: what are the things that need to go well — or need to not go poorly — for your game to be successful? What can you measure that will help you understand how this is doing? If you need to measure something new in order to understand this, then that’s a good indicator that you should consider custom KPIs.
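The article doesn't define "Regular DAU" precisely, but to make the idea of a custom KPI concrete, here's one plausible definition as a sketch: players active on at least 4 of the last 7 days. Both the window and the threshold are assumptions you'd tune for your own game:

```python
from datetime import date, timedelta

# Hypothetical custom KPI: a "Regular DAU" on day D is a player active on at
# least `min_days` of the `window` days ending on D. These cutoffs are
# assumptions -- the right definition depends on your game's cadence.
def regular_dau(activity, day, window=7, min_days=4):
    """activity: dict mapping player_id -> set of dates the player was active."""
    window_days = {day - timedelta(days=i) for i in range(window)}
    return sum(
        1 for days in activity.values()
        if len(days & window_days) >= min_days
    )

activity = {
    "p1": {date(2020, 2, d) for d in range(1, 8)},  # active all 7 days
    "p2": {date(2020, 2, 1), date(2020, 2, 7)},     # only 2 of 7 days
}
print(regular_dau(activity, date(2020, 2, 7)))  # 1
```

The point is less the arithmetic than the definition: a custom KPI like this isolates the habitual players whose behavior your headline DAU number can hide.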

There are a handful of great examples of this outside of the gaming space. When you look at the North Star metrics of other successful tech companies, it's evident they're all very different. For Airbnb it's nights booked, for Facebook it's daily active users, and for WhatsApp it's the number of messages a user sends per day.

In most cases, custom metrics are necessary given how unique games have become over the years. The more complex a game's economy, the greater the need for custom metrics that capture player behavior. Top-of-the-funnel metrics often mask the less obvious shifts in player behaviors and motivations.

Consider this approach when you have conviction that the success of your game hinges entirely on the quality of specific, known behaviors and user experiences.

Approach 4 — A Mix of Custom and Industry KPIs

No two games have identical needs. Evaluate your game KPIs periodically, to ensure that they’re working for you and helping you make impactful decisions. Tailor your approach to your needs, and consider adapting your KPIs to convey information that will allow you to have productive conversations with your team and catch nascent issues.

Consider whatever works best for you, and don’t feel obligated to stick to one approach as your needs evolve.

But KPIs are just one piece of the puzzle

While crucial, the KPI approaches mentioned above should not be the only form of monitoring. Data generated by the game is only one piece of understanding game health. Your QA, customer support, research, and community teams all have unique insight into what’s going on in the game, and effective organizations find ways to give them a voice.

Strong customer support teams and QA teams, for example, will find ways to be involved in production discussions. The support team can provide Product with data on the number of customer service complaints and average response times, and QA teams can share the number of bugs and crash rates.

“Heartbeat surveys”, moreover, can also be helpful in understanding overall product health, and can provide feedback from your most active players. In some ways, this is more effective than looking at app ratings, since those ratings come from users of varying engagement levels.

Cost is another dimension to analyze. All of this data tracking has a cost, and it's worth keeping tabs on it to ensure it doesn't get out of hand. Product managers, especially those with P&L ownership, are constantly balancing benefits against costs, whether inside or outside the game. What's important to understand is that setting up your event telemetry efficiently, which you should already be doing because of the performance implications, has trickle-down effects on cost.

Data analysis best practices

There are a handful of common mistakes when it comes to data analysis. The two best practices below address them and should be the foundation of your approach.

Best practice 1 — Root cause analysis

If a metric is coming in below target, root cause analysis can help you get to the source of the issue. A basic example is unpacking a drop in revenue, a situation product managers are unfortunately all too familiar with. The goal is to break revenue down into its core components and see how each one contributes to the drop. For example, revenue decomposes into two key components: DAU and ARPDAU.

If ARPDAU is where we are seeing the majority of the decrease, then we can reduce that down into Payer Conversion Rate and ARPPU (Average Revenue per Paying User). If we see that the majority of that decrease is in ARPPU, we can reduce that further down into “Payments per Payer” and “Average Transaction Size”. Let’s say when you do that, you see that payments per payer went up by 20% but Average Transaction Size went down by 40%. Now, there are more actionable metrics to investigate, and the PM has to figure out the rest of the puzzle. In this scenario, it could be the case that a set of targeted offers did not perform as well as they should have.
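To make the decomposition concrete, here's a sketch with invented numbers that reproduces the scenario above (payments per payer up 20%, average transaction size down 40%). The component names and values are illustrative, not a standard schema:

```python
# Hypothetical before/after snapshots of revenue components. All numbers are
# made up to mirror the example in the text.
before = {"dau": 100_000, "payer_rate": 0.02,
          "payments_per_payer": 1.5, "avg_transaction": 10.00}
after = {"dau": 100_000, "payer_rate": 0.02,
         "payments_per_payer": 1.8, "avg_transaction": 6.00}

def revenue(c):
    # Revenue = DAU x payer conversion x payments per payer x avg transaction size.
    return (c["dau"] * c["payer_rate"]
            * c["payments_per_payer"] * c["avg_transaction"])

# Compare each component before vs. after to localize the drop.
for key in before:
    print(f"{key}: {after[key] / before[key] - 1:+.0%}")
print(f"revenue: {revenue(after) / revenue(before) - 1:+.0%}")
# payments_per_payer: +20%, avg_transaction: -40%, net revenue: -28%
```

Walking the tree this way turns "revenue is down" into "average transaction size is down 40%", which is a question a PM can actually investigate.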

Best practice 2 — Prioritization

The most common mistake in analysis is around priorities, more specifically, prioritizing the wrong type of analyses given the current state of the product. Ultimately, there are a ton of great questions to be asking about our games, and the stark reality is that many games teams are resource constrained. We have to be very methodical about the types of questions we want to prioritize in answering.

A good rule of thumb in determining analytics priorities is to think about the important decisions you’ll need to make as a games team in the near future and the key data points you’ll need in order to make the most sound decision. Doing so ensures that you’re focusing on the right things and that you’re getting rid of any future bottlenecks in your development or live ops process.

Building an infrastructure

The goal of all this analysis is to get data out of your game and into the hands of decision-makers, in a way that will be useful to them. There are a handful of essential components here, and you can't have a data pipeline without them. You need to record the things that happen in your game, discard invalid data, aggregate the valid data in a way that's meaningful to a person, and put the data somewhere it can be viewed by the people who need it. What this means is that you'll need a data pipeline that has components for filtering, processing, and storing data, as well as a reporting or visualization mechanism of some kind.
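These components can be sketched end to end in a few lines. Everything here, the event shape, field names, and the dict standing in for a warehouse table, is an assumption for illustration, not real infrastructure:

```python
# Toy end-to-end pipeline: filter invalid events, aggregate, store, report.
events = [
    {"player": "p1", "event": "purchase", "amount": 4.99},
    {"player": "p2", "event": "purchase", "amount": "oops"},  # invalid amount
    {"player": "p1", "event": "session_start"},
]

def valid(e):
    # Filtering: discard purchases whose amount isn't numeric.
    if e["event"] == "purchase":
        return isinstance(e.get("amount"), (int, float))
    return True

def aggregate(events):
    # Processing: roll raw events up into per-metric daily totals.
    out = {"purchases": 0, "gross": 0.0, "sessions": 0}
    for e in events:
        if e["event"] == "purchase":
            out["purchases"] += 1
            out["gross"] += e["amount"]
        elif e["event"] == "session_start":
            out["sessions"] += 1
    return out

store = {}  # Storage: stand-in for a warehouse table keyed by date.
store["2020-02-06"] = aggregate(e for e in events if valid(e))

# Reporting: the aggregated shape a dashboard would read.
print(store["2020-02-06"])  # {'purchases': 1, 'gross': 4.99, 'sessions': 1}
```

In a real stack each function becomes its own system (an ingestion endpoint, a batch or streaming job, a warehouse, a BI tool), but the responsibilities stay the same.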

Here are some best practices to keep in mind.

#1 — Set up alarms and monitoring

Things go wrong all the time, and having an automated eye on your data at all times can be one of the fastest ways to catch serious problems and bring them to your attention. Think about how you're going to set up alarms and monitoring, and whether you want a real-time or non-real-time approach. Give thought to whether real-time data is really necessary: it comes at a significant cost, both monetary and in performance, but the obvious benefit is the ability to react quickly to changes in the game that may break monetization or DAU growth. At Scopely, we had a real-time monitoring system that would alert the product team to any noticeable changes in monetization or economy balances that needed to be addressed right away. From that perspective, the added cost of having real-time data paid for itself.
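As a sketch of the simpler, non-real-time flavor of such an alarm, the check below flags a KPI whose latest value strays more than a few standard deviations from its trailing history. The metric, history, and threshold are all illustrative assumptions:

```python
import statistics

# Minimal batch alarm: flag a metric when today's value deviates from the
# trailing mean by more than k sample standard deviations. The choice of k
# and the length of the history window are tuning assumptions.
def check_alarm(history, today, k=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) > k * stdev

# A week of (made-up) ARPDAU values hovering around $0.50.
arpdau_history = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51]
print(check_alarm(arpdau_history, 0.50))  # False: within normal range
print(check_alarm(arpdau_history, 0.20))  # True: page the product team
```

A real-time version replaces the daily history with a sliding window over a stream, which is where most of the added cost comes from.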

#2 — Improve data quality

When data comes through improperly, it becomes impossible to build work on top of it, and cleaning it up by hand takes a lot of time. Essentially, it'll bring your work to a standstill. There are a few things you can do to improve data quality:

  • Get your QA team involved in analytics QA to the greatest extent you can, so that you have as many eyes on the data as possible.
  • Write test cases and give the team guidance on what areas to focus on with each update.
  • Make sure that the expected tracking for your game has been formally defined down to the data type, and discard data that doesn’t match this specification; without this, downstream systems won’t be able to properly handle incoming data.

As you get more sophisticated, you can define expected ranges and values for some of these fields as a way of further ratcheting up quality and monitoring for cheaters.
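Range and allowed-value checks of that kind might look like the following sketch. The field names and bounds are assumptions about a hypothetical game economy, not values from any real title:

```python
# Hypothetical per-field quality rules layered on top of type checks.
RANGES = {
    "gold_earned": (0, 10_000),  # plausible per-event bounds (assumed)
    "level": (1, 120),
}
ALLOWED = {"platform": {"ios", "android"}}

def quality_flags(event):
    """Return a list of human-readable problems found in one event."""
    flags = []
    for field, (lo, hi) in RANGES.items():
        if field in event and not lo <= event[field] <= hi:
            flags.append(f"{field}={event[field]} outside [{lo}, {hi}]")
    for field, allowed in ALLOWED.items():
        if field in event and event[field] not in allowed:
            flags.append(f"{field}={event[field]!r} not in {sorted(allowed)}")
    return flags

# An implausible gold amount: either a cheater or a tracking bug.
print(quality_flags({"gold_earned": 5_000_000, "platform": "ios"}))
```

Flagged events can be routed to a quarantine table for review rather than silently dropped, so the tracking bug or cheater behind them is still visible.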

#3 — Start thinking about data infrastructure early on

Data tends to be an afterthought in the production process, which leads to unforced errors that require massive cleanup efforts and take time away from analysis. Much of this can be improved by thinking about when you bring analysts into the game, how you involve them, and what your expectations for them are.

Mistakes to avoid include:

  • Leaving analytics out of the design documentation process
  • Adding tracking to your game only immediately before the Release Candidate build, leaving no time for QA
  • Building dashboards without regard for the ongoing maintenance cost
  • Designing the pipeline without fully working through the way it will scale

It doesn’t help that all of these come up when other things are urgently competing for your attention and you don’t feel like you can set aside the time.

#4 — Create a specification

Data predictability is a key element of making the journey of data from your game to a dashboard or report a success. To ensure this, create a specification for the data that’s going to be sent from the game. This works the same way that a car factory might have a spec — certain tolerances that a product must have in order to be allowed out of the facility and onto shelves. Having these definitions established gives your developers specific guidance on how to build the game, allows your QA team to determine whether tracking works as expected, and keeps wild data from contaminating your data pipeline. It only takes a single string in a numerical field to kill a job.
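A minimal version of such a specification can be expressed as data, with a validator that rejects events whose fields don't match the declared types, including the string-in-a-numerical-field case. The event and field names are illustrative; a real spec would also cover optional fields, nesting, and versioning:

```python
# Hypothetical spec for a "purchase" event: each field maps to its required
# type. Field names are assumptions for illustration.
PURCHASE_SPEC = {
    "player_id": str,
    "item_id": str,
    "amount_usd": float,
    "quantity": int,
}

def matches_spec(event, spec):
    """True only if every spec field is present with exactly the declared type."""
    return all(
        field in event and type(event[field]) is expected
        for field, expected in spec.items()
    )

good = {"player_id": "p1", "item_id": "sword", "amount_usd": 4.99, "quantity": 1}
bad = {"player_id": "p1", "item_id": "sword", "amount_usd": "4.99", "quantity": 1}

print(matches_spec(good, PURCHASE_SPEC))  # True
print(matches_spec(bad, PURCHASE_SPEC))   # False: string in a numerical field
```

The same spec document doubles as the contract for developers wiring up tracking and as the test oracle for the QA team.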

Looking back, what would we have changed?

Looking back, from a dashboards and KPI perspective, we were definitely very ambitious with the standard set of KPIs that we defined across our whole portfolio. It’s clear now that we were missing someone who could properly define the requirements and definitions of each one of those things. The biggest thing we’d change, though it seems like a small detail, is to be more involved in setting up and monitoring the custom tracking on the games that we’ve been working on. It would’ve been a big job, but it would’ve saved time that now needs to be invested into cutting tracking that fires too often, addressing blind spots that prevent us from doing analysis, and reworking redundant or inappropriate tracking that causes data anomalies.
