Realtime Mobile Performance Logging

Michael Short · Published in spaceapetech · Nov 15, 2018 · 6 min read

Introduction

When I worked in console development it was relatively easy to discover and record performance metrics. Although there are multiple iterations of console hardware, the number of variants is still pretty low: Xbox One and Xbox One X, for example, or PlayStation 4 and PlayStation 4 Pro. So you can run your game, fire up a profiler, and know that your title will perform the same on every console out there as it does on your dev unit.

In the mobile world this is completely different. The closest we have to fixed platform hardware is Apple's iOS devices, and even then there are tens of hardware variations: everything from the original iPhone, with 128MB of RAM, a PowerVR GPU running at 103MHz and a 412MHz 32-bit processor, up to the recently released iPhone Xs, with 4GB of RAM, a custom Apple quad-core GPU clocking in at 1.1GHz and a six-core Apple A12 processor running at 2.49GHz. Check out Wikipedia for more information on these device specs.

The capabilities of these devices, and of those in between, vary massively. And then you have to factor in Android devices: in 2015 it was reported that there were over 24,000 different Android devices, a 28% increase on the previous year. During development it's impossible to account for performance on all of them.

Solution

So how do we monitor and record performance statistics across all of these devices? We don't have the capacity to sit down, connect each and every device to our laptops and profile the game. Instead, we get our players to help us out.

Profiling in Xcode

During development each game team defines its minimum, average and high-end spec devices for both Android and iOS. For example:

Minimum

  • iPhone 5s
  • Samsung Galaxy S5

Average

  • iPhone 8
  • Samsung Galaxy S7

High End

  • iPhone Xs
  • Samsung Galaxy S9

We then use these devices as our core test bed — we test our game on many more devices than this, but for simplicity these are our target devices — much like you have target hardware configurations on PC. We regularly run our games on these devices, log statistics and profiler runs and then fix any issues that occur.

But what of the thousands of other devices out there that we can't possibly test on? For those we have a performance tracking and logging system built into our games. Each team decides what it wants to record and when; some teams choose to record statistics during UI, some during core gameplay, and so on. An example of some of the data that we record (a minimal recorder sketch follows the list):

  • Average frame time
  • Average frame rate
  • Memory consumption
  • Battery drain
  • Load times
  • CPU frequencies
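
Our in-house recorders aren't public, but to give a feel for how simple a recorder can be, here's a minimal sketch of a frame-time/frame-rate recorder in Unity. The IRecorder interface is a hypothetical stand-in for the package's real recorder contract.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical interface standing in for the package's recorder contract.
public interface IRecorder
{
    void Tick();                          // called once per frame while an event runs
    Dictionary<string, float> Results();  // aggregated metrics for the event
}

// Accumulates unscaled frame times and reports the average frame time and rate.
public class FrameRateRecorder : IRecorder
{
    private float _elapsed;
    private int _frames;

    public void Tick()
    {
        _elapsed += Time.unscaledDeltaTime;
        _frames++;
    }

    public Dictionary<string, float> Results()
    {
        float avgFrameTime = _frames > 0 ? _elapsed / _frames : 0f;
        return new Dictionary<string, float>
        {
            { "avgFrameTimeMs", avgFrameTime * 1000f },
            { "avgFrameRate", avgFrameTime > 0f ? 1f / avgFrameTime : 0f }
        };
    }
}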

Alongside these figures we also log anonymised device information, such as the following (a collection sketch comes after the list):

  • CPU clock speed
  • CPU core count
  • CPU model
  • GPU clock speed
  • GPU model
  • Graphics API (OpenGL ES 2.0, 3.0, Vulkan, Metal, etc.)
  • Available memory
  • Operating System
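
Unity exposes almost all of these fields through its SystemInfo API, so collecting them is straightforward. A sketch of what that collection might look like (the field names are illustrative; GPU clock speed notably isn't available from SystemInfo and needs a platform-specific plugin):

using System.Collections.Generic;
using UnityEngine;

public static class DeviceInfo
{
    // Gathers the static, anonymous device fields logged alongside each metric.
    public static Dictionary<string, string> Collect()
    {
        return new Dictionary<string, string>
        {
            { "cpuModel", SystemInfo.processorType },
            { "cpuCores", SystemInfo.processorCount.ToString() },
            { "cpuClockMHz", SystemInfo.processorFrequency.ToString() },
            { "gpuModel", SystemInfo.graphicsDeviceName },
            { "graphicsApi", SystemInfo.graphicsDeviceType.ToString() }, // e.g. OpenGLES3, Vulkan, Metal
            { "systemMemoryMB", SystemInfo.systemMemorySize.ToString() },
            { "os", SystemInfo.operatingSystem }
        };
    }
}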

We then send all of this data to our back-end data store (thanks, DevOps!). We use a platform called Elasticsearch to index and catalogue the information, and another platform called Kibana to visualise it. We are able to log and compare performance statistics across different builds, CPUs, GPUs, graphics APIs, operating systems, Unity versions and so on. The data can be visualised in many different ways (bar, line, pie and more) and then segmented further. We have entire dashboards that show the current status of our games.
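
Our writer implementations aren't public either, but conceptually each one just POSTs a JSON document to an Elasticsearch index over HTTP. A minimal sketch using UnityWebRequest, with a placeholder cluster URL and index name:

using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public static class ElasticWriter
{
    // Placeholder endpoint: Elasticsearch's single-document index API.
    private const string Url = "https://elastic.example.com/perf-metrics/_doc";

    // Posts one JSON document; run as a coroutine from a MonoBehaviour.
    public static IEnumerator Write(string json)
    {
        using (var request = new UnityWebRequest(Url, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
                Debug.LogWarning("Perf log upload failed: " + request.error);
        }
    }
}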

Examples

Disclaimer: all of the following graphs are from a pre-soft-launch game, and performance isn't always the top priority at that stage of development, when we are focused on gameplay and on ensuring that what we are developing is fun.

Here we are able to see the average frame rate for each level of one of our games, across all devices. If we spot an issue we can drill down further.

Average frame rate for each level in our game

Now we can see the average frame rate across various GPUs. When we notice a particular device is struggling we can grab it from the device drawer and profile it manually. If we don’t have that device to hand we can find another device with a similar GPU.

Average frame rate for each GPU

A more useful graph might be average frame rate per game build for our target devices. This way we can track the overall game performance from release to release and make sure that there isn’t a downward trend.

Average frame rate for each build, for each of our target devices

When people play mobile games they tend to do so in short stints, and they don't want to spend minutes waiting for a level to load, so it's essential to keep loading times to a minimum. Here we can see the load time in seconds for each level in our game. If something becomes unacceptable we can spot and address it quickly.

Load time (seconds) for each level
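
Capturing these timings needs no special tooling. As a sketch, assuming each level maps to a Unity scene, a Stopwatch around an asynchronous scene load does the job:

using System.Collections;
using System.Diagnostics;
using UnityEngine.SceneManagement;

public static class LoadTimer
{
    // Times an async scene load from request to activation and
    // hands the duration (in seconds) to a callback for logging.
    public static IEnumerator LoadAndMeasure(string sceneName, System.Action<float> onLoaded)
    {
        var stopwatch = Stopwatch.StartNew();
        yield return SceneManager.LoadSceneAsync(sceneName);
        stopwatch.Stop();
        onLoaded((float)stopwatch.Elapsed.TotalSeconds);
    }
}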

Finally, people aren't going to play your game if it drains too much battery. If they fire up your game for five minutes and then notice that their battery is sitting at 30%, they aren't going to play again. We're able to track battery consumption per level. Unusually high battery consumption is a sign of high processing cost or high bandwidth usage, so if a particular level is consuming a lot of battery power we can investigate further and address the issue.

Battery drain for each game level
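
Unity reports the battery level as a 0–1 fraction through SystemInfo.batteryLevel (or -1 on platforms that don't expose it), so a per-level drain recorder can be sketched in a few lines. This is an assumption about how such a recorder could work, not the package's actual BatteryLevelRecorder:

using UnityEngine;

public class BatteryDrainRecorder
{
    private float _startLevel = -1f;

    // Call when a level begins. SystemInfo.batteryLevel is 0..1,
    // or -1 where the platform doesn't report it.
    public void Begin() => _startLevel = SystemInfo.batteryLevel;

    // Call when the level ends; returns percentage points drained,
    // or -1 if either reading was unavailable.
    public float End()
    {
        float endLevel = SystemInfo.batteryLevel;
        if (_startLevel < 0f || endLevel < 0f) return -1f;
        return (_startLevel - endLevel) * 100f;
    }
}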

Usage

I made it as easy as I could for our dev teams to record and visualise data. Our performance logging system is shared via NuGet and comes with everything game teams need to start recording data. They can even submit custom data per recording or per event.

You simply create a test runner and add whatever recorders and writers you want:

// Metadata attached to every metric this runner records.
var gameData = new Dictionary<string, string>()
{
    {"build", AppConfig.BUILD_NUMBER},
    {"environment", AppConfig.ENVIRONMENT},
    {"game", "MyGameName"}
};

// Pair each recorder with the writer that ships its data to Elasticsearch.
var runner = PerformanceTesting.RunnerComponent.Create(gameData);
runner.Add(new FrameRateRecorder(), new FrameRateElasticWriter());
runner.Add(new BatteryLevelRecorder(), new BatteryLevelElasticWriter());
runner.Add(new CpuRecorder(), new CpuElasticWriter());
runner.Add(new MemoryRecorder(), new MemoryElasticWriter());

Then you start and stop events in your title whenever you want:

runner.StartEvent("perfTest1");
runner.StopEvent("perfTest1");

And when you’re finished, write the data out:

runner.Write();

On top of that, the DevOps team helped me set up a shared performance logging data store and visualisation platform. Anyone in the studio can view performance data for any game by logging on to a single website.

One of our games’ performance dashboards

Conclusion

In mobile game development, trying to record performance data across all of your target devices is almost impossible; there are just too many variations to consider. Make it easy for teams to collect and view performance data and in turn they will put it at the forefront of their development. Your games, and your studio, will benefit tremendously.
