PMetrium Native — let’s measure the performance of native applications

Mykola Panasiuk
Published in GR8 Tech
13 min read · Nov 24, 2022


PMetrium Native — testing the performance of mobile applications

Hi, I’m Mykola Panasiuk, QA Tech Lead at Parimatch Tech. I am responsible for functional test automation, as well as for developing frameworks and tools for non-functional testing.

Together with Performance QA Engineer Pavlo Maikshak, we developed “PMetrium Native” — a tool for performance testing of mobile applications.

This post will be equally useful to beginner and experienced native developers, as well as to QA engineers who are ready to look at mobile applications from a different angle.

Homo sapiens and smartphones

A few years ago, it was hard to imagine that most people’s lives would change so drastically: no more usual mass events, Monday-to-Friday nine-to-five office weeks, or buses full of people, and Homo sapiens depicted not with a stick anymore, but with a smartphone.

COVID-19, and now Russia’s war of aggression against Ukraine, have ruthlessly barged into our lives. These and many other factors have made the smartphone almost indispensable for everyone. Now, with a mobile app, you can order food, watch a movie, hold a video conference and donate to the Armed Forces of Ukraine, all without ever leaving your home.

Our reality requires quick reactions, maximum adaptation, and receiving information here and now. In other words, we need speed, and in our case that means maximum performance of mobile apps. This is what we are striving for in our company’s mobile application.

We needed a tool that would let us obtain performance metrics (CPU load, RAM usage, network transfer rates in bytes/s, battery drain in mAh, and so on) from both the smartphone itself and the target mobile application, and then compare this data with what was actually happening on the smartphone.

What the market offered

We started by researching off-the-shelf tools and approaches on the market. First, we tried AppSpector, a good tool that let us track metrics almost in real time. However, we had to abandon it because it requires integration into the mobile application’s code.

Then we found Apptim; to some extent, this tool was closer to what we were looking for. Apptim works as a standalone tool that does not require integration into the mobile application’s code base and interacts with both Android and iOS devices. After working with it for a while, however, we realized it lacked flexibility in certain regards. CI integration was only available with an enterprise subscription, which cost a substantial amount of money. We even wrote to Apptim support for more detailed advice, but we never heard back from the company.

Of course, there were and still are the standard profiling mechanisms and approaches offered directly by IDEs, but they are more about “checking and fixing here and now,” when the problem is already known. What we wanted instead was a tool that would be easy to integrate into CI/CD.

To be honest, at first our expectations were perhaps a bit too high. We had previously integrated Sitespeed.io, an Open Source tool that collects performance metrics of websites (in my subjective opinion, the best Open Source solution I have ever seen), into our pipelines with great success. We expected to find something similar for mobile applications.

The reality turned out not to be what we had expected. While the market for server-side and client-side performance testing is already quite mature, the same cannot be said about performance testing of mobile apps. The information was scant. So we decided to reinvent the wheel our own way with PMetrium Native — a tool for performance testing of mobile applications.

The Making of PMetrium Native

After researching ready-made tools and approaches for mobile application performance testing, we already had a rough idea of what exactly we were looking for, and accordingly, what we wanted to build as a result. At the same time, we put forward the following requirements for the future tool:

  • compatibility with the Android and iOS platforms;
  • no integration into the mobile application code;
  • no manual interaction needed to collect metrics;
  • easy integration into functional automated tests if necessary;
  • easy integration into CI/CD;
  • easy access to metrics, with a clear structure and visualization;
  • stability and reliability.

So as not to tackle everything at once, and considering that none of us had any expertise in performance testing of mobile applications beyond server-side and client-side experience, we decided to slice the elephant into parts before eating it.

We started with the Android platform, as it is more open and flexible for this kind of experiment. Within a few weeks we had the first prototype of the tool in our hands, and at the end of June 2022 we made the first internal release of PMetrium Native, one we were at least not ashamed of.

Here we need to digress a bit and note that, at the time of writing, we have released PMetrium Native as Open Source Software (OSS) only for the Android platform, so for now we are sharing our experience and implementation examples specifically for this operating system. However, we are actively working on support for the iOS platform, which will be available in PMetrium Native very soon, along with additional functionality for Android.

You may ask, “Then why publish the article now, and not when everything is ready, at least for both Android and iOS?” The question has merit, and the answer is simple: there will never be a good time. There will always be something in our backlog, so the tool will never feel quite awesome until we finish it. And “awesome” often requires time, careful analysis, development and testing. Besides, we really love the OSS world and hope that our tool will be useful to someone.

PMetrium Native under the microscope

Architecture

Let’s finally dive into what PMetrium Native is. Please see the logic diagram (Fig. 1). In purple are the logical elements that exist without PMetrium Native and are well known to any mobile application development team. The elements that our tool and its components add are highlighted in orange.

Fig. 1. PMetrium Native logic diagram (image by author)

PMetrium Native is a web host that exposes an open RESTful API for interaction. It must be hosted on a workstation, in other words, a work machine or even a laptop that has physical access to the mobile devices, so that it can interact with them and capture performance metrics.

The tool stores the results of its work, that is, metrics, in a Time-series database — in our case, InfluxDB. Then all that remains is to visualize the collected metrics. For this purpose, we chose Grafana, because it is a de facto standard in our field.

We’d like to point out that when we talk about PMetrium Native, we mean the entire stack including PMetrium Native host, InfluxDB and Grafana.

All right, let’s move on to the process diagram (Fig. 2). As you can see, the usual process of running functional automated tests is not affected in any way and is not coupled to the collection of performance metrics. This means the code of those tests will not change when you need to collect performance metrics in parallel.

PMetrium Native is asked to start collecting data before test execution begins and to stop once it is finished (the green and red blocks, respectively). By the way, for iOS devices the process diagram will look almost the same.

Fig. 2. PMetrium Native Process Diagram (image by author)

Practical use

It’s time to demonstrate how we can capture performance metrics from an Android device and the mobile application itself using PMetrium Native. Suppose we already have one automated functional test. For example, it opens the app, goes to the home screen, and then goes to another screen. Next, you need to call the PMetrium Native API as follows:
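Here is a minimal sketch with curl (the /Start and /Stop endpoints are mentioned later in this article; the host, port, and query-parameter names are placeholders of ours, so check the GitHub repository for the exact API):

    # start collecting metrics before the test (host, port and parameter names are illustrative)
    curl "http://localhost:7777/Start?device=<ADB_DEVICE_NAME>&app=<APPLICATION_NAME>"

    # ... the functional automated test runs here ...

    # stop collecting metrics after the test
    curl "http://localhost:7777/Stop?device=<ADB_DEVICE_NAME>&app=<APPLICATION_NAME>"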

where:

  • ADB_DEVICE_NAME — the name of the device as reported by the adb devices command;
  • APPLICATION_NAME — the name of the mobile application to be analyzed.

Note that no direct interaction with the mobile application itself is required. It is also worth noting that curl is shown here only as an example; you can freely use the HTTP client of your preferred test framework. You can learn more about PMetrium Native, its API, and implementation details in our GitHub repository.
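For instance, here is a hypothetical Kotlin helper using the JDK’s built-in HTTP client; the endpoint and parameter names mirror the sketch above and are our assumptions, not a fixed part of PMetrium Native:

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    private val client: HttpClient = HttpClient.newHttpClient()

    // Call the PMetrium Native API; invoke with "Start" in the setup
    // hook and "Stop" in the teardown hook of your test framework.
    fun pmetrium(action: String, device: String, app: String) {
        val uri = URI.create("http://localhost:7777/$action?device=$device&app=$app")
        val request = HttpRequest.newBuilder(uri).GET().build()
        client.send(request, HttpResponse.BodyHandlers.ofString())
    }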

After the test, the metrics will be automatically processed, and the results will be available as a visualization on the Grafana Dashboard (Fig. 3).

Fig. 3. Visualization of metrics from PMetrium Native (source — Grafana)

It’s worth making a few comments about what you can see on the visualization:

  • PMetrium Native collects both the metrics of the system itself and the metrics of the mobile app, but it is possible to easily disable/enable the collection of the metrics you are interested in using the API;
  • the vertical lines are annotations from Grafana;
  • the green annotation is the point in time when the collection of metrics began;
  • the red annotation is the moment in time when the collection of metrics ended;
  • blue annotations denote events that occur when interacting with the application;
  • PMetrium Native supports multiple mobile devices simultaneously, which means that it is possible to run tests on several devices in parallel;
  • thanks to the convenient filters of the Grafana Dashboard, you can easily find the launch you are interested in;
  • the two bottom panels display the timing of events that occurred while your application was running, the same events shown as blue annotations;
  • by default, PMetrium Native collects all the metrics it can detect; the same applies to in-app events.

Now it is worth dwelling a little more on the blue annotations showing events in the mobile application. Where do they come from and why do we need them at all? When collecting performance metrics, you want to understand why, for example, CPU activity or network usage increases at a certain point in time.

To answer such questions and correlate activity in the mobile application with the metric readings, PMetrium Native uses events that can be configured in the mobile application. And yes, integration into the mobile application code is required at this point, although it contradicts one of our initial conditions; however, it is not necessary for the normal operation of PMetrium Native. Consider it an additional bonus for those who always want more.

For the Android platform, it is enough to add one line to the code of your mobile application for each event you want to track (there can be several such events). It will create a record in logcat (the logging mechanism for Android apps):
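A sketch assembled from the explanations below (the method name is whatever your logging wrapper provides):

    // one line per event; the tag PERFORMANCE_TESTING is mandatory
    LogWrapper.d("PERFORMANCE_TESTING", "Some event " + System.currentTimeMillis())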

where:

  • LogWrapper.d() is simply the name of the method, which may differ in your code; what matters is that it writes the entry to logcat;
  • PERFORMANCE_TESTING is the mandatory tag name for the log;
  • “Some event” is an arbitrary event name that makes sense to you;
  • System.currentTimeMillis() is the current time in milliseconds (Unix time).

As a result, logcat should record events and PMetrium Native will easily read and process them:
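An illustrative logcat record in the standard threadtime format (the date, time, and process IDs here are made up):

    11-24 10:15:32.118 12345 12345 D PERFORMANCE_TESTING: Some event 1669284932118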

Note that the concept will be similar for the iOS platform, but it has not yet been finalized.

But how do we capture performance metrics on the Android platform? PMetrium Native is OSS, so we will reveal our cards right away. On Android, we tried to use the arsenal of tools already available to us. Since the Android operating system (OS) is based on the familiar Linux kernel, we can connect to the device’s own shell, call the commands we need, and get specific metrics here and now.
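For example, one-off snapshots like these can be taken from the workstation (illustrative commands of ours; the actual ones PMetrium Native runs live in its shell script):

    adb -s <ADB_DEVICE_NAME> shell "top -b -n 1 | head -n 5"   # CPU snapshot
    adb -s <ADB_DEVICE_NAME> shell "head -n 3 /proc/meminfo"   # RAM snapshot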

If you put such a command into a loop and run it at a certain frequency, say once a second (in practice, roughly once every 0.9 seconds, leaving time for the command itself to execute), you get a rudimentary monitoring system. In practice, when the /Start API is called, PMetrium Native connects to the phone via the ADB protocol, transfers a shell script (phoneMetrics.sh) into the internal memory of the device and starts it, and then stops its execution after the end of the test, when the /Stop API is called. All collected metrics are saved to text files in the internal memory of the device.

After that, the text files with the metrics are read and processed, and all the processed metrics are stored by PMetrium Native in InfluxDB.
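For illustration, here is a simplified sketch of the part of the script that collects frame metrics; the real phoneMetrics.sh is in the GitHub repository, and the stop-flag file used here to end the loop is a hypothetical stand-in for the script’s actual start/stop mechanism:

    # run until PMetrium Native removes the (hypothetical) stop flag
    while [ -f /data/local/tmp/pmetrium.run ]; do
        ts=$(date +%s)
        # frame metrics for the app under test, prefixed with a timestamp
        # and appended to a file on the device; APPLICATION_NAME is
        # assumed to be passed to the script by the tool
        dumpsys gfxinfo "$APPLICATION_NAME" \
            | grep -E "Total frames rendered|Janky frames" \
            | sed "s/^/$ts /" >> /data/local/tmp/frames.txt
        sleep 0.9
    done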

The fragment above covers only the part of the script that collects frame metrics from the mobile application; the entire script is too large to reproduce here. As you can see, we used the dumpsys utility for the frame metrics; you can read more about it on its official page. Most of the metrics that PMetrium Native collects from the mobile application itself are obtained through the same dumpsys utility, but it is not quite suitable for collecting metrics of the operating system itself. For that, we use the following:

  • the /proc/meminfo file — for RAM metrics;
  • the top utility — for CPU metrics.

Network activity metrics deserve a special mention. In the first version, PMetrium Native can capture network metrics only for the mobile app itself, and it does so not through the dumpsys utility but with the Nethogs utility.
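In trace mode, Nethogs prints per-process sent and received traffic that can be parsed from its output. An illustrative invocation, assuming a nethogs binary is available on the rooted device:

    adb -s <ADB_DEVICE_NAME> shell "su -c 'nethogs -t -c 5'"   # 5 refresh cycles in trace mode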

This approach has its drawbacks:

  • Nethogs needs ROOT access to work, which many developers may find problematic;
  • according to our observations, it works well with classic applications such as online stores, but it was not able to capture metrics from the YouTube application, so it does not always work as expected.

But there’s good news: we have found a much better way to obtain network-usage metrics for both the application and the system, and without ROOT access. This feature is scheduled for the second release of PMetrium Native, along with support for the iOS platform.

Also, we should mention that you will find a directory called Localhost in the PMetrium Native repository on GitHub. It contains everything you need to quickly launch the required infrastructure in Docker (for example, InfluxDB and Grafana with a minimal set of mandatory settings). Needless to say, it is difficult to cover all the details and prerequisites of a full-fledged PMetrium Native deployment in one article, which is why we have prepared comprehensive documentation in the same GitHub repo.
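Assuming the directory contains a standard Docker Compose setup (the exact files and commands are described in that documentation), launching it locally could look like this:

    cd Localhost
    docker compose up -d   # brings up InfluxDB and Grafana with the minimal settings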

It would be wrong of us not to mention the tool’s shortcomings. Some of these limitations you may find subjective, others objective:

  • the authors of PMetrium Native used only the knowledge and tools available to them when creating it, so it is quite natural that many things could be done better or more optimally;
  • on the Android platform, the metrics-collection script itself consumes a significant amount of system resources (10–16% of CPU time by our measurements, depending on the physical characteristics of the device), although the load is uniform across the whole time scale. We hope to improve this in future releases; an easy mitigation is to disable the collection of metrics you do not need, reducing the load on the system;
  • to collect events from the mobile application, you still need to make some changes to its code;
  • ROOT access is still required for network metrics in the first version of the tool.

Afterword

Although at first glance everything looks simple and clear, don’t let this mislead you. This “simplicity” hides many engineering problems we had to solve, along with hundreds of hours of work.

I would like to express my gratitude to Anton Boyko, who believed in this tool and made a significant contribution to its creation and to publishing it as OSS. Let’s be honest: not all problems have been solved yet, but we are working hard on them, as well as on expanding the functionality.

In our opinion, the mobile app performance testing market is not yet mature; the number of Google search results alone speaks volumes about this. For our part, we strive to make the situation at least a bit better, and we believe that we will succeed together with the great power of OSS.

Of course, some engineers will like our tool, and others will find its functionality lacking. There may well be better experts than us who will find many flaws in PMetrium Native, but we would appreciate constructive criticism, as it will help us make the tool better. Well, “c’est la vie,” as the French say.

We hope you found this post informative and not too boring, enjoyed the read, and now understand what PMetrium Native can do for you when testing the performance of mobile applications. That’s all we have prepared for you in the first version of PMetrium Native. Stay tuned for more!
