Mobile developers, are you performing?

Guillaume Le Roy
Published in The Qonto Way
7 min read · Feb 26, 2021

In a fast-expanding and competitive field like Qonto’s, whether a company thrives or struggles is often determined by the quality of its execution. How to build a high-performing mobile development team, and how to measure its performance, is my number one focus as a team lead. In this article, I’ll go through what we’ve learned and our current approach to this challenge.

From intuition to visualization

At Qonto, team leads like me receive regular training, sometimes directly from our co-founder Steve. This journey started during a coaching session where Steve asked me:

Steve: “Are you performing well?”

At first, I didn’t get what he meant by that.

Steve: “Well, surely you must know if you have crashes, bugs, a smooth delivery…those kinds of things.”

Me: “Well, we have Amplitude to monitor some user behaviors; we get notified by Firebase if we have too many crashes; we use Jira to keep track of bugs and…oh…and we look at our Kanbans in Notion, so I know what developers are working on and can see any blockers that appear in our production flow.”

Steve: “Okay, but can you tell me right now if everything is running as expected?”

Me: “Not really. I have the feeling it works. No one has raised any alarms since this morning, so we should be good…”

We continued this discussion, and then it finally hit me: I was flying blind, leading my team on feelings instead of concrete data. I had to clarify how we measure performance.

Clarifying our performance indicators

A mobile developer’s role is to deliver top-notch mobile applications, fast. As a manager, I am responsible for the smooth, high-quality, and frequent releases of our applications. Here are some of the control points I currently use to track our performance:

Delivery

  • Do we deliver on schedule, or do we miss our expected delivery dates?
  • Is our development workflow smooth, or do we encounter bumps like missing assets, missing texts, or mistakes in software design and coding?

Quality

  • Do we deliver with a high level of quality, or do we often have bugs and crashes that force us to go back to something we shipped previously?
  • Do we write code that is easy to read and work with, that is tested, etc.?

At that point, it was essential for me to get early feedback from the team and validate if these indicators made sense for them. They showed enthusiasm and brought additional ideas to the table.

First key learning: over-communicate your intentions to your team (in our case, growing our skills as a team). Without this, a performance indicator can easily be perceived as a way to monitor people, or as a system that highlights problems in order to shame them.

Eventually, we narrowed our ideas down to the points we considered relevant for defining our performance:

Quality

  • Number of bugs to be fixed
    aiming for high quality means 0 bugs
  • Test coverage of our codebase
    aiming for high quality means we should reach 80% coverage on non-UI code (one way to enforce such a threshold is sketched right after this list)
  • Number of crashes reported on app stores
  • Poor reviews on app stores
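
To make the 80% target enforceable rather than aspirational, a coverage threshold can be wired into the build itself. Below is a minimal Gradle Kotlin DSL sketch, assuming the JaCoCo plugin is applied to the module; the exclusion patterns for UI code and the exact task wiring are illustrative, not our actual configuration.

```kotlin
// build.gradle.kts — a minimal sketch, assuming the `jacoco` plugin is applied
// to this module; package patterns and thresholds are placeholders.
import org.gradle.testing.jacoco.tasks.JacocoCoverageVerification

tasks.withType<JacocoCoverageVerification>().configureEach {
    // Drop UI classes from the measured set (patterns are illustrative)
    classDirectories.setFrom(
        project.files(classDirectories.files.map { dir ->
            project.fileTree(dir) { exclude("**/ui/**", "**/*Activity*", "**/*Fragment*") }
        })
    )
    violationRules {
        rule {
            limit {
                counter = "LINE"
                minimum = "0.80".toBigDecimal() // fail the check below 80% line coverage
            }
        }
    }
}
```

Running `./gradlew jacocoTestCoverageVerification` then fails any build that falls under the threshold; on an Android codebase the wiring is a bit more involved, but the idea of failing the check below the target is the same.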

Delivery

  • Time for Merge Requests to be approved
    Merge Requests should be rapidly reviewed and merged
  • Size of Merge Requests
    Small Merge Requests are easier to review and thus approved faster
  • Number of comments on Merge Requests
    Too many comments indicate some issues during our design phase
  • Build time on developer machines and our Continuous Integration environment
  • Reworks (everything that we didn’t manage to do right on the first attempt)

Continuous improvements

  • Time we spend nurturing our platform (refactoring, updating dependencies, creating reusable components, tooling etc.)

The good news for us was that all this information was already available but scattered among various tools:

  • Gitlab to get Merge Request information
  • Jira to visualize the bugs we have to fix
  • Firebase to look at crash reports and ANRs (Application Not Responding errors)
  • Notion to see the progress (and delays) of our different projects
  • Segment/Amplitude to get useful behavioral information

In reality, having so many data sources makes it difficult to align a team on common goals and follow performance. For this reason, we decided to build a visual board where anyone could clearly see all our performance indicators.

Android performance board

Let’s take a look at the Android dashboard we created on Metabase and how we use it.

Android performance board earlier this year

This is the performance dashboard we were using in early 2021 with some of the data we started to collect. It consists of the following sections:

  1. First section: data from Gitlab, our platform for source code management. We use Merge Request properties to track how long they stay open, how big they are (diff size), and how many comments they receive (a sketch of how this data can be pulled follows this list).
  2. Second section: data from Jira, our bug-tracking tool. We extract data from Jira tickets to visualize our stock of bugs waiting to be fixed.
  3. Third section: data from developers’ computers, with the average build time per developer.
  4. Fourth section: data from Notion, the platform we use to organize our delivery flow. We extract two things:
    • On the left side: how much time we spend working on features versus scaling tasks.
    • On the right side: how much we overshot expected delivery dates during the past month.
  5. Fifth section: data from spreadsheets and from Firebase: product requirements that changed after the development phase started, and app crashes.
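
For the curious, the kind of data behind the first section can be pulled straight from the GitLab REST API. The snippet below is a minimal Kotlin sketch, not our production exporter: the GitLab host, project ID, and token are placeholders, and diff sizes would need an extra call per Merge Request.

```kotlin
// A sketch: pulls merged MRs from the GitLab API and derives time-to-merge
// and comment counts. Requires kotlinx-serialization-json on the classpath.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.Duration
import java.time.OffsetDateTime
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.jsonArray
import kotlinx.serialization.json.jsonObject
import kotlinx.serialization.json.jsonPrimitive

fun main() {
    val projectId = "12345"                    // placeholder project ID
    val token = System.getenv("GITLAB_TOKEN")  // personal or CI access token
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://gitlab.example.com/api/v4/projects/$projectId/merge_requests?state=merged&per_page=50"))
        .header("PRIVATE-TOKEN", token)
        .build()
    val body = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()

    for (mr in Json.parseToJsonElement(body).jsonArray) {
        val obj = mr.jsonObject
        val created = OffsetDateTime.parse(obj["created_at"]!!.jsonPrimitive.content)
        val merged = OffsetDateTime.parse(obj["merged_at"]!!.jsonPrimitive.content)
        val hoursToMerge = Duration.between(created, merged).toHours()
        val comments = obj["user_notes_count"]!!.jsonPrimitive.content
        // Diff size requires an extra call per MR (not shown here)
        println("MR !${obj["iid"]!!.jsonPrimitive.content}: ${hoursToMerge}h to merge, $comments comments")
    }
}
```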

How we built it

To ensure that this dashboard is useful for the team, I wanted to visualize all our performance indicators in a single place.

Other teams at Qonto were already using tools like Grafana, Metabase, or Kibana. To pick the right one for us, we considered the following characteristics:

  • Price
  • Security: we don’t want to expose sensitive data
  • Flexibility: we want it to handle various data sources, including custom-tailored ones

With the help of our Data and SRE teams, we looked at different solutions before settling on one.

Qonto was already using Snowflake and its Data Warehouse service to collect data from various sources (Gitlab, Jira) and to power reporting and data analysis.

Collecting data from Jira and Gitlab

Collecting data from Firebase

Extracting data from Firebase is a manual process: Firebase lets us export crash-related data to BigQuery; we then query it and export the result into a spreadsheet, which can in turn be imported into our Warehouse via Stitch.

Collecting data from Firebase
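
The query step itself is plain SQL against the Crashlytics export. If you wanted to script it, a minimal Kotlin sketch using the BigQuery client library could look like the following; the project, dataset, and table names are hypothetical, and the field names follow the Crashlytics export schema (adjust them if yours differs).

```kotlin
// A sketch using the official BigQuery client library (google-cloud-bigquery).
import com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.bigquery.QueryJobConfiguration

fun main() {
    val bigquery = BigQueryOptions.getDefaultInstance().service
    // Project, dataset, and table names below are hypothetical placeholders.
    val query = """
        SELECT issue_title, COUNT(*) AS occurrences
        FROM `my-project.firebase_crashlytics.com_example_app_ANDROID`
        WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
        GROUP BY issue_title
        ORDER BY occurrences DESC
    """.trimIndent()

    val result = bigquery.query(QueryJobConfiguration.newBuilder(query).build())
    for (row in result.iterateAll()) {
        println("${row.get("issue_title").stringValue}: ${row.get("occurrences").longValue} crashes")
    }
}
```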

Collecting data from custom sources

The main difficulty we faced was finding a way to export data into our Warehouse from sources without automatic integrations, like Notion, or from custom-made sources (build times, app sizes, etc.).

Our first idea was to build a new microservice in Qonto’s infrastructure, set up a PostgreSQL database, and write an application to expose an API. While this solution was feasible, it meant days of development on technologies we weren’t familiar with, and yet another thing to maintain over time. After more research, we found PostgREST, a standalone web server that directly turns a PostgreSQL database into a RESTful API.

This solution is a perfect match for us:

  • It’s simple ⇒ We won’t have to write and maintain code.
  • It’s secure ⇒ PostgREST provides an authentication mechanism out of the box.
  • It’s scalable ⇒ It can be deployed like the rest of our microservices.

Collecting data from various sources
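
To make this concrete, here is a hypothetical sketch of how a custom source like local build times could reach that API: a Gradle init script measures the wall-clock duration of a build and POSTs it to a PostgREST endpoint backed by a `build_times` table. The endpoint URL, table, and JSON fields are assumptions for illustration; our actual collection setup is detailed in a follow-up article.

```kotlin
// init.gradle.kts — a hypothetical sketch, not our actual collector.
// Approximates build duration from init-script evaluation to buildFinished
// and POSTs it to a PostgREST endpoint exposing a `build_times` table.
// Note: buildFinished is deprecated in recent Gradle versions; a build
// service would be the modern equivalent.
import java.net.HttpURLConnection
import java.net.URL

val buildStart = System.currentTimeMillis()

gradle.buildFinished {
    val durationMs = System.currentTimeMillis() - buildStart
    val payload = """{"developer":"${System.getProperty("user.name")}","duration_ms":$durationMs}"""

    val connection = URL("https://postgrest.internal.example/build_times")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.setRequestProperty("Content-Type", "application/json")
    // PostgREST authenticates requests with a JWT passed as a Bearer token
    connection.setRequestProperty("Authorization", "Bearer ${System.getenv("METRICS_JWT")}")
    connection.outputStream.use { it.write(payload.toByteArray()) }
    println("Build time reported: $durationMs ms (HTTP ${connection.responseCode})")
    connection.disconnect()
}
```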

What we learned along the way

  • Looking at common performance indicators is a game-changer.
    Once we started looking at our Merge Request graphs, new conversations emerged. Developers suddenly paid more attention to the size of their Merge Requests, and the great technical discussions on how to split their work got them engaged in changing our code review process.
  • Having well-crafted performance metrics at hand helps me as a team lead.
    Because I can quickly identify the areas where developers need to progress, I can now help them grow their skills more efficiently. For instance, Merge Requests with more than 400 lines changed are now visually flagged on the board, creating an opportunity to talk about the art of writing and reviewing Merge Requests.
  • Developers learn faster and more autonomously.
    This approach has helped us define our standard of what a good Merge Request looks like. You can learn more about how we use work standards at Qonto here.
  • We introduce changes to our codebase using a scientific approach.
    Because we have relevant metrics at hand, we can immediately see if code changes have a positive or negative impact. We can launch improvement initiatives with metrics backing our actions. For example, we will test remote compilation for Android this year and be able to use our data on build times to observe its impact.
  • We grew new skills.
    We learned how to make the most of tools like Metabase and PostgreSQL. We also discovered PostgREST, an amazing solution that let us quickly create the microservice we needed, saving us days of development.

In conclusion

This project is just the beginning of a long-lasting initiative meant to improve our practice, as individuals and as a team. It takes time for people to change their habits, so we do not see it as a one-shot effort to improve our performance, but rather as a companion that will grow and evolve with the team.

Want to know more?

Our next articles in the series will give more details on what we did to collect build times from developers’ machines and our CI. Stay tuned.
