How to Get Honest User Feedback

A/B testing Ubuntu Touch apps in QML

A recent hack-week project at Tictail got me into developing apps for Ubuntu phone — an upcoming mobile platform from Canonical. At Tictail, we like to make data-driven decisions, and one of my favorite tools for doing that is Mixpanel.

When I started working on my newest hobby project, an e-sports match ticker for Dota 2, I thought I would take a similar approach to metrics as I do in my professional projects.

Throughout this article, I will demonstrate how to capture user data from within your QML app, set up variant testing and evaluate the results of your experiment.

This article assumes you already have a working app and that you have some familiarity with QML.

If you don’t, but would like to get started developing for Ubuntu Touch, a good place to begin is the currency converter app tutorial. Once you’re done, come back and I’ll teach you how to improve your app like a pro.

Collecting data

In order for us to be able to test how our app performs, we need to be able to track what our users do. I’m going to be using Mixpanel, but you could use any service that has an HTTP API available (or roll your own).

Signup is free and takes no time at all

Mixpanel’s free tier limits you to 25,000 events per month. If you become a Mixpanel partner, by displaying a badge on the project website, you get an additional 200,000 events.

Plans with higher limits start at $150/month.

Once you have a Mixpanel account, we can start coding! Make sure you have your Mixpanel token available — it can be found under Project Settings → Management.

Mixpanel QML component

We are going to be using the excellent Mixpanel component by Artem Marchenko.

You can add it as a submodule if you’re using git, or you could just copy the src folder directly into your project.

The Mixpanel component is very simple, and I recommend that you read it through to get an idea of how it works.

In short, it takes any event data you feed it, base64 encodes it and sends it off to Mixpanel using their HTTP API.

To make use of it, all you have to do is create a Mixpanel component somewhere in your app where it will stick around for the duration of the session:
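A minimal sketch of what that can look like (the import path and property names here are illustrative; check the component’s source for the exact API it exposes):

```qml
import QtQuick 2.4
import "../mixpanel" // wherever you placed the component's src folder

Mixpanel {
    id: mixpanel

    // Illustrative property names; verify against the component's source
    token: "YOUR_MIXPANEL_TOKEN"
    distinctId: userId // a stable per-device ID, covered below
    commonProperties: ({ "variant": variantGroup })
}
```

Note that the object literal for commonProperties is wrapped in parentheses — without them, QML would parse the braces as a statement block rather than a JavaScript object.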

A few things to go over here:


The token probably shouldn’t be committed to version control. If you’re just playing around, having it directly in your source is OK, but keep in mind that your token is secret, so inject it from an environment variable at build time instead.


The Mixpanel component documentation doesn’t mention this, but you need to supply a unique ID to identify the device/user. Note that it must be the same on every run, or each session will appear as a different user in Mixpanel.

There are many ways you can get your hands on a unique ID. Theoretically, the device’s IMEI should be unique, but from what I’ve heard, that isn’t always the case, and it isn’t always available anyway. If you’re developing for Ubuntu Touch, another way is to use a push notification token, but that would require the user to have configured an Ubuntu One account first.

What I would recommend is to generate a UUID and store it somewhere to be used on subsequent runs. It’s not guaranteed to be unique, but the risk of a collision is astronomically small.

How do I store the generated ID?

You could either store it in LocalStorage, or you could use U1DB. Either way, it will be saved in an SQLite database somewhere on the device. Personally, I find using U1DB to be easier.
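Here’s a sketch of generating a version-4 UUID in plain JavaScript and persisting it with U1DB. This assumes the U1db QML plugin shipped with the Ubuntu SDK; verify the exact Database/Document API against your SDK version:

```qml
import QtQuick 2.4
import U1db 1.0 as U1db

Item {
    id: root

    // Random RFC 4122 version-4 UUID, built from Math.random()
    function generateUuid() {
        return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, function(c) {
            var r = Math.random() * 16 | 0
            var v = (c === "x") ? r : (r & 0x3 | 0x8)
            return v.toString(16)
        })
    }

    U1db.Database {
        id: db
        path: "metrics.u1db"
    }

    // "defaults" is only written when the document is first created,
    // so the same UUID is reused on every subsequent run
    U1db.Document {
        id: userDoc
        database: db
        docId: "user"
        create: true
        defaults: ({ "uuid": root.generateUuid() })
    }

    property string userId: userDoc.contents.uuid
}
```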


commonProperties

An object of properties that are added to every single event that you track. This is where you’d add things that are relevant for every event. We’ll be using this for tracking which variant group this user belongs to.

track(eventName, properties)

This is how you send events to Mixpanel. properties is an object just like commonProperties, except it’s only sent for this specific event.

An example event
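For instance, tracking a tap on a sign-up button could look like this (assuming the Mixpanel component above has the id mixpanel; the event and property names are just examples):

```qml
import Ubuntu.Components 1.1

Button {
    text: i18n.tr("Sign up")
    onClicked: {
        // "signup_button_clicked" and "screen" are arbitrary names;
        // use whatever taxonomy makes sense for your app
        mixpanel.track("signup_button_clicked", { "screen": "welcome" })
        // ...then continue with the actual signup flow
    }
}
```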


While it’s ultimately up to you how you want to handle privacy, I would urge you to carefully weigh the user’s right to privacy against the utility of knowing how they use your app.

In my app, I go for an opt-in approach where upon first run, the user is asked if they would like to help me improve the app by submitting anonymous usage metrics.

If you do decide to go this way, I would suggest patching the Mixpanel component so that you can control whether or not it’s enabled with a property:

Mixpanel {
    id: mx
    enabled: true // fetch this from your settings
}

Setting up an A/B test

An A/B test is essentially a way to pit two variations — or rather, a variation and a control — of something against each other to determine which performs better. For this example, I’m going to examine how a color change affects the performance of a sign up button.
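A sketch of how that could look. The variant is assigned randomly here; in a real app you would persist the assignment alongside the user ID (as with the UUID earlier) so the same user always sees the same variant, and report it through commonProperties:

```qml
import QtQuick 2.4
import Ubuntu.Components 1.1

Item {
    // Assign once and persist in a real app, so the assignment
    // stays stable across runs
    property string variant: Math.random() < 0.5 ? "control" : "green_button"

    Button {
        anchors.centerIn: parent
        text: i18n.tr("Sign up")
        // The control keeps the app's default color;
        // the variant group gets the new one (colors are examples)
        color: variant === "green_button" ? "#27ae60" : "#dd4814"
        onClicked: mixpanel.track("signup_clicked", {})
    }
}
```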

More advanced tests

You can definitely take things a lot further than my simple example.

Multivariate testing requires more data analysis skills than I have, but technically it shouldn’t be very different from implementing the kind of simple test that I’ve shown. If you decide to go this route, be sure to read up on how to properly analyze your results.

Testing entire components is not much harder than testing a simple property change. Here’s an example using the Loader component to use two different components based on the variant:
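A sketch of that approach, reusing the persisted variant assignment from before (component contents are placeholders):

```qml
import QtQuick 2.4
import Ubuntu.Components 1.1

Item {
    Component {
        id: simpleSignup
        Button { text: i18n.tr("Sign up") }
    }

    Component {
        id: detailedSignup
        Column {
            Label { text: i18n.tr("Never miss a match again!") }
            Button { text: i18n.tr("Sign up") }
        }
    }

    Loader {
        // "variant" is the persisted group assignment from earlier
        sourceComponent: variant === "control" ? simpleSignup : detailedSignup
    }
}
```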

Evaluating experiments

Once you’ve set up an experiment and let it run for a while, it’s time to determine which variant was most successful. To do that, we turn to Mixpanel’s Segmentation view.

Keep in mind that I’m not a professional data scientist.
This is the simplest possible analysis you can do.

In the top-left corner, choose the event you define as a conversion. It could be a successful signup, for example.

Then, in the dropdown below, choose to segment it by variation. The chart below should update to let you see the difference. If you choose to display the chart as a bar chart instead of a line chart, each bar should show you the total number of events per variation for the chosen time period.

Even if your variant has performed better than your control, you still can’t be sure if it was an actual improvement or if it was just a fluke. To see whether or not you can be confident that your experiment was a success, you can use this test significance calculator.

How long should you let the experiment run, you ask? Well, you could calculate the required sample size using this handy tool.

Further reading

  1. Statistical Analysis and A/B Testing
  2. Statistical significance & other A/B pitfalls
  3. What you really need to know about mathematics of A/B split testing

In conclusion

With very little effort, we’ve gone from making uneducated guesses about what our users want, to making informed decisions based on real-world usage.

Hopefully, we’ve done so in an ethical way that empowers the user to improve their experience, rather than by snooping on them.

I hope that this article has helped you. Note that the code snippets haven’t been tested, so they may contain typos or mistakes. If you want to see a fully working example, I would encourage you to take a look at my app.
