Some of the latest features we introduced on the Livewire Market’s platform first ran as experiments for a few weeks. What’s an experiment? When you know your end goal (“increase the number of users that discover this new page”) and you also have a few ideas on how to achieve it, you can choose to try them out simultaneously. Ideas alone are not enough, though: what if, by implementing them, you’ve been deterring users from doing the very thing you want them to do? Or maybe you were sure that a button with a navy blue background colour works best, while your users actually engage more with a gold one?
An experiment allows you, as a product owner, to run multiple variants you want to try in order to achieve your goal. You split the variants between the users on your site, either equally or by some formula (“60% A, 40% B”). It’s what A/B Testing means — test variants (A and B, maybe C and D too) and find the one that works the best.
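Conceptually, a weighted split like “60% A, 40% B” can be done by hashing each user’s id into a bucket, so the same user always lands on the same variant. Here is a minimal Ruby sketch of the idea (this is not how any particular library implements it):

```ruby
require "digest"

# Illustrative weights: 60% of users see variant A, 40% see B.
WEIGHTS = { "A" => 60, "B" => 40 }.freeze

# Hash the user id into one of (sum of weights) buckets, then walk
# the weights until the bucket falls inside a variant's range.
# Deterministic: the same user_id always yields the same variant.
def variant_for(user_id, weights = WEIGHTS)
  bucket = Digest::MD5.hexdigest(user_id.to_s).to_i(16) % weights.values.sum
  threshold = 0
  weights.each do |name, weight|
    threshold += weight
    return name if bucket < threshold
  end
end
```

Because the assignment is derived from the user id, a returning user keeps seeing the variant they started with, which is essential for the experiment’s results to mean anything.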
For each user, an experiment has a start: the user experiences one of the variants you set up. It also has an end: that same user follows the lead and goes to check out the new page. That is called a Conversion (the user “completes” the experiment).
A winner is born
When you spot a trend of a variant outperforming the others, you can choose to declare it as a winner and force that variant to everyone, concluding the experiment. Be careful not to jump to any conclusions too early in the process. You want to observe the experiment long enough before making a decision.
A/B Testing on Rails
There are a few Ruby libraries that support experiments in Rails. We’ve been using split: it can run multiple experiments at once, each with multiple variants, has plenty of configuration options, and “just works” with our Redis configuration.
In this example from our test server, you can see the different variants (or alternatives, as split calls them), how many participants (users) experienced each one, and how many completed the experiment.
We configured split to support multiple experiments and to persist each user’s chosen variant in a cookie for one day. Read more about it here.
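As a sketch, that configuration might look like the following initializer (option names are taken from split’s README; the one-day cookie length is expressed in seconds):

```ruby
# config/initializers/split.rb
Split.configure do |config|
  # Let a user participate in several experiments at the same time.
  config.allow_multiple_experiments = true

  # Persist each user's assigned alternative in a cookie...
  config.persistence = :cookie
  # ...that expires after one day (the value is in seconds).
  config.persistence_cookie_length = 86_400

  # Load the experiment definitions from a YAML file.
  config.experiments = YAML.load_file("config/experiments.yml")
end
```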
Then, we set up the different alternatives in our YAML file:
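A sketch of what that file might look like, using split’s weighted-alternative syntax (the alternative names and metadata keys here are illustrative, not our actual values):

```yaml
wire_byline:
  alternatives:
    - control
    - styled
  metadata:
    control:
      css_class: "byline"
    styled:
      css_class: "byline byline--bold"

similar_contributors:
  alternatives:
    # 90% of users don't see the component at all.
    - name: none
      percent: 90
    # The remaining ~10% are split evenly across three variations.
    - name: sticky_top
      percent: 3.34
    - name: sticky_bottom
      percent: 3.33
    - name: inline
      percent: 3.33
```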
If you look carefully, ‘wire_byline’ is distributed evenly between two alternatives, each with its own metadata, while ‘similar_contributors’ has four alternatives: three are together shown to roughly 10% of our users (distributed evenly), and for the remaining 90% the component is not presented at all.
Starting an experiment for a user
An experiment starts by calling ab_test. Split is smart enough to start it only once per user (the chosen alternative persists in a cookie).
You can add the code in your controller or view.
You can then use the chosen alternative and configuration to present the right variation.
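As a sketch, assuming the wire_byline experiment and the metadata keys shown earlier (both illustrative), a view can pick the right styling via split’s block form of ab_test, which yields the chosen alternative and its metadata:

```erb
<%# app/views/wires/_byline.html.erb %>
<%# ab_test enrolls the user (once) and yields the chosen
    alternative's name plus its metadata from the YAML file.
    @wire and contributor_name are hypothetical names. %>
<% ab_test(:wire_byline) do |alternative, meta| %>
  <p class="<%= meta["css_class"] %>">
    Written by <%= @wire.contributor_name %>
  </p>
<% end %>
```

The same ab_test call works in a controller if you’d rather branch there and hand the chosen alternative to the view.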
How do we track experiment completion?
When a user acts on your experiment, e.g. clicks a button and is redirected to your new page, call ab_finished to mark the experiment as completed for that user.
Usually, you’ll add it in the destination controller. But make sure you only complete the experiment if the user got there by clicking your button, and not some other way. A good pattern is to add a request param:
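A sketch of that pattern (the controller, route, and param names here are illustrative): the experiment’s call to action tags its link with a src param, e.g. `contributor_path(@contributor, src: "wire_byline")`, and the destination controller only records the conversion when that param is present:

```ruby
# app/controllers/contributors_controller.rb
class ContributorsController < ApplicationController
  def show
    @contributor = Contributor.find(params[:id])
    # Convert only when the user arrived via the experiment's button,
    # not via search results, bookmarks, or direct links.
    ab_finished(:wire_byline) if params[:src] == "wire_byline"
  end
end
```

Without the param guard, any visit to the page would count as a conversion and inflate every variant’s numbers equally, drowning out the signal you’re trying to measure.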
And that’s it!
Using experiments, we can develop and run different ideas and get feedback from our users before making a decision; by the time we commit to one, we already know it performs best.
Lately, we’ve been running experiments to drive traffic to our new contributor profile pages through different variations of gentle prompts, profile byline text styles and a “similar contributors” sticky component.
Next in our pipeline
We ideate and explore many ways to improve our wire pages. We want our readers to get the most out of them, and we are in the process of creating new experiments for that purpose. Stay tuned!