User-empowering A/B tests


All of a sudden, half of the users get a new version of Twitter. Alice opens Twitter in the morning and finds that videos now play automatically, while Bob still has to click on a video to play it. User behavior is recorded, and some days later the product designers at Twitter decide whether or not to keep the new feature. They have a statistically significant bulk of data to inform that decision, and they can pride themselves on standing a lab coat away from being true scientists.
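For concreteness, here is a minimal sketch of how such a split is typically implemented: each user is assigned deterministically to one side by hashing a stable identifier, so nobody flips a coin twice. The function name, hashing scheme and experiment key are illustrative assumptions on my part, not Twitter's actual mechanism.

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing (experiment, user_id) keeps the assignment stable across
    sessions without storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 100 < 50 else "control"

# Alice and Bob land in fixed, possibly different, buckets:
print(assign_variant("alice", "autoplay_videos"))  # e.g. 'treatment'
print(assign_variant("bob", "autoplay_videos"))    # e.g. 'control'
```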

The thing is, for the users, the A/B test sucks. Once the A/B test is completed and a decision has been made on the basis of actual behavior instead of some guru's divination, the experience is great, because what we get as a result is the continuous improvement we have grown accustomed to on the modern internet. But what about the experience during the experiment?

The first time I read “The Lean Startup”, the idea of running experiments on unknowing users rubbed me the wrong way, but I went along with it for the sake of argument. Of course, the science is solid, and the results are well worth it. Several years later, having taken part in running various A/B tests and having been the unwilling victim of many more, I can articulate my problem with the practice, and also offer a proposal for a solution.

My concerns with A/B tests, from the user's standpoint, can be summed up in two points:

  • They make me feel subtly abused. Being treated like an anonymous subject without identity carries some nasty associations, and it also creates a general feeling of alienation that, ultimately, is the complete opposite of what the product designers want to achieve.
  • They make me feel underestimated. If I have to take part in an unavoidable experiment (and hey, finding out whether the feature is good or not is far better than just rolling it out and hoping), I want to be able to give active, meaningful feedback to the people running the service I'm consuming. After all, I am invested in the service, and I want to contribute to shaping it according to my needs.

The first item is very sensitive and very hard to deal with. I'm not going to dig into the obvious implications of how the marketing machinery as a whole treats consumers as alienated individuals. Furthermore, experimentation is necessary and positive, so the obvious remedy for alienation, stopping experimentation altogether, is neither viable nor desirable.

The second item, though, gives what I believe to be the key to the dilemma. Let’s tell a story first:

Some years ago, Facebook rolled out (as an A/B test) a powerful search feature called “graph search”. As it goes with A/B tests, some of my friends got it and some didn't. I was working at a startup that processed real-time information from social media, so the issue struck close to home: my colleague got the feature, and I never did. We played a lot with the search together, but of course I had to log into his account to keep playing on my own. I felt a degree of frustration, especially since it was the kind of thing that can make a difference, and I started perceiving the fast pace of change at Facebook as an opaque process, something faceless that was imposed on me whether I liked it or not.

So, what if I could have “opted in” to the feature instead?

From a strictly scientific standpoint, I can see how this could be suboptimal. If the test were, say, a double-blind trial of a promising new cure for cancer, it would stand to reason that many patients would try to make sure they don't get the placebo, merely out of hope. But this is not cancer research. This is marketing. What does it mean if users start voluntarily switching to the new behavior? It means that they like it.

If a user chooses to try the new behavior, chances are they are going to experiment more than non-invested subjects, giving away more information in the process. The feedback would also be more complete, since user behavior can be measured not just on the black/white metric of increased interaction, but also by comparing passive users on either side of the test with active users on either side. An invested user who decided to try a feature, or to opt out of one, is more likely to be open to giving specific feedback on the results, or on the reasons why they think changing groups is a good idea. Finally, like any tool that enables engagement, it is a good way to improve the brand image.
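As a sketch of what an opt-in aware assignment could look like (the names, the 50/50 split and the two-value preference are my assumptions, not any real platform's API): a stated preference simply overrides the random bucket, and an extra segment label gives analysts the four cells described above.

```python
import hashlib
from typing import Optional, Tuple

def assign_variant_opt_in(user_id: str, experiment: str,
                          preference: Optional[str] = None) -> Tuple[str, str]:
    """Return (variant, segment) for an opt-in aware experiment.

    An explicit preference ('treatment' or 'control') overrides the
    random bucket; the segment label records whether the user chose
    a side or passively stayed where the hash put them.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    default = "treatment" if int(digest, 16) % 100 < 50 else "control"
    if preference in ("treatment", "control"):
        return preference, "active"   # the user picked a side
    return default, "passive"         # the user keeps the random bucket

# Analysis now has four cells instead of two:
# (control, passive), (treatment, passive), (control, active), (treatment, active)
```

Comparing the two passive cells preserves the classic randomized comparison, while the active cells capture the richer signal coming from invested users.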

I would like to see more of these.

Credits to Eugene Belyakoff and Edward Boatman from thenounproject.com and iconsmind.com for the icons.