A/B testing

Most of these tests involve observing how individual users behave. That is great for finding problems with your designs, but it is not enough when you need to compare two (or more) different designs and find the best one.

A/B testing is very useful when there are several possible designs for your application and you want to find out which one works best. Although it is called A/B testing, the test can involve more than two designs (an A/B/C… test, for example). A/B testing also comes in handy when you have redesigned or changed something and want to know whether the new version of the application, or the new copy, works better than the previous one. You can apply it in many situations: when you have changed something small and seemingly insignificant, but also when you are choosing between two completely different designs. Let's say you're working on the text of your home page and you're not sure which version is better at convincing people to keep exploring the application. Or you're trying to decide between two different positions for the navigation and you're not sure which one will work better for your users. The most common way to answer such questions is with an A/B test.

How you define success for an A/B test depends on the product. In our case, an A/B test is considered successful when the user is able to finish the checkout process and actually purchases their guitar. In other words, success can be measured by how many of the people who start a task manage to finish it.
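Sticking with the guitar example, that measure boils down to two counts per version: how many users started the checkout and how many finished it. A minimal sketch in Python (the numbers and variable names here are invented for illustration) might look like this:

```python
# Hypothetical counts collected during the test (made-up numbers).
started_checkout = {"A": 1200, "B": 1180}   # users who began the checkout
completed_checkout = {"A": 96, "B": 131}    # users who finished and purchased

for variant in ("A", "B"):
    conversion = completed_checkout[variant] / started_checkout[variant]
    print(f"Variant {variant}: {conversion:.1%} conversion")
```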

Before starting the test, make sure you have two (or more) designs you want to compare and your own definition of success. Then implement both designs and find a way to distribute them to your users. Be careful with how you split the versions between testers: divide your users into two groups, and make sure individual users don't switch between the two designs while the test is running. That is why you should give each tester only one of the versions, chosen at random when they download your app, and keep them on that version.

While the test is running, collect the results. Focus on high-traffic areas of your application, because that lets you gather data faster, and look for pages with low conversion rates or high drop-off rates; those are the places that could be improved.

Finally, analyze your results. The A/B test gives you data from the experiment and shows how the two versions of your page performed and whether there is a real difference between them. The version that performed better is the winner and should become your new page.
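To make that concrete, here is a minimal sketch in Python (the function names, user IDs, and counts are all assumptions for illustration, not a prescribed implementation). It shows one common way to keep each user on the same variant by hashing their ID, plus a simple two-proportion z-test for checking whether the difference in conversion between the two versions looks real rather than random:

```python
import hashlib
import math

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user so they always see the same version."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z_test(success_a, total_a, success_b, total_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    p_pool = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up numbers: checkout completions out of users who started checkout.
z, p = two_proportion_z_test(96, 1200, 131, 1180)
print(assign_variant("user-42"))            # same ID always maps to the same bucket
print(f"z = {z:.2f}, p = {p:.4f}")          # a small p suggests the difference is real
```

The hashing approach means you don't have to store which version each user received: the same ID always lands in the same bucket, so users can't drift between designs mid-test.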
