Automated tests. Words that spark fear into the hearts of most mobile developers. Why? Because they take a long time to build, and testing the app by hand often feels like the easiest option. However, at SKOOT, a new lift-sharing app where we are confronted with the same technical challenges as the likes of Uber and Lyft, we’ve been unable to ignore their potential — as fun as it sounds, jumping on a bicycle to test location accuracy is quite a time sink (and to your average developer, not that fun).
Our app allows users to go on trips together and pay each other to account for petrol costs, so it’s important that we are able to navigate our users to where they need to be and charge them correctly. We could test this by incessantly clicking through the app every time we want to do a release, but, in the interest of delaying the inevitable onset of RSI, that’s something we would like to avoid.
The sheer number of scenarios we have to run through in order to guarantee a working app is endless and, frankly, it’s impossible to ensure consistency by hand. The result is an app where every new feature causes unseemly bugs, often discovered by users. But why is it impossible to rigorously test SKOOT manually? Let’s have a look.
At the very core of the app is the on trip experience. When a user goes on a ride with their friend, are they shown the correct controls? Are they charged the correct amount? Let’s use this as an example of a test to help illustrate the unmanageability of manual testing. So here we have it:
At SKOOT, we develop both an Android and an iOS app, and we need to ensure users can ask all of their friends on drives, regardless of platform. If we take this into account, we now have 4 different scenarios. As you can imagine, completing test journeys in all of these cases can get tiresome.
Now, let’s consider the case that one of our users has updated to the latest version, but their friend is still a version behind. We don’t want that causing issues, so we need to run the production version with the development version we are about to release to ensure that there is a smooth transition for the majority of users after a new release.
This creates 3 new scenarios for every one of the 4 platform scenarios, making quite the table: to ensure the quality of a ride, we need to test a mighty 3 x 4 = 12 different scenarios:
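The counting above is easy to sanity-check in code. Here is a quick sketch that enumerates the matrix, assuming the 3 version pairings are the ones that involve the new build (i.e. we skip prod-vs-prod, which was covered by the previous release cycle):

```python
from itertools import product

platforms = ["Android", "iOS"]
versions = ["dev", "prod"]

# Platform pairing for the two riders: 2 x 2 = 4 combinations.
platform_pairs = list(product(platforms, repeat=2))

# Version pairings, skipping prod-vs-prod, which leaves the 3
# combinations that actually exercise the release candidate.
version_pairs = [v for v in product(versions, repeat=2) if v != ("prod", "prod")]

scenarios = list(product(platform_pairs, version_pairs))
print(len(scenarios))  # 12
```

Every new per-user dimension (device model, OS version, app version) multiplies into this product, which is exactly why the table grows so quickly.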
Now you can maybe see why manual testing might not quite cut it for us at SKOOT. It also doesn’t stop here, if we wanted to test a ride with more than 2 users, this would increase again. Say that we had the following device and version tables:
We have 18 x 6 = 108 different scenarios to thoroughly test a ride with two passengers! Assuming each manual test takes 1 minute, that’s 1hr 48mins of testing one measly ride. Additionally, if we find a blocking bug, we would have to go right back to square one to ensure nothing else was broken by the necessary fix. Nightmare.
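A back-of-the-envelope sketch of that cost, using the figures above (the 1-minute-per-test estimate is, of course, an assumption):

```python
# 18 device combinations x 6 version combinations from the tables above.
scenarios = 18 * 6
minutes = scenarios * 1  # assuming roughly 1 minute per manual run-through
print(f"{scenarios} scenarios -> {minutes // 60}h {minutes % 60}m of testing")
# 108 scenarios -> 1h 48m of testing
```

And that is the cost of a single pass; every blocking bug resets the clock.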
There’s definitely an argument to be had about how useful all of the testing variants are. One might argue that for at least the time being while we are still a baby app, it’s most important to test that the latest version works against itself on both platforms and ignore the backwards compatibility. This reduces the single rider case to just 4 scenarios and the dual rider case to 18. Still, as far as testing goes, I for one would much rather not have to run every test this many times, especially when a computer can do it more consistently and faster!
Thus, we are in the midst of developing a testing platform which we hope will be the envy of all lift-sharing apps. Watch this space for the technical details of how we get on making this automated hell a reality using tools like Appium, Kobiton and Gherkin.