Building reliable software is difficult
We all know that building software is difficult and can be troublesome, especially when it comes to shipping fixes into an already-released application version.
This is particularly true in mobile development, where the release process can be slow and requires a lot of developer time to ensure the build is stable and works as expected.
You can implement tons of tests around the logic and functionality in your code, but you'll always have the fear that something is not going to work as expected: it could be a backend change in the API, or perhaps a timezone issue affecting only a few users. Building reliable software has its challenges :)
Humans are not perfect
When we test, we normally focus on the primary functionality of the application, which sometimes leads to happy path testing.
Happy path testing means you only test one particular path through the app, such as logging in as a particular user in a certain country on a certain device; if all looks OK, you simply say it's fine and continue working on something else.
We are also quite easily distracted, and whatever is going on in or outside of work can affect our focus when building reliable tests.
At Docplanner we have a few stages before we release the mobile application. Most of these stages involve human testing, and this can be quite a slow process.
We typically create a release branch, then do some manual QA testing with the developers and the main quality assurance team on iOS and Android, across a few different phones and a few different test accounts.
Then, once all is OK, or after we have gone through a few iterations of finding bugs, patching them and testing again, we open the testing up to a few specialists in our offices around the world. When we open up to these other people, we normally repeat the same cycle of finding a bug, patching it and testing again.
Below is an example of how this process looks.
So how could we improve this?
Having a human element in the process is valuable, but it does not cover most of the cases our customers typically face.
If we wanted to improve this, we would need to implement some type of automated end-to-end testing that checks our application across hundreds of different Android and iOS phones. We would also want to cover a number of other factors, such as:
- Timezone testing: run the application in different parts of the world to ensure the frontend and the backend represent our appointments correctly, without anything unexpected occurring.
- Network testing: ensure the application and backend are capable of running on a standard-to-poor mobile internet connection. We would want to test everything from 2G EDGE connections up to 4G LTE connections.
- Locale and culture testing: cover all of our marketplaces and customer languages, so the application not only looks great but also stays accessible when the phone is used in different languages.
- Device testing: does our application perform as expected on low-performance phones, and does it work on the latest and greatest phones on the market?
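To make these factors concrete, here is a small sketch of how such a test matrix could be expressed, with one value picked per dimension for each test run. The names and values are illustrative, not our actual configuration:

```javascript
// Hypothetical example: the conditions we want to vary, expressed as a
// simple matrix (the values here are illustrative, not our real setup).
const conditions = {
  timezone: ['Europe/Warsaw', 'America/New_York', 'Asia/Tokyo'],
  network: ['2g-edge', '3g', '4g-lte'],
  locale: ['pl-PL', 'es-ES', 'pt-BR', 'it-IT'],
  device: ['low-end-android', 'mid-range-android', 'latest-iphone'],
};

// Pick one random value per dimension to get the setup for a single test run.
// The `random` parameter is injectable so a choice can be reproduced later.
function pickRun(matrix, random = Math.random) {
  const run = {};
  for (const [dimension, values] of Object.entries(matrix)) {
    run[dimension] = values[Math.floor(random() * values.length)];
  }
  return run;
}

// pickRun(conditions) → e.g. { timezone: 'Asia/Tokyo', network: '3g', ... }
```

Each generated run then describes one combination of conditions under which the full end-to-end suite should be executed.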
So the question we asked ourselves was: how can we do this, and is it even possible?
Appium to the rescue, well kind of…
So we did some research into a few different tools for testing a React Native mobile application in the cloud, and we found Appium to be the most mature and best-supported solution for React Native.
Appium works beautifully with BrowserStack, so this had potential.
We started experimenting with Appium and getting it to work locally with the iOS simulator. Everything worked quite nicely, and it was quite fast.
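For anyone curious, a local Appium session against the iOS simulator only needs a handful of desired capabilities. This is a minimal sketch in the Appium 1.x style; the device name and app path are placeholders, not our real values:

```javascript
// Minimal desired capabilities for a local iOS simulator session
// (Appium 1.x style keys; device name and app path are placeholders).
const localCaps = {
  platformName: 'iOS',
  automationName: 'XCUITest', // the iOS driver Appium uses under the hood
  deviceName: 'iPhone Simulator',
  app: '/path/to/YourApp.app',
};
```

The same capabilities object is what later gets swapped out for cloud-device capabilities when running remotely.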
The biggest pain we ran into was element selection with Appium. Luckily, Appium also has a great companion application called Appium Desktop.
Element selection with Appium Desktop
So one of the biggest pains we faced when developing our end-to-end tests was element selection. This involved adding test IDs to our components in the application and then having the test search for these test IDs; the pain was that sometimes the elements could not be found.
So we found Appium Desktop. This amazing application takes a screenshot of your simulator and allows you to select different elements on the page, and once an element is selected, it tells you how its test ID is defined.
This massively sped up the development of our tests and reduced the frustration of continuously re-running tests to ensure the test IDs could be found.
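As a rough illustration of how the pieces fit together (the component, helper and IDs here are hypothetical, not our actual code): in React Native you attach a testID to a component, and in a WebdriverIO-based Appium test the "~" accessibility id selector is a common way to find it:

```javascript
// In the app, React Native components are given a testID (JSX sketch):
//   <Button testID="login-button" title="Log in" onPress={handleLogin} />
//
// In a WebdriverIO-based Appium test, the "~" prefix is the accessibility id
// selector, a common way to match a React Native testID on iOS.
// byTestId is our own tiny helper, not an Appium API.
function byTestId(id) {
  return `~${id}`;
}

// Usage inside a test (needs a running Appium session, shown for illustration):
//   const button = await driver.$(byTestId('login-button'));
//   await button.click();
```

Centralising the selector logic in one helper like this also makes it easier to change strategy later if a platform maps test IDs differently.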
We run continuously against our develop branch
Once we had our end-to-end tests working and, most importantly, ensuring our primary flow worked as expected, we decided we had to start testing our code as much as possible, against as many different conditions as possible.
So I mentioned above the four different test factors we aimed to cover, and luckily BrowserStack provides this capability for Appium. We started randomising these conditions while running our end-to-end tests. This gave us great insight and a level of confidence that our application was reliable, and that we would catch anything wrong as soon as something changed or a backend API broke something important.
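A sketch of how randomised conditions could be turned into BrowserStack capabilities for an Appium session is below. The capability names are recalled from BrowserStack's App Automate options and should be checked against their current documentation; the pick() helper and the value lists are our own illustrations:

```javascript
// Pick one random entry from a list; `random` is injectable for repeatability.
function pick(values, random = Math.random) {
  return values[Math.floor(random() * values.length)];
}

// Build a randomised capabilities object for one BrowserStack Appium session.
// Capability names here are from memory of BrowserStack's docs; verify them
// before use. Device, language, profile and city names are illustrative.
function buildCapabilities(random = Math.random) {
  return {
    device: pick(['Google Pixel 3', 'Samsung Galaxy S10', 'iPhone 11'], random),
    language: pick(['pl', 'es', 'it', 'pt'], random),
    // Network throttling and timezone simulation are BrowserStack features;
    // the exact profile/city values below are illustrative.
    'browserstack.networkProfile': pick(['2g-gprs-good', 'edge-good', '4g-lte-good'], random),
    'browserstack.timezone': pick(['Warsaw', 'Istanbul', 'Mexico_City'], random),
  };
}
```

Each nightly run can then generate fresh capabilities, so over time the suite sweeps through many device, language, network and timezone combinations.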
Our tests would run at different times during the night, as well as whenever we merged a change into our develop branch.
With this in mind, our release process then looked like the following:
We started running the tests against the latest version on our develop branch, which meant we could catch crucial errors and issues before they got to the release branch. Because of this, our confidence in our application releases started to massively increase, and our reliance on human testing decreased.
At the time of writing this article, we are still building out tests for a lot of the primary flows our users depend on, but so far so good.
We also realised something kind of cool we could take advantage of…
Introducing Doc/shots, the ultimate feedback loop
So when running our end-to-end tests, we could start taking snapshots of our application on different phones with different locales.
This meant our design and user experience teams could get a live view of how our application looked on the develop branch. So when they suggested new ideas or wanted to try something, they could get a high-level view of the overall impact of that change.
It also meant our application's design language and flow would become more efficient and consistent for the user.
For the technical explanation: we simply used appium.takeScreenShot(), which returns a base64 representation of the screenshot. We would then upload this screenshot to Amazon S3 and utilise the metadata S3 provides.
Then we built a static Nuxt.js application that consumed the AWS S3 API and mapped the metadata each image returned.
Super simple to implement, but super valuable for the other teams.
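For illustration, the pipeline can be sketched roughly like this. The bucket name and the helpers are hypothetical, and the exact screenshot method name depends on the Appium client you use (takeScreenshot in WebdriverIO, for example):

```javascript
// Build a deterministic S3 key from the run's conditions and the screen name.
// buildScreenshotKey is our own hypothetical helper.
function buildScreenshotKey(run, screen) {
  // e.g. "screenshots/pl-PL/iPhone 11/login.png"
  return `screenshots/${run.locale}/${run.device}/${screen}.png`;
}

// Capture a screenshot from the running session and push it to S3, attaching
// the run's conditions as S3 object metadata so a gallery can read them back.
async function uploadScreenshot(driver, run, screen) {
  const AWS = require('aws-sdk'); // required lazily so the helper above stands alone
  const s3 = new AWS.S3();
  const base64 = await driver.takeScreenshot(); // base64-encoded PNG
  await s3.putObject({
    Bucket: 'our-screenshot-bucket', // placeholder bucket name
    Key: buildScreenshotKey(run, screen),
    Body: Buffer.from(base64, 'base64'),
    ContentType: 'image/png',
    // S3 object metadata the Nuxt.js gallery reads back via the S3 API
    Metadata: { locale: run.locale, device: run.device, screen },
  }).promise();
}
```

The static Nuxt.js side then only has to list the bucket and read each object's metadata to group screenshots by device and locale.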
Next steps in this project
We have a few ideas on where to expand this testing in the future, some ideas include:
- Integrating with our analytics tools to use real user flows and automatically create e2e tests based on our customers' normal behaviour in the application. We could even explore some edge cases we would not normally consider.
- Visual regression testing: we would like to implement some coverage to ensure our pages remain consistent and to avoid unexpected changes sneaking into the UI side of our application. Appium has some capabilities for this, but we'll need to explore our options.
So overall, this has been massively beneficial in improving our confidence in releasing new changes, and it has encouraged us to be more agile and release more often. We still have some problems around guaranteeing a perfect and reliable experience, but this addition to our development and release flow has improved the way we catch issues and prevents our customers from having a worse experience.
Hopefully this was an interesting read and gave you some inspiration for building your own end-to-end testing suite for React Native.