Automated testing — Learnings from a digital product proof of concept

Isak Anklew
Apegroup — Behind the Screens
7 min read · May 25, 2016

A story on how we started doing automated testing at apegroup.

In this article I will go through my team's thinking, step by step, on how we made a proof of concept of automated testing on an iOS application. Our learnings and failures: it's all here.

Since the dawn of the industrial revolution, automated processes have fascinated humans all over the world. Automating robots to perform tasks on their own is cool and all, but automating software to perform tasks and commands on hardware? Now we're talking Skynet levels of awesome.

Skynet, anno 1899.

At Apegroup we have always focused heavily on exploratory testing and the end-user experience when we test our clients' products and services. The goal of these hands-on testing methodologies is to spend as much time as possible actually testing, instead of writing advanced test cases and setting up elaborate test plans.

This approach has served us well through the years, and now that we have our regression tests mapped out, combined with a solid understanding of our products and how to test them, I thought it was time to start investigating the realm of automation. My theory was that automated testing would give us more time to test the actual end-user experience of the product, rather than going through all of the “happy go lucky” test cases.

Hands on testing. With a fork.

It all started with our developers setting up Dashing dashboards that highlight the current code quality and build statuses of all our iOS and Android projects. The dashboards are placed throughout our office and are a great way of keeping track of every project's current status, making it easier for the whole production team to have transparency in our day-to-day work.


One of our dashboards displaying current Jenkins build statuses.

My vision for automated testing at Apegroup was to minimise the time we spent regression testing functional test cases and to enhance the quality and diversity of the test runs we do on a daily basis. I also wanted an easy way to record test cases by interacting with a device, so we could keep the manual scripting of test cases to a minimum. Finally, I wanted a local build server (with real devices hooked up to it) at our office to run our automated tests whenever a commit was made to the code base of our proof-of-concept project.

My first step was to find a new colleague who could help me explore the possibilities of actually carrying out my vision. That colleague turned out to be Gustavo; his ambition for automated testing far exceeded my own, so he was a perfect candidate to headline this whole initiative.

We started our adventure into automated testing by first identifying a real client and project that could serve as a proof of concept of what we wanted to accomplish. The client we chose to turn to was TryMe, a startup we had assisted in developing a coupon-claiming iOS application. The app has a relatively simple UI and is iOS only, which together with the coupon-claiming business mechanism made it a perfect candidate for our testing needs.

The TryMe iOS app in action.

Our “Automation team” grew quickly when we added an iOS developer (Magnus) and QA specialists (David and Olle) to the mix. We started off by drawing on the knowledge of Magnus, who knew the app through and through since he was part of the original team that built it. By mapping out the three most important user scenarios for the app, we quickly realised that we couldn't automate everything in a user scenario. Our automation would lean more towards UI testing than simply automating functional test cases, which made us aware of the limitations and of the need to specify the rules of each scenario well ahead of starting to automate it.

Our first time user scenario mapped out in a flowchart. The three most important test cases are highlighted.

After deciding on our three scenarios, we started breaking them down to discover which test cases were hidden inside each one. We could easily see that our first scenario had at least three different test cases we would like to try to automate. The first scenario also included the tricky test case of acting as a first-time user who has just installed the app. What made it tricky was that the TryMe app syncs your device ID to your user ID as soon as you claim your first voucher, so we needed to come up with a way of resetting the device ID.

This feature was not yet implemented in the project itself, so we asked Magnus to build it for us, which would both benefit our automation needs and make it easier to manually test as a first-time user in the future. Once the feature of resetting the device ID upon app installation was implemented, we could proceed with actually recording the whole scenario in Instruments.
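To make the idea concrete, here is a minimal sketch of how such a reset hook could be wired up. The launch argument name, the UserDefaults key and the helper function are all assumptions made for this example; the actual implementation in the TryMe app may look quite different.

```swift
import Foundation

// Hypothetical UserDefaults key under which the app stores its device ID;
// the real TryMe key and storage mechanism may differ.
let deviceIDKey = "com.tryme.deviceID"

/// Clears the stored device ID when the app is launched with a dedicated
/// argument, so a UI test can act as a first-time user on a fresh install.
/// Intended to be called early in application(_:didFinishLaunchingWithOptions:).
func resetDeviceIdentityIfRequested() {
    if ProcessInfo.processInfo.arguments.contains("--reset-device-id") {
        UserDefaults.standard.removeObject(forKey: deviceIDKey)
    }
}
```

The UI test side then only needs to pass the same launch argument when it starts the app, as shown in the test sketch further down.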

I had some prior knowledge of the recording feature in Apple's Instruments tool, but in my earlier attempts I had never really gotten it to record a whole scenario directly from a real device. Gustavo was given the task of evaluating both Instruments and other automation tools in order to decide which tool we should use.

After some investigation and test trials, Gustavo came up with the following matrix of the automation tools.

After looking through both Apple's and Google's own official automation testing tools as well as third-party tools, Gustavo came to the conclusion that we should choose the official Apple and Google tools for our testing.

Gustavo: The reason for choosing the official tools was that crashes were less frequent with them and they were free. We liked Appium, but the setup process was less than easy, and when we finally got it running it had problems accessing the correct elements in our app.
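One common cause of a tool not finding the right elements is missing accessibility identifiers. As an illustrative sketch (the view controller, button and identifier names below are invented, not taken from the TryMe app), this is how an element can be made reliably addressable for UI automation tools such as XCUITest or Appium:

```swift
import UIKit

// Illustrative only: names are assumptions for this sketch, not the TryMe code.
final class VoucherViewController: UIViewController {

    private let claimButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        claimButton.setTitle("Claim voucher", for: .normal)
        // A stable accessibility identifier lets UI tests find the element by
        // name instead of by position or localized title, which breaks less often.
        claimButton.accessibilityIdentifier = "claimVoucherButton"
        claimButton.frame = CGRect(x: 20, y: 100, width: 200, height: 44)
        view.addSubview(claimButton)
    }
}
```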

Since Apple had just recently (in Xcode 7.2) started to include the whole XCUITest suite inside Xcode itself, with tests written in the Swift language, we also decided not to use Instruments.

In our attempts to get Instruments to run our first scenario, Gustavo discovered that the JavaScript-based approach made the whole test a lot slower and more prone to crashing compared to XCUITest. We're still going to use Instruments for its other features, such as running different profiling templates for performance testing, but not for recording our user scenarios.

Since this proof of concept focused on an iOS app, we didn't have time to set up Android Studio with UIAutomator or Espresso, but since this blog post was published we have started dabbling with these tools as well.

Gustavo started to record our three scenarios in XCUITest on real devices, and with little effort and modification he was quickly able to output a whole suite of test cases. While Gustavo focused on making the tests repeatable, I started investigating how we could set up a local build server for our tests to run on. I had earlier been shown a demo of Xcode Server by one of our developers, Dennis, who with little hassle had set up an iMac running the server app.
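To give a feel for what such a recording can look like after some manual cleanup, here is a rough sketch of one scenario as an XCUITest case. The identifiers, labels and flow are assumptions made for illustration, not the actual TryMe test code:

```swift
import XCTest

// Illustrative only: identifiers, labels and the flow are assumptions,
// not the actual TryMe test suite.
final class FirstTimeUserTests: XCTestCase {

    private let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        // Ask the app to behave like a fresh install (see the reset hook sketched earlier).
        app.launchArguments += ["--reset-device-id"]
        app.launch()
    }

    func testFirstTimeUserCanClaimAVoucher() {
        // Recorded with Xcode's record button and then cleaned up by hand:
        // open the first voucher in the list and claim it.
        app.tables.cells.element(boundBy: 0).tap()

        let claimButton = app.buttons["claimVoucherButton"]
        XCTAssertTrue(claimButton.waitForExistence(timeout: 5))
        claimButton.tap()

        // Verify that a confirmation appears after claiming the voucher.
        XCTAssertTrue(app.staticTexts["Voucher claimed"].waitForExistence(timeout: 5))
    }
}
```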

Automation happening.

Together with Dennis, we were able to set up a bot inside Xcode that automatically ran our user scenarios as soon as a new commit was made to the TryMe proof-of-concept branch we had set up in Bitbucket. With this I felt confident in our proof of concept, so I decided to give a Friday presentation on the automation topic in front of the whole company.

The presentation was well received, and we're now working on getting more of our projects to automate their most important user scenarios and to run the automated tests for each commit.

I think that automation can absolutely cut down on the time we spend regression testing our most important user scenarios and test cases. It will also limit human error in each test run, and we can run more tests on more devices at the same time. Most important of all, though, is the ability to record user scenarios; in fact, I think it's the key to unlocking test automation.

By collaborating with both designers and developers, and by discovering, defining, designing and recording each scenario together, we should all be able to create a better overall product and raise its quality together.

By making the first step of automation as easy as simply interacting with a real device as a user, the hurdle of automation has been lowered drastically. However, automation should still be seen as only one of the tools in a tester's toolbox, not a machine or robot that can completely substitute for a real human testing software. We're not Skynet… yet.

My name is Isak Anklew and I’m head of Testing and Support at Apegroup. We’re a design and technology agency in Stockholm, Sweden. If you would like to know more about how we work with automated testing, please see our website.

If you enjoyed this article, please press Recommend below.
