Test Automation is Really Manual

test.ai · Published in Appdiff · Sep 14, 2016

The promise of software is all about automation, but the open secret is that most software test automation is manual.

Software engineers write code that increasingly automates our work, recreation, and communication, and much of the software itself runs with little human intervention and across hundreds of thousands of machines. Yet how do we test all this software? How do we make sure things work correctly? We use a combination of manual testing, and manual writing of test automation scripts.

Stock image person, super frustrated after writing 100 manual test cases.

Manual Testing

Manual testing in a team often begins in an ad hoc fashion — imagine people doing exploratory testing by tapping on an app or clicking in a browser and looking for bugs. Some teams will host a “pizza bug bash” where they gather around a conference table, eat pizza, and hunt for bugs.

Later on, the engineering team may hire full-time manual testers or external consultants to take over the bulk of this process, and the testing itself becomes formalized. Eventually, teams have hundreds or thousands of documented test cases that are manually executed for each build.

It’s tiring because so much of manual testing is repetitive. Every new version of the application needs to be tested to see if it launches without crashing, if users can properly log in, if the app is responsive enough, and whether key features and workflows are working correctly. Testers become bleary-eyed the 100th time they log into the app. The repetition saps mental creativity and raw time needed for more critical testing tasks, such as exploratory, negative, and boundary testing.

Stock image person tired, probably after hand-crafting 10 “automated” tests.

Automated Testing

All this manual work leads software teams to inevitably arrive at the same insight: “It would be so much faster and less expensive if we just wrote more software to automate all that testing.” There are two basic branches of test automation:

  • Record and Playback (RaP): The idea is simple — someone has written a magical app that watches a tester’s actions while testing an app. The tester hits Record, clicks stuff, then hits Save. The RaP system then plays back these test steps for each new version of the app. I fell for this dream myself a few years back.
  • Test Automation Scripts (TAS): When the people testing also know how to code, they likely want the control of writing the test scripts themselves. They intuitively know how brittle RaP tests can be, love the extensibility of parameterizing custom test code, feel cool while programming, and like exerting more control over the software and test system in general.

App teams then set up test machines that patiently await new versions of the app or web page. Test inputs, swipes, and clicks are automatically applied to the application. Outputs and various application states are verified by the automation. Does this mean that the app team can now focus on feature development and be confident that every new version of their app is tested? Nope.

Open Automation Secrets

  • Cost and ROI: Lots of time (money) is spent *manually* recording or writing all these test automation scripts. This money could have been spent on manual testing and/or pizza.
  • Setup lag time: Before these promising test scripts can run, they have to be designed, written, and wired into reporting. That often takes months.
  • Miss new features: Test automation scripts only verify the inputs and outputs that were expected at the time of writing. That means automation usually doesn’t cover the newest features — the very things in the app that need the most testing.
  • Weak verification: The test scripts only verify what they were manually told to verify, which often isn’t much. I remember a day while I was at Google where tens of thousands of automated test scripts showed Pass, even though the browser we were testing just showed a blank white screen.
  • Who tests the tests: Test automation breaks. All the time. Worse, it breaks when you need it the most — when the app changes.

Because of these issues, automation projects often deliver far less than promised and are scaled back from their initial aspirations. And they still require a lot of manual work.

So What is Actually Automated?

Consider the main phases of testing: Design, Implementation, Execution, and Maintenance. Are any of these steps actually automated?

  1. Design: No. Software changes constantly as new features are added or modified, so new test cases have to be created by hand.
  2. Implementation: No. People are typing tedious lines of code to input values and verify state.
  3. Execution: Yes. Test scripts can be set to run on demand or at scheduled times, and manual testers can be trained to execute test cases reliably.
  4. Maintenance: No. When the automation breaks, it has to be fixed — you guessed it — by hand. Manual re-recording of RaP tests and re-coding of TAS scripts is annoying, expensive, and labor-intensive. The more tests you have, the more of these scripts need to be updated when the product changes.

At best, only test script execution is ever actually automated. Everything else remains incredibly manual.

Automated Automation

We’ve explored the basics of manual testing, seen how it relates to automated testing, and realized that automated testing is really just another form of manual testing work. And in practice, this automation work is often more expensive and delivers less value.

Thanks to advances in Artificial Intelligence (AI) and Machine Learning (ML), we are finally getting a glimpse of a future where test automation is truly automated. Imagine having access to a team of bots that have been trained to tap, type, and swipe through an app just like humans (and mischievous testers), and that could learn the basics of “good” and “bad” software behavior. Software testers and developers would have the time to do what they do best — add features and think creatively about test coverage.

It is exciting to see what these bots will do in the near future and see how humans will work alongside these truly automated test scripts. More on that in the next post…
