The Horrors of Biomechanical Test Automation

Manual Testing is Dead, Long Live Manual Testing

Carl Horned
The Startup
7 min read · Aug 28, 2019


Manual

relating to or done with the hands.

Let’s start by describing the villain of this article: Biomechanical Test Automation.

“Human Tissue over a Metal Endoskeleton”

Large QA organizations often rely on creating clear testing instructions for their testers, including exact steps for setup, execution, and the expected outcome.

What you’re effectively doing when writing these instructions is automating the testing, and there are several problems with automating a human’s behavior like this.

First, you’re telling your tester that you don’t trust their intellect or intuition and that they need strict instructions in order not to mess anything up. This generates zero buy-in from the tester and will make them put in the least possible effort. They will likely only report the most egregious issues along the path defined for them and probably won’t report any issues outside the path. You could say that a test executed with precise instructions isn’t a test at all, but a “check”.

Second, you get most of the negatives of automated testing with none of the positives. You will likely have to invest in some kind of tracking software to be able to audit your tests, and even then you won’t truly know whether something was actually tested or just lazily skipped through by a stressed tester needing to catch up on their quota for the day.

Bug immunity (tests repeated with the exact same steps and data eventually stop finding new bugs) is also a risk with Biomechanical Test Automation. One could even argue that avoiding it is harder here than with true automated testing, since seeding manual tests with random data, or with data from an object mother, is more difficult when the test is a set of instructions for a human to execute.
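In code, the object-mother idea is just a factory that produces valid fixtures with randomized defaults, which a test can selectively pin down. A minimal sketch (the `userMother` name and its fields are invented for illustration):

```javascript
// A minimal "object mother" sketch: a factory that builds valid test
// fixtures with sensible defaults, randomized where it is safe to do so.
// The userMother name and its fields are made up for illustration.
function userMother(overrides = {}) {
  const randomId = Math.floor(Math.random() * 1e6);
  return {
    id: randomId,
    name: `user-${randomId}`,
    email: `user-${randomId}@example.com`,
    active: true,
    ...overrides, // a test pins down only the fields it cares about
  };
}

// Seeding tests with varied data makes it harder for the code under
// test to become "immune" to one fixed set of inputs.
const testUser = userMother({ active: false });
```

Writing an equivalent instruction sheet for a human ("invent a plausible, different email address each run") is far more awkward, which is the point of the paragraph above.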

A manual test also doesn’t scale: it takes the same amount of time and carries the same cost every time it’s executed. It’s impossible to store a manual test and execute it whenever necessary, make it part of a software repository for anyone to use (for example in an open-source project), or ship it as part of a code delivery to a client.

Now to present our Hero: low-friction E2E testing frameworks.

“No, no, no, no. You gotta listen to the way people talk. You don’t say ‘affirmative,’ or some shit like that. You say ‘no problemo.’”

These have started popping up within the last few years, and this is how I define them:

1. Minimal Setup

The first thing that makes these testing frameworks low friction is how easy it is to get to your first written and running test: often all you need is a simple installation and a test spec file, and you’re done. Historically, what is daunting about test automation is the infrastructure around it: setting up all the different services, learning different syntaxes, and making sure they can interact with each other. An all-in-one solution avoids this completely.

2. Simple Syntax

Writing a test for a computer to execute should be as easy as writing it for a human would be. Frameworks like Mocha make it very easy to define when different actions should be executed: before, beforeEach, after, afterEach, and so on.

Chai, in turn, makes it possible to describe the desired outcome of your test in a manner very similar to how you would describe it to a human: “Expect the yellow button to be yellow.”

3. Simple Execution

Once a test suite is written, it should be as easy to execute as pushing a button, with the results clearly communicated at the end of the run. Thanks to projects like Chromedriver and Geckodriver, it has become very easy to start a browser and execute tests just as a real user would. With many of the low-friction frameworks you only have to write a test once and can then execute it against all mainstream browsers (though executing your tests on Chrome alone already covers roughly 70% of internet users). And if you want even higher coverage, for example Internet Explorer or older browser versions, there are multiple cloud solutions available for executing your tests on almost any browser, including mobile ones.

So who is The Hero saving as he defeats the villain? Exploratory Testing

“The unknown future rolls toward us. I face it, for the first time, with a sense of hope.”

When the chore of testing (or rather, checking) is automated, you not only save money but also time. The initial time spent writing tests is regained multiple times over, as those tests can be run across any number of devices any number of times.

Automation also allows the tester to be proactive, since an automated test can be written before the functionality it tests even exists; simply knowing the requirements can be enough.

Maintenance is something that comes up a lot when discussing how much time test automation consumes. By maintenance I mean fixing the false positives of failing tests, or updating infrastructure to support more platforms.

I’d say that even if a test result is a false positive, the test still did its job: it alerted you to a change in the software, and you can now be deliberate about how to fix it. Was the breaking change intended or not? A failed test can be a signal that guides the tester later in their exploratory testing.

And that is how we should use the free time we gain after automating our testing: focus on exploratory testing. I’m using the term here as an umbrella term for many kinds of testing, mostly black box or white box.

Using exploratory testing techniques challenges the tester and generates buy-in that testing with instructions never could. Testers have to care about what they test; otherwise you will never be able to trust their results.

Exploratory testing could be described as the only true way to test, as the tester actually gets to investigate the product, find its strengths and weaknesses, and communicate them to the team. A tester can be an invaluable ally to a product manager or product designer, since they can be the ones who make sure that what the team has built is actually what it intended to build in the first place.

What’s holding many teams back from committing to a QA Strategy focusing on Automation and Exploratory Techniques?

1. Lack of knowledge

Few teams know how easy automation can be or which tools are now available. I advise these teams to scroll to the bottom of this article and try out one of the tools I link there; you will be surprised!

2. “Technical” vs. “Non-Technical” roles

There’s a false dichotomy in some teams between “technical” roles, which write code, and “non-technical” roles, which don’t. The fact is, though, that the bar for test automation is now so low that if you can expect someone in a role to learn a piece of software that has a UI, you can expect them to learn how to write test automation. Automated tests don’t even have to be written by a tester; they could be written by a Product Designer or a Product Manager as well.

3. The Prime Directive

Some teams don’t perceive test creation as part of product development. A team is supposed to create value, and only building new features is perceived as valuable; creating tests to ensure the dependability, usability, and visual fidelity of those features becomes secondary, something that can happen later.

4. Late commitment

Many teams make the mistake of waiting to create automated tests until after some milestone: after the MVP, after alpha, after beta, and so on.

This puts the testers in a position where they have to catch up, which they might never be able to do depending on the speed of development or size of the testing team.

5. Visual tests can’t be automated

There’s a false impression that humans are somehow better than computers at detecting visual errors, while in reality the opposite is true. A computer will have a much easier time detecting visual inconsistencies, as long as you feed it a baseline to compare against. Humans are notoriously bad at noticing small visual discrepancies; see for example “spot the difference” games or the “can’t unsee” challenge. See the bottom of this article for links to automated visual comparison tools.
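At its core, the baseline comparison is just a pixel diff: count the pixels that differ from a stored baseline and flag the screenshot when the difference crosses a threshold. A minimal sketch (the `visualDiff` name and the default threshold are made up; real tools add color tolerance, anti-aliasing detection, and region masking):

```javascript
// Minimal visual-diff sketch. Images are flattened arrays of pixel
// values (e.g. grayscale 0-255); baseline is the approved screenshot.
function visualDiff(baseline, screenshot, maxDiffRatio = 0.01) {
  if (baseline.length !== screenshot.length) {
    throw new Error('images must have the same dimensions');
  }
  let differing = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== screenshot[i]) differing++;
  }
  const ratio = differing / baseline.length;
  // Fail the check when more than maxDiffRatio of the pixels changed.
  return { pass: ratio <= maxDiffRatio, ratio };
}
```

A computer applies this check identically on every single run, which is exactly where a human reviewer’s attention drifts.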

6. Exploratory Testing can’t be Audited

It is often perceived as valuable to a team to be able to look back at its testing and see exactly when something was last tested, or exactly what has been tested and what hasn’t. There are multiple techniques for documenting exploratory testing, such as session-based test management (SBTM), thread-based test management (TBTM), or xBTM.

7. What about Internet Explorer?

While it is true that modern browsers offer better opportunities for automation, supporting older browsers isn’t a heavy lift. Thanks to projects like IEDriver, running your tests on Internet Explorer is just as easy as on the latest version of Chrome. Cloud testing services provide an easy way to test on Internet Explorer, and Microsoft also provides free-to-use virtual machines.

So now it’s time to try this out. Here’s a list of links and tools I recommend to get started; please let me know if you would like to see anything added to this list:

Low Friction E2E Automation Tools

Ultra Low Friction E2E Automation Tools (test recording & execution)

Cloud Testing Services

Image Comparison Services
