Automation is more than just clicking a button

Team Merlin
Government Digital Services, Singapore


Automation testing is all about pressing a magic button, and everything just runs by itself!

Alas, that’s not really true.

Behind every click and input performed by automation tools, several things must be considered to achieve quality automation testing. Although automation scripts can simulate human actions and interactions in an application, they don’t replicate how humans instinctively react to different situations. The steps performed by an automation script are only a reflection of the programming and instructions provided. When anything falls outside of that, we (humans) start blaming the tool for its limitations (and that’s unfair).

Before we conclude that the tool is not capable when things don’t go the way we intended, take a step back and look at our expectations and the way the automation test case is scripted.

1. Encountering unexpected errors?

There are times when people ask if automation can handle all sorts of scenarios, such as handling error messages intuitively or automatically resuming should any step fail. Here’s an example:

“Once it hits the error 404 page, can the tool automatically stop running the remaining test cases and automatically alert the developers about the error page?”

Unfortunately, automation tools are not that smart (yet). Although some tools do offer a “recovery from failure” feature, it may not be as simple as enabling the feature. The recovery may amount to re-running the failed test case or step all over again. In some cases this works, but what about scenarios where the test data is not reusable? The failed run may have already updated the test data to the point that re-running the test case is impossible. Or, since it is a 404 error page, re-running the failed test cases will just produce the same result, and the extra time spent re-running is wasted.

In reality, when the test execution runs into an unexpected 404 error page, the script usually fails the step with a reason like “unable to locate an expected object element” or “failed at some validation point”. When this happens, the tester has to look into the execution report to investigate further. If screenshots or logs are captured during the execution, testers can refer to them to understand the cause of the 404 error.
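A minimal sketch of this behaviour, showing how a step can fail with a clear reason and capture a screenshot for the report. The `FakeDriver`, `find_element`, and `save_screenshot` names are hypothetical stand-ins for whatever your automation tool provides; this is an illustration of the pattern, not any specific tool’s API:

```python
class ElementNotFound(Exception):
    """Raised when an expected element is missing, e.g. on a 404 page."""


class FakeDriver:
    """Minimal stand-in for a browser driver, for illustration only."""

    def __init__(self, page_title):
        self.page_title = page_title

    def find_element(self, locator):
        # Simulate landing on a 404 page: the expected element is missing.
        if self.page_title == "404 Not Found":
            raise ElementNotFound(f"unable to locate expected element: {locator}")
        return locator

    def save_screenshot(self, path):
        return path  # a real driver would write an image file here


def run_step(driver, locator, log):
    """Run one step; on failure, record the reason and a screenshot path."""
    try:
        driver.find_element(locator)
        log.append(("PASS", locator))
        return True
    except ElementNotFound as err:
        shot = driver.save_screenshot("failure.png")
        log.append(("FAIL", str(err), shot))
        return False


log = []
ok = run_step(FakeDriver("404 Not Found"), "#checkout-button", log)
```

The point is that the failure reason and the screenshot are recorded explicitly because the script was told to do so; the tool does not investigate the 404 on its own.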

While humans can perform such tasks intuitively, we must provide the automation tool with precise instructions on how to handle each such situation.

2. Using if-else statements in automation scripts

“Can automation scripts execute if-else statements?”

Technically, of course! However, a clear and unambiguous outcome is crucial for automation testing, and if-else statements can complicate matters.

Example: A pop-up may or may not appear in an application, depending on the condition. For instance, a “welcome new user” pop-up will appear for newly registered users, but should not appear for existing users.

Imagine the following is used to close the pop-up (by clicking the ‘Yes’ button in the pop-up) when it appears:
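A minimal sketch of such a conditional close, assuming hypothetical `popup_is_displayed` and `click_yes` helpers standing in for your automation tool’s API:

```python
class Page:
    """Stand-in for the application page, for illustration only."""

    def __init__(self, popup_visible):
        self.popup_visible = popup_visible
        self.clicked_yes = False

    def popup_is_displayed(self):
        return self.popup_visible

    def click_yes(self):
        # Clicking 'Yes' dismisses the pop-up.
        self.popup_visible = False
        self.clicked_yes = True


def close_popup_if_present(page):
    # If the pop-up is there, click 'Yes' to close it; otherwise carry on.
    if page.popup_is_displayed():
        page.click_yes()


page = Page(popup_visible=True)
close_popup_if_present(page)
```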

This script will definitely work: it clicks to close the pop-up whenever it appears, and the test case eventually passes. BUT is that the full, correct expected behaviour?

It is easy for a human to determine when the pop-up should or should not appear. However, if a pop-up appears when it shouldn’t (i.e. when logging in with an existing user account), the automation script will still proceed to click and close it, and thus fail to catch this bug.

This is why it is better to have distinct flows than an if-else. Definite input, definite output. This way, the automation script knows exactly when it should see a pop-up (and when it shouldn’t). We won’t say using if-else statements is a 100% no-no; use them thoughtfully and in moderation. As much as possible, find out the exact condition required and script for a fixed outcome.
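The distinct-flow idea can be sketched as two separate test cases, each with a definite expected outcome. The `login` function below is a hypothetical stand-in for the application under test (new users get the welcome pop-up):

```python
def login(is_new_user):
    """Stand-in for the app: only newly registered users see the pop-up."""
    return {"popup_visible": is_new_user}


def test_new_user_sees_welcome_popup():
    # Definite input (new user) -> definite output (pop-up must appear).
    page = login(is_new_user=True)
    assert page["popup_visible"], "BUG: welcome pop-up missing for new user"


def test_existing_user_sees_no_popup():
    # Definite input (existing user) -> definite output (no pop-up allowed).
    page = login(is_new_user=False)
    assert not page["popup_visible"], "BUG: welcome pop-up shown to existing user"


test_new_user_sees_welcome_popup()
test_existing_user_sees_no_popup()
```

With this split, a pop-up appearing for an existing user fails the second test instead of being silently clicked away.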

3. Validating “correctly”

Using the same example above, validating that a pop-up exists isn’t enough on its own. What if another pop-up with a different message appears?

Humans can easily distinguish them and flag the issue with minimal effort, but the automation tool will just pass the test case without checking the content of the pop-up. Be mindful that both pop-ups may use exactly the same object elements, with only the message text differing.

Thinking from the perspective of the automation tool, all we’re asking it to do is perform a simple action of clicking “Yes” on the pop-up, which is exactly what it has been programmed to do! Performing something ambiguous won’t help determine whether the test is defect-free. To address this, include a step that explicitly verifies the pop-up message before clicking the button.
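A minimal sketch of that extra verification step, assuming the expected message text and the dictionary-based pop-up representation are both hypothetical:

```python
EXPECTED_MESSAGE = "Welcome, new user!"  # assumed expected pop-up text


def close_welcome_popup(popup):
    """Verify the pop-up message before clicking 'Yes' to close it."""
    # Fail loudly if a different pop-up appeared using the same elements.
    if popup["text"] != EXPECTED_MESSAGE:
        raise AssertionError(f"unexpected pop-up message: {popup['text']!r}")
    popup["closed"] = True  # stand-in for clicking the 'Yes' button


popup = {"text": "Welcome, new user!", "closed": False}
close_welcome_popup(popup)
```

Now a pop-up with a different message fails the step immediately instead of being dismissed and forgotten.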

Although automation testing can run test cases automatically, it does not handle everything by itself; automation tools do what we tell them to do, and don’t do what we never say. If we don’t handle things properly and those bugs reach production, not only may more time be wasted fixing the issue (and ensuring it doesn’t break any other functions), it will also affect users’ confidence in the product (which may eventually cause more damage to the organisation).

Script with caution and with the mindset of the automation tool to reduce the risk of defect leakage!

We hope this article has inspired you to reassess your automation test case scripting. Clear and concise commands help ensure automation test cases are executed accurately and effectively!

🧙🏼‍♀Team Merlin 💛
Application security is not any individual’s problem but a shared responsibility.
