Test Automation Demystified, Part 1: From Manual to Automated UI Testing

Denis Markovtsev
10 min read · May 17, 2019


Walking is a great way of getting from point A to point B. You move slowly enough to look around and notice things. There is no need for paved roads and fuel. You can even carry some weight with you. In other words, walking gives you great freedom in choosing your direction but limits speed and useful load. Everything changes when movement is automated. Using a car, you can get to point B faster and take more things with you. But is it really that simple and straightforward?

There are many ways to get from point A to point B

If you use a car in a big city, as I do, you may get stuck in a traffic jam. Remember that lucky pedestrian leaving you behind at a couple of traffic lights in a row? Have you ever noticed something new while walking through a place you had only ever driven through? Have you ever reached your destination much later than planned because of roadblocks and detours? On the other hand, we often choose some kind of transport for commuting. On average it lets us reach our destination much faster than on foot. And if it is public transportation, we have time to do other things: watch videos or even write text.

Let’s now replace walking with manual testing, and using a vehicle with test automation, and we get a very similar picture. First, there is no universal best choice. Second, there are different tools (vehicles) that help you achieve your goals. And third, automating things requires certain skills.

Let’s sort this out!

Manual Testing

There are two types of manual testing.

Manual — Scripted Testing

A tester may have a predefined script that spells out every step. The script tells you what to do with the application under test, what data to input and what result to expect.

Below is a simple test checking math in a calculator application. Every step is clearly described. A person executing the test knows what data to input and how to determine that the system behaves correctly.

Scripted Manual Test
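For illustration, a scripted test of this kind might read (the exact steps and values are hypothetical):

  1. Launch the Calculator application.
  2. Enter 2 + 2 and press =.
  3. Verify that the display shows 4.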

A scripted test can be developed and executed by different people. In some cases, such a test is a good candidate for automation: an automation engineer implements the test and a software tool runs it. If we have a working car, the roads are good and we know how to drive, why not reach point B faster?

Manual — Exploratory Testing

At the other end of the scale is Exploratory Testing. The shortest definition is:

Simultaneous learning, test design, and test execution — James Bach, Cem Kaner

When you do exploratory testing, you do not have a script. You generate the steps based on goals and observations. All steps are documented along the way so that, if an issue is found, it can be reproduced.

When a teacher poses questions to a student during an oral exam, this is exploratory testing. The next question depends, to some extent, on the previous answers and on the teacher's initial plan. Compare an oral exam to a written test, which more closely resembles scripted testing.

Another example is testing application store submissions. You get an app and a rough idea of what it should do. You need to test it and make a decision: publish or not. So you make assumptions, check them and decide whether the application's output is valid.

Automated Testing

We are now getting closer to the main subject of this article: test automation. In this article I am limiting the concept of test automation to E2E/UI testing only (see the famous test pyramid), because it is much more closely connected to manual testing than unit and integration testing. I find it convenient to split automated E2E tests into three types.

Automated — Scripted Testing

This is the most common type. A sequence of steps that could be performed manually is automated. During test execution a software robot interacts with the application like a real user, inputs data and verifies output, all automatically.

In the example below the test checks an application's login screen. It enters a username and password, then clicks the login button and verifies the result.

Scripted Automated Test
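To show what the robot does under the hood, here is a minimal sketch of such a login test using Selenium WebDriver in Python. The URL and element IDs are hypothetical; a real test would use the locators of the application under test.

```python
# Minimal scripted UI test sketch (Selenium 4+, hypothetical URL and element IDs).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                        # open the login screen
    driver.find_element(By.ID, "username").send_keys("demo_user")  # enter username
    driver.find_element(By.ID, "password").send_keys("secret")     # enter password
    driver.find_element(By.ID, "login").click()                    # click the login button
    assert "Welcome" in driver.page_source                         # verify the result
finally:
    driver.quit()
```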

Scripted automated testing is very much like commuting to work or back home. Routine tasks should be automated whenever possible to free up time for more creative things.

Automated — Exploratory Testing

Imagine a software tool that, given a website URL and a brief description of its purpose, figures out everything required for testing. This is not possible today and is the subject of ongoing research. Perhaps someday AI-powered robots will be able to do exploratory testing. At this point there is no evidence that robots will replace humans in QA in the foreseeable future.

Exploratory testing is like going on foot. You are moving slowly enough to notice various aspects of application behavior. Things that look perfectly fine to an automated test may be easily spotted as problems by a human, and vice versa.

Here is an example from the UI Testing Playground, a site created to challenge test automation tools. A software robot has no issue clicking the green button in the sample below.

Click the green button

The page displayed below is functional, but its layout is messed up. This is not a problem for a robot: it can still interact with the page, enter text into fields and click buttons and links.

What's wrong with this page?

Automated — Generative Testing

One may consider it the first step toward automated exploratory testing. There are two big movements in this field.

Model-based Testing

This type of testing involves both manual and automation effort. First, you have to build a machine-readable model of the application; usually it is a state machine, where you describe states and the transitions between them. Second, an algorithm reads the model and generates test cases.

In the picture below you can see a very simple model. One of the biggest challenges in model-based testing is selecting test cases from the pool of generated ones. Even for our simple model the number of generated test cases is quite large: if we take all paths in this state diagram that are no longer than 8 steps, we get 33 test cases.

Model-based Testing
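To make the idea concrete, here is a minimal Python sketch of test-case generation from a state model. The states and transitions are hypothetical, not the ones from the diagram above, but the rapid growth of the test-case pool is the same.

```python
# Test-case generation from a tiny, hypothetical state model.
MODEL = {
    "LoggedOut":  ["LoginForm"],
    "LoginForm":  ["LoggedIn", "LoginError"],
    "LoginError": ["LoginForm"],
    "LoggedIn":   ["Profile", "LoggedOut"],
    "Profile":    ["LoggedIn"],
}

def generate_paths(state, max_len, path=()):
    """Yield every path that starts at `state` and visits at most max_len states."""
    path = path + (state,)
    yield path
    if len(path) < max_len:
        for nxt in MODEL.get(state, ()):
            yield from generate_paths(nxt, max_len, path)

test_cases = list(generate_paths("LoggedOut", max_len=8))
print(len(test_cases), "candidate test cases")  # 42 paths (including trivial ones) for this tiny model
```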

Monkey or Fuzz Testing

The second type of generative automated testing is fuzz, or monkey, testing. A software tool interacts with an application in a random way, frequently using invalid input data, and records the output. The application should not crash and should respond in a reasonable way.
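Here is a minimal monkey-testing sketch in Python with Selenium, assuming `driver` is an already-open WebDriver session on the page under test. The step count and element selection strategy are arbitrary choices for illustration.

```python
import random
import string

from selenium.webdriver.common.by import By

def monkey_test(driver, steps=100):
    """Randomly poke at the page; the only expectation is that the app keeps responding."""
    for _ in range(steps):
        candidates = driver.find_elements(By.CSS_SELECTOR, "button, a, input")
        if not candidates:
            break
        target = random.choice(candidates)
        try:
            if target.tag_name == "input":
                # Feed random, often invalid, text into input fields.
                target.send_keys("".join(random.choices(string.ascii_letters + string.punctuation, k=20)))
            else:
                target.click()
        except Exception as exc:
            # Log and continue: the point is to observe crashes and unreasonable behavior.
            print("Interaction failed:", exc)
```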

What Should Be Automated?

The most widely used type of UI test automation is scripted test automation, frequently automation of manual test cases. The main reason is that people want to offload boring, repeatable tasks to machines. This is where the first roadblock usually hits.

Test automation does not come for free. It costs time and money and thus should be done wisely. If the time and resources spent on developing and maintaining automated tests are less than the resources spent on manual test execution, go for it; otherwise, think about what you are doing wrong. Don't do automation for the sake of automation; it must be driven by business objectives.
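As a back-of-the-envelope check, you can compare the cumulative cost of the two approaches. The numbers below are invented purely for illustration.

```python
# Rough break-even estimate for automating one test case (all numbers are made up).
dev_cost = 8.0             # hours to automate the test once
maintenance_per_run = 0.1  # hours of upkeep per automated run
manual_per_run = 0.5       # hours to execute the test manually

runs = 1
while dev_cost + maintenance_per_run * runs > manual_per_run * runs:
    runs += 1
print(f"Automation breaks even after about {runs} runs")  # 20 runs with these numbers
```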

This is the ideal performance curve of a QA group switching to test automation. Expect a slowdown at the beginning of a test automation project: you need time to understand what can be automated, how to automate it and which tools to use. With time you should reach the next level of performance. Machines will be running automated tests, and you will have time to create more automated tests or do more exploratory testing.

Automation project performance

Beware of getting stuck in a traffic jam on your test automation bus.

Difference Between Manual and Automated Scripted Testing

There are things in test automation that have much more impact on the process than they do in manual testing. What is easy for a human may be very difficult for a machine, and vice versa.

Element Identification and Interaction

Most automation tools deal with a hierarchy of elements within an application. It may be hard to identify an element, and it may be hard to interact with it. So you will need to learn the tools and develop test automation skills.

This is a screenshot of the Web Spy displaying the Document Object Model (DOM) tree of a web page. To find an element, an automation tool remembers various information about it and about the DOM tree. Sometimes a tool needs assistance to identify an element reliably. This is where you will need an understanding of nodes, attributes, locators and XPath.

Web Spy
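For example, a locator can be written as an XPath expression. The sketch below uses Selenium in Python with hypothetical attribute values and assumes `driver` is an open WebDriver session; the second, position-based locator illustrates why fragile XPaths cause trouble.

```python
from selenium.webdriver.common.by import By

# A locator based on a stable attribute (hypothetical value):
login_button = driver.find_element(By.XPATH, "//button[@data-test='login']")

# A purely positional locator also works today, but it breaks as soon as
# the page structure changes:
fragile_button = driver.find_element(By.XPATH, "/html/body/div[2]/form/button[1]")
```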

Maintenance

Automated tests need maintenance. Updates to the application UI or to the underlying technology may break things.

Compare the two versions of the login screen of a sample application. A manual test that checks login does not change after switching to the new version of the application: it is not hard at all for a manual tester to find the new username and password fields. But it may be a problem for an automated test, which may need to be updated or fixed.
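One common way to keep such maintenance cheap, not specific to any particular tool, is to keep locators in a single place, page-object style, so a UI change means a one-line fix. The field IDs below are hypothetical.

```python
from selenium.webdriver.common.by import By

class LoginPage:
    # The old version of the application used (By.ID, "user"); after the redesign
    # only this constant needs to change, not every test that logs in.
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    LOGIN = (By.ID, "login")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.LOGIN).click()
```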

Flaky Tests

This is a completely new experience for those who are just starting with test automation. Flaky means that a specific test, or some number of tests in a test set, may fail from time to time.

Why does it happen? Modern applications load data over the network. Data may be loaded at different speeds, depending on server and network performance. Also, sometimes servers do not return the data we need. This leads to uncertainty in page loading. It is not a problem for manual testing, but it complicates test automation.

Imagine a situation where a button is displayed on screen, a robot clicks it, but the application has not finished initialization yet and thus does not respond to the click. How do you determine that the application did not respond? In most cases this is not a problem for a manual tester, but it requires an automated test to be able to wait, retry and recover. Wait means that the test should wait long enough for a UI element to appear on screen and be ready for interaction. Retry means that in some cases repeating the interaction may advance test execution. Recover means reacting appropriately to popups, errors and warning messages.
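Below is a minimal wait-and-retry sketch using Selenium's explicit waits in Python. The locator, timeout and recovery action are hypothetical; real recovery logic depends on the application.

```python
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def click_when_ready(driver, locator, attempts=3, timeout=10):
    """Wait for the element, click it, and retry if the page is not ready yet."""
    for attempt in range(attempts):
        try:
            element = WebDriverWait(driver, timeout).until(
                EC.element_to_be_clickable(locator)
            )
            element.click()
            return
        except TimeoutException:
            # Recover: dismiss an unexpected popup here, then try again.
            print(f"Attempt {attempt + 1} timed out, retrying")
    raise RuntimeError(f"Element {locator} never became clickable")

# Usage (hypothetical locator): click_when_ready(driver, (By.ID, "login"))
```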

Data-Driven Testing

One of the biggest advantages of test automation is the opportunity to run the same test with different input data at almost no additional cost: computer time is cheap. If you parameterize the test and feed it a table of data, you get significant benefits compared to manual test execution. In the screenshot you can see a test getting UserName and Password values from a table.

Data-driven Automation
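In code-based frameworks the same idea looks like parameterization. Here is a minimal pytest sketch; the credential table and the `login` helper are hypothetical stand-ins for the data and actions shown in the screenshot.

```python
import pytest

CREDENTIALS = [
    # (UserName, Password, expected success)
    ("demo_user", "correct_pass", True),
    ("demo_user", "wrong_pass", False),
    ("", "", False),
]

@pytest.mark.parametrize("username,password,should_succeed", CREDENTIALS)
def test_login(username, password, should_succeed):
    result = login(username, password)  # hypothetical helper that drives the UI
    assert result.success == should_succeed
```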

Who Can Do Test Automation?

Doing test automation requires skills. Non-programming testers can do test automation up to a certain level of complexity, and there are cases when record-and-playback of automated tests is a viable and sufficient approach. If you plan, however, to get the most out of test automation, it is very beneficial to have:

  1. Algorithmic thinking. Ability to clearly define test steps and understand conditions and loops.
  2. Understanding of technologies used to build the application under test.
  3. At least basic knowledge of any programming language will let you overcome gravity and make your test automation rock.

Wrap Up

  1. Manual and automated testing complement each other. Test automation can and should free up time for exploratory testing.
  2. It is not possible to automate everything. What is easy for a human may still be very hard for a machine, and vice versa. A computer “can do the work of three million mathematicians using sticks and sand” but fail to click a button in your application.
  3. Do test automation wisely. Beware of Rube Goldberg machines. Don't overcomplicate things; choose the best approach for every scenario, either manual or automated, or maybe do not implement the scenario at all.
  4. Test automation requires skills. At the end of the day, automation is software development. You will have to get your hands dirty at some point and learn the technical details.
  5. There is no magic in test automation; it is the technical problem of finding the most effective way of getting from point A to point B.

Hop on our bus and set off on a wonderful journey with Inflectra!

References

  1. A Practitioner’s Guide to Software Test Design, Lee Copeland
  2. Lessons Learned in Software Testing, Cem Kaner, James Bach, Bret Pettichord
  3. Agile Testing: A Practical Guide for Testers and Agile Teams, Lisa Crispin
  4. Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, Nicole Forsgren, Jez Humble, Gene Kim
  5. History of Definitions of ET, James Bach
  6. General Functionality and Stability Test Procedure, James Bach
  7. Flaky Tests at Google and How We Mitigate Them, John Micco

Test Automation Demystified Series

Part 1: From Manual to Automated Software Testing

Part 2: Is Application Ready for Test Automation?

Part 3: Choosing a Test Automation Tool: 8 Features That Matter

Part 4: Friends and Foes of Software Test Automation

Part 5: Codeless Test Automation

Part 6: Scenarios, or Why Some Automation Projects Fail

Part 7: AI in Test Automation


Denis Markovtsev

I am one of the brave men standing behind the #Rapise test automation tool @Inflectra