5 Reasons Why Test Automation Frameworks Tend To Fail

Explore the most common causes of test automation frameworks breaking down

Jonathan Thompson
Better Programming


Photo by Jesse Chan on Unsplash.

Over the past five years, I have had the pleasure of constructing and maintaining a number of automation frameworks for a slew of companies. Each framework had its own distinct set of challenges and advantages. Some thrived, some failed.

I have learned a lot from each of my automation experiences, whether it was a best practice to adopt or a pattern to avoid (for more on these, take a look at my article on JavaScript anti-patterns). Throughout all of it, I have taken note of a number of occurrences that seem to predict automation failure.

The following items are what I consider the most common mistakes engineering departments make when writing new automation frameworks. For each, I have included what you can do to avoid the mistake and ensure that your framework is built in the best way possible.

1. The Wrong Stuff

One of the most common predictors of automation failure is an inability to staff a framework with the right knowledge or talent. Oftentimes, a lack of technical knowledge is joined with a fundamental lack of framework knowledge, leading to an absolute breakdown in maintainability.

I worked for a company that implemented an automation solution without properly researching how to use the framework or what it could provide. The shop had chosen Cypress.io but opted to use it more as a test runner than as an all-in-one automation solution. Instead of using the built-in cy.request() command, they chose to use an external SDK. In addition, they wrote out complex, Java-esque workflows invoked through cy.task() rather than using Cypress commands.

It is not that they were wrong in their design — the framework would have been sound for a Java and Selenium pairing. The issue is that they chose a JavaScript framework and implemented a Java solution.
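To make the mismatch concrete, here is a hedged sketch of the two styles (the task names, endpoints, and payloads are hypothetical, not the company's actual code). The first routes every step out to a Node process through cy.task(); the second leans on Cypress's own commands.

```javascript
// Java-esque approach: every step is delegated to Node via cy.task(),
// bypassing Cypress's built-in network support and retry-ability.
cy.task('createUser', { name: 'Test User' });
cy.task('runLoginWorkflow', { email: 'test@example.com' });

// Idiomatic Cypress: use cy.request() and custom commands instead.
Cypress.Commands.add('login', (email) => {
  cy.request('POST', '/api/login', { email }).then(({ body }) => {
    window.localStorage.setItem('token', body.token);
  });
});

cy.request('POST', '/api/users', { name: 'Test User' });
cy.login('test@example.com');
```

Neither style is wrong in the abstract; the difference is that the second works with the framework's grain instead of against it.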

I spent months refactoring in an attempt to gear the framework more toward Cypress itself. Disappointingly, I never actually wrote new tests for this company. I spent my entire tenure refactoring previous work just to get it close to passing. The framework was simply broken from the start — all due to an inability to properly research and vet a product.

Instead of choosing a framework because it sounds attractive, research the product offering. Make sure that the framework fits your needs. Check its repository on GitHub and examine the “Issues” tab. Consider it a red flag if any of the issues raised may affect how you test your application, such as Cypress's lack of multi-window support when you know your application relies on third-party integrations.

Most importantly, do your due diligence. The intent is for this framework to be the cornerstone of your build process.

2. Writing Testable Code

One of the most critical elements of building a successful automation framework is having an application that is easy to test. Plenty of engineering shops do not consider the effects that programming decisions can have on testing. This quote from the book Lessons Learned in Software Testing sums it up fairly well¹:

“It’s not that they don’t care about testing or quality. They probably just don’t understand the impact their actions have on the test process.”

Simply put, not all engineering teams develop with the idea of test automation at the forefront of their minds. This creates an application that can be difficult to test (or in some cases, resistant to testing). It can take the form of reliance on brittle selector criteria (class names, non-unique IDs, or fragile XPath expressions), unreliable application performance, and a lack of quality-minded tooling, such as an API method for tearing down test data.

If you find that your shop is not programming with test automation in mind, seek to collaborate with the developers on why that is an issue and how you can solve it as a team. Slowly (but thoughtfully) work to make your code more testable with each passing iteration by introducing concepts such as unique selector criteria (data-id, data-testid, or data-cy) or QA-enabled API methods.
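As a concrete sketch (the selector values and the teardown endpoint are hypothetical, invented for illustration), compare a brittle selector against a dedicated test hook, along with a QA-enabled API call:

```javascript
// Brittle: coupled to styling classes and DOM structure, which change
// whenever the design does.
cy.get('.btn.btn-primary > span.label').click();

// Stable: a dedicated test hook, e.g. <button data-cy="submit-order">,
// that survives restyling and refactors.
cy.get('[data-cy="submit-order"]').click();

// A QA-enabled API method for tearing down test data after the run.
cy.request('DELETE', '/api/qa/orders/test-order-123');
```

The data-cy attribute costs developers one line of markup but saves the quality team from rewriting selectors every time the design changes.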

Speak up during blueprinting, planning, and grooming meetings (if you are Scrum-based) and bring attention to how a feature could be difficult to test. Try to formulate solutions for how to make it easier from both a user and machine perspective.

As the arbiters of software quality, we must advocate for code that is written with automation in mind.² Doing so creates an application that is easily testable by both human and machine, thereby allowing present and future automation endeavors to succeed.

3. Fumbling in the Dark

I would like to be transparent and note that I am personally guilty of building an automation framework outside of CI. In fact, it has come to my attention that this is fairly common when building out new automation frameworks. The main issue with building a framework outside of CI is that a failing automation run carries no repercussions.

Failing CI builds should act as a catalyst for both development and QA to take action. When automation is run outside of CI, the potential for escapes to make their way to staging, validation, or production environments is exponentially higher, because running outside of CI limits visibility and oversight. Only the engineer who kicked off the run knows that a test has failed. Meanwhile, the rest of the team is left in the dark.

Why not just let the team know?

In CI, you have a report with error messaging, logs, and other criteria that help the engineer support their case as to why the failure must be addressed. A failing run outside of CI does not have that luxury. Certainly, you can generate a run report using a tool like Allure or a plugin like pytest-html. However, CI builds allow failures to be broadcast automatically throughout the team — most notably through chat applications such as Slack.

In addition, there is the age-old issue of “it works on my machine,” as captured in Jeff Lofvers’ comic.³

Running automation locally should be considered the same as building an application locally. Just because it passes on your machine does not mean that it will pass in CI. Always check to ensure that your runs pass in both instances. If not, then you must investigate the failure and determine the next steps.
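As one way to wire automation into CI, a minimal GitHub Actions sketch might look like the following. The action versions, secret names, and Slack wiring here are assumptions for illustration, not a prescription — adapt them to your own pipeline.

```yaml
name: e2e
on: [push]
jobs:
  cypress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Installs dependencies and runs the Cypress specs
      - uses: cypress-io/github-action@v6
      # Broadcast failures to the whole team, not just the engineer
      # who kicked off the run
      - name: Notify Slack on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: '{"text": "Cypress run failed on ${{ github.ref_name }}"}'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
```

Even a pipeline this small changes the incentives: a red build is visible to everyone, so failures get triaged instead of quietly ignored.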

4. Fanatical Automation

Have you ever worked for an automation shop that sought to automate everything?

The idea behind test automation is to take a series of highly repeatable tasks and hand them off to a computer — tasks such as selecting a single filter out of 20, clearing that selection, then selecting the next filter. Performed manually, the behavior becomes monotonous and mentally taxing. This leads to tester fatigue, which can lead to escapes. In an attempt to reduce tester fatigue and ensure that we are not being crushed under the weight of manual testing, we automate the series of tasks.⁴

There are some shops, however, that take this to an extreme. Management hires a quality engineer with experience in automation and immediately wants to automate every test case. Unfortunately, 100% automation is not a feasible goal, predominantly because automation is not a means to replace manual testing. We need manual testing just as much as we need automation — the two methods of testing are inseparable at this time.

So why is it that so many shops want to do this? A lack of understanding of what automation can provide.

For many, automation means the end of manual test scripts. No longer does a shop have to worry about executing manual test cases — the computer can do it now.

Instead of seeking to write automation for everything, we should be thoughtfully choosing when to automate and manually testing when we cannot.

5. Language Matters

I once took a fanatical approach to language homogeneity between quality and development by building an automation framework in Clojure. Our developers wrote in Clojure and Clojurescript, so I felt it pertinent for quality to also write in Clojure.

We found a test runner (Kaocha) and a web driver (Etaoin) and set to work. The project succeeded for a few weeks before a point of failure was identified: the test runner and the web driver were each maintained by a single developer.⁵ Should either developer take time away from their respective project, our framework would risk breaking down.

What lesson did I learn?

The choice of programming language matters when building an automation framework — something I did not consider when choosing Clojure. At the time, I wanted to join quality and development as closely as possible. I did not consider that the automation community for Clojure would be so small or that the testing offerings would be so sparse. I just wanted to marry quality with development in order to make shifting left easier.

I experienced this once more at a different shop, though I had taken a more careful and thoughtful approach to language selection. The shop was struggling to maintain an aging Ruby framework that had been developed a few years back. I was tasked with auditing the framework and choosing a path forward: whether to refactor or to rewrite.

With buy-in from engineering management, the more senior quality engineers and I opted to rewrite in Python despite there being an obvious desire to use Cypress. We chose Python because it is syntactically similar to Ruby and would be easier for our associate engineers to learn.

Unfortunately, the company was acquired weeks later, so all automation (save for my team's) went on pause. I continued to develop the framework, alone, for months until other teams were able to automate once again. By that time, however, the teams had decided that they would rather use JavaScript and Cypress than Python.

Your choice of language matters. When building an automation framework, check (and double-check) that the language you are using is agreed upon. Do not use a language that may be difficult to hire for or that is waning in popularity.⁶ Do use a language that offers adequate tooling and opportunity.

Summary

Automated testing can provide a significant quality-of-life and productivity boost to any development team. However, building an automation framework can be a difficult process. Ensure that you are thoughtful and deliberate when choosing a language and architecture. Collaborate with your developers on programming with automation in mind.

Most importantly, have realistic expectations when it comes to test automation.

Resources

  1. Kaner, Cem, et al. Lessons Learned in Software Testing: A Context-Driven Approach. Wiley, 2002, p. 10.
  2. Crispin, Lisa, and Janet Gregory. Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley, 2014, pp. 287-288.
  3. Lofvers, Jeff. “It Works On My Computer.” Don’t Hit Save, donthitsave.com, 2016.
  4. Crispin and Gregory, Agile Testing, p. 280.
  5. This was true at the time. Each project now has a large number of contributors on GitHub, which makes them more reliable.
  6. GitHub. The State of the Octoverse, octoverse.github.com.
