Tool-less testing

Bas Vegter
Published in NS-Techblog
Nov 10, 2022 · 9 min read

An architectural solution for test tool choices

Our relationship with tools

Tools: do we control them, or do they control us? If we use a tool, do we still have the freedom to choose how to use it? No, of course not, nor can we expect to. Every tool has its own purpose and effect. Tools are limiting and at the same time they enable us to work better, faster and cheaper. The golden triangle returns.

Tools are the talk of the day at events, and they are flaunted on LinkedIn, in CVs, in webinars and so on. I am talking about test tools like Cypress, Tosca, Robot Framework, Cucumber, Protractor and Selenium. We feel like lord and master of software when we know our tools. With ** or +++ we indicate how experienced we are with these tools and how we know all the ins and outs of each ‘solution’.

In recent years, tools have been popping up like mushrooms as solutions to all sorts of problems. But ask yourself: if you have all the equipment of a construction company, can you build a house? No. What if you learn to operate all of it? No. What if you have an example to copy? No again. Ready-made instructions, then: best and bad practices, anti-patterns, the do’s and don’ts? Still no. There are two ways to learn to build something: 1. learn to build it with a tool, but then, without that specific tool, what are your capabilities? 2. learn to build something and then find the right tools to make the work more efficient. There is a major difference here, and I hope you see it.

Tools for Conviviality

Conviviality? A term from Ivan Illich’s book Tools for Conviviality, published in 1973 and a must-read on this subject. Conviviality, in my eyes, is about the freedom to use tools according to the insights of the user. Convivial tools stimulate the creative process within the framework of the demands placed on them by their environment. Illich argues that tools should give you the opportunity to realize your vision.

It all sounds a bit complicated, and it is. You see a lot of tools that take work out of your hands, take you by the hand and tell you what they do and how to use them. These tools are useful at first, but as a user you will eventually run into their limits.

Ivan Illich: “There will be a further increase of useful things for useless people”.

Or as someone once told me: “Everyone makes an equally big mess of every tool if they haven’t had a lot of training to automate specifically for a certain purpose with a certain tool”.

Tools should be easy to use, only “easy” does not mean the same thing to everyone. Where one person might consider a Selenium library easy, another might prefer a solution like Tosca. Whatever tool we use, it should clearly help us do our job better. The tools should work for us, not the other way around. They should make our work more enjoyable, or rather, actually allow us to do our work, so that we can use them to do satisfying, creative and independent work.

We are Homo faber: the making man.
We are artists, we are builders, doers, thinkers and above all makers. As I write this I am reminded of the final speech from The Great Dictator. If we take the thinking from that speech, it is important that we are challenged, that we are enabled to make or form things. To make things you need tools, just as we do when we build a house. Without tools that would be a lot harder; tools should help us get the most out of ourselves. We don’t want to become slaves, told how to work and what to do; if that happens, any form of creativity evaporates.

Why is this important? If we look at test automation, we see several problems, which we address in this blog and for which we will present our solution. It is not an all-encompassing solution, but it gives us more freedom and at the same time allows us to collaborate, build and share knowledge, so that we can keep making sure that the tools we use work for and with us, and not the other way around.

Challenges

The “problems” we are talking about have been known for years and I’m sure will be familiar to some:

  • When a team is forced to switch from one tool or solution to another, you often see that all test cases must be rewritten. This destroys months, sometimes years, of work. The cause of a switch can be the use of new techniques the tool cannot support, or the tool being phased out. And so ends the implementation of automated tests.
  • Test automation people with a preference for code find it much more fun to develop a framework, set it up, promote it and then build on it. This way a new (or hired) tester or developer can spend months working on a new framework without writing a single test. An additional problem is that such a custom build is often poorly documented, and that while there are already so many well-developed frameworks that can probably do the same as the one that took months to build.
  • Related to the previous point: besides building frameworks yourself, there is an overabundance of tools. Many test automation practitioners, each skilled in different tools, have their own preferences. This leads to endless discussions and comparisons about which tool is the best, while in reality it all depends on the context. Every context calls for a different tool, and no tool is the best in every context.
  • The standardization of tools means that teams can no longer choose or use a tool at their own discretion. This leads to dissatisfaction, non-compliance with standards, chaos, anarchy! No kidding: few engineers are fans of standards, and yet here lies part of the solution.

Standardization

Standardization is not about making rules to be broken. The challenge in standardization is making rules that fuel creativity toward the goal of the standardization. Test tools do not come before testing and did not start the whole testing business, any more than the steam engine started the industrial revolution. It is the other way around: we were already testing, and only later did a need arise for tools to take some of this work off our hands, to test more often and above all faster.

When tools are shaped the right way, they allow us to continue to learn and grow. And if the tools aren’t shaped the right way, we need to create a wrapper around them so they are easier to deploy and we become less dependent on them.

Testing without result

In my experience I have seen many, many test cases in a variety of test tools. It really doesn’t matter which tool you use; it all comes down to the quality of the test case. It has to be clear what is actually being checked. This seems logical, but then why do I see so many test cases that don’t actually check anything? For instance: just walk through some steps, and if none of the steps fail, it’s tested, right? Unfortunately, no. Too often this shows me that people want to operate a tool without any understanding of the craft.
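To make this concrete, here is a minimal sketch in Playwright’s test syntax (the URL, selectors and texts are invented for illustration). The first test only walks through steps; the second one checks the result those steps are supposed to produce.

```typescript
import { test, expect } from '@playwright/test';

// Walks through steps only: it fails if a step throws, but it never
// verifies that the booking actually succeeded.
test('book a ticket - steps only', async ({ page }) => {
  await page.goto('https://example.org/booking'); // invented URL
  await page.click('#from');                      // invented selectors
  await page.click('#to');
  await page.click('#submit');
  // No assertion: "nothing broke" is not the same as "it works".
});

// The same scenario, but goal oriented: the outcome is checked explicitly.
test('book a ticket - goal oriented', async ({ page }) => {
  await page.goto('https://example.org/booking');
  await page.click('#from');
  await page.click('#to');
  await page.click('#submit');
  await expect(page.locator('.confirmation')).toContainText('Booking confirmed');
});
```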

So we are talking about goal-oriented testing, not tool-oriented testing. In test automation the focus should not be on tool implementations, but on delivering quality software and functionality. We need to make sure we don’t become dependent on the tools we use: the focus must remain on the goal, not on the tool.

To solve this we need to decouple test cases from tool implementations. It is always dangerous to use analogies to clarify technical concepts but we will give it a try. After all, simplification or an analogy often helps understanding. The decoupling of systems can also be seen in solar energy generation. The inverter (test tool) has a shorter lifespan than the solar panels (test cases). It is much cheaper if you only have to replace the inverter when it stops working instead of replacing the whole system. By disconnecting the inverter and solar panels you can leave the solar panels on the roof and replace the inverter with a better variant.

The same applies to tools as to the inverter. When a tool is no longer used, or can no longer be used, because of cost, lack of support or because there are better solutions, that should never mean that tests have to be rewritten. This is our starting point: tool-less testing means keeping the focus on quality and testing, not on the tools. So how can we still do our job without the dependency, with the freedom to make our own choices and to keep developing?

Architecture

Robert C. Martin wrote about applying such an architecture back in 2010, and yet we have encountered only limited realizations of it. Dependency inversion is an essential part of SOLID; by applying this principle to test architecture as well, we want to solve some of the problems we experience.

For these reasons, we have adopted a new approach to test automation that addresses both problems: we can focus on testing rather than on tools, and our test automation engineers get the space to develop themselves. First, a small step back. We started with a framework with a common architecture:

The problem here is that our test cases are directly coupled to the test tool. Once the tool we’re using gets dropped, like Protractor for example, you have to rebuild everything.
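To illustrate that coupling, here is a hedged sketch of a test that talks to the Protractor API directly (the URL and selectors are invented; describe, it and expect are the usual Jasmine globals in a Protractor setup). Every line depends on Protractor, so every line has to be rewritten when the tool disappears.

```typescript
import { browser, element, by } from 'protractor';

describe('journey planner', () => {
  it('shows travel options', async () => {
    // Each call below is Protractor-specific and dies with the tool.
    await browser.get('https://example.org/planner');     // invented URL
    await element(by.css('#from')).sendKeys('Amsterdam');  // invented selectors
    await element(by.css('#to')).sendKeys('Paris');
    await element(by.css('#search')).click();
    expect(await element(by.css('.results')).isPresent()).toBe(true);
  });
});
```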

By applying dependency inversion we have decoupled the tool from the test cases. This is shown in the image below. Essentially it’s a wrapper, giving us an independent shell around the tools.

Architecture test set with dependency inversion

The interface describes what the tool can do, so through the interface the test cases know what they can do. The how, below the line, is the implementation. When you switch to a tool that is not yet in the solution, you write an implementation for that tool; your test cases remain unchanged. What you can do does not change, assuming of course that the new tool offers at least the same functionality as the old one.
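The NS repository is not linked yet, so here is a minimal sketch of what such a layering could look like. The interface name BrowserActions, its methods, the URL and the selectors are our own illustration, not the actual NS code; the implementation below it happens to use Playwright.

```typescript
import { Page } from '@playwright/test';

// The "what": an interface the test cases depend on.
// It describes capabilities, not a specific tool.
interface BrowserActions {
  open(url: string): Promise<void>;
  click(selector: string): Promise<void>;
  type(selector: string, text: string): Promise<void>;
  textOf(selector: string): Promise<string>;
}

// The "how": one implementation of the interface, built on Playwright.
// Switching tools means writing another class like this one; the test
// cases above the line never change.
class PlaywrightActions implements BrowserActions {
  constructor(private readonly page: Page) {}

  async open(url: string): Promise<void> {
    await this.page.goto(url);
  }
  async click(selector: string): Promise<void> {
    await this.page.click(selector);
  }
  async type(selector: string, text: string): Promise<void> {
    await this.page.fill(selector, text);
  }
  async textOf(selector: string): Promise<string> {
    return (await this.page.textContent(selector)) ?? '';
  }
}

// A test case written purely against the interface: it has no idea
// which tool is underneath.
async function searchShowsResults(actions: BrowserActions): Promise<boolean> {
  await actions.open('https://example.org/planner'); // invented URL
  await actions.type('#from', 'Amsterdam');           // invented selectors
  await actions.type('#to', 'Paris');
  await actions.click('#search');
  return (await actions.textOf('.results')).length > 0;
}
```

Only the wiring that constructs PlaywrightActions knows about Playwright; the test logic itself is tool-less.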

All of this is nice in theory, but does it work in practice? Yes. We have built this solution and are happy to share it with the world; you can find the repo on GitHub (Link to NS GitHub will follow soon). We did a PoC at NS International and later extended this to multiple teams within NS. Below we share our experiences.

The practice at NS International

At NS International we had a project with three development teams, a dozen test sets and over 1,000 test cases. The tool we were using, Protractor, was declared end of life in April 2021, so we were forced to switch to a new tool. Since we had spent a lot of time writing the test cases, we did not want to rewrite everything. We chose to refactor our test set to the architecture with dependency inversion. This allowed us to keep the test cases; we only needed to write an implementation for the new tool. Another advantage was that tests could still be changed or added while we were changing tools.

As a replacement for Protractor we chose Playwright. We came to this choice by first making a shortlist of the most-used TypeScript tools: Cypress, WebdriverIO, Playwright and Test Café. Then we did some PoCs and consulted the documentation. We chose Playwright because it offers the most functionality, is free and fits within the architecture (Cypress, for example, does not).

Over three two-week sprints, while continuing to deliver new features, we wrote the Playwright implementation. We started implementing the required functionality for the smallest test set. At this stage the build pipeline contained test sets using Protractor and others using Playwright. Because we were able to merge with master continuously, we prevented merge conflicts and avoided falling behind. After the last test set had switched to Playwright, we removed the Protractor implementation. We now use a different tool, but the way we write tests has remained the same. Curious about how it works? On GitHub (Link to NS GitHub will follow soon) there is a sample project with implementations for Playwright, WebdriverIO and Test Café.
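As a sketch of how two tools can coexist in the same pipeline during such a migration, continuing the BrowserActions illustration above: an environment variable decides which implementation backs the shared interface, so test sets can switch one at a time. The variable name TEST_TOOL and the ProtractorActions class are our own invention, not the actual NS setup.

```typescript
import { chromium } from 'playwright';

// Picks the tool behind the shared BrowserActions interface.
// ProtractorActions is a hypothetical sibling of PlaywrightActions,
// kept alive only until the last test set has migrated.
async function createActions(): Promise<BrowserActions> {
  if (process.env.TEST_TOOL === 'protractor') {
    return new ProtractorActions();
  }
  const browser = await chromium.launch();
  const page = await browser.newPage();
  return new PlaywrightActions(page);
}
```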
