Exploratory testing

Sage Developer Blog
Sep 18, 2020

Combining human skill and machine efficiency for the right result

Silvia Ochoa Fernandez, Senior QA Engineer, Sage

It was almost ten years ago that I had my first experience leading a test team.

The way testing was understood in some industries was far from being a creative task: we worked with pre-defined test cases that were run over and over again, manually. Back in the day, test automation was generally reserved for less structured industries.

I was lucky to belong to a very diverse team, which ranged from a very experienced tester to a tester who had just started his first role in the industry. They were assigned to the same project, with the junior tester learning from the senior one. The arrangement benefited everyone involved: we mixed experience with creativity, which resulted in a bunch of new and exciting ideas.

One day they came to tell me that they wanted to change the way they were running the project.

We had predefined test cases that were maintained and run manually every time there was a change in the code. Their proposal was to add one last step to each of the test cases, called “exploratory testing”, meaning they wanted to set aside some time to play freely with the system, discovering unexplored parts and challenging it.

I loved the idea, but it was a difficult ‘sell’ to our Director, who tended to consider all testing through the lens of automation. The conversation went something like this…

“How are you going to automate that?”

“This isn’t about automation — this is about freely using the system as a user would: noting down what we’re doing, reviewing it, and adding anything useful to the formal test plan as we find new increments in our testing.”

…and after a bit more negotiation we got the green light.

From that moment on, more than 90% of the new defects found were directly traced to the last steps of the test cases — exploratory testing. There was no magic behind this — it was just that the test cases that were always run, release after release, were limiting our discovery of issues ahead of production.

This was my first experience of exploratory testing.

What is exploratory testing useful for?

* For the testers to develop a deep knowledge of the system under test without the need for detailed requirements or general system documentation.

This means…

* they design much more efficient test cases (with less effort and fewer resources they will discover more errors)

* the test plan will evolve continuously, keeping good test coverage, not only by growing the number of test cases (which consumes more resources) but also by deleting those that are no longer required

* they can help to create optimized regression test plans that include only the test cases needed for each change
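That last point, selecting only the test cases relevant to a given change, can be sketched in a few lines. The module names and test-case mapping below are entirely hypothetical, purely for illustration:

```python
# A minimal sketch of change-based regression selection. The mapping
# from modules to the test cases that cover them is hypothetical; in
# practice it would come from the testers' accumulated system knowledge.
COVERAGE = {
    "billing": {"test_invoice_totals", "test_tax_rounding"},
    "login": {"test_password_reset", "test_lockout"},
    "reports": {"test_monthly_summary"},
}

def select_regression_tests(changed_modules):
    """Return only the test cases relevant to the changed modules."""
    selected = set()
    for module in changed_modules:
        selected |= COVERAGE.get(module, set())
    return sorted(selected)

print(select_regression_tests(["billing"]))
# → ['test_invoice_totals', 'test_tax_rounding']
```

The point is not the code itself but the idea: testers who know the system deeply can maintain this kind of mapping, so each change triggers a small, targeted regression run instead of the full suite.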

What is the role of automated testing?

First of all, let me point out that, as defined by Michael Bolton (not the singer, but one of the leading authorities on testing worldwide: https://www.developsense.com/index.html), we shouldn’t keep talking about manual testing and automated testing. There is testing and there is checking (1).

Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, studying, modeling, observation, inference, etc.

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product. Checking is a part of testing that can be performed entirely algorithmically.

So in real terms, or the way I like to think about it — checking would be what we’ve been calling automated testing. And testing is where we need human interaction, to rationalize, consider the external environment, and make a decision based on several factors. And this is the difference between humans and machines — machines are great at sticking to an algorithmic process, but for empathy and rationalization — we still need humans (…well for the time being 😊).
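To make the distinction concrete, here is what a “check” looks like in code. The function and values are hypothetical, chosen only to illustrate the idea of an algorithmic decision rule applied to a specific observation:

```python
# A "check" in Bolton's sense: an algorithmic decision rule applied
# to a specific observation of the product. Hypothetical example.

def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Each assertion compares an observation against a predefined rule.
# A machine can run these forever without getting bored or careless.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(50.0, 0) == 50.0
assert apply_discount(80.0, 25) == 60.0

print("all checks passed")
```

Testing, by contrast, is what a human does when they ask the questions these rules never anticipated: what about a negative percentage, a discount over 100%, or a price passed in as a string? Those questions come from curiosity and experience, not from an algorithm.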

Back to the beginning of our blog…

I am fortunate enough to be able to say that, by chance, we discovered the difference between testing and checking.

If testers are fed up with running the same test cases, those tests can (and, as far as possible, should) be automated. They should be termed checking.

If the test plan is no longer able to discover new errors and is simply ensuring that what was working is still working, those tests should also be part of checking. And we shouldn’t limit our capacity to this.

We need to be careful to put human skill and experience to use where they are of benefit. If we waste that experience, if we don’t give testers the freedom to use their creativity and curiosity (inherent skills of a good tester), we miss the chance of a higher-quality system, better-motivated testing professionals, and, of course, any realistic aspiration to an error-free system.

(1)https://www.developsense.com/blog/category/testing-and-checking/
