Exploratory testing is a hot topic again. AI and machine learning allow us to automate more and more of the mundane tasks, so QA has more time for exciting and challenging testing. The topic was really hot a couple of years ago, and it looks like it is on the rise again. There are still many questions around it: when to do exploratory testing, what the most popular approaches are, and what the difference is between exploratory and ad hoc testing. I will try to answer some of these questions in this article.
Known and Unknown
We start with a philosophical discussion about human knowledge. There are things that we know we know: our name, our address, how to get from home to work. These things are Known Known. In the software industry, product requirements are Known Known.
There are also things that we know, but we don't realize that we know them. For example, nobody explicitly states that an application should not crash every time a user adds items to their shopping cart. There is most likely no such requirement, but we carry our own expectations and assumptions. These are Unknown Known. Things from Unknown Known can easily be moved to Known Known: all we need is to realize that we know something.
Together, Known Known and Unknown Known make up Knowledge.
There are also things that we know we don't know. They too can become part of our Knowledge. To move them from Known Unknown to Known Known, all we need to do is ask a Question.
Finally, the Unknown Unknown: things that we don't even know we don't know. We cannot ask questions about them, and we cannot make assumptions. I cannot even give you an example, because if I could, it would be a Known Unknown.
Bugs that are hiding in the Known Known are very easy to spot. Bugs from the Unknown Known and the Known Unknown are a little harder, because somebody needs to ask a question or make the right assumption. Bugs hiding in the Unknown Unknown are the nastiest: they escape to production and cause a lot of issues and bad publicity. The question is, how can we possibly catch them if they live in the Unknown Unknown area?
These bugs cannot be found with automated tests or any type of scripted tests, because scripted tests cover only the Known Known and the Unknown Known.
“With a script, we miss the same things every time”.
Before we continue, we need to clarify the term scripted testing. Despite the name, it does not mean automated testing: a scripted test is any test with detailed step-by-step instructions to follow and defined expected results. It can be either manual or automated.
Scripted testing is a must-have part of testing, but it is not enough. Scripted tests are used for regression and are often automated. They do not make software better; they just follow the same steps again and again.
Scripted tests only cover the “Known” part and may indirectly touch “Unknown Known”. They do not extend the borders of “Known” — they do not ask questions.
Scripted testing is “checking” and “Checking” != “Testing”.
Automated scripted tests are not supposed to find bugs and they don’t. The purpose of their existence is to make sure that everything that worked before still does.
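To make the distinction concrete, here is a minimal sketch of a scripted check in Python. The `Cart` class is a made-up stand-in for a real system under test; the point is that the steps and the expected results are fixed in advance and never vary between runs.

```python
# A minimal sketch of a scripted ("checking") test: fixed steps, fixed
# expectations, executed the same way every time.
class Cart:
    """Hypothetical stand-in for the real system under test."""

    def __init__(self):
        self.items = []

    def add(self, sku, qty=1):
        self.items.append((sku, qty))

    def total_quantity(self):
        return sum(qty for _, qty in self.items)


def test_add_item_scripted():
    # Step 1: start with an empty cart; expect zero items.
    cart = Cart()
    assert cart.total_quantity() == 0
    # Step 2: add one item; expect exactly one item in the cart.
    cart.add("SKU-123")
    assert cart.total_quantity() == 1


test_add_item_scripted()
print("scripted check passed")
```

A check like this confirms that what worked yesterday still works today; it will never ask a new question about the cart.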
How can we expand our knowledge and learn more about the software under test? Exactly the way people have been doing it for hundreds of years: explore.
Exploratory testing is a hands-on approach in which testers do minimal planning and maximum test execution. It is not ad hoc testing. Ad hoc is wandering about, not exploring. When we explore, we know where we want to go and what we want to achieve; we just do not have an exact plan. Think about Columbus: he knew he wanted to get to India and had a plan, but it did not work out. He did not find India, but he found something else instead: America. Keep this in mind during exploratory testing. You might not find what you expected, but you will probably find something totally new that nobody knew about or has ever seen before.
Exploratory testing should always be time-boxed. Do not spend a lot of time in the Known parts if nothing new comes up.
Exploratory testing can be used at any stage of the SDLC (unit testing, requirements, functional, load testing, sanity, …), but it brings the most results when executed during end-to-end testing, as an additional check before going to production, or even after.
The great thing about exploratory testing is that it can be done at any given time (even if you have just 5 minutes) by basically anyone who has access to the software.
Test Charters and Tours
There are no test cases in exploratory testing, but we still need a plan. The two most common approaches for creating such plans are:
- Charters, as described in the must-read book “Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing” by Elisabeth Hendrickson
- Tours (approach suggested by James Whittaker in his book “Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design”)
We briefly discuss both approaches here, but I definitely recommend the books to anyone interested in testing.
We begin with charters. A charter is one sentence that describes what we want to explore, how, and what we hope to discover. Here is a suggested template:
Explore (target) with (resources) to discover (information)
Example: “Explore the data-consuming service with lots of invalid messages to discover how the service handles them.”
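As a hedged illustration of how such a charter might be executed, here is a Python sketch. The `handle_message` function is a hypothetical stand-in for the real service entry point; in a real session you would fire these payloads at the actual service and watch for crashes, hangs, and log noise, not just return values.

```python
import json
import random

def handle_message(raw: bytes) -> str:
    """Hypothetical stand-in for the service under exploration."""
    try:
        msg = json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        return "rejected"  # graceful rejection is acceptable behaviour
    if not isinstance(msg, dict) or "id" not in msg:
        return "rejected"
    return "accepted"

def invalid_messages(n=100, seed=42):
    """Yield deliberately malformed payloads for the charter."""
    rng = random.Random(seed)
    samples = [b"", b"{", b"null", b'{"id": }', bytes([0xFF, 0xFE]),
               b"[]", b'{"no_id": 1}']
    for _ in range(n):
        yield rng.choice(samples)

# Exploration loop: what we care about is that the service never raises
# an unhandled exception, not that every message is accepted.
results = {"accepted": 0, "rejected": 0}
for raw in invalid_messages():
    results[handle_message(raw)] += 1
print(results)  # all invalid payloads should land in "rejected"
```

The charter is satisfied once we can describe what we discovered: which malformed inputs the service rejects cleanly, and which (if any) bring it down.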
Where can we find some ideas for charters?
- Requirements: look for areas that have no requirements, are open to misinterpretation, or are vague.
- Check implicit expectations: reliability, stability, performance, industry and company standards.
- Listen to your stakeholders, especially to the questions they ask.
- Look for comments in code. If you see something like “this is an ugly hack to… ”, that’s your chance!
- New realizations and discoveries — if you learned something about software, explore it.
James Whittaker suggests a slightly different approach: Tours. Imagine that you are in a city you have never visited before, with no maps, no brochures, and no internet access, and you want to see some attractions, have lunch, and find a hotel. How do you find what you are looking for?
“Exploratory testing without good guidance is like wandering around a city looking for cool tourist attractions. It helps to have a guide and to understand something about your destination.” — James Whittaker
Tours are based on themes or behaviors rather than features. There are many standard tours that you can start with:
- The Money Tour — Why does a customer pay money for your product?
- The All-Nighter Tour — Some bugs only happen if the software has been running for a long time.
- The Saboteur Tour — Undermine the software in any way possible.
- The Intellectual Tour — Ask the software really hard questions.
- The Garbage Collector Tour — Sanitation workers go street by street, house by house. They stay only a few moments, but they crisscross the neighborhood in a methodical manner. Be like a garbage collector: open every screen, click all the buttons, select each menu option and drop-down.
- The Obsessive Compulsive Tour — Repeat every step of a scenario twice, or as often as you want.
To make the most of exploratory testing, Whittaker suggests a team of two: one tester and one observer who takes notes during a time-boxed session no longer than 20 minutes. Tests should be based on intent, not on the application structure, and should be documented. The last suggestion is to capture screenshots or even record the screen, so that if a bug is found we can reproduce it.
It is absolutely fine to mix and match Tours and Charters in every exploratory testing session, or to pick whichever one fits your current needs.
As with any other type of testing, there are heuristics that can help you come up with good exploratory test sessions. Here are some of them:
- Try to draw schemas and diagrams
“If you cannot draw a picture you do not understand how it works”
- Use CRUD: Create, Read, Update, Delete (not only for databases, but for every object; try performing the actions in the wrong order, etc.)
- Play with network variations. How does your software perform when the internet is slow? What happens if the connection is lost for a moment?
- Position: Beginning, Middle, End. If there is any kind of list or ordering in the application, try to change it; insert something in different places.
- Count: 0, 1, Many. What if you create no objects? One? A thousand?
- Does size matter? (size of downloadables, logs, tables)
- Timing, Frequency, and Duration (did you think about time zones? What if a user does something extremely slowly? Extremely fast?)
- Interruptions, States, and Transitions: Before, During, After. What if flows are interrupted? What if something happens before it's expected?
- Ecosystem. Play with the environment and the other software that surrounds your application.
- Combinations of all of the above
Note that these heuristics can be used not only for exploratory testing but in any type of testing. The opposite is also true: all the heuristics for any other type of testing are usable and useful during exploratory testing.
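To show how one of these heuristics turns into concrete probing, here is a small sketch of the “Count: 0, 1, Many” idea. The `format_recipients` function is a hypothetical example under test; the heuristic lives in the boundary counts we feed it.

```python
def format_recipients(names):
    """Hypothetical function under test: format a list of names."""
    if not names:
        return "no recipients"
    if len(names) == 1:
        return names[0]
    return ", ".join(names[:-1]) + " and " + names[-1]

# The "Count: 0, 1, Many" heuristic: probe the boundary counts.
for names in [[], ["Ada"], ["Ada", "Grace"],
              ["Ada", "Grace", "Edsger"], ["x"] * 1000]:
    result = format_recipients(names)
    assert isinstance(result, str)  # at minimum, it must not crash
    print(len(names), "->", result[:40])
```

The empty and thousand-item cases are exactly the ones a happy-path scripted test tends to skip, which is why the heuristic pays off.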
Moving on, let’s watch a short video. It’s not about testing, but it is very relevant to what we do.
The point is that if we don’t look for something, we often do not see very obvious details. Do not execute tests mindlessly, look around!
Try to dig deeper: check that after login the user experiences a fully functional system, not merely that the user can log in. Ask deeper questions: how many fully functional users can use the software in parallel, instead of just how many users can log in or start the application? Does the type of connection impact the number? What about roles? Time of year? Phase of the moon?
Look for subtle clues — counters, warnings, memory increase. Hear, see, touch! Use consoles and logs. Do you see an unexpected log line? Warnings? Do you see an error that does not manifest itself anywhere else?
Look at things you have never looked at before!
Why do you testers always make stuff up???
Often the bugs we find during exploration appear under unusual conditions (remember, we explore parts unknown). They may look like corner cases, so there is a chance that product owners and developers will disregard them or push them to the bottom of the backlog, because from their point of view it will never happen in production: it is a corner case, or the scenario is unrealistic.
I often say that if I was able to do something to the software, somebody or something else will be able to do it too. But that alone is not a good argument, and it is not enough to convince other people. We need to explain why the problem we found deserves to be looked at, documented, or fixed. Here is an example of how to think about it and how to convert an unrealistic corner case into valid bugs and requirements.
Epic Battle: Iron Vs Fridge
Imagine that we are testing a refrigerator, and as part of exploratory testing we decide to explore how this shiny new model behaves if something hot is put inside it. We decide it would be a brilliant idea to take a flat iron, put it inside the fridge, and turn both of them on.
This scenario is definitely not covered in any specifications, requirements or instructions. Honestly, who would do that?
Still, it is a perfectly valid scenario. What if instead of a hot flat iron we put in a hot pot of soup or a very hot skillet? Will it still break? Nobody knows, so as a tester you need to explore further and find out the exact temperatures and conditions under which it breaks.
A bug with the description “Fridge broke after I put a hot flat iron into it and turned it on for 20 minutes” will most likely be laughed at and closed. In addition, people will think you have way too much free time at work.
But a bug with the description “Fridge breaks if we put something hot (300–400 F) into it for more than twenty minutes” will at least be looked at. The risks will be estimated and most likely documented as a change to the requirements or the user instructions.
We should also consider another scenario: what if, after we ran our little experiment, the fridge blew up? Or the whole building lost power? Or any other dangerous thing happened? In that case things must be fixed. Because believe it or not, somewhere there is somebody who will actually do this (google it; people have done this at home). And if that person gets seriously hurt, it does not matter how stupid and insane the scenario is: the name of your company will be everywhere, and that is not the publicity you want.
So, do abuse the software. It's OK if it breaks, but it should fail in a predictable, safe way. Try whatever is possible, no matter how insane it seems. Look for race conditions, missing requirements, and limitations. Check all the “-ability” tests; they are great for finding risks and vulnerabilities.
Let the games begin
If you still struggle to come up with scenarios, here is another way to get ideas: games. Who doesn't love games? You can play them with your customers, product owners, and developers.
Game #1. It will never happen in production!!!
Form a couple of small teams and have them come up with scenarios that will never happen in production. Give the winner a prize, then go to your desk and explore whether it's true. As a variation of this game, have the teams brainstorm each scenario and find a way it could happen in production.
Game #2. Always vs. Never.
Make people with different roles finish these phrases:
- Our product must always…
- Our product should never ever…
Game #3. Nightmare Headline.
Imagine that you wake up in the morning, open your favorite news site, and see a really, really bad headline about your company. After you come up with multiple headlines, brainstorm how they could happen, and of course then explore whether they are possible.
Game #4. Make regression great again.
Take old regression tests and try to find a new expected result for each step in them.
Exploratory test lifecycle
The exploratory test lifecycle is different from the lifecycle of a regular test case. If exploratory tests find issues, they can be converted into regression tests. They can also become an update to documentation: new requirements, a change in the user documentation, a new instruction for DevOps, and so on.
If an exploratory test did not reveal issues and did not increase our knowledge about the software, it can go into the void.
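Here is a sketch of that conversion. The `apply_discount` function and the bug it once had are hypothetical; the pattern is what matters: a surprising input discovered while exploring becomes a fixed, repeatable check so the same bug can never silently return.

```python
def apply_discount(price: float, discount: float) -> float:
    """Hypothetical function: the fixed behaviour clamps at zero
    instead of letting the total go negative."""
    return max(price - discount, 0.0)


def test_discount_larger_than_price_regression():
    # Originally found during an exploratory session ("what if the
    # discount is bigger than the price?"), now scripted forever.
    assert apply_discount(10.0, 25.0) == 0.0
    # The ordinary case still works too.
    assert apply_discount(10.0, 3.0) == 7.0


test_discount_larger_than_price_regression()
print("regression test passed")
```

Once captured like this, the finding moves from the Unknown Unknown into the Known Known, where a scripted test can guard it.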
It is best practice to document performed exploratory sessions regardless of their results.
Explore > Document > Explore something else
Exploratory testing in Agile
So how does exploratory testing fit in an Agile environment? It should actually become a core practice of Agile teams, alongside automation. Use automation to obtain the fast feedback Agile teams need to deliver frequently at a sustainable pace. Use exploratory testing when you need to quickly test a new feature, a product, or the impact of a bugfix rolled out to production. Explore when you need to learn the product. Explore when you need to find areas not covered by any other testing activities. Explore when investigating defects found in production or during late stages of the SDLC. Explore when you are aware of particular risks, or of changes that can introduce them.
The great thing about exploratory testing is that you don't need much time or many resources to introduce it into the SDLC, whatever process your team follows. You can always find twenty minutes, or five. I recommend keeping a common list of things you want to explore, so when you do have that time you can easily pick one up. For best results, make sure this list is visible and accessible to the whole team, not only QA.
Do at least one exploratory testing session for every new feature. Think about how it can impact existing functionality. Has performance decreased? Did it add a lot of data to databases? What about logging?
Do an exploratory session instead of manual sanity checks and in addition to automated sanity tests before big releases.
Do exploratory testing while running load tests and after, without cleaning any data. You might discover a lot of new things.
Do not hesitate to use automation or any other tools and scripts when you're exploring. You might use them during setup, to inject data into databases, to produce traffic, etc. Although exploratory testing is manual testing, you don't need to do every step manually.
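For example, a throwaway script can seed awkward data before a session. This sketch uses an in-memory SQLite database with a hypothetical users table; in a real session you would point it at your test environment and then go explore the UI by hand against the data it planted.

```python
import sqlite3

# Throwaway seeding script for an exploratory session: plant awkward
# values (empty strings, huge names, unicode, injection-looking text)
# and then explore how the application displays and edits them.
conn = sqlite3.connect(":memory:")  # stand-in for the test database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

awkward_names = [
    "",                                  # empty
    " ",                                 # whitespace only
    "a" * 10_000,                        # absurdly long
    "Zo\u00eb \U0001F680",               # unicode and emoji
    "Robert'); DROP TABLE users;--",     # injection-shaped string
]
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(n,) for n in awkward_names])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(f"seeded {count} awkward users")
```

The script itself is disposable; the point is that a few lines of automation buy you a far more interesting landscape to explore by hand.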
If you can, use the production environment for exploratory testing; start exploring before a feature even exists; explore while writing requirements, tests, automation, code, and documentation. Ask! Look! Listen! Always be aware!