A Beginning to Exploratory Software Testing
“Not all who wander are lost.” — J.R.R. Tolkien
According to Dictionary.com, ‘exploratory’ is defined as “pertaining to or concerned with exploration”. Think NASA’s Apollo moon missions, Lewis and Clark’s expedition across the American West, or Edmund Hillary’s journey to the summit of Everest. All of these were explicitly planned expeditions with concrete goals.
‘Ad hoc’ is defined as “concerned or dealing with a specific subject, purpose or end”. With regard to technology, the definition expands to “contrived purely for the purpose in hand rather than planned carefully in advance”. In other words, things done ad hoc are not planned and do not have concrete goals. Ad hoc testing is essentially poking and prodding at a piece of software with no focused approach or end goal in mind.
When many people hear the term ‘exploratory testing,’ they believe it is equivalent to ad hoc testing. This could not be further from the truth. Exploratory testing consists of minimal formal test planning with a focus on maximizing test execution.
Exploratory Testing Explained
While lacking the formality of traditional test planning methods, exploratory testing always includes some planning. Testers define a specific set of testing goals for each exploratory test execution. If given a user address form to explore, testing goals for the form might include verifying proper data storage, field boundaries, required field error handling, data type integrity, and so on.
Additionally, exploratory test execution sessions are always timeboxed. For example, for verification of field boundaries, the exploratory tester could timebox the execution to no more than 30 minutes. During this 30-minute test session, the tester will explore all the functionality around the various defined field boundaries. If the ‘First Name’ field is limited to 75 characters, the exploratory tester will attempt to store 80 characters in the field expecting the system to provide some sort of feedback. At a minimum, the application under test (AUT) should not crash.
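The boundary probe described above can be sketched in code. Note that `validate_first_name` and its error message are hypothetical stand-ins for the AUT’s form validation, not a real API:

```python
MAX_FIRST_NAME = 75  # hypothetical field limit from the example above


def validate_first_name(value: str) -> tuple[bool, str]:
    """Stand-in for the AUT's form validation: accept up to 75 characters."""
    if len(value) > MAX_FIRST_NAME:
        return False, f"First Name must be {MAX_FIRST_NAME} characters or fewer"
    return True, ""


# Classic boundary-value probes: exactly at the limit, then over it.
ok, _ = validate_first_name("a" * 75)
assert ok  # at the boundary: accepted

ok, message = validate_first_name("a" * 80)
assert not ok and message  # over the boundary: rejected with feedback, no crash
```

The charter only names the goal (“verify field boundaries”); which values to probe and in what order is left to the tester’s judgment.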
“…keeping track of each tester’s progress can be like herding snakes into a burlap bag.” — Jonathan Bach
James and Jonathan Bach started the exploratory session revolution in the early 2000s. Initially, they found that ad hoc testing allowed them to deliver testing results quickly for demanding clients, but Jonathan found “keeping track of each tester’s progress can be like herding snakes into a burlap bag.” They transformed the testing procedures from ad hoc to session-based exploratory testing.
One important point to remember when implementing exploratory test session management: applications of this strategy are highly situational. Some projects, due to their complexity, will require more formalized test planning. Others, attempting to leverage a “chase the sun” approach with both onshore and offshore teams, may find that more formalized test planning helps with hand-offs between team members.
Formality has its place, especially if you’re turning your cases over to someone else to execute. If you’re designing cases for yourself, however, describing every specific step is a waste of time: you know how to launch the application, and you know what happens when you do. But if you don’t take the time to describe your exploratory goals before testing begins, you may miss something important.
Formal test cases have an inherent rigidity that stifles creative testers. How many of us have found bugs during testing that were not explicitly described in a scripted test? Some of my best defects have been found this way. Exploratory charters lay down structured goals for your testing, but not how to achieve those goals. This gives the tester greater flexibility to explore the AUT and leverage their testing intuition to its greatest potential.
Test charters become reusable artifacts for future testing. Since ad hoc testing has no plans, the only artifacts are the test results themselves. It can be difficult to recreate the test execution path if only the results are preserved. Session charter goals can be transformed from functional evaluation into regression exploration to ensure stability of existing features prior to production release. This means the charters are repeatable, while ad hoc testing is not.
Exploratory Testing — Practical Application
At LendingTree, we use TestRail for test case management. TestRail has a robust solution for test case design: it allows for formal step-by-step cases, text-based cases, or exploratory charter design. This flexibility lets test designers choose the design strategy that fits the testing need at hand.
Here is an example of TestRail’s exploratory test charter template.
The mission is the purpose of the exploratory session. The goals are the specific areas of validation. Observe that the charter doesn’t say how to access the AUT or what the expected results are for the testing goals.
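A charter along these lines might look like the following sketch. The mission, goals, and timebox are illustrative, not TestRail’s actual template:

```
Mission: Explore the user address form to verify data handling.

Goals:
- Verify submitted data is stored correctly
- Probe field boundaries (min/max lengths)
- Confirm required-field error handling
- Check data type integrity

Timebox: 30 minutes
```

Everything below the goals — which inputs to try, in what order, through which UI path — is deliberately left to the tester.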
Conversely, here is a formal test case covering a login attempt for the AUT with an invalid password. This one test covers part of one of the goals in the charter above. To have formal cases covering all of the charter’s goals, the test designer would have to build at least four formal cases, possibly more, to ensure everything is properly documented.
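For contrast, a scripted version of that invalid-password case might look like this sketch. The `attempt_login` function, the test credentials, and the error message are all hypothetical stand-ins for the AUT:

```python
def attempt_login(username: str, password: str) -> str:
    """Stand-in for the AUT's login behavior (hypothetical)."""
    # Hypothetical stored credential for the test user.
    valid = {"username": "testuser", "password": "correct-horse"}
    if username == valid["username"] and password == valid["password"]:
        return "Welcome"
    return "Invalid username or password"


# Formal case: each step and its expected result are spelled out in advance.
# Step 1: enter a valid username with an invalid password.
result = attempt_login("testuser", "wrong-password")
# Expected result: a generic error message, with no hint about which field was wrong.
assert result == "Invalid username or password"
```

Notice how the scripted case pins down one input and one expected result; the charter instead names the goal and lets the tester decide which invalid passwords, usernames, and edge cases to try.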
Exploratory testing using charters is ideal in an agile development environment, given the quick delivery timelines. We use a modified scrum methodology at LendingTree consisting of two-week sprints. If the test engineers spend the first couple of days of a sprint compiling the testing charters, it helps inform the entire scrum team of the overall QA plan for the various stories in the sprint. Timeboxing the charters also helps predict how much carry-over there may be in a sprint if stories are not making it into QA in a consistent and timely fashion. Ad hoc tests, without a timeboxed measure of engagement, do not give this level of transparency into the process.
This strategy relies on the tester’s applied knowledge of standard testing techniques such as boundary analysis, required field validation, data integrity verification, security testing, and so on without the need to explicitly script these techniques. In other words, session-based test management expects the tester to have experience with more formalized testing approaches as well as a well-developed intuition regarding where bugs may be hiding.
Just as you wouldn’t expect an inexperienced hiker to tackle Mount Everest, you cannot expect an inexperienced test engineer to master this technique without an experienced guide to show them the way. (Perhaps don’t have Sméagol as the guide)
Subscribe and Join Us!
Thanks for reading about how we are doing things here at LendingTree Engineering. If you enjoyed our story, please subscribe to our publication. If you would like to join us and help shape our company, please visit careers.lendingtree.com and contact us. Follow us on Twitter: @Careers_LT