Five-Fold Testing System — #4: Activities

Yusuf Misdaq
Published in DealerOn Dev
12 min read · Dec 7, 2018

How we test what we test!

The Right Tool for the Job

Testing techniques are like ways of seeing. Each testing technique we use is a precious resource with its ideal time and place. Used incorrectly, it wastes that resource (and our own time, an even greater resource). Used correctly, it brings the supreme satisfaction of using the right tool for the right job (and we actually get the job done!). This lovely quote from the architect Louis Kahn always reminds me of the importance of honoring the resources you use (in his case, a building material; in ours, testing techniques)…

If you think of Brick, you say to Brick, ‘What do you want, Brick?’ And Brick says to you, ‘I like an Arch.’ And if you say to Brick, ‘Look, arches are expensive, and I can use a concrete lintel over you. What do you think of that, Brick?’ Brick says, ‘I like an Arch.’ And it’s important, you see, that you honor the material that you use. […] You can only do it if you honor the brick and glorify the brick instead of shortchanging it. — Louis Kahn, University of Pennsylvania, 1971

Our development team works on a wide variety of tickets daily, with vastly differing formats, outcomes, and associated expectations. One team, for example, is presently building a new application from the ground up (much of it will have no testable front-end for multiple sprints, and the business logic is still unfolding/blooming in some areas). Another team is more in the maintenance life-phase, improving upon an already established application where the expected behavior (often coming in the form of bug fixes or improvements) is crystal clear and relatively unambiguous. It stands to reason, then, that our testing techniques be suitably varied and context-driven to meet the needs of these different products and situations. To use a stark and obvious example, we won’t take up vast amounts of time performing multiple exploratory testing sessions on a change that calls for basic, simple requirements-testing.

This article is essentially a jargon-free (hopefully somewhat entertaining) guide to demystifying a bit of what good testers actually do, and what the Software Testing community means when they use the words they use.

Strategies, plans, techniques, approaches…

A test strategy is the guideline of your overall testing effort for a given application, sprint, ticket, etc. The strategy contains within it considerations of time, environment, scope, as well as testing techniques.

A test plan as a concept is really more internal, i.e. team-facing (and is not always needed as a formalized document, depending on your organization’s structure, your own team’s structure, and management’s needs). The plan might dictate details of the strategy’s implementation, i.e. Bob tests this, Jeanie tests that, Sanjay writes automation tests, Kimie documents the results. It may also include notes on known issues, workarounds, schedules, go-to Subject-Matter Experts (SMEs), or key third-party contacts.

A testing approach is the manner in which you go about your testing. For example, if you’ve decided that it would be best to use a testing technique like functional testing for a given application, you can do that using an exploratory approach or an automated approach. The techniques you use don’t change, just the way you go about performing them.

A testing technique is essentially “how you test”. Each technique (see below) is different and each one may be better for giving you certain information (i.e. for finding certain types of bugs) than others may be. As we said at the start, the right tool for the right job!

A test report — perhaps self-explanatory, this is often a collection of the test cases that were run, when they were run, whether they passed or failed, and in short, all of the testing activity that occurred. In some ways the traditional test report is a relic of the waterfall era, but it can take many forms that make it valuable as a means of keeping testing transparent.

The most lean and succinct ‘report’ (for me) is simply running the tests and if bugs came up as a result, letting those bugs and their respective fixes and the code be the ‘report’. In other words, let the tests and the development work be the documentation or reporting. The whole concept of the traditional test report varies in importance depending on management (and the value management places on it will in turn often vary according to what’s at stake and whose eyes are on it). If there’s a third-party involved, for example, the report often serves as an extra document (crucially, one that isn’t internally facing) which management can point the third-party towards.

Some Testing Techniques Defined…

Smoke Testing — One of the very first steps testers take on a new build, and often so simple and intuitive a task that giving it a name is almost unnecessary (ironic, since it has probably the most names of any technique! Testers may refer to it as Sanity testing, Confidence testing, Build-Acceptance testing, etc.) Put in literal terms: a new circuit board is released by the engineer. The tester plugs it into a power source. If there’s smoke, then the test is over and you hand it back to the engineer! Likewise, if that shiny new page full of new functionality doesn’t even display when it should, there may be a problem!

What characterizes a smoke test isn’t (usually) smoke, it’s really timing (i.e. as soon as there’s a new build, you smoke test), and the cut-off point (or exit criteria) for smoke testing, which is much earlier, i.e. if it fails your smoke test, that’s it, it’s failed, you’re done! Whereas with general functional testing you may find things which fail and still carry on testing until everything in your list has been covered.
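That early cut-off point is the whole trick, and it can be sketched in a few lines of plain Python. Everything here is a hypothetical stand-in (the checks themselves would be real launches and page loads in practice); the point is that the gate short-circuits on the first failure, exactly as a smoke test should:

```python
# A minimal smoke-test "gate", sketched in plain Python. The individual
# checks are hypothetical stand-ins for real build checks.
def app_starts() -> bool:
    return True  # e.g. the process launches without crashing

def home_page_renders() -> bool:
    return True  # e.g. the main page actually displays content

SMOKE_CHECKS = [app_starts, home_page_renders]

def smoke_test() -> bool:
    """Pass only if every check passes; all() bails out on the first failure."""
    return all(check() for check in SMOKE_CHECKS)

# If this gate fails, the build goes straight back to the developers;
# no further testing happens on it.
assert smoke_test() is True
```

Contrast that `all()` short-circuit with functional testing, where you would record the failure and keep working through the rest of your list.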

Positive / Happy-Path Testing — Confirm that the thing does what it basically should do: operate the application under test (AUT) in the manner the developers intended it to be operated. This is at least as useful as education for the tester as it is for finding potential issues, and therefore, once smoke testing passes, it’s a great starting place.

Negative Testing — A lot of things could fall under the broad category of negative testing (some of them may follow in this list). Suffice it to say that if you were testing a vending machine and you inserted an incorrect currency (let’s say foreign or fake), that could be one form of negative testing.
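The vending-machine example translates directly into code. Here is a tiny sketch (the coin validator and its accepted set are invented for illustration) showing the difference in spirit: the positive test feeds the machine what it expects, while the negative tests deliberately feed it foreign, fake, and nonsense inputs:

```python
# Hypothetical vending-machine coin validator, used to illustrate
# negative testing: check rejection paths, not just the happy path.
VALID_COINS = {"nickel", "dime", "quarter"}  # assumed accepted currency

def accept_coin(coin: str) -> bool:
    """Return True only for coins the machine recognizes."""
    return coin in VALID_COINS

# Positive test: the machine takes a quarter.
assert accept_coin("quarter") is True

# Negative tests: foreign currency, a fake slug, empty input, and
# a near-miss with bad formatting should all be rejected.
for bad_coin in ["euro", "washer", "", "QUARTER "]:
    assert accept_coin(bad_coin) is False
```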

Stress Testing — A more extreme extension of negative testing with edge-cases and the robustness of the application in mind. Punch that vending machine very hard. Try to rock it to the floor with just one person. If that doesn’t work, try to rock it to the floor paired with another person. Pour an inordinate amount of water onto it (if it’s an outdoor vending machine that claims to be water resistant, for example). Putting the AUT under any kind of stress is stress testing. ‘Dropping’ an iPhone from a chair. Then from a table. Then from the second floor. Dropping it onto a carpet. Onto a wooden surface. Onto concrete. Feed the UI too much data to contend with, see if it handles it at all, and if it does, how gracefully? How would you stress test an input field? A radio button? (The only limits are your imagination, how twisted you truly are, and of course the actual amount of time you have to give the testing!)

User Testing — Model your tests upon specific users. Think about different user-roles within the app, for example, or users with different concerns or end-goals (some of which the app supports, some of which are perhaps more tangentially related), users of varying age groups and abilities (hello accessibility and security concerns!) — step into their bodies and minds as best you can and let your tests be guided by the actions you think those users would make.

Boundary Testing — This is generally more concerned with applications which feature a lot of integer/value-related logic. Let’s say you have a ‘Comments’ box that requires the user to enter between 5–500 characters. The boundaries you’d probably want to test are 5 and 500, and then something which is outside of the lower and upper boundaries (say 2 and 599). Far out..!
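The Comments-box example above maps neatly onto a handful of checks. A minimal sketch, assuming a hypothetical validator that enforces the 5–500 character rule:

```python
def comment_is_valid(text: str) -> bool:
    """Hypothetical validator for a Comments box accepting 5-500 characters."""
    return 5 <= len(text) <= 500

# On-boundary values should pass...
assert comment_is_valid("x" * 5)
assert comment_is_valid("x" * 500)
# ...values just outside each boundary should fail...
assert not comment_is_valid("x" * 4)
assert not comment_is_valid("x" * 501)
# ...and so should the far-out values from the example (2 and 599).
assert not comment_is_valid("x" * 2)
assert not comment_is_valid("x" * 599)
```

The values just outside the boundaries (4 and 501) are often the most revealing, since off-by-one mistakes cluster right there.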

Function Testing — Focus on the individual functions of the AUT. Test them one at a time. For example, if your AUT was the website Facebook, test the post functionality. Can you create a new post? Edit a post? Delete a post? Add photos to posts? And so on for all of the functions, one by one! Naturally a prerequisite to this kind of testing would be a function tour of the AUT and the creation of some kind of ‘function list’ to base said testing upon.

Buggy Results…

Integration Testing — Test various functions together. If we return to the Facebook example, you already tested posting, and that looked fine, but now try testing the posting function while simultaneously having a conversation with Facebook Messenger (another function you probably tested at the function testing phase). Testing two things (that previously worked fine in isolation) at the same time may yield buggy results, and who doesn’t love buggy results?

Requirements Testing — Pretty obvious, test the given requirements! CONS: Requirements are not always given! Take a pinch of salt with your testing efforts and remain skeptical while you scope the work you think is involved! Another way to look at this (if relevant) is “user-manual testing” — perhaps itself a subset of “claims testing”. In other words, take every claim that is made about the application (i.e. in the accompanying instruction manual), and check against it.

FUN FACT: If you make your own exhaustive function list and do your own function testing, you may, at the end of it, realize that you have technically accomplished both requirements testing and function testing (and possibly even claims testing too) all while technically doing the same activity. The point is, dedicated testers will often test for and catch significantly more than the stated requirements or claims. That’s what we’re here to do!

Partition Testing — This can contain function testing, but is deceptively deep. Divide the application to be tested into various partitions, then test and use the results to create interesting hypotheses or even to generate further testing ideas. The fun part of this is: how you define those partitions is entirely up to you. The partitions could be across business requirements, or areas of functionality, or in the order in which they are built by the team (or planned to be built by the team). You could even divide up testing by something as abstract as ‘concerns of various stakeholders’ (i.e. the business owner is worried about the notifications popping up at the appropriate time, the developers are most concerned about how consistently and accurately the data displays when requested in the UI multiple times in quick succession, while the designers expressed the most concerns about how well the responsiveness holds up when the app is placed under various different conditions). PROS: By partitioning the testing into various subdomains, you get information (about the AUT) in sets that are potentially unique / may not be obviously apparent to developers, product owners or others.
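One narrow, easily automatable flavor of this idea is partitioning an input domain into equivalence classes, where one representative test stands in for the whole class. The age-based pricing rule below is invented purely for illustration:

```python
# Sketch: partition an input domain into equivalence classes and test
# one representative per partition. The pricing rule is hypothetical.
def ticket_price(age: int) -> float:
    if age < 0:
        raise ValueError("age cannot be negative")  # invalid partition
    if age < 13:
        return 5.0   # child partition (0-12)
    if age < 65:
        return 10.0  # adult partition (13-64)
    return 7.0       # senior partition (65+)

# One representative value from each partition, plus the invalid one.
assert ticket_price(8) == 5.0
assert ticket_price(30) == 10.0
assert ticket_price(70) == 7.0
try:
    ticket_price(-1)
    raise AssertionError("expected ValueError for the invalid partition")
except ValueError:
    pass
```

The same thinking scales up to the looser partitions described above (stakeholder concerns, build order, functional areas); only the representatives get bigger.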

Perverse Testing — A subtle form of negative testing that I often do myself (also, the name makes me laugh). Per testing guru Cem Kaner, perverse testing is where a tester takes the specifications about how the application works and intentionally misinterprets them, or follows a task in an intentionally unusual, clumsy or indirect manner. Very cheeky, and often yields very interesting results (by “very interesting” I mean that, upon presenting my findings to developers, an interesting conversation and potentially new directions / considerations can ensue, even if no bugs are found). As an artist at heart, I find testing in this mode (i.e. subverting the rational) particularly enjoyable and satisfying!

Blink Testing — This one might arguably be a subset of what is more broadly known as Visual Testing, but I find that term so generic it saddens me. James Bach and Michael Bolton came up with this deceptively simple technique/name. All you need to effectively blink test is an awareness of the results you are expecting/desiring, an eye or two, and one brain; it’s really all pure pattern-recognition. As an example, you are blink testing anytime you rapidly tab back and forth between two almost identical pages to detect the differences with your naked eye.

Likewise (and who hasn’t done this) — stare at one particular cell of interest in a long Excel workbook (or one column in a series of SQL query results), then keep your finger on the scroll-down button. Let the results flash past your eyes, and the patterns which you know to be false will eventually jump out at you (for example, if it were a list of phone numbers that all had to have a “301” area code, your eye/mind would quite naturally pick out any minority outliers from the list) — all without having to code anything, and all while giving your brain a more varied activity that requires intense mindfulness for just a short period of time… Fun!
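Of course, once you know the pattern you are scanning for, the same check can be automated. Here is the area-code example from above as a one-liner (the phone numbers are made up):

```python
# The "minority outlier" scan, automated: flag any phone number whose
# area code is not the expected "301". Sample data is invented.
phone_numbers = [
    "301-555-0100", "301-555-0101", "410-555-0102",  # 410 is the outlier
    "301-555-0103", "301-555-0199",
]

outliers = [n for n in phone_numbers if not n.startswith("301")]
assert outliers == ["410-555-0102"]
```

The blink test still earns its keep for the patterns you *don’t* yet know to look for; the script only catches what you told it to catch.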

Interference Testing — This is more a set of heuristics to try while doing exploratory testing, but they’re brief (and useful) enough to include here. Key aspects include:
Interrupt your process in some way and observe the results.
Change something that the process depends upon (like data) while it’s underway (or before / after).
Attempt to Cancel, Stop, or Pause your process while it is underway — does what you expect to happen actually happen?
Compete for processor attention by opening other applications (and try pushing them to their processor limits, while you’re at it).

Test Bash / Bug Bash / Hackathon / Testathon — Get a bunch of people in a room and let them loose for a few hours. Offer up a prize and let them go wild. This kind of event is best held once the software is in a relatively complete state and closer to release, and it can be immensely helpful to the programmers as a means of flushing out anything that they may have missed (or getting a heads-up on potential future areas of concern).

Who participates in a test-bash? Often this is what sick testers like me do in their spare time when they have an urge to test something new (literally, this could take the form of a meetup). There are also crossovers here with other role-based testing techniques like beta testing (because this kind of thing could just as easily be done by real users in a dedicated session), as well as paired testing, because test bashes will often end with testers pairing off (and paired testing is one of the most underrated and important aspects of role-based testing, so watch this space for a future blog on it!)

Exploratory Testing — This is really more an approach than a technique, as one can do a multitude of techniques within an exploratory approach; however, since it gets confused with a technique so much, and since the term is bandied about so much, I will include a brief word on it here: exploratory testing means simultaneous learning, test design, and test execution — each new test is shaped by what the previous one taught you, rather than by a pre-written script.

As you can see, there are a multitude of techniques, strategies, and patterns you can approach testing from. They’re all beneficial in different ways and you are likely utilizing at least one of them without even realizing it. At the end of the day, any kind of testing is useful testing, so don’t get caught up in the “Am I testing correctly” pitfall — caring enough about testing to be concerned if you’re doing it right shows that you’re already on the right path.
