This is a rant about a worrisome trend I see in blog posts, news articles, and tutorials all over the web. Authors explain how they solved or addressed a particular problem in a particular context, and then continue on, delivering a message as if it applied to all aspects of software testing. They do this mostly not through explicit claims, but by using generic terms. It is important to point out that a particular solution may only be applicable to a particular context, especially for readers who are just getting their feet wet with software testing.
Below are a few topics where I believe putting some additional perspective may help readers better understand our beloved craft of software testing.
Test Automation != Browser-based Test Automation
Many automated test activities focus on the end user, and a large portion of the software delivered these days is exposed through a web interface. Understandably, automated tests in certain contexts rely very heavily on being driven through web browsers. However, this should not create the illusion that all test automation is browser-based. While there might be cases or contexts where this type of automation is sufficient, it is not the be-all and end-all of test automation. There are plenty of companies making browser-based test automation easier to maintain and less brittle, and they may even make it sound like this is the only automation you will ever need, but I would be hard pressed to believe that browser-based test automation is all you are ever going to need.
Don’t believe that you are a test automation superstar just because you completed your first test automation class, where you learned how to use a particular tool to create automated tests via the web browser. (Yes, I have seen classes where you get this message at completion.) You have taken an important step in your career, but test automation is a huge discipline, and mastering it cannot be done in a few hours of training.
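To make the point concrete, here is a minimal sketch of automation below the browser: an automated test that exercises business logic directly, with no browser or UI in sight. The apply_discount function is entirely hypothetical, invented for illustration.

```python
# A hypothetical discount-calculation function and an automated test for it.
# No browser involved: the logic is exercised directly at the unit level.

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
    # Invalid input must be rejected, not silently accepted.
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
```

A test like this runs in milliseconds, needs no test environment, and is one of the many kinds of automation that have nothing to do with a browser.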
To put it simply, this is how I view the world of test automation:
API Testing != HTTP REST API Testing
While REST is a very common interface for exposing services, either publicly or between individual components of the application under test, it is not the only way to expose APIs — there are plenty of others to choose from. There are different protocols, and there are payloads other than JSON and XML. API-level testing does not even require using network protocols to communicate. APIs (application programming interfaces) may be public or private; an API is simply a contract between two communicating parties, and it is that contract that testing must validate.
Nobody should ever say or imply that API testing is all about testing RESTful APIs over HTTP.
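As an illustration of the “contract” view, here is a minimal sketch of a contract test for an in-process API: no HTTP, no JSON, no network. The RateLimiter class and its behavior are hypothetical, invented for this example.

```python
# Contract test for an in-process API. The "API" here is a hypothetical
# rate limiter's public interface; the contract is what we test.

import time

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds (illustrative only)."""
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.calls = []

    def allow(self):
        now = time.monotonic()
        # Drop timestamps that have fallen out of the sliding window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

# The contract: within one window, the first `limit` calls succeed
# and the next call is rejected.
limiter = RateLimiter(limit=3, window=60.0)
results = [limiter.allow() for _ in range(4)]
assert results == [True, True, True, False]
```

The caller and the callee here live in the same process, yet this is every bit API testing: two parties, one contract, and a test that validates it.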
Unfortunately, I have seen tutorials about API testing, only to find out that they were all about using a particular tool for testing RESTful APIs. That is just plain wrong. So here again is my simplistic view of the API testing world:
Testing In Production Is Not For Everyone
I love the notion of testing in production and the emergence of chaos engineering in general. Kolton Andrus from Gremlin opened my eyes to this practice at a meetup in the Under Armour offices last year. There are so many aspects of testing in production that I love: proactively finding issues, exposing engineers to production issues, tabletop exercises to anticipate faults in production, and I could go on and on.
Testing in production is, however, not applicable in all software testing contexts and should not be portrayed as such. I have seen blog posts where the authors push the idea of “right-shift”-ing software testing activities by testing in production without articulating when and where this is applicable.
Let us just stop for a second and think: would you test the flight control software of the recently grounded Boeing 737 MAX aircraft in production or would you have a different approach to testing this critical piece of software?
This is my simple view of the testing world:
Performance Testing != Testing Web Page Load Times
Having done performance testing myself, I am irritated when performance testing is reduced to testing how fast a particular web page loads. That is a very important metric for users of web-based applications, and it likely has a huge impact on the business providing the service. However, it is not the only concern of performance testing in general.
First, performance testing can be (and often should be) done at test levels other than the system level (yes, even in unit tests). Doing performance testing at lower levels will likely reduce the time needed for root cause analysis. In a highly complex system with many microservices under the hood, tracking down a performance issue can easily become a time sink.
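A unit-level performance check can be as simple as the sketch below: time a hot code path and fail the test if it exceeds a budget. The build_index function and the 0.5-second budget are assumptions made up for this example, not a recommendation for any particular system.

```python
# A minimal unit-level performance check (illustrative). Catching a slow
# lookup here is far cheaper than tracing it through a full system test.

import timeit

def build_index(items):
    """Build a hypothetical value-to-position index."""
    return {item: i for i, item in enumerate(items)}

index = build_index(range(100_000))

# Time 1,000 lookups; the 0.5 s threshold is an assumed budget.
elapsed = timeit.timeit(lambda: index[99_999], number=1_000)
assert elapsed < 0.5, f"lookups too slow: {elapsed:.4f}s"
```

In a real suite, the budget would come from measured baselines rather than a guess, but the principle holds: performance assertions can live right next to ordinary unit tests.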
Second, in different contexts performance testing may focus on indicators other than page load times. Experienced testers scrutinize and exercise the critical paths in the application under test. They may use a variety of tools to understand how the application behaves, including system monitoring, network monitoring, and application monitoring tools.
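One such indicator is tail latency on a critical path. The sketch below, using made-up latency samples, shows how a tester might summarize p50/p95/p99 latencies instead of a single page load time; the numbers are synthetic and purely illustrative.

```python
# Summarizing latency percentiles from collected samples (illustrative).
# Page load time is one metric; p95/p99 latency of a critical path is another.

import random
import statistics

random.seed(42)
# Hypothetical per-request latencies in milliseconds for a critical path.
samples = [random.gauss(120, 30) for _ in range(1_000)]

cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"p50={p50:.1f}ms  p95={p95:.1f}ms  p99={p99:.1f}ms")
```

The point is not the arithmetic but the mindset: which indicator matters depends entirely on the context, and the median can look healthy while the tail is hurting your users.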
The rise of application performance monitoring vendors may create the illusion that all application performance testing happens on applications deployed in the cloud, and that any performance analysis can only be done using the tools of these vendors. This is a false assumption — there are plenty of contexts in software testing where the application under test is not cloud-based, but performance is still critical (think nuclear reactor control software).
Here is my last show of my basic drawing skills:
I am a huge believer in context-driven testing. I also firmly believe that we — software testers — should be articulate in our communication about software testing activities and about how they apply both within and outside our context.
I am sure there are other examples you have seen where the message has been either implicitly or explicitly generalized to a broader scope than intended. Please share them with me.