Test Your Assumptions
By Paul Grizzaffi, QA Automation Architect, Cognizant Softvision
ALERT!
This blog is not about test automation. It is, however, about testing, development, requirements, and the assumptions we sometimes make about all of them. Let’s go back to the before times, the 1990s…
During my last semester of graduate school, I worked on a project that involved a visual model and an accompanying data model. The idea of the project was that a user would work with an image showing an outline of a human torso, head, and neck. On this visual, the user could draw a relatively arbitrary neckline for a blouse or shirt, using landmarks in the image as guides when creating their neckline.
But things were not in sync… a story…
My assignment was to build software that would read the bitmap of the outline with the new neckline and produce a PostScript file for printing a life-size sewing pattern. This pattern could then be used to create a blouse or shirt with the specified neckline. Yes, I (sort of) programmed in PostScript, but that’s another story.
The thing was, the neckline curves didn’t come out right.
Now, for various reasons, I’m mathematically challenged. So, I checked and rechecked. I took direction from my professor, who gave me additional reading materials about mathematics and Bezier curves. I got advice from friends who were better at math than I was. Alas, there was no joy; my output was still wrong.
Jumping ahead to the end of the story, I eventually discovered that the visual model was out of sync with the data model. Specifically, one of the landmarks on the visual model sat lower than its coordinates in the data representation indicated, and lower relative to the other landmarks. Because the visual representation’s coordinates were out of sync with the data representation’s coordinates, my code produced incorrect results.
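In hindsight, a small consistency check between the two models would have surfaced the mismatch almost immediately. Here’s a minimal sketch in Python of the kind of check I wish I had written; this is not the original project code, and the landmark names, coordinates, and tolerance are all hypothetical:

```python
# Hypothetical sketch: verify that each landmark's coordinates in the data
# model agree with where that landmark appears in the visual model before
# generating any output from them.
from dataclasses import dataclass

TOLERANCE = 2.0  # allowable drift (in pixels) before we flag a mismatch


@dataclass
class Landmark:
    name: str
    x: float
    y: float


def find_out_of_sync_landmarks(data_landmarks, visual_positions, tolerance=TOLERANCE):
    """Return landmarks whose data-model coordinates disagree with the visual model."""
    mismatches = []
    for lm in data_landmarks:
        vx, vy = visual_positions[lm.name]
        if abs(lm.x - vx) > tolerance or abs(lm.y - vy) > tolerance:
            mismatches.append((lm.name, (lm.x, lm.y), (vx, vy)))
    return mismatches


if __name__ == "__main__":
    data_model = [Landmark("shoulder_left", 40, 120), Landmark("neck_base", 100, 95)]
    visual_model = {"shoulder_left": (40, 120), "neck_base": (100, 110)}  # one landmark sits lower
    for name, data_xy, visual_xy in find_out_of_sync_landmarks(data_model, visual_model):
        print(f"{name}: data model says {data_xy}, visual model says {visual_xy}")
```

A check like this is cheap to write and turns a silent assumption (“the two models agree”) into something that fails loudly when they don’t.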
As a naive grad student, I assumed that the data model I was given was an accurate representation of the graphical model I was given. I mean, a professor wouldn’t give me bad input, right? And the previous grad student who had worked on this project wouldn’t have corroborated bad data in his code, right?
It took extensive testing, debugging, and uncomfortable sessions with my professor before I eventually discovered this situation. At the time, I was angry at my professor for not giving me “the right data.” I mean, he was the expert, and I was the student, right? Yes. But in hindsight, there were multiple aspects to this scenario:
- The visual model was meant to convey the representation of the landmarks and the drawn neckline.
- The data model was supposed to convey the coordinates of the landmarks and the data representation of the drawn neckline.
- There was the communication between the professor and his previous graduate student, who had built the engine to parse the landmarks and the drawn neckline.
- There was the communication between the professor and me regarding why my code might not be producing the expected results, beyond “incorrect math.”
- There was no accounting for any communication between the previous grad student and me. In fact, I was lucky that he’d not yet graduated, so he was still relatively available. Had he not been available, I’m not sure I’d have made as much progress as I did.
So, why am I writing this? I’ve observed that we frequently take too many things at face value.
How frequently do we say or think things like, “that’s what the requirements say,” “that’s how the test case is written,” or “I was told to automate that”? How infrequently do we say things like, “that requirement is inconsistent with other parts of our approach,” “that test case isn’t testing what we intend,” or “automating that thing will cost us more than the value it provides”?
In my story, I was given a portion of my requirements directly by my professor. I was given other requirements indirectly by my professor, via the other grad student. Both sets of requirements were given in good faith; neither individual intended to deceive me, yet I received bad information. Had I investigated the requirements, and perhaps performed some testing on the requirements themselves, I could have saved many hours of effort, not to mention much heartache and stress.
I had assumed that everything I was given was accurate. But clearly, this was not so. In our jobs and careers, we cannot act as if everything we receive is accurate either. We also cannot behave as if we have the same understanding and expectations as those who delivered the information and requirements to us, nor can we make that assumption about other people working from that same information. We need to question so that we can come close to a shared understanding. And we need to test that our understanding is appropriate.
Get Paul’s take on “being responsible with automation, testing, and other things” in his blog, Responsible Automation