How Can Testing Teams Review Bugs that Don’t Come from a Specification?

Ten years ago managers were teaching that it was irresponsible to start writing code until the testing teams had “complete, consistent, correct, unambiguous” requirements — sometimes called 3CU. Today’s software organizations work more iteratively, with prototypes, sample screens, stories, and, sometimes, sticky notes and conversations.

A testing team that refuses to work in light of these “unprofessional requirements” is no longer the savior of the project; it is the odd one out.

Even a high-quality specification is unlikely to cover every defect. Slow performance, confusing tab order, and awkward word-wrap are all clear problems that are unlikely to appear in a specification. Unless the specification includes examples, rounding, number formatting, and the handling of French or Spanish accented characters might all be problems it fails to address, and the list goes on.
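To make the rounding and character-handling examples concrete, here is a short Python sketch (Python is only the illustration language; the same questions apply to any stack). Each behavior shown is Python’s own documented rule, and a typical specification would say nothing about any of them:

```python
import unicodedata

# Banker's rounding: Python rounds exact halves to the even neighbor.
# If the spec just says "round to the nearest dollar," which is right?
print(round(2.5))  # 2, not 3
print(round(3.5))  # 4

# Binary floating point cannot represent 0.1 exactly, so naive
# equality checks on money math can fail.
print(0.1 + 0.2 == 0.3)      # False
print(f"{0.1 + 0.2:.17f}")   # 0.30000000000000004

# Accented characters can be encoded two ways that render identically.
cafe_composed = "caf\u00e9"       # single code point for é
cafe_decomposed = "cafe\u0301"    # 'e' plus a combining accent
print(cafe_composed == cafe_decomposed)  # False
print(unicodedata.normalize("NFC", cafe_decomposed) == cafe_composed)  # True
```

A tester who knows these edge cases exist can probe for them whether or not the specification mentions them.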

Customers don’t need a specification to know when a problem bugs them, and testers certainly don’t either.

Today we’ll dive into what makes a bug a bug, and how testing teams can identify bugs without a detailed specification, starting with an example.

Finding Bugs in SquareCalc

Consider a banking webpage with a spreadsheet built into the page to help create a budget. The specification for the spreadsheet feature is not terrible. Product managers spent time outlining how math functions like SUM() and AVG() should work, detailing usability features that help categorize entries, and covering other general functions like resizing columns. But like every other specification, it represents the understanding of a few stakeholders at one point in time, and cannot anticipate new ideas or lessons.

A tester starts working on this feature, focusing just on the SUM() function, and discovers that they can’t select values by clicking a field with the mouse to create the formula. Flipping back to the specification, there is no mention of the different ways a user should be able to create formulas.

The bug report reads like this:

Description: User can’t select cell with mouse to use in formula

Steps to reproduce:

  1. Type SUM into a cell
  2. Use ctrl+mouse click to select the fields to use in the formula

Actual Result:

  • Fields not added to formula

Expected Result:

  • Fields should be added to the formula

Expected result? Why was that the expectation, and who cares? Sometimes the expectation is perfectly clear. We expect that a browser won’t crash when the submit button is clicked. Other times, like the formula-creation example above, it might not be so clear, and the bug report ends up getting ignored.

The person who discovered that bug had an emotional reaction to not being able to use the software in a way that felt reasonable. The expectation came from experience with other spreadsheet products like Microsoft Excel or LibreOffice Calc. Both of those programs offer mouse selection for building formulas. Comparable products had the feature, and the interface looked like comparable products. The testing team was confused, and expected the customers would be, too. Comparable products is the name of an oracle, a method used to identify problems. Other examples of oracles include the dictionary, screen mock-ups, the previous version of the software, the specification, and, yes, the opinion of an influential executive.
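A comparable-product oracle can even back an automated check. Here is a minimal Python sketch: `squarecalc_sum` is a hypothetical wrapper that would drive the spreadsheet’s SUM() in a real test suite (stubbed with Python’s built-in `sum` so the sketch runs), while `math.fsum` stands in for the trusted comparable product such as Excel:

```python
import math

def squarecalc_sum(cells):
    """Hypothetical stand-in for the SUM() implementation under test.
    In a real suite this would drive the banking app's spreadsheet."""
    return sum(cells)  # stub so the sketch runs

def oracle_sum(cells):
    """The oracle: behavior of a comparable, trusted product."""
    return math.fsum(cells)

def check_against_oracle(cells):
    actual = squarecalc_sum(cells)
    expected = oracle_sum(cells)
    # Compare with a tolerance: an oracle justifies an expectation,
    # it doesn't demand bit-for-bit equality.
    return math.isclose(actual, expected, rel_tol=1e-9, abs_tol=1e-12)

print(check_against_oracle([10.0, 20.5, -3.25]))  # True
print(check_against_oracle([0.1] * 10))           # True
```

The point of the sketch is the shape of the argument: “the product disagrees with a trusted comparable” is a far stronger bug report than “the product disagrees with what I expected.”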

Sometimes the information that helps someone discover a bug comes from unexpected places: marketing material, how the company presents itself, or even regulatory requirements.

Sources for (wrong) Answers: Alternative Oracles and How To Use Them

The next time a bug is discovered, instead of talking about expected results, try describing where that expectation came from. Is a date picker behaving differently from every other date picker in the product? Is a webpage loading in 10 seconds when the terms of service (TOS) says it will load in under 5? Is a SUM function in a spreadsheet behaving differently from industry-standard products? Talk about that. Showing developers where the expectation comes from is much more persuasive.
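A published promise like a TOS load time can serve as an oracle in an automated check, too. A minimal Python sketch, with the page load simulated by a hypothetical callable (a real check would drive a browser or HTTP client):

```python
import time

# Oracle: a published terms-of-service promise, not a tester's guess.
TOS_MAX_LOAD_SECONDS = 5.0

def measure_load(load_page):
    """Time a page-load callable and return elapsed seconds."""
    start = time.perf_counter()
    load_page()
    return time.perf_counter() - start

# Hypothetical page load, simulated with a short sleep for this sketch.
elapsed = measure_load(lambda: time.sleep(0.01))

if elapsed >= TOS_MAX_LOAD_SECONDS:
    print(f"Bug: page loaded in {elapsed:.1f}s; "
          f"TOS promises under {TOS_MAX_LOAD_SECONDS:.0f}s")
else:
    print(f"OK: {elapsed:.2f}s is within the TOS limit")
```

A bug report built on this check cites the TOS directly, which answers “why was that the expectation?” before anyone asks.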

An expected result is a place to start; that feeling is a signal that something isn’t quite right and there might be a bug lurking around the corner. Digging deeper than an “expected result” teaches some important lessons about how bugs are identified and how to write a more persuasive bug report. Customers don’t need a specification to find bugs, and testing teams certainly don’t either.

Every activity in software development has a cost and a value. Getting cost to trend down while increasing value is the ultimate goal. Discover how to make your team more efficient and productive with our 4 Quick Wins eBook.