Over the last few decades, a family of practices has emerged that share an example-guided heritage. This article examines those practices, along with the origins of the example-guided idea and how this unifying term may help us today.
When I first learned Test-Driven Development (TDD), I remember being surprised by the simplicity of the software designs that emerged from carefully practicing the red, green, refactor cycle. When Kent Beck first introduced this practice in the mid-to-late 1990s (he actually used the word “re-introduced,” since others had worked this way long before), it was common to do upfront design followed by development and test phases. TDD didn’t ask you to stop practicing good software design; it did, however, ask you to approach it differently. Just as a Socratic dialogue is driven by questions, TDD asked you to craft an important piece of software driven by tests. The TDD dialogue would go something like this:
“Can you perform this little test?”
Craft only what’s necessary to perform the test.
“Now can you perform it?”
“Great! Can you do it with a simpler design?”
Either “Yes” or “No, it’s already as simple as I can make it.”
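That dialogue can be sketched in code. Here is a minimal, hypothetical illustration of one turn of the cycle; the `Cart` class and its test are invented for this sketch, not taken from any real project:

```python
# Red: write a small failing test first -- the "little test" in the dialogue.
def test_total_applies_a_discount():
    cart = Cart()
    cart.add(price=100)
    assert cart.total(discount=0.1) == 90

# Green: craft only what's necessary to make the test pass.
class Cart:
    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self, discount=0.0):
        # Refactor: once green, ask "can I do it with a simpler design?"
        # Here, sum-then-discount is already about as simple as it gets.
        return sum(self.prices) * (1 - discount)
```

The point isn’t the arithmetic; it’s the order of events. The test exists before the code, and the design stays only as complicated as the tests demand.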
When done well, TDD helps you evolve a design with simplicity, safety and speed. You get speed from safety: the automated checking of your tests, and tests that serve as executable documentation. You get simplicity by refactoring to ensure that your design doesn’t become overly complex. As your system evolves and grows in functionality, TDD helps you work faster, with greater ease and grace.
Since the late 1990s, my work style has evolved considerably around TDD and being test-driven. Looking back, I could now say I was becoming example-guided, a term recently suggested by Daniel Terhorst-North, a lean/agile practitioner, writer, consultant, teacher and originator of a TDD-like process called Behavior-Driven Development (BDD).
What does it mean to be guided by examples? Let’s consider a different example.
An Example-Guided Hardware Product
Around 2005, I was in the Toronto offices of a company that makes mass spectrometers. The company, a leader in its field, made the entire instrument, from the hardware to the software to the analysis one could perform after acquiring data. The company was about to experiment with outsourcing the hardware development to an Australian company. A director said to me, “Whatever you do, make sure you end up with a simulator for that new hardware.”
Mass spectrometers are complicated instruments used for all sorts of things, from testing water quality to fighting bioterrorism to making computer chips. It was a big bet to outsource the hardware development to another company. How could we make that bet safer? The simulator was key. To develop it, I decided we’d use what I then called storytests (more on that later).
Our team in Toronto would produce the software for the new mass spectrometer as well as the simulator for it, while the team in Australia produced the hardware. We would collaborate by means of storytests and the simulator. We’d define a specific example of using the mass spectrometer (i.e. a storytest) and guide development of the simulator and real code with that example. We repeated this cycle over and over throughout the project, being disciplined to never produce any new behavior without first discussing and defining a storytest. We gradually evolved our code to run the entire mass spectrometer, while simultaneously evolving the simulator. The simulator helped the Australians ensure that the hardware they were producing could perform the necessary behavior.
When the day came to run the actual software on the hardware, everything just worked! It was an unprecedented win for the company! While there would normally be defects to fix to get the software to run correctly with new hardware, there was simply nothing to fix!
This process we used was like TDD but different — it was more coarse-grained, focused on product behavior instead of how a function or object worked. As you’ll learn later in this article, I called this process Storytest-Driven Development (SDD), because it was based on tests of (user) stories. Our storytests helped our teams, separated by a large geographical distance, communicate and collaborate effectively. When I think back to that initiative and others like it, Daniel’s term, example-guided, fits.
Examples in Agile
Brian Marick, one of the signers of the Manifesto for Agile Software Development, felt that examples helped “clarify thinking and improve communication.” In the early 2000s, he named his blog Exploration Through Example, spoke about Example-Driven Development (EDD), organized an “Examples” stage at the large Agile200x conference and produced a wonderful sticker:
Brian, a consultant and author who writes and tests software and helps teams become more agile, understands TDD thoroughly. He appreciates TDD’s role in software design. TDD is driven by code-level examples and Brian also speaks about business-level examples (similar to storytests). He’s less convinced of how examples at the business level aid in software design. Instead, business-level examples, whether written on a whiteboard or documented in some way, clarify, communicate and guide us.
A New Term
On December 26, 2018, Daniel (@tastapod) wrote on Twitter:
Daniel has since said that he prefers the word “development” (Example-Guided Development) since some may perceive “design” to only imply planning, not implementing. To me, design means a full act of creation. Merriam-Webster’s dictionary defines “design” like so: “to create, fashion, execute, or construct according to plan.”
While TDD (Test-Driven Development) does not contain the word design, Daniel correctly calls it a “powerful design practice.” Years ago I remember Ward Cunningham asserting that “Test-Driven Development is more about design than it is about testing.” It’s a design process for evolving solutions.
But is it about testing? I’m not a professional tester; however, I do know a few, and they’ve never liked TDD’s use of the word test. To them, testing is an art, a thinking person’s process for uncovering issues. Michael Bolton has been saying for years that what TDD calls “tests” are really more like “automated checks,” repeatedly checking that something works as intended. Manual testing practices, like Exploratory Testing, are far different — intentionally focused on uncovering the unexpected.
In late 2000/early 2001, I introduced a practice to an XP team I was on. It was based on TDD, yet focused at the feature level. I had no name for it. I introduced it simply to improve the quality of work of our XP team. That team practiced TDD at an expert level, had on-site customers and nearly every aspect of Extreme Programming. Yet bugs were slipping through. I reasoned that it was because we were lagging in automating acceptance tests. Try as we might to fix that issue, it continued to be a problem. I suspected that a TDD-like discipline could help.
It did. I eventually came to call this practice Storytest-Driven Development (SDD). A storytest is a concrete example of the behavior of the system from the user’s perspective. A storytest is associated with a user story. One user story could have one or many storytests. The initial idea was to never begin work on a user story without first having a failing storytest. To compose a good storytest, key people, like domain experts (or on-site customers), testers, analysts and developers would collaborate. That collaboration would yield clarity around the scope of the user story, which helped us understand what to build and what not to build.
SDD involved the repeated process of
- defining and agreeing on a storytest for a given story,
- automating that failing storytest,
- TDDing a solution,
- getting the storytest to pass and
- refactoring to improve the design.
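To make the coarse grain of a storytest concrete, here is a hypothetical sketch of what one for the simulator might have looked like, expressed in the user’s terms rather than the code’s. All names and behaviors here are invented for illustration:

```python
class SimulatedSpectrometer:
    """A hypothetical stand-in for real hardware, evolved alongside storytests."""

    def __init__(self):
        self.samples = []

    def load_sample(self, name):
        self.samples.append(name)

    def acquire(self):
        # The real instrument would sweep and measure; the simulator
        # only needs to exhibit the agreed-upon behavior.
        if not self.samples:
            raise RuntimeError("no sample loaded")
        return {"sample": self.samples[-1], "peaks": [18.0, 44.0]}

def storytest_operator_acquires_a_spectrum():
    # Given an instrument with a water sample loaded
    instrument = SimulatedSpectrometer()
    instrument.load_sample("water")
    # When the operator starts an acquisition
    result = instrument.acquire()
    # Then a spectrum is produced for that sample
    assert result["sample"] == "water"
    assert len(result["peaks"]) > 0
```

Unlike a unit test, the storytest says nothing about how acquisition is implemented; it pins down an observable behavior that both the software and the Australian hardware had to honor.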
Acceptance-Test-Driven Development (ATDD) came along soon after and was simply a different name for SDD. SDD and ATDD became known as outside-in approaches — you start from the outside and work your way in. TDD was and is an inside-out approach.
Around that time many of us were using ideas and tools like Ward Cunningham’s FIT (Framework for Integrated Test) to produce human-readable documentation (storytests, acceptance tests, etc) that you could execute against the system.
I’d sometimes refer to these documents as “executable specifications.” I remember working with an auditor because we were helping a client build a product that was regulated. I showed the auditor how we produced a series of small, understandable specifications that could be “run” against the system to ensure it worked correctly. The auditor happily approved this process. She remarked that it was even better than the old, un-executable, dead paper approach to documentation.
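The spirit of those documents can be sketched as a table of examples that is executed against the system. This is only an illustration of the idea, not FIT’s actual API, and the pricing rule below is invented:

```python
# The system under specification (hypothetical pricing rule).
def shipping_cost(weight_kg, express):
    base = 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

# A human-readable table of examples -- the "executable specification."
# Domain experts can read and extend this table without reading the code.
specification = [
    # weight_kg, express, expected_cost
    (1.0, False, 7.0),
    (1.0, True, 14.0),
    (3.0, False, 11.0),
]

def run_specification():
    for weight, express, expected in specification:
        assert shipping_cost(weight, express) == expected
```

FIT itself read such tables from HTML documents and reported pass/fail cell by cell; the sketch keeps only the essential shape: readable examples that run.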
Executable specifications are wonderful, but they come at a cost. The automation may be expensive. Debugging problems with a failing specification can take time and effort. It took me a few years to get over my zeal for SDD and realize that not every user story needed this approach. People like James Shore, Arlo Belshee and GeePaw Hill pointed out the hidden costs of automating at the feature level. Brian Marick had explored the many approaches to automation of examples and came away unsatisfied. I held onto the practice of elucidating the scope and purpose of a user story via an example, but I became far more judicious in the automation of examples.
Daniel Terhorst-North’s Behavior-Driven Development (BDD) emerged around this same time. Daniel chose the name to deliberately move away from the word test and towards the art of evolving behavior. In those early days, BDD was sometimes called “TDD Done Right” and for programmers who never fancied themselves to be testers, the behavior language of BDD was more appealing. BDD caught on nicely with a community of practitioners, matured and grew. It is now used at the feature level, code-level or both and there are tools to help on the automation side of BDD to produce executable specifications.
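In code, the shift BDD asks for is largely one of language and framing. A behavior-style specification in plain Python might read like the following sketch; the `Account` example and the naming convention are hypothetical, not prescribed by any particular BDD tool:

```python
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# The names describe behavior, not test mechanics,
# and each body follows a given/when/then rhythm.
def behavior_withdrawing_reduces_the_balance():
    account = Account(balance=100)  # given an account holding 100
    account.withdraw(30)            # when 30 is withdrawn
    assert account.balance == 70    # then the balance is 70

def behavior_overdrawing_is_refused():
    account = Account(balance=10)   # given an account holding 10
    try:
        account.withdraw(50)        # when 50 is requested
        assert False, "expected the withdrawal to be refused"
    except ValueError:
        pass                        # then the withdrawal is refused
```

Nothing here mentions “tests” at all; the vocabulary is behavior, which is precisely the point of the renaming.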
Many Names, Similar Practice
There are many names and practices in this family of what is essentially example-guided work. All of them are examples of evolutionary or emergent design: each one helps us evolve a solution, guided by examples. This family of practices includes Test-Driven Development (TDD), Example-Driven Development (EDD), Storytest-Driven Development (SDD), Acceptance-Test-Driven Development (ATDD), Behavior-Driven Development (BDD) and Specification by Example (a term originally coined by Martin Fowler and now the title of a great book by Gojko Adzic). Rebecca Wirfs-Brock recently mentioned even more names from history, including “responsibility-driven design, design by contract, object behavioral analysis, Smalltalk workspaces (to test and illustrate how code works).”
Each of these practices uses different words, like test, developer test, programmer test, storytest, acceptance test, customer test, spec, scenario, contract, executable specification, etc. While TDD is traditionally a code-level practice, other approaches (or elements of those approaches) work at both the code and feature levels. You practice TDD with automation, whereas you can productively practice BDD with or without it.
So what do we do with all of these names? “Nothing,” some would respond. “You’ll just confuse things more by adding a new name!” Or “Any efforts at renaming will just hide the work of pioneers.” And finally, “Efforts to unify the name will invariably result in future de-unification.” Oh boy, that all sounds bad. And…while those concerns may be valid, there are deeper problems that I believe outweigh them.
One such problem is slow adoption. Example-guided approaches like TDD and BDD are still not mainstream. People who teach others these practices regularly come face to face with friction. Some balk at the mere use of the word “test”; others don’t believe they do “development,” so the practice doesn’t seem to apply to them; still others wonder why we aren’t writing full specifications.
Great teachers experiment with ways to reduce friction. New names often factor into those experiments. Daniel observed the following:
“…having discovered a better name — more intention-revealing, accessible, less cognitive distance — to describe something, I think it would [be] irresponsible, or at least a missed opportunity, not to use that better term. I had more success saying BDD over TDD for the same thing.”
Like Daniel, my colleagues and I teach skills that make a difference. In fact, sharing wisdom is the core mission of our company. Hiding history has nothing whatsoever to do with that mission — it’s always important to review history, learn from it and credit the pioneers who gifted the world with their wisdom.
Adding to the confusion via a new term is also the opposite of our intention. Today, veteran agile coaches like Jeff Langr observe that
“BDD <=> TDD are unfortunately confusing to folks who aren’t sure whether we’re talking about code-level driving or feature-level driving.”
BDD, TDD, ATDD, etc. — all similar (and slightly different) and yet we have no common vocabulary to speak about this family of practices. Example-guided fits nicely. And if it can help us reduce cognitive load or friction, appeal to a broader audience and speed adoption, I’m all for using this term. (Note: we don’t know if this term will indeed deliver and the term does not presume that all practices in the family are identical). Finally, since I’ve recently been engaged in applying agile outside of software development, I’m excited to see how example-guided work could help people in a variety of industries.
Thanks to Industrial Logic coaches, Wil Pannell (@wilpannel), Bill Wake (@wwake) and Tim Ottinger (@tottinge) for feedback on this article.