Almost, But Not Quite, Entirely Unlike TDD
Douglas Adams’ The Hitchhiker’s Guide to the Galaxy used to be required reading for software developers. Ah, the good old days, when developers shared a sort of cultural literacy! A strange sort, maybe, but a sort nonetheless. Any group of developers could recite quotes from the story on request, or in response to work-related situations that would have engendered panic, if not for the soothing words on the cover of the Guide: “Don’t panic.” Quotes like this one:
“The way [the Nutri-Matic machine] functioned was very interesting. When the Drink button was pressed it made an instant but highly detailed examination of the subject’s taste buds, a spectroscopic analysis of the subject’s metabolism and then sent tiny experimental signals down the neural pathways to the taste centers of the subject’s brain to see what was likely to go down well. However, no one knew quite why it did this because it invariably delivered a cupful of liquid that was almost, but not quite, entirely unlike tea.”
Younger software development professionals have little awareness of the classic science fiction and comedic material that shaped the thinking of earlier generations of practitioners. Few are able to quote Monty Python dialog from memory, or succinctly communicate the salient characteristics of a problem simply by naming a Twilight Zone episode.
It will come as no surprise that this can sometimes lead to problems in the application of robust software development techniques.
For example, I’ve noticed many software developers claim to be advocates of test-driven development (TDD) and insist they use TDD in their own work, and yet the way they build code is almost, but not quite, entirely unlike TDD.
What’s TDD anyway?
If you ask 10 people what TDD means, you might get 30 different answers. Most of the answers will be internally consistent and some of them will be practical. A few may even resemble TDD to an extent.
FWIW I’ll share what I think TDD means. YMMV. But first let me talk about this other thing for a minute.
There’s an approach to software development whereby we evolve the low-level design of the code incrementally. As it takes shape, the code itself “tells” us how it should be designed, if we would but listen.
Listening to code in this way requires a trained ear. We have to learn how to listen to code, just as we have to train our ears for music or foreign languages.
Or maybe it would be better to say we have to train our noses. People like to talk about code smells. A code smell is a structural pattern in source code that leads us to suspect the design could be improved. It doesn’t necessarily mean there’s a design issue; it’s just a questionable pattern that often points to a design issue, just as an unusual smell in your house might point to a dangerous gas leak or might be nothing more horrible than your neighbor’s cooking.
If we aren’t too sure what those patterns look like, we won’t be too sure what our code is trying to tell us.
And if we don’t know what the code is trying to tell us, we won’t know which refactorings to use to improve the design.
I often work with developers who can stare at a 2,000 line method in Java or C# and feel no anxiety whatsoever. The code is speaking, but they don’t hear the message.
It sort of reminds me of listening to music with my dog. She lies down peacefully when Mozart is on. She curiously investigates the speakers when Yasuhiro Yoshigaki is improvising on old auto parts. She flees in terror at the sound of George Crumb’s Black Angels. But in no case does she relate to the music on a deep level. Sometimes the sounds stimulate a response in her, but she doesn’t understand music. I’ll bet she wouldn’t react at all to a 2,000 line method in Java or C#.
Anyway, this thing about letting the code tell us what it wants to look like and incrementally letting the design evolve is often called emergent design.
You can Google the phrase. Go ahead. I’ll wait.
So, you probably found this Wikipedia article: Emergent Design, which shows the term is not limited to software development but has broader application.
Relevant to software development, you probably found this write-up from ThoughtWorks, opinions from advocates of emergent design like this one, and criticisms of the approach like this one. So you can get a sense of what it means, when it might be useful, and when it might not be useful. All good.
Let’s set aside the arguments for and against emergent design and take it as a “given” for purposes of this article. How would we guide the emergence of the low-level design of a software module or component? You can probably think of several ways to do this. The method that is most often used is test-driven development.
TDD as a way to guide emergent design
In this context, we’re talking about building small-scale components of a software solution. We do it by expressing concrete examples of the desired behavior of a piece of code in an executable form of very limited scope. These small examples are called microtests.
The TDD cycle — red, green, refactor — is used to drive out an implementation for the desired behavior of the code. “Red” means that the executable statement of a desired behavior does not exhibit the expected result. “Green” means that it does so. The words reflect the colors in which failing and passing examples are usually represented by unit testing tools. “Refactor” means to clean up the code, which we prefer to do incrementally rather than building up a mass of technical debt, so that the task does not become burdensome and so that the code is kept in an understandable state at all times.
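To make the cycle concrete, here’s a minimal sketch of one pass through red-green-refactor, written in Python (the document itself shows no code, so the function and test names here are purely illustrative):

```python
# Red: the microtests are written first. Run before is_leap_year
# exists, they fail -- that's the "red" step.
def test_century_years_are_not_leap_years():
    assert not is_leap_year(1900)

def test_every_fourth_century_is_a_leap_year():
    assert is_leap_year(2000)

# Green: just enough implementation to turn the failing microtests green.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Refactor: with the suite green, clean up names and structure in small
# steps, rerunning the microtests after each step to guard against
# regressions.

test_century_years_are_not_leap_years()
test_every_fourth_century_is_a_leap_year()
print("all microtests pass")
```

The point of the sketch is the ordering: the executable examples exist, and fail, before any implementation code is written to satisfy them.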
Starting with very simple examples, as the suite of microtests is built up, an appropriate low-level design emerges. A noted proponent of the approach, Robert “Uncle Bob” Martin, explains that as the examples become more specific, the implementation becomes more generic. In other words, as we add more and more discrete examples, we are guided to write a more and more general implementation, capable of handling all the defined cases properly.
Many detractors of TDD point out that it’s possible for people to forget to include all the relevant examples, resulting in a fragile or incomplete implementation. This is more a problem with people forgetting things than an objective criticism of TDD or any other technique or method. After all, software doesn’t do our thinking for us. Well, not yet, anyway.
Uncle Bob has worked out a list of transformations the code undergoes as the design emerges. By favoring the simplest transformation necessary to cause a microtest to pass, we can guide the emergent design toward an appropriate form.
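Here’s a small illustration of that idea, sketched in Python with a fizzbuzz-style example (the example itself is mine, not from Uncle Bob’s list; the comments show the transformations the earlier, simpler tests would have allowed):

```python
# Each new microtest is satisfied by the simplest transformation
# that makes it pass:
#
# Test 1: fizz(1) == "1".     Simplest pass: return the constant "1".
# Test 2: fizz(2) == "2".     Constant -> expression: return str(n).
# Test 3: fizz(3) == "Fizz".  Unconditional -> conditional:

def fizz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# As the examples became more specific, the implementation became
# more generic -- it now handles every defined case, not just the
# literal inputs in the tests.
assert fizz(1) == "1"
assert fizz(2) == "2"
assert fizz(3) == "Fizz"
assert fizz(6) == "Fizz"
print("implementation handles all defined cases")
```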
Best or good enough?
I hesitate to claim we end up with the “best” design because that would involve writing the solution in every possible way and then judging the various implementations by some criteria that everyone would agree with. You can probably see a couple of challenges with this idea. The first challenge is to think of every possible implementation.
I haven’t met anyone who has gotten beyond that first challenge on the way toward discovering the “best” design for any software solution. If such people exist at all, then they will face the second challenge: Getting everyone to agree on the criteria by which to determine the “best” design. Therefore, I doubt anyone actually knows what the “best” design for any given solution might be, even if some people believe they do.
Short of absolute perfection, I’m pretty happy with having a practical way to discover an appropriate and practical design that doesn’t go too far (that is, helps me avoid overengineering) and doesn’t overlook anything important (that is, helps me think of significant edge cases). I’ve found TDD helpful in those ways. YMMV.
TDD by any other name would smell as sweet
Well, that was a pretty long-winded answer to “What’s TDD anyway?” What I was going to say is that TDD means (to me) to repeat the red-green-refactor cycle in very small increments, following a logical progression of transformations to guide the emergence of a practical and appropriate low-level design for a software component without overlooking important edge cases, and keeping the design “clean” at all times through incremental refactoring.
A key point about all this is to be sure and write a microtest that defines a piece of behavior before you implement that behavior. The D in the middle of TDD stands for “driven.” The driver of a car sits in the front seat, not the rear. (Once when I used that metaphor, a person in the room showed me a picture of a car that had been rigged for back-seat driving. Clever.) Anyway, it’s fundamental to TDD that the only reason to write a line of implementation code is to make a red example turn green. That’s kind of hard to do if you’ve already written the implementation before you write the example.
If you’re doing something different from that, it’s fine. There’s no law that says we have to develop software in any particular way. The problem is calling whatever you’re doing “TDD” when you aren’t doing that stuff I just said. It’s no more meaningful than calling a carburetor from a 1948 Ford pickup truck a “banana.”
Here’s an if-P-then-Q-doesn’t-imply-Q-then-P-style corollary to the sweet-smelling assertion:
Any other thing by the name TDD doesn’t smell so good
I’ve encountered quite a few developers over the years who insist they are strong proponents and dedicated practitioners of TDD. They begin their work by laying out a fairly detailed low-level design on paper (or pixels). Then they write a bunch of “skeleton” source modules. Finally, they use the red-green-refactor cycle to help themselves fill in the blanks in the skeleton source modules. Or they use a sort of green-green-never-refactor cycle, which they label “TDD” for some reason.
The rest of this paragraph contains material some readers may find disturbing. Feel free to skip it, or to ask your children to leave the room while you read it. Don’t say I didn’t warn you! Here goes: On many occasions, I’ve witnessed experienced TDD practitioners demonstrate or teach TDD by making hard-and-fast assumptions about how the solution is destined to emerge, and beginning by writing some “initial” production code before they write the first microtest. In December I saw a demonstration of the Bowling Kata in which the facilitator first created C# classes for Game and Frame, and an empty method for Roll. He blatantly did all that without writing a single microtest to drive it. I’m very sorry if reading that upset you, but it had to be said. Okay, it’s safe to invite your children back into the room now.
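For contrast, a test-first opening move for the Bowling Kata might look like the following (sketched in Python rather than C#; the implementation shown is only what the first microtest demands, not a finished kata solution):

```python
# The first microtest -- a gutter game scores zero -- is what drives
# the Game class and its roll/score methods into existence. No
# production code appears until a failing test demands it.

class Game:
    def __init__(self):
        self._rolls = []

    def roll(self, pins):
        self._rolls.append(pins)

    def score(self):
        # Simplest implementation the gutter-game test requires;
        # later tests (spares, strikes) would force generalization.
        return sum(self._rolls)

def test_gutter_game_scores_zero():
    game = Game()
    for _ in range(20):   # 20 rolls, all gutter balls
        game.roll(0)
    assert game.score() == 0

test_gutter_game_scores_zero()
print("gutter game microtest passes")
```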
More recently, I’ve learned there’s a school of thought about development people call “reverse TDD,” or words to that effect. They’ll go through a fairly long monologue to describe it, if you ask them to. Sometimes they’ll do so even if you don’t ask them to. Or if you ask them not to. “Reverse TDD” basically means writing unit tests after writing the implementation. Not sure how the abbreviation “TDD” fits with that, but there you have it.
If Douglas Adams were still around to observe these activities, he might well call them “almost, but not quite, entirely unlike TDD.”
Why do people feel the need to label any old pseudo-random approach to software development “TDD?” Why does that term mean so much to them? As mentioned above, there are many ways to write code and none of them is “wrong” or “evil.” As long as you’re happy with your work and with yourself, it’s all good. Some methods might take more time or carry a higher risk of overdesign or error, but eventually, given sufficient time, money, and frustration, a working solution can be produced using pretty much any approach. So, why insist on calling just about everything “TDD?”
If I had to guess (and I do, as I’m not a psychologist), I’d guess one reason for this phenomenon is that TDD is a very popular buzz-term these days. Everyone likes to be associated with popular buzz-terms. Therefore, “whatever I do can be labeled [insert-popular-buzz-term-here] because I say so.”
Even if my guess isn’t wrong (and it might be), it doesn’t fully explain these almost but not quite entirely unlike TDD forms of TDD. The problem isn’t entirely due to developers’ misunderstanding of the technique or their eagerness to qualify for a popular label. Many tutorials and explanations of TDD explicitly advise developers to write production code before they write a failing microtest. This example from Microsoft is representative: Getting Started With Test-Driven Development.
But even that can be explained in terms of my guess at the psychological motivation. Companies and others who have something to “sell” are very keen to be associated with popular buzz-terms. They may or may not understand what those buzz-terms mean. They do understand that people will buy stuff that is associated with popular buzz-terms.
So, where do I think I’m headed with all this rambling nonsense? Just this: Things have names and definitions. If you change the Thing to such an extent that its basic characteristics no longer conform with its definition, then you really ought to come up with a new name. The Thing is no longer what it was. Calling it by the old name will only confuse people who actually know what the old name means.
Even if you furrow your brow and affect a professorial manner, they’ll know you aren’t using the buzz-term correctly. Trust me. I’ve tried it.
Variations of TDD
Am I claiming, dogmatically, that any deviation from these rules invalidates the label “TDD?” No. There are at least a couple of well-known variations on TDD that still satisfy the basic criteria.
Classic style TDD follows the pattern described above. It’s very helpful when we need to emerge a low-level design for any sort of algorithmic implementation. Classic TDD is also known as the Detroit school of TDD, as it was devised by people working in the city of Detroit. It’s the Kent Beck, Uncle Bob, Ron Jeffries et al way of doing TDD (not that it’s the only way they know). The microtests tend to be agnostic about implementation details and to focus on the observable outputs of the units of code under test.
Mockist style TDD takes the approach of defining interfaces for the key domain concepts of the solution under development and building up the solution using the red-green-refactor cycle, with mocks defined for components the code under test collaborates with. It’s very helpful when developing a solution characterized by many interactions between domain objects. Mockist style TDD is also known as the London school of TDD, as it was devised by people working in the city of London. It’s the Nat Pryce, Steve Freeman et al way of doing TDD (not that it’s the only way they know). The test cases tend to know more about the underlying implementation than when classic style TDD is used, at least to the extent of interactions with collaborating objects.
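A mockist-style microtest might look like the sketch below, written with Python’s `unittest.mock` (the names `OrderService` and `PaymentGateway` are hypothetical, invented for illustration). Note that the test specifies the *interaction* with a collaborator rather than only an observable output — the hallmark of the London school:

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical domain object that collaborates with a payment gateway."""
    def __init__(self, payment_gateway):
        self._gateway = payment_gateway

    def place_order(self, amount):
        self._gateway.charge(amount)

def test_placing_an_order_charges_the_gateway():
    gateway = Mock()                 # stands in for the real collaborator
    service = OrderService(gateway)
    service.place_order(42)
    # The assertion is about the interaction, not a return value.
    gateway.charge.assert_called_once_with(42)

test_placing_an_order_charges_the_gateway()
print("mockist microtest passes")
```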
Practitioners of TDD routinely switch between these styles, as well as taking short-cuts that they advise their students not to take. Rarely will you see anyone follow a single style rigidly. Beginners are advised to take baby steps to an extreme degree so that they can internalize the technique and get a gut feel for how it influences emergent design. Once beyond that initial learning phase, it’s okay to be more flexible. It’s better to follow the steps closely until you get a sense of how far you can safely flex.
Sometimes beginners make the mistake of thinking the practitioner who’s showing them TDD wants them to stay rigid forever. That isn’t the case. It’s a way of learning. Don’t skip it. If you’re a beginner with TDD, then you don’t know enough to judge when and how far it’s safe to flex.
When a variation becomes another song altogether
That mockist style thing I mentioned…it sounds a lot like writing skeleton source modules and then filling them in using the red-green-refactor cycle, doesn’t it? I wonder if that indicates there aren’t hard-and-fast boundary lines between concepts; there might be gray areas or opportunities for people to apply judgment. Hmm.
It turns out that even when we want to use emergent design for some aspects of a solution, we don’t often use it for all aspects. Portions of a solution might be straightforward examples of well-known design patterns or reference architectures. There’s limited value in pretending we know nothing about them and forcing ourselves to drive out an emergent design for every little thing.
Also, emergent design doesn’t usually mean no up-front design at all, unless we’re experimenting or learning about a domain that’s unfamiliar to us. When building code intended for production, we typically perform some amount of up-front design.
One lightweight development method that’s consistent with the Agile Manifesto is called Feature-Driven Development (FDD). A buzz-term that came out of the FDD community is JEDI, or Just Enough Design Initially. Another popular design approach is called Domain-Driven Design (DDD), devised by Eric Evans. Scott Ambler defined yet another lightweight design approach he calls Agile Modeling.
All these methods, and many more, can be used to elaborate a highly detailed domain model or a very lightweight one. It’s up to the user. Even the Unified Modeling Language (UML) and the Rational Unified Process (RUP) can be used to produce a comprehensive up-front design or a minimal one.
Ideally, we’d like to find the optimal meeting point between just enough top-down up-front design and the beginning of bottom-up emergent design. That optimal point will vary by context. Understanding the context is up to us, and is not a question of methods or tools.
Maybe another reason some people insist on calling everything they do “TDD” is because they assume they have to use a single approach for all their work, and they’re looking for an umbrella term. TDD isn’t what they’re looking for. It only means what it means. It doesn’t mean what it doesn’t mean.
There’s nothing wrong with combining different techniques to achieve our goals in a given context. Lately, I’ve been finding value in combining London school TDD with another technique known as Design by Contract as an approach to developing microservices for a cloud environment. For driving out the design of individual microservices, I like to use Detroit school TDD. And I’m happy to use frameworks and libraries for boilerplate stuff. Everything has its place.
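Design by Contract can be reduced, for illustration, to explicit pre- and postcondition checks (real DbC tooling offers much richer support; the `transfer` function here is hypothetical, not from any particular microservice):

```python
def transfer(balance, amount):
    """Return the balance remaining after withdrawing amount."""
    # Preconditions: callers must supply a positive amount they can cover.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: amount must not exceed balance"

    new_balance = balance - amount

    # Postcondition: the function promises a non-negative result.
    assert new_balance >= 0, "postcondition: balance must not go negative"
    return new_balance

print(transfer(100, 30))  # prints 70
```

The contract checks state the obligations at the service boundary explicitly, which complements interaction-focused (London school) tests of how the services collaborate.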
Practice makes good enough
TDD is a learned skill, and we improve at learned skills through mindful practice.
But…what? Just “good enough?” Doesn’t it go, “Practice makes perfect?”
Well, if the old saying is true and “perfect is the enemy of good,” then it follows logically that “good enough” is better than “perfect.” Therefore, if you’re a perfectionist, you ought to be aiming for “good enough.” Aiming for “perfect” would make you a less-than-perfect perfectionist, by definition. (Norman, coordinate.)
Code dojos and other hands-on activities are a great way for practitioners to learn about and experiment with different approaches to software development. Most programming katas can be approached in a variety of ways and can be used to compare and contrast different approaches given different sets of assumptions.
Personally, I like to have as many tools in my toolbox as possible and to cultivate a sense of when to use each tool. There’s no substitute for hands-on practice to learn about various techniques and to gain a sense of when and how to apply them.
This isn’t meant to be a crash course on TDD. I just wanted to say that TDD means what it means and doesn’t mean what it doesn’t mean. I guess in that regard it sort of resembles a lot of other words and phrases; at least, the ones that mean what they mean and don’t mean what they don’t mean.
You don’t have to call everything you do “TDD” just to sound up to date or whatever. TDD is (or can become) one tool in your kit. If you write implementation code before you write examples, you aren’t “improving” or “extending” or “adapting” TDD in any sense whatsoever. It just isn’t TDD, in exactly the same sense that a carburetor isn’t a banana.
About the Author: Dave Nicolette has been an IT professional since 1977. He has served in a variety of technical and managerial roles. He has worked mainly as a consultant since 1984, keeping one foot in the technical camp and one in the management camp.
Originally Published at www.leadingagile.com