Defensive programming anti-patterns
One of the patterns I keep seeing (and flagging) when doing code reviews has to do with defensiveness. CPU and memory are abundant these days, and people would argue that the extra checks won’t hurt, so I’d like to take a post to explain why those patterns set me off. (Don’t worry, I’m not going to spend a blog post arguing against bound checks, or any similar Luddite notion.)
Fail uniformly
- tl;dr Don’t try to handle all possible errors
It’s often said that a good developer handles errors and a sloppy one doesn’t. What’s more seldom said is that, save for unusual circumstances, a good developer handles errors the same way — by not committing any bad data to storage, and failing fast. There’s a time and place for covering all error conditions exhaustively. That time is when you’re designing software for a space probe, and that place is NASA. For the rest of us, there may be one or two odd cases worth special treatment, but error handling beyond a certain point is a foolish rabbit hole. (You can’t reliably handle running out of memory, you can’t communicate without a network, and nobody knows all possible errors that CopyFile might return.)
In languages that feature exceptions, a well-placed top-level exception handler that (a) logs and (b) returns an error, is usually better than sprinkling a myriad of handlers through functions in misguided attempts to make those functions “safer”.
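As a minimal sketch of that idea in Python (the function names here are made up for illustration), a single handler at the request boundary logs the failure and returns a generic error, while the functions below it stay free of try/except clutter:

```python
import logging

def risky_operation(x):
    # Hypothetical worker function; anything that may raise will do.
    return 10 // x

def handle_request(x):
    # One well-placed top-level handler: (a) log, (b) return an error.
    # No bad data is committed, and the inner code stays clean.
    try:
        return {"ok": True, "result": risky_operation(x)}
    except Exception:
        logging.exception("request failed")
        return {"ok": False, "error": "internal error"}
```

The point is placement, not breadth: the handler sits at the outermost boundary where "log and report failure" is a complete, honest response.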
In APIs that rely on return values to indicate errors, more developer diligence is required, but with some luck, that diligence will be put towards wrapping all those functions in checks and bubbling up the error codes. Good examples can be found in kernel sources, where error handling is of utmost importance.
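The kernel convention (return 0 on success, a negative errno on failure, and check every call) can be loosely sketched even in Python; the function names below are invented for illustration:

```python
ENOENT = 2  # mirrors the errno value; illustrative only

def read_config(path):
    # Returns (0, data) on success, (-errno, None) on failure.
    try:
        with open(path) as f:
            return 0, f.read()
    except FileNotFoundError:
        return -ENOENT, None

def load_settings(path):
    # Check every call and bubble the error code up, kernel-style,
    # rather than guessing at a recovery on the spot.
    err, data = read_config(path)
    if err:
        return err, None
    return 0, data.splitlines()
```

The discipline is in the unconditional `if err: return err` after each call — propagate, don't improvise.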
Don’t be afraid to fail
- Don’t give up protections afforded by your language/framework, even if those cause your program to “crash”
Modern languages usually come with standard libraries whose functions report errors using exceptions (or other mechanisms, as in Go), but it’s not unusual for complementary non-throwing alternatives to be offered. For example, in C# one can parse an integer without risking exceptions by calling Int32.TryParse. In Python, one can get a value from a dictionary by calling get — a missing key yields None instead of raising a KeyError. These functions serve many purposes. In some languages exception handling constructs are too cumbersome, or carry a non-negligible performance cost. Other times, the “error” case is truly not an error. Whatever the library designer’s intention, these functions are almost guaranteed to be misused by inexperienced developers.
To give an example, I’d often see value = d.get('key') in Python directly followed by an attempt to use the value, all without checking that value is not None. Often the developer believes the key will always be present in real-world cases, and never bothers testing for its absence. I think that if you’re confident enough that your language’s error checking won’t kick in, you shouldn’t disable that error checking “just in case”.
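The anti-pattern, sketched with a made-up dictionary:

```python
d = {'name': 'alice'}  # hypothetical data; note that 'key' is absent

# Anti-pattern: .get() silently turns the missing key into None,
# so the failure surfaces later, far from its cause
# (e.g. as an AttributeError on a NoneType somewhere downstream).
value = d.get('key')
assert value is None  # no error yet; the bug is now in flight

# If the key is expected to always be present, say so: a plain
# lookup fails fast, with a KeyError that names the missing key.
try:
    value = d['key']
except KeyError as e:
    print('fails fast:', e)
```

The plain `d['key']` documents the assumption and enforces it at the earliest possible moment.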
Fail fast
- Crash before things get too weird
I started my software engineering career at a bootcamp in 2001. An actual bootcamp of the Israeli army, that is, but it was surprisingly similar to the “bootcamps” I’ve seen springing up in San Francisco a decade later. To list a few similarities, there was intensive training, we collaborated on projects, and then there was Demo Day. (As for differences, we had to make our beds to military standards, and go on sentry duty with an M-16.)
Prepping our Demo Day projects, implemented mostly in Microsoft Visual C++ 6.0, we all dreaded this message box popping up in the middle of our demos:
[Screenshot: the Visual C++ runtime error dialog announcing “abnormal program termination”]
Quite a few of us realized that you can suppress this message by wrapping your C++ code in:
try {} catch (...) {}
Luckily, most of us realized why that was a bad idea. Swallowing every exception might keep your program running a while longer, but outside of Demo Day, this nefarious message box—or rather, the function behind it—saves you from a much worse outcome, one involving random crashes and corrupted data. [See also: IsBadXxxPtr should really be called CrashProgramRandomly.]
Quite a few languages have exception handling facilities that allow you to catch any error, including errors that no programmer in their right mind can be expected to handle. For example, in Python one can catch RecursionError (likely a runaway recursion), or NameError, e.g.:
try:
    covfefe
except:
    print('Everything is fine.')
In those languages, a catch-all might get you more than what you bargained for—that is, make real problems harder to debug by masking them.
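A safer sketch catches only the exceptions it expects and can act on, letting everything else propagate unmasked (the parse_port name is made up for illustration):

```python
def parse_port(text):
    # Catch only what we anticipated; anything else (NameError,
    # RecursionError, a typo-induced AttributeError...) propagates
    # and fails loudly instead of being masked.
    try:
        port = int(text)
    except ValueError:
        return None  # the one error we expected and can handle
    return port if 0 < port < 65536 else None
```

The narrow `except ValueError` is the whole trick: the handler's scope matches the handler's competence.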
Be strict in what you accept
- Before long, someone will use your code as a reference.
- Both extra checks and extra leniency have their costs.
Most of the specs I’ve written in my engineering career are a snapshot of youthful optimism, the green field I envisaged before reality set in and made the code the mess it is today. The specs are akin to the U.S. Constitution: a principled and visionary but somewhat outdated document. The source code, on the other hand, doesn’t lie, and will often (and rightfully) be used as a blueprint of what really happens. (*)
The source code, however, can be obscure about its intention and context. One side of the coin is code adhering to the robustness principle, a.k.a. Postel’s Law: the program takes extra effort to handle obviously not-to-spec inputs and conditions. The other side of the same coin is introducing too many checks for things that “can never happen”. I find both to be equally bad anti-patterns, though finding the happy medium can be a challenge.
In either case, an engineer who joins a year from now will have problems. In the former case, other parties will grow accustomed to the leniency, so removing any of it becomes a breaking change. As Adam Langley puts it, the Law of the Internet is “blame attached to the last thing that changed”. In the latter case, the code’s intention will be harder to understand. To a newcomer, all checks look equally meaningful: “there’s code to handle this, therefore it must happen”.
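Both sides of the coin can be sketched with a hypothetical ID parser (the format and function names are invented for illustration):

```python
import re

def parse_id_lenient(text):
    # Postel-style leniency: quietly strip whitespace, fold case,
    # tolerate a stray 'id:' prefix. Callers will come to rely on
    # every one of these accommodations, and removing any of them
    # later becomes a breaking change.
    text = text.strip().lower()
    if text.startswith('id:'):
        text = text[3:]
    return text

def parse_id_strict(text):
    # Strict: accept exactly the documented format, fail loudly
    # otherwise. The check documents the contract; no dead branches
    # for things that "can never happen".
    if not re.fullmatch(r'[a-z0-9]{8}', text):
        raise ValueError(f'malformed id: {text!r}')
    return text
```

The strict version also reads better a year later: every line of it corresponds to a real rule, so a newcomer isn't left guessing which checks are load-bearing.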
(*) That’s assuming the specification is even still available. In my experience, growing organizations are woefully incapable of settling on a way to manage their knowledge. Amid the proliferation of abandoned knowledge management systems, your spec might end up being virtually undiscoverable.
