The Dexter Rule

Assume that the next engineer looking at your code is a serial killer who knows where you live.

Kevin Lai
Kantata Product Development
4 min read · Apr 17, 2017

--

If Dexter were reviewing your code, would you do anything differently before you committed? This question has helped guide my approach to communication in software development for the past eight years, and has cut down on embarrassing instances of, “I did this a year ago? It’s terrible! What did I mean here?”

Engineers work on teams with people of all stripes: tabletop gamers, opera singers, soccer players, casual serial killers, space aliens, and more. The people on your immediate team may have enough context to not require documentation, but without it, everyone outside your team may have trouble understanding what’s going on.

Documentation comes in multiple forms: technical documentation, code comments, commit messages, pull request discussion, class or method names, tests, and so on. I’m not going to weigh the merits of each form here, but it’s worth consciously assessing the level of documentation needed based on factors such as whether the work is siloed off, or whether someone who has less context will eventually look at it.

Here are a couple of points to keep in mind when creating documentation for a team:

Never write bad documentation. Bad documentation is much more likely to do damage by being misinterpreted or by becoming obsolete. Don’t throw a sentence in just because you’re short on time and need to tick off a checkbox; it is far better to have no documentation than bad documentation. No documentation signals that investigation needs to be done, while bad documentation leads people astray and makes them homicidal. Like Dexter.

Good documentation is concise, simple, and explicit: it clearly spells out intent and is meaningfully organized. Bad documentation is ambiguous, hard to follow, and sometimes redundant. Quality matters significantly more than quantity. If something doesn’t seem right to you, or to someone else on your team, take the time to make it right.

Diction and word choice matter, too. The standard writing rules apply, such as favoring active voice over passive voice, making your intent immediately clear, and considering your audience. However, I find that it’s not necessary to be grammatically strict as long as the meaning can be easily deduced — and shorthand can pay off in some cases.

For example, consider the comments below for a method that generates dropdown menu markup for templates based on the accept-language header. (Assume they’re the only source of documentation.)

Terrible: “String i18n”. This doesn’t tell us anything we wouldn’t already know from going over the method, and may introduce possible confusion about whether it’s meant to be plaintext string literals or markup.

Bad: “Dropdown Menu translations”. This at least offers some explanation of what the method is being used for, in case the usage or the code ever drifts.

Better: “Main menu dropdown markup using marketing dictionary. The links are intentional for accessibility reasons.” Now we’re getting closer to something reasonable: a future developer who looks at this will understand why it uses links rather than click handlers. That’s immediate context that likely won’t be obvious to someone else. Translation concepts are dropped because they’re implied by the dictionary reference and evident in the method.

Even better: “Main menu dropdown markup. Links for accessibility. Not for menu tooltips — different dictionary.” This version distinguishes markup from text, is slightly more concise, and considers its audience by warning against a bad optimization someone might try to make.

I’m intentionally excluding a “best” example here, because documentation is subjective, fiddly, and can always be improved. You need to decide what works for your organization.
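To make this concrete, here’s a hypothetical sketch of the kind of method being described, with the “even better” comment attached. The names (`MARKETING_DICTIONARY`, `main_menu_dropdown`) and the markup are illustrative assumptions, not from any real codebase:

```ruby
# Illustrative stand-in for a translation dictionary keyed by locale.
MARKETING_DICTIONARY = {
  "en" => { "pricing" => "Pricing", "about" => "About Us" },
  "de" => { "pricing" => "Preise",  "about" => "Über uns" }
}.freeze

# Main menu dropdown markup. Links for accessibility.
# Not for menu tooltips -- different dictionary.
def main_menu_dropdown(accept_language)
  # Take the first language tag from the Accept-Language header, e.g.
  # "de-DE,de;q=0.9" -> "de"; fall back to English for unknown locales.
  locale  = accept_language.to_s.split(",").first.to_s[0, 2]
  entries = MARKETING_DICTIONARY.fetch(locale, MARKETING_DICTIONARY["en"])
  items   = entries.map do |key, label|
    %(<li><a href="/#{key}">#{label}</a></li>)
  end
  %(<ul class="main-menu-dropdown">#{items.join}</ul>)
end
```

Note how the comment carries its weight here: the code alone can’t tell a reader that the links, rather than click handlers, are deliberate, or that a seemingly similar tooltip dictionary would be the wrong one to swap in.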

A caveat about tests as documentation: make sure they’re as precise as you need them to be. Usually, some other form of documentation — such as commit messages or pull request comments — will supplement tests, and vice versa. Tests describe behavior, but they may never explicitly tell you what a chunk of code is meant to do. This is inductive rather than deductive reasoning: tests tell you how the code behaves, but not what, exactly, it is.

As an example, consider these three unit tests for a Duck object:

  • eats meat
  • walks
  • swims

Notice that those three tests could also describe a human. Let’s try to tighten this up by adding:

  • has webbed feet
  • lays eggs

But these conditions also describe a platypus. If the distinction among duck, human, and platypus is important, and you want tests to be your documentation, you need to be more precise.
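Written out as plain Ruby assertions (with `Duck` and `Platypus` as hypothetical stand-ins), the problem is easy to see: every one of the original checks passes for both animals, and only a more specific check tells them apart.

```ruby
class Duck
  def eats_meat?;   true; end
  def walks?;       true; end
  def swims?;       true; end
  def webbed_feet?; true; end
  def lays_eggs?;   true; end
  def quacks?;      true; end  # a genuinely distinguishing behavior
end

class Platypus
  def eats_meat?;   true; end
  def walks?;       true; end
  def swims?;       true; end
  def webbed_feet?; true; end
  def lays_eggs?;   true; end
  def quacks?;      false; end
end

# All five of the original assertions pass for BOTH animals -- as
# documentation, these tests never pin down what the object actually is.
[Duck.new, Platypus.new].each do |animal|
  raise "too broad" unless animal.eats_meat? && animal.walks? &&
                           animal.swims? && animal.webbed_feet? &&
                           animal.lays_eggs?
end

# Only a more precise assertion separates duck from platypus.
raise "not a duck" unless Duck.new.quacks?
raise "quacking platypus" if Platypus.new.quacks?
```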

We ran into this issue early on at Mavenlink, when tests meant to be documentation were written for only a subset of keys returned in an API presenter. This led to a few untested keys drifting over time, which caused behavioral problems and confusion about expectations. In this case, it was important to test the entire JSON hash upfront.
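Here’s a sketch of that lesson, with a hypothetical `TaskPresenter` standing in for the real presenter (the field names are invented for illustration). The subset-of-keys assertion lets untested keys drift silently; comparing the entire hash catches any added, removed, or changed key immediately:

```ruby
# Hypothetical presenter: wraps a record and exposes its API representation.
class TaskPresenter
  def initialize(task)
    @task = task
  end

  def as_json
    {
      "id"         => @task[:id],
      "title"      => @task[:title],
      "completed"  => @task[:completed],
      "updated_at" => @task[:updated_at]
    }
  end
end

task = { id: 7, title: "Write docs", completed: false,
         updated_at: "2017-04-17" }
presented = TaskPresenter.new(task).as_json

# Fragile: only a subset of keys is asserted, so "completed" and
# "updated_at" can drift over time without any test failing.
raise "mismatch" unless presented["id"] == 7 &&
                        presented["title"] == "Write docs"

# Safer: compare the entire hash upfront, so every key is pinned down.
expected = {
  "id"         => 7,
  "title"      => "Write docs",
  "completed"  => false,
  "updated_at" => "2017-04-17"
}
raise "mismatch" unless presented == expected
```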

There’s no hard and fast rule about how much or how little you should document. Proper documentation is expensive, and follows the same adage as good form in exercise: do it well or not at all. Sometimes you must ship, and that means making sacrifices. Documentation and refactoring passes end up on the cutting room floor all the time; it’s sadly just a fact of life for software engineering projects.

But by keeping the Dexter rule in mind, you’ll leave another software engineer (or maybe even yourself) less murderously frustrated in the future.
