The Value of Professional Judgment

Ambrose Little
Software Developer
Oct 7, 2017 · 7 min read
A nice top image so this post looks better in the sharing feeds. ;)

The following quote was recently shared with me. I won’t say by whom/where as it’s not particularly pertinent. I’m told it’s an excerpt from an upcoming book. I started to type an answer, but it ended up becoming a blog post. :)

To some degree, it’s hard to disagree with the statement. It’s almost a truism that more simply said amounts to “be careful when choosing (or creating) abstractions, especially ones that may affect the entire application architecture.”

So why say it? I can only assume it is meant as an argument against abstractions, as if to scare us off from using them because, after all, if we choose the wrong abstraction, bad things will happen. Considering my source, I suspect this concern stems in part from the ship-fast mentality, the one that errs toward stream-of-consciousness, locally optimized coding rather than a more thoughtful, planned approach.

But we can easily invert this generic warning:

“If we avoid settling on and using abstractions so much that we end up with oodles of duplication, many different ways of doing the same thing, more code, more surface for bugs, more places to fix the same issue or make the same changes, we’ll pay dearly for that mistake. Avoiding abstractions for fear of choosing the wrong ones is a tendency that will bend entire applications into huge, unmaintainable messes of inconsistency, which is bad both for UX and for leveraged learning amongst the devs who have to maintain and augment them, and once we’ve realized this is a serious problem, we might have too much code to go back and refactor to use good abstractions. This, paired with the sunk cost fallacy, whereby we’re tempted to keep not only the old code but to continue avoiding abstractions just because ‘that’s what we’ve done so far’ and ‘it’d be too costly to change all that now’, can be very hazardous indeed. Once something ships, the likelihood of refactoring it for better maintainability drops to close to nil.”

IOW, if you don’t spend time choosing and using abstractions, bad things will happen. I’m not saying either of these is 100% right or wrong; both are subject to immense amounts of interpretation. I wrote this counterexample to illustrate that similar generic rhetoric can often be used to support competing viewpoints.

There is some truth in the concerns on both sides of the coin. A more concise and balanced version might be something like: “Be careful when choosing your abstractions, but don’t be afraid to use them when you have good reason to believe they will benefit the solution.” Of course, this also is a truism in software development.

So what is the takeaway? Is there some generic rule to be gleaned from this? Should we avoid abstractions more than use them? We know that according to Knuth, “premature optimization is the root of all evil (or at least most of it) in programming.” You could say the above caution against abstractions is a form of that saying. But that doesn’t get us much farther. The solution isn’t to avoid abstractions (optimizations) until the problems they address are begging to be solved. By that point, it’s often too late.

And choosing to use an abstraction, while a form of optimization, is not in fact the same kind of concern that Knuth was warning against. His caution is against, as he continues, “worrying about the speed of noncritical parts of their programs.” The reason for that caution? Such worrying can have a “strong negative impact when debugging and maintenance are considered.”

In short, the reason for his caution is essentially the same as the reason to opt for abstractions, namely, maintainability. Abstractions also tend to aid in code reuse (Don’t Repeat Yourself), which has many side benefits, such as reducing code bug surface, increasing consistency in user experience, increasing leveraged learning amongst devs, making reasonably-anticipated changes more feasible, and so on. Using an abstraction is a form of optimization for maintainability, which is (or at least should be) an overriding concern when creating productional software.
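To make the maintainability point concrete, here is a hypothetical sketch (the function names and the currency-formatting rule are invented for illustration): the same display rule duplicated across two features versus captured once behind a small abstraction, which becomes the single place to fix a bug such as mishandled negative amounts.

```python
# Without an abstraction, each feature re-implements the rule; a fix
# (e.g., handling negative amounts) must be repeated in every copy.
def invoice_line(amount_cents: int) -> str:
    return f"${amount_cents / 100:,.2f}"

def receipt_line(amount_cents: int) -> str:
    return f"${amount_cents / 100:,.2f}"

# With the abstraction, the rule lives in one place: one spot to fix
# bugs, one spot to keep the user experience consistent.
def format_currency(amount_cents: int) -> str:
    """Single point of change for currency display."""
    sign = "-" if amount_cents < 0 else ""
    return f"{sign}${abs(amount_cents) / 100:,.2f}"

def invoice_line_v2(amount_cents: int) -> str:
    return format_currency(amount_cents)

def receipt_line_v2(amount_cents: int) -> str:
    return format_currency(amount_cents)
```

The duplicated versions silently print negatives as `$-5.00`; the shared version fixes that once, everywhere.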

Furthermore, if you factor in a principle from Domain Driven Design, which is to do what you can to spend most of your efforts on your problem space’s core domain, it becomes even clearer that opting for abstractions that help you do that is a Good Thing. Put another way: choose ready-made, established abstractions for areas of concern that are not your core domain.

One might say the weight of wisdom in software development remains that given the choice between using an abstraction and not using an abstraction, barring any other concern, you should lean towards using an abstraction rather than not. That’s not to say that you cannot go too far in that direction; it doesn’t deny that there are times when you can end up spending more effort on an abstraction and ultimately make something that is effectively no easier (and perhaps harder) to maintain and modify than having chosen no abstraction. It is simply that, all other things being equal, creating and using abstractions is better than not.

All other things being equal, creating and using abstractions is better than not.

So how do you avoid using bad abstractions? How do you know when to use one or not? Here are some potentially useful rules of thumb. Only introduce an abstraction when:

  1. You already have at least two concrete uses for it in your current solution. As soon as you are tempted to copy and paste some code in your solution, ask yourself: can I create an effective abstraction here instead? Is the variability limited enough that more of this code would be shared than not in my abstraction? If so, that’s a good indicator you should create the abstraction.
  2. You know based on prior experience that it is very likely you will have more than the current use. This is often true of horizontal/cross-cutting concerns, or when you have made “this kind” of software before and know where it tends to warrant abstraction. Another example is when it is a well-known solution boundary that you will want to test independently or that will need to interface with external systems/areas of systems (see bounded contexts and context maps in DDD).
  3. You have specific requirements that you will eventually need to support more than one use. A concrete example of this is needing to support different types of customers or a plan to integrate with more than one partner, etc. Any time you know similar but not identical code will be required to suit the solution requirements, that’s a good flag for abstractions.
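Rule 1 can be sketched in code. This is a hypothetical example (the record-validation domain, names, and rules are all invented): two concrete uses already exist, the variability is limited to the required fields and one extra check, so the shared skeleton is extracted and only the variable parts are passed in.

```python
from typing import Callable

def validate_record(record: dict, required: list[str],
                    extra_check: Callable[[dict], list[str]]) -> list[str]:
    """Shared skeleton: missing-field checks plus a caller-supplied rule."""
    errors = [f"missing field: {f}" for f in required if f not in record]
    errors.extend(extra_check(record))
    return errors

# Concrete use 1: customer records need a plausible email.
def check_customer(r: dict) -> list[str]:
    return [] if "@" in r.get("email", "") else ["invalid email"]

# Concrete use 2: partner records need an agreed contract id.
def check_partner(r: dict) -> list[str]:
    return [] if r.get("contract_id") else ["missing contract id"]

customer_errors = validate_record({"name": "Ada"}, ["name", "email"], check_customer)
```

Had the variability been larger than the shared skeleton, copy-and-paste (or two separate functions) would arguably have been the better call; the test in rule 1 is precisely whether more is shared than varies.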

But generalizations and rules of thumb only go so far. One of the things that separates great devs from not-so-great is being able to find the right balance between the extremes, that is, being able to effectively use professional judgment to know when or when not to create/use abstractions, which ones to choose, and how to use/design them for maximal value to the project and team.

One of the things that separates great devs from not-so-great is being able to find the right balance between the extremes.

The primary ingredient for effective professional judgment is experience. There really is no substitute, and this is why companies go astray when they prioritize familiarity with X technology as a key hiring criterion over broad problem-space and technological experience. A person who is green simply does not have the experiential referents to make solid judgments; for them, making the right choice with regard to abstractions will tend to be more luck than judgment.

Often their decisions will be informed by peers: what’s the latest hot framework/library/language (i.e., abstraction) folks are talking about. Often their decisions are framed more by what they feel they need to learn in order to be marketable than by what’s appropriate for the problem at hand. Of late, one of the cool, hip things is to eschew hard-won, long-proven practices, principles, patterns, and abstractions in favor of “vanilla” reinvention. It’s taking what we used to call a syndrome (the “Not Invented Here Syndrome”) and transmogrifying it into a supposed virtue, the underlying hubris being that “I can do this better myself” and not only that but “I should do this anew, myself” because “abstractions are dangerous” or “I need to prove myself” or simply “I am smarter and know better.”

One of the things gained through experience is an appreciation for the experience of others: not repeating past mistakes, whether your own or someone else’s. True progress is not repeatedly reinventing the same thing but rather “standing on the shoulders of giants” to do something truly new, meaningful, and valuable. That comes from not starting each project and solving each problem from a tabula rasa, pretending it is an entirely unique problem that warrants an entirely unique and unrepeatable solution by a uniquely talented and smart individual. And as I said, discerning the line between uniqueness and similarity (i.e., how to craft valuable, correct abstractions) comes primarily through professional judgment earned through personal experience and through learning from the experience of others.


Ambrose Little
Software Developer

Experienced software and UX guy. Staff Software Engineer at Built Technologies. 8x Microsoft MVP. Book Author. Husband. Father of 7. Armchair Philosopher.