Perfect vs Good Enough When Coding
A little while back I talked about a fun subject: comments as code smells. That post generated a fair amount of discussion and counter-viewpoints from readers. In fact, one fantastic reader, Patrick Michaelsen, went so far as to write a well-thought-out rebuttal.
In code craftsmanship, we often discuss ideal practices. While I can point out that an ideal is something to strive for even if you never achieve it (such as having no comments), there’s a different issue that I want to discuss today.
That is the concept of Perfect vs Good Enough. This is the heart of many arguments against various code craftsmanship ideas.
This is the argument that the code we’re writing is “good enough” and that trying to perfect it further is a waste of time (we’ll set aside, for now, just how subjective the term “good enough” is). I find that this argument falls into the same trap most of us fall into when we write code.
That is, not taking seriously enough the overall lifetime of our code. Remember Y2K? (If not, Google it; it’s good history to know.) Most of the affected systems were written 30 to 40 years before Y2K was an issue. And a lifetime isn’t measured just in calendar time, either; or, stated differently, “it’s not just the years, it’s the miles”. If we could monitor a single function in an application and count every time it’s read by a developer versus every time it’s modified, we’d find dozens, even hundreds, of reads for every modification.
This means that optimizing our code for reading rather than writing can be as much as 100 times more effective.
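To make the idea of optimizing for reading concrete, here’s a minimal, hypothetical TypeScript sketch (the function names, discount rule, and numbers are invented for illustration, not taken from the original discussion). Both versions compute the same total; the second simply spends a few extra lines on the reader who will encounter this code dozens of times for every time it’s changed.

```typescript
// Write-optimized: fast to type, slow to understand.
const d = (p: { a: number; q: number }[]) =>
  p.reduce((t, x) => t + x.a * x.q * (x.a * x.q > 100 ? 0.9 : 1), 0);

// Read-optimized: the same logic, with names that explain intent.
interface LineItem {
  unitPrice: number;
  quantity: number;
}

const BULK_DISCOUNT_THRESHOLD = 100; // hypothetical business rule
const BULK_DISCOUNT_RATE = 0.9;

function orderTotal(items: LineItem[]): number {
  return items.reduce((total, item) => {
    const lineTotal = item.unitPrice * item.quantity;
    const discounted =
      lineTotal > BULK_DISCOUNT_THRESHOLD
        ? lineTotal * BULK_DISCOUNT_RATE
        : lineTotal;
    return total + discounted;
  }, 0);
}
```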
There is a specific way to look at this, and that is the concept of WHEN. When do you want to do the work? We can easily write our code as quickly as possible, but to do that we sacrifice code quality. We’ve certainly spent less time writing the code, but it would be incorrect to say that we’ve reduced the amount of work we do. Instead, we are simply shifting the work from “now” to “later”. This is often called “technical debt”, but that term is insufficient. Seeing the work as inevitable is a more useful paradigm, especially when combined with the reality that the farther along in the process we make changes, the more expensive those changes are.
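As a small, hypothetical illustration of shifting work from “now” to “later” (the function names and the two-digit-year windowing rule below are invented for this sketch, in the spirit of the Y2K example above):

```typescript
// The corner cut: assume two-digit years, because it's quicker to write today.
function parseYearQuickly(raw: string): number {
  return 1900 + Number(raw); // "99" -> 1999, but "05" -> 1905: the bug shows up later
}

// The work done now: validate the input and handle both 2- and 4-digit years
// using a (hypothetical) pivot window.
function parseYear(raw: string, pivot = 50): number {
  const value = Number(raw);
  if (!Number.isInteger(value) || value < 0) {
    throw new Error(`Invalid year: ${raw}`);
  }
  if (raw.length === 4) {
    return value; // already a full year
  }
  return value >= pivot ? 1900 + value : 2000 + value; // two-digit window
}
```

The quick version saves a few keystrokes today; the second one does the work now, while it’s still cheap, instead of years later when the data has already spread through the system.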
For example, changing the functionality of our system is nearly free if we do it before we do ANY work. The more work we do (design meetings, documents, and ultimately code), the more expensive changes become. The most expensive changes are the ones we make years after a system has entered production. Whether bug fixes or new features, these changes are invariably orders of magnitude more expensive.
Now, the reality is that we can easily spend lots of time optimizing code for readability and quality today and never benefit from that work. We may never touch that system again. It may never have a bug. And unfortunately, we can’t look into the future and see which bugs we will fix and which changes we will make over the entire course of our application’s lifetime. If we could, we could also pay car insurance only in the months we get into accidents, and start a life insurance policy only in the month we die (how morbid would that be?).
Even though we can’t guarantee that everything we do now for quality will benefit us in the future, the cost is many, many times greater if we do that work later on. That means we can spend a lot of time improving quality now and still end up ahead over the long run.
So the next time you’re tempted to cut a corner, skip unit tests, or leave a less-readable algorithm unrefactored, consider when you want to do the work, and how much of it you want to do. The work we do today to improve the quality of our code (regardless of the specific methods we use) is work we don’t have to do tomorrow.
Don’t forget to check out all our awesome courses on JavaScript, Node, React, Angular, Vue, Docker (one of my favorites), etc.
Happy Coding!
Enjoy this discussion? Sign up for our newsletter here.
Visit Us: thinkster.io | Facebook: @gothinkster | Twitter: @GoThinkster