Never underestimate the power of smoke tests
I would love to always have 100% coverage with automated testing. But here’s the thing: there will always be tension between shipping new features and writing tests. So in my mind, writing tests is a value proposition. What’s the cost of a bug versus the cost of maintaining hundreds or thousands of tests? The answer will be different depending on the company you work for and the product you’re pushing. There’s no absolute answer; it’s very contextual.
Opinion alert: Smoke tests provide the best cost/benefit ratio.
What do I mean by a smoke test? A smoke test imports the component(s) you’re trying to test and instantiates them in some kind of runtime environment that mimics the real production environment. The component can be something very simple, a page, or an entire application. The test then checks whether the component can perform its most basic function. Usually that just means it can display an input element or something. The key is that the test should fail if there are any runtime errors.
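Here’s a minimal sketch of that idea. `LoginForm` is a hypothetical component that renders to an HTML string; in a real app it would be a React or Vue component mounted with your framework’s test utilities, but the shape of the test is the same: run it, and fail on any runtime error.

```typescript
// Hypothetical component: in practice this would be imported from your app,
// which is exactly where a bad import path or broken dependency would blow up.
function LoginForm(): string {
  return `<form><input name="email" /><button>Sign in</button></form>`;
}

function smokeTest(name: string, render: () => string): void {
  // Any runtime error thrown by render() fails the test outright.
  const html = render();
  // The only real assertion: it rendered *something* with the basics in it.
  if (!html.includes("<input")) {
    throw new Error(`${name}: rendered output has no input element`);
  }
  console.log(`PASS: ${name}`);
}

smokeTest("LoginForm renders", LoginForm);
```

That’s the whole test. It doesn’t check behavior or edge cases; it just proves the code loads and runs end to end.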
At first glance, a smoke test looks like it doesn’t do much. But so much can go wrong in one. What if one of the import file paths got renamed incorrectly during a refactor? What if a Git merge conflict got resolved incorrectly? What if one of your dependencies is now incompatible with another because you decided to upgrade one of them? Made some brilliant tweaks to your build tool config? A lot of stuff can break in the most mundane ways.
Yes, 100% test coverage would address all these issues, but it comes with a cost: maintainability. Anytime you refactor, you will break tests. I know that’s the point, but how often do you fix false positives versus real failures? Again, what’s the cost of a bug in production? How over-tasked are the developers? It’s a constant balancing act.
Smoke tests are cheap and durable. One takes something like 30 minutes to an hour to write. The only false positives I’ve encountered are when the code gets removed or heavily refactored, and fixing those is quick too: usually it takes just a few minutes to figure out what’s wrong and we’re off to the races. And because smoke tests are so cheap, it’s easy to ensure 100% of the code gets exercised in a runtime so nothing falls flat on its face. It’s similar to how statically compiled languages are at least runnable after compilation (I know that’s a generalization).
I’m not trying to say writing comprehensive tests is a bad idea. I’m just saying it depends on the situation. It’s up to the team to figure out what works best, and it’s important to be honest with each other. Many software engineers like to tout how immaculate their test coverage is, but in reality, if it takes a long time to release a new feature because the team is buried in maintaining tests, then it doesn’t matter. Most companies have a QA process that provides another layer of protection as well. Software development isn’t a utopian society where everything is perfect. It’s a cold and it’s a broken Hallelujah. Ok, maybe I’m being a little dramatic.