Got code quality issues? Stop looking at dashboards and start failing your build
There are a lot of tools out there for monitoring the code quality of your solution using static code analysis and then displaying a list of issues to fix. Many come with pretty graphs and dashboards.
I’ve used several of these tools on recent projects. The concept is appealing: you can monitor your code quality and take action if the number of issues rises above a threshold. However, I’ve never found this approach to be effective in a team environment.
On most development projects, developers are focused on completing functionality as fast as possible. They have scrum masters and product owners beating the drum on this every day. Unless you have a process to enforce it, my experience is that developers will ignore lint warnings and errors every time, rationalizing that they can fix them later, after their features have begun testing.
The problem is that “later” is a horrible time to resolve lint issues. The functionality has likely already passed regression testing, and the details of the code are no longer fresh in the developer’s mind. Changing code at this point presents the risk of unexpected side effects.
The only way I’ve found to actually ensure consistent and safe resolution of code quality issues is to stop them before they start — by failing the continuous integration build if they’re present. Do not let code with new quality issues get into the environment where testers can see it. As long as you preserve this barrier, developers will be forced to resolve the issues prior to calling their features done. There’s no way to cut corners.
I recommend not distinguishing between warnings and errors, either. The vast majority of lint errors can be resolved in a few minutes’ time, so there’s no reason to let even minor issues through. If you consistently don’t care about certain warnings, just turn them off in your configuration file. You can apply the same concept to test coverage: don’t embark on test-writing sprees when your coverage drops too low. Instead, set a minimum coverage threshold and fail the continuous integration build if it’s not met.
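As a sketch, a CI quality gate for a JavaScript project might look like the script below. The ESLint and Jest flags are real, but the threshold values and project layout are assumptions; adjust for your own stack.

```shell
#!/bin/sh
# Hypothetical CI quality gate. Any nonzero exit code fails the build.
set -e

# Treat warnings as errors: even one warning makes ESLint exit nonzero.
npx eslint . --max-warnings 0

# Fail the build if coverage falls below the (illustrative) thresholds.
npx jest --coverage --coverageThreshold='{"global":{"lines":80,"branches":80}}'
```

Because of `set -e`, the script stops at the first failing check, so the build fails fast instead of running the whole suite on code that already flunked lint.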
You can make the process easier by using tools that fix issues on file save, such as IDE plugins for Prettier and linters. Here’s the key, though: don’t rely on IDE red underlines alone to alert developers. You should also check for issues in a pre-commit or pre-push hook so nothing sneaks through and fails the build. Consistent enforcement on developers’ local machines is critical; otherwise people will complain about the build constantly failing and advocate for turning off the lint checks there.
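One possible sketch of such a hook, saved as `.git/hooks/pre-commit` and made executable: it lints only the staged JavaScript/TypeScript files (tools like husky and lint-staged wrap the same idea with less manual setup).

```shell
#!/bin/sh
# Hypothetical pre-commit hook: lint only the staged .js/.ts files.
# Exiting nonzero aborts the commit.
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(js|ts)$' || true)
[ -z "$files" ] && exit 0   # nothing lintable staged; allow the commit

# Word splitting on the file list is intentional here.
npx eslint --max-warnings 0 $files
```

Linting only staged files keeps the hook fast, which matters: a slow hook is the first thing developers will look for excuses to bypass.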
(Hound is an interesting middle ground here — it enforces the rules as part of the pull request. That’s better than nothing, but enforcing on developers’ local machines and then again in the CI build should do the job just fine. If style issues are appearing in your pull requests even though you’re enforcing locally, the more efficient feedback is to tell the developer to stop committing with --no-verify, not to comment on every individual issue.)
If you have a lot of existing code with lint issues, I recommend either A) outputting the current number of errors per file and writing a build-time script to confirm that number hasn’t increased in any file, or B) adding disable/ignore comments to each individual error along with a TODO to evaluate it. Option B is more effort up front, but it allows more rigorous enforcement of lint rules against new code. Do NOT just disable lint for entire files.
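A minimal sketch of option A, assuming you can dump per-file error counts into lines of the form “path count” (the file names, formats, and demo data here are all illustrative):

```shell
#!/bin/sh
# Sketch of option A: fail the build when any file's current lint error
# count exceeds the count recorded in a committed baseline file.
# Both files hold one "path count" pair per line.

check_baseline() {  # usage: check_baseline baseline.txt current.txt
  rc=0
  while read -r file count; do
    # Look up this file's recorded count; files absent from the baseline get 0.
    base=$(awk -v f="$file" '$1 == f {print $2}' "$1")
    base=${base:-0}
    if [ "$count" -gt "$base" ]; then
      echo "FAIL: $file has $count lint errors (baseline $base)" >&2
      rc=1
    fi
  done < "$2"
  return $rc
}

# Demo: src/b.js regressed from 0 errors to 2, so the check fails.
printf 'src/a.js 3\nsrc/b.js 0\n' > baseline.txt
printf 'src/a.js 3\nsrc/b.js 2\n' > current.txt
if check_baseline baseline.txt current.txt; then
  echo "baseline ok"
else
  echo "baseline exceeded, failing the build"
fi
```

The important property is the ratchet: counts can only go down. When a developer fixes old errors, regenerate the baseline file and commit it so the new, lower numbers become the ceiling.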
One final note here is that tools vary in how straightforward it is to run them at build-time. Most linters (e.g. eslint, PMD, FindBugs) make this easy, but with SonarQube it’s surprisingly difficult. If your tool doesn’t support build-time enforcement, I recommend switching tools.