Coding with strict inspections and no warnings

Ivo Miller
Published in Ingeniously Simple
Jan 7, 2020 · 6 min read

This is a quick account of how we got our large C# code base down from a few thousand ReSharper warnings to zero, and of the benefits of having done so.

Motivations

There are many motivations for setting a strict set of code inspections and abiding by them. Here are a few:

- It promotes consistency. Developers should feel equally at home across different areas of the code base.

- It helps eliminate niggly bugs. IDEs are increasingly capable of highlighting code which is probably erroneous or may lead to subtle null references. It is worth paying attention to this (a small sketch follows this list).

- It can help promote timely refactoring. If, after adding some new code, a method produces a warning for some complexity metric, that is probably a good time to rework it slightly, before it degrades further.

- It helps identify redundant code. There is significant cost in maintaining code which is unused.

- It can be a good source of learning for new developers.
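
To make the bug-catching and dead-code points concrete, here is a small invented sketch (the class and its members are hypothetical) showing two things a strict inspection profile will flag: a possible null reference and a member that nothing in the solution uses.

```csharp
using System.Collections.Generic;
using System.Linq;

public class CustomerLookup
{
    private readonly Dictionary<int, string> _namesById = new Dictionary<int, string>();

    // FirstOrDefault returns null when nothing matches, so the Trim() call
    // below is flagged as a possible NullReferenceException.
    public string FindNameStartingWith(string prefix)
    {
        var match = _namesById.Values.FirstOrDefault(n => n.StartsWith(prefix));
        return match.Trim();
    }

    // Never called anywhere in the solution: solution-wide analysis highlights
    // it as unused, making it an easy candidate for deletion.
    private string DescribeForDebugging() => string.Join(", ", _namesById.Keys);
}
```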

Even better when there are no longer any warnings

If you can fix all such warnings in the code base and fail the build on any new warnings appearing, the benefits are even greater. All of the benefits above apply before new code makes it to master (and production). Additionally, developers tend to be far more motivated not to add new warnings, as the difference between zero warnings and two warnings seems dramatic, whereas the difference between two thousand warnings and two thousand and two does not. It is better not to live with broken windows.

Choosing a technology

Getting developers to care about inspections can occasionally be a battle. This is exacerbated if multiple technologies are being used to track code style or quality. In our case we use both ReSharper inspections and SonarQube. It is unfortunate to be using both technologies side by side, especially as they often overlap in the inspections they report. That said, they each provide sufficiently distinct benefits that we have continued to use both.

ReSharper contains a powerful array of inspections which integrate well with the IDE (whether Rider or Visual Studio) and hook easily into the build steps on the build server. It is straightforward to fail the build if the number of inspections goes up. It lacks some of the more sophisticated inspections for analysing class and method length, complexity and so on, which is slightly surprising since IntelliJ offers all of these inspections for Java.

SonarQube does offer these inspections and it also has the handy feature of being able to add comments to GitHub pull requests, picking up on problems in the new code affected by the pull request. But it does not integrate so nicely with the IDE or the build server. Issues are discovered after pushing the code rather than as the code is typed. It is also frustrating that warnings cannot be suppressed in-code. Warnings should rarely be suppressed but it is sometimes appropriate, and any time a class gets refactored, a suppressed SonarQube warning will reappear.
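
For comparison, a ReSharper warning can be suppressed in the source itself with a comment, so the suppression moves with the code when it is refactored. The inspection name and method below are purely illustrative:

```csharp
public class StartupTasks
{
    // ReSharper disable once UnusedMember.Global
    public void InvokedOnlyViaReflection()
    {
        // Looks unused to static analysis, but is called by name at runtime.
    }
}
```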

We have reduced our ReSharper warnings down to zero for our project but still have outstanding SonarQube issues. This is mostly because a reasonable number of the remaining SonarQube issues are quite hard to fix, but also because we have to track the SonarQube issues by going to a separate website from the build server, and we remember to check it less often. We do hope to reduce the SonarQube issues down to zero eventually as well.

Building motivation and getting everyone on board

In any event we initially had thousands of issues in both tools. For a while it was difficult to get people motivated to do much about it, because there were just so many, and fixing them was quite tedious work. The build server UI for examining the inspections across the solution also did not scale well to that many warnings.

The first step was to prevent the problem getting worse. We added a build failure condition so that our builds would fail if the number of warnings increased. This occasionally produced some irritating side effects: a branch which increased the number of warnings and then reduced them again, to a figure still higher than master, could still build green; once that branch was merged, everyone else's branches would fail, because the number of warnings had increased on their rebased branches. This was an annoyance, but we agreed it was still better than not enforcing the rule.

We initially just worked on the boy scout principle, fixing the issues in the area we were working in. On the back of that, the real way to build momentum was to demonstrate that significant progress could be made in reasonably short order. Starting with the simplest types of inspection which could be fixed by the IDE automatically with a “fix all instances of this inspection in the solution” command, the number of warnings could be dramatically reduced.

Many hands make light work

After a few initial pull requests which made significant progress, several developers across the team contributed to fixing the remaining issues, parcelling them out by type. It turned out not to take much time at all.

Incremental or bulldozer approach?

When seeking to abide by a strict set of inspections, the question arises whether to turn on all inspections and seek to fix them all, or whether to incrementally switch on inspection rules and then seek to apply them. The biggest cost of not having rules switched on is that new code will not abide by them, but it can be bad for motivation to have too many inspections which are difficult to fix.

I think we hit upon a fairly good middle ground. We started with all the default ReSharper warnings and made sure to fix those. We are now able to incrementally add additional inspections as warnings, one by one, and seek to fix them across the solution.

In terms of fixing the inspections, it was much easier to review the pull requests if each commit fixed a particular type of inspection across multiple files, rather than many types of inspection across fewer files. Some commits required careful review, while those fixed purely with IDE automation could merely be glanced over.

Solution-wide analysis and multi-targeting

Having fixed all the inspections, we were able to replace the slightly problematic build server failure condition (fail if the number of warnings increased) with an explicit failure if the number of warnings was greater than zero. This was far more satisfactory.
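
A zero-warnings gate does not need much machinery. The sketch below assumes the XML report produced by ReSharper's InspectCode command-line tool, in which each reported problem appears as an <Issue> element; the report path is hypothetical, and the non-zero exit code is what actually fails the build step.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

public static class WarningGate
{
    public static int Main()
    {
        // Assumed location of the InspectCode XML report produced earlier in the build.
        var report = XDocument.Load("inspectcode-report.xml");

        // Count every reported issue, whatever its severity or type.
        var issueCount = report.Descendants("Issue").Count();
        Console.WriteLine($"Inspection issues found: {issueCount}");

        // Any issue at all fails the build; a non-zero exit code does that on most CI servers.
        return issueCount > 0 ? 1 : 0;
    }
}
```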

One of the biggest benefits of fixing these inspections has been the ability to immediately identify unused code following refactoring or deletion of existing code. This is provided by ReSharper's solution-wide analysis feature. Unfortunately we subsequently had to turn this feature off for a long time because of multi-targeting. In making some of our projects compatible with .NET Core, we made them multi-targeted. ReSharper quite reasonably treats a multi-targeted project's code as two separate code bases (as does dotCover). Unfortunately, if the .NET Core code calling a multi-targeted library does not call all the same classes as the .NET Framework code, there can be a number of false positives in the inspections. Moving away from multi-targeting and towards .NET Standard addressed this issue.
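
For context, the difference shows up in the project file: a multi-targeted project lists several target frameworks, each of which ReSharper analyses separately, while a .NET Standard library has a single target. The framework monikers below are illustrative.

```xml
<!-- Multi-targeted: each framework in the list is analysed as a separate code base. -->
<PropertyGroup>
  <TargetFrameworks>net472;netcoreapp3.1</TargetFrameworks>
</PropertyGroup>

<!-- Single .NET Standard target: one code base, so no duplicated or conflicting inspections. -->
<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>
```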

We have been very glad to have reduced our code base to zero warnings. It reaps constant benefits, and having got it there, it is easy to keep it there. With the release of every new version of ReSharper, we can immediately update to fix new inspections, and we can quickly adapt our code base to shifts in company code style and coding standards.
