Improving Code Quality with SonarCloud
While we all strive to improve the quality of our code, assessing the results of that effort remains a challenge. SonarCloud provides a standardized way to measure code quality and the progress of your team.
The Tool: SonarCloud
SonarCloud is the hosted member of the Sonar tool suite, which also includes SonarQube (the self-hosted server) and SonarScanner (a family of scanner agents: a generic CLI plus dedicated ones for Maven, Gradle, and so on). SonarCloud lets you analyze your code through an appropriate SonarScanner agent and easily record the results in a centralized location. Your team can then use a quick and simple interface to evaluate their code and make informed decisions on how to proceed. Isn’t that exciting?
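For the generic agent, a project is typically described by a sonar-project.properties file at the repository root; a minimal sketch, where the project and organization keys are hypothetical placeholders:

```properties
# sonar-project.properties, read by the SonarScanner CLI
# (project and organization keys are placeholders)
sonar.projectKey=my-org_my-project
sonar.organization=my-org
# source directories to analyze, relative to this file
sonar.sources=src
# where to send the results
sonar.host.url=https://sonarcloud.io
```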
SonarCloud revolves around projects: essentially, units of code built from a single code base. The code of each project is analyzed by an appropriate agent to identify issues; the results are then sent to SonarCloud and grouped by branch. The general expectation is that short-lived branches will be merged into a specific long-lived branch; this allows the interface to assess a branch’s impact on the existing code and helps you decide whether a feature branch is mature enough to be merged.
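How does the scanner know which branch it is analyzing? Through an analysis parameter; a minimal sketch, assuming Bitbucket Pipelines (BITBUCKET_BRANCH is the variable Bitbucket sets, sonar.branch.name comes from the branch-analysis docs):

```sh
# tell the scanner which branch this analysis belongs to
sonar-scanner -Dsonar.branch.name="$BITBUCKET_BRANCH"
```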
The distinction between long- and short-lived branches also helps reduce noise: code quality graphs take only the former into consideration, visualizing improvements only where it matters. The graphs can be customized to select specific measures and to use the version naming of your choice.
Surely you have spent many hours writing appropriate tests for your code. SonarCloud can parse your test reports, providing a good estimate of your code coverage. Depending on the agent, the tests are either run as part of the analysis or you will have to provide the reports in an appropriate format.
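Note that the generic agent does not run your tests; it only reads the reports your test run produces. A hedged properties sketch, assuming a JaCoCo XML report for a JVM project and an LCOV report for a JavaScript one (the paths are illustrative defaults):

```properties
# point the scanner at coverage reports produced by your own test run
sonar.coverage.jacoco.xmlReportPaths=build/reports/jacoco/test/jacocoTestReport.xml
sonar.javascript.lcov.reportPaths=coverage/lcov.info
```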
The Idea: Code Quality
All of this is well and good, but there is an unavoidable question: what is code quality? Surely it is a matter of perspective and use case; how can a one-size-fits-all solution exist?
SonarCloud provides two levels of abstraction to make it easier to tailor its rules to your needs. Quality profiles are closest to the code: they group the sets of rules that flag individual warnings or evaluations. Quality gates are more abstract: they group conditions on the number and severity of those warnings and return a uniform evaluation of code quality. In other words, quality profiles define what a violation is; quality gates define how many violations are considered acceptable. While quality profiles are necessarily language-specific, quality gates depend only on the results of quality profiles and can be used as a common frame of reference to evaluate projects in different languages.
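Because the gate verdict is uniform, it is also easy to consume programmatically. A minimal sketch against the SonarCloud Web API, assuming a hypothetical project key and a user token in SONAR_TOKEN (passed as the basic-auth username):

```sh
# ask for the quality gate verdict of a project;
# the JSON response carries projectStatus.status, either OK or ERROR
curl -u "$SONAR_TOKEN:" \
  "https://sonarcloud.io/api/qualitygates/project_status?projectKey=my-org_my-project"
```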
Still, does it really work? Can a set of rules replace careful review and mindful analysis? That is obviously not the point: the tool works very well out of the box at highlighting possible issues. The maintainers of the project then need to go through the flagged items and assess their actual severity. After that, SonarCloud allows the user to modify the initial evaluation (change the severity, mark it as a false positive, reclassify it as an issue of a different kind, open a ticket…); such a change is saved and tracked, so the issue will not resurface when the same code propagates across branches.
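Such reclassifications can also be scripted through the Web API’s issue transitions; a hedged sketch, assuming an issue key stored in ISSUE_KEY:

```sh
# mark an issue as a false positive so the verdict sticks across analyses
curl -u "$SONAR_TOKEN:" -X POST \
  -d "issue=$ISSUE_KEY" \
  -d "transition=falsepositive" \
  "https://sonarcloud.io/api/issues/do_transition"
```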
Overall, code quality is graded according to five core parameters:
- Bugs: issues that can make your code unreliable
- Vulnerabilities: issues that can put your code at risk of attack
- Code smells: issues that make your code harder to read
- Coverage: how much of your code is covered by tests
- Duplications: how many times a bit of code is repeated across the code base
Depending on your needs, you might want to weight some of them more than others.
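Each of the five parameters is backed by a metric key in the Web API, so you can also pull the raw numbers and apply your own weighting; a sketch with a hypothetical project key:

```sh
# fetch the raw measures behind the five quality parameters
curl -u "$SONAR_TOKEN:" \
  "https://sonarcloud.io/api/measures/component?component=my-org_my-project&metricKeys=bugs,vulnerabilities,code_smells,coverage,duplicated_lines_density"
```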
The Implementation: Pipeline Integration
To get meaningful results, the code scan needs to run quite often. While integrating it into the pipeline seems like a great idea, it may come at the expense of pipeline runtime. We found that running a scan for every commit on every branch was a little too much for us, so we opted for a model where scans run in two cases (a pipeline sketch follows the list):
- for each branch that has an open pull request, because the result of the scan will be necessary to evaluate the final impact on the code base; and
- for each commit on the master branch, because those commits should mostly be branch merges and the results are important for historical evaluation.
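As a rough sketch, this model maps onto a bitbucket-pipelines.yml like the one below; the image name stands in for our custom scanner image, and SONAR_TOKEN is assumed to be configured as a secured repository variable:

```yaml
# bitbucket-pipelines.yml (sketch): scan pull requests and master only
pipelines:
  pull-requests:
    '**':                                      # any source branch with an open PR
      - step:
          name: SonarCloud scan
          image: sonarsource/sonar-scanner-cli # stand-in for a custom image
          script:
            - sonar-scanner
  branches:
    master:
      - step:
          name: SonarCloud scan
          image: sonarsource/sonar-scanner-cli
          script:
            - sonar-scanner
```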
Integrating the SonarScanner generic agent into our Bitbucket pipelines was quite easy: we simply added a step that runs from a custom Docker image containing only the agent itself. This way we managed to support most of our projects without having to modify our CI images. Even where the generic agent was not an option (Gradle) and we had to make a couple more changes to our build process, the installation was not particularly challenging; there, the installation and configuration of the scanner were managed through the task definitions of the build tool itself.
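For Gradle, the scanner ships as a build plugin configured in the build script itself; a minimal sketch in the Kotlin DSL (plugin version and keys are illustrative):

```kotlin
// build.gradle.kts (sketch): the org.sonarqube plugin replaces the generic agent
plugins {
    id("org.sonarqube") version "4.4.1.3373" // illustrative version
}

sonar {
    properties {
        property("sonar.projectKey", "my-org_my-project") // hypothetical key
        property("sonar.organization", "my-org")          // hypothetical organization
        property("sonar.host.url", "https://sonarcloud.io")
    }
}
```

The analysis then runs as an ordinary task, e.g. ./gradlew sonar.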
The biggest challenge we faced was integration with monorepos. The entire ecosystem seems to be built around the paradigm of one repository, one project; if your team uses a monorepo, some integrations will not be available (e.g. the Bitbucket widget) and some additional scripting will be necessary if you do not want to scan every project each time a scan is triggered.
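The extra scripting can stay fairly small; a hedged shell sketch that scans a sub-project only when its files changed on the current branch (the directory name and project key are hypothetical):

```sh
# scan service-a only if this branch touched files under service-a/
if git diff --name-only origin/master...HEAD | grep -q '^service-a/'; then
  (cd service-a && sonar-scanner -Dsonar.projectKey=my-org_service-a)
fi
```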
The Evaluation
Despite some minor downsides, our overall impression is that SonarCloud provides useful insight into how well your team is progressing towards a more secure, more maintainable code base and, as such, is a very valuable addition to a developer’s toolbox.