Code quality: measuring and improving it

In this article I’ll explore some ideas about code quality and share the story of how we are trying to improve it at Devgurus. There are many ways to increase the quality of our code, but we are going to focus on things we can measure and implement through automated tools. Before we do that, though, let me try to convince you that it is actually a good idea to push for better quality code whenever possible, even when time is tight.

The disciplines that lead to successful software are always valid, no matter what phase the company is in. It is laughable to think that good disciplines are less important during the start-up phase. The truth is that, during the start-up phase, those disciplines are just as critical as they are at any other time.
Robert C. Martin — Clean code blog.

Here we go… high quality code is reliable. Static code analysis tools implement many patterns that can detect code likely to contain mistakes; these issues can be a present source of failures or leave the door open to unexpected failures in the future.

High quality code is easy to read, understand and change. This greatly lowers how long it takes developers to become familiar enough with it to start making valuable changes, and makes those changes easier to carry out. Another way of looking at this is to say that a developer’s code is not really theirs, since other developers are almost certain to interact with it eventually. We should be considerate to these future developers, and think twice before implementing that crazy obscure solution we came up with just for the sake of time. Static code analysis tools can help here by measuring the complexity of our code and giving us a maintainability rating.

High quality code is tested. Code level tests absolutely pull their weight and should not be ignored; this practice, implemented correctly, will greatly increase the confidence of developers in their work. A common argument against code testing is that it takes too much time, but I believe this to be a function of how little of it we actually do. If we truly integrate it into our workflows and become proficient at it, this shouldn’t be the case. As a piece of advice: if we find that writing tests for our code takes too much time or is too difficult, that probably reflects badly on our code and not on the practice of testing. Static code analysis can help here by reporting our code coverage in a centralized manner.

Having said that, you would think that most developers are constantly looking for ways to keep quality high, but in my experience that is not the case (I include myself here), even when the problems are detected by an automated tool. Bugs and issues reported by automated tools are all too easy to ignore, since we generally don’t see their impact right away. Do that enough times and soon you’ll be ignoring them all.

There are no simple answers or tips on how to overcome this shortcoming most of us have. It’s a matter of tightening up our processes and owning up to the fact that quality is everyone’s responsibility, regardless of our job title.

Before we implement any static analysis tools we should set our team up for success; otherwise there’s really no point in measuring things like code quality. That means establishing good practices and standards, providing training and making it clear to the team that quality is a priority.

There are also plenty of things that can improve code quality that are not listed in this article, since they are not its focus. One example is code reviews: their usefulness can’t be overstated, and we at Devgurus consider them mandatory for every pull request, regardless of size or perceived complexity.

Static code analysis tools — Sonar

In our case we implemented a fairly well known tool, SonarCloud, the cloud based offering of Sonar.

It is impossible to do a true control test in software development, but I feel the success that we have had with code analysis has been clear enough that I will say plainly it is irresponsible to not use it.
John Carmack — CTO at Oculus VR, founder of Armadillo Aerospace.

Below is a short list of some of the cool things a tool of this sort can offer.

Reliability

This is a measure of how buggy the tool judges our code to be. The analyzer detects possible failure points and flags them for us. These types of issues are important and should be resolved first, since they expose points of failure: our code is not simply low quality, but potentially broken.

Maintainability (code smells)

Code smells are maintainability related issues that may cause problems in the future and contribute to the technical debt of our codebase. Technical debt is measured as the estimated time it would take to fix all code smells, and it feeds into the maintainability rating of the project. The maintainability rating, in turn, is based on the ratio of technical debt to the estimated time it would take to develop the project from scratch.
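To make the relationship between debt and rating concrete, here is a sketch of the calculation. The thresholds and the default cost of 30 minutes per line of code follow SonarQube’s default quality model as I understand it; treat them as indicative rather than authoritative:

```javascript
// Sketch of how a Sonar-style maintainability rating is derived.
// Thresholds and the per-line development cost are assumptions
// based on SonarQube's default quality model.
function maintainabilityRating(remediationMinutes, linesOfCode, minutesPerLine = 30) {
  // Estimated cost to develop the codebase from scratch.
  const developmentCost = linesOfCode * minutesPerLine;
  // Technical debt ratio: remediation effort relative to development cost.
  const ratio = remediationMinutes / developmentCost;
  if (ratio <= 0.05) return "A";
  if (ratio <= 0.1) return "B";
  if (ratio <= 0.2) return "C";
  if (ratio <= 0.5) return "D";
  return "E";
}
```

For example, two days of debt (960 minutes) on a 10,000-line project is a ratio of about 0.3%, which still rates an A; the rating degrades as debt grows relative to project size, not in absolute terms.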


Complexity

The degree of complexity of the code as a function of its cyclomatic and cognitive complexity.

Cyclomatic complexity is the number of linearly independent paths through our code, and it is commonly considered good practice to keep it under 10. This number is negotiable of course, but we should be mindful of it; if we’re going to ignore it, we’d better have good reasons to do so.
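As a quick illustration of how the metric is counted, cyclomatic complexity is 1 plus the number of decision points in a function. This hypothetical validator has five decision points (three `if`s plus the `||` and `&&` operators), giving it a cyclomatic complexity of 6:

```javascript
// Hypothetical example. Base complexity 1, plus one per decision point.
function validateUser(user) {
  if (!user) return "missing"; // +1
  if (!user.email || !user.email.includes("@")) { // +2 (the if and the ||)
    return "bad email";
  }
  if (user.age !== undefined && user.age < 18) { // +2 (the if and the &&)
    return "underage";
  }
  return "ok"; // total: 1 + 5 = 6
}
```

Every additional branch adds another path a test would need to cover, which is why the metric correlates with how hard a function is to test exhaustively.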

Cognitive complexity is an attempt at improving the above metric. It uses higher level rules to measure complexity rather than a plain mathematical model, assigning different weights to different code structures based on how hard they are for a human to follow, not simply on how many paths they diverge into. For example, a “switch” statement with 10 cases and a code structure with 10 “if” statements will have the same cyclomatic complexity but vastly different cognitive complexity, since the switch statement is much simpler to understand.
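Here is a small, made-up pair of equivalent functions that shows the difference. Under Sonar’s cognitive complexity rules, as I understand them, an entire `switch` costs +1 no matter how many cases it has, while each standalone `if` costs +1 on its own:

```javascript
// Cognitive complexity: 1 (the whole switch counts once).
function httpClassSwitch(status) {
  switch (Math.floor(status / 100)) {
    case 1: return "informational";
    case 2: return "success";
    case 3: return "redirect";
    case 4: return "client error";
    default: return "server error";
  }
}

// Same behavior, but cognitive complexity: 4 (one per if).
function httpClassIfs(status) {
  const hundreds = Math.floor(status / 100);
  if (hundreds === 1) return "informational";
  if (hundreds === 2) return "success";
  if (hundreds === 3) return "redirect";
  if (hundreds === 4) return "client error";
  return "server error";
}
```

Cyclomatic complexity would score these two almost identically, which is exactly the blind spot cognitive complexity was designed to fix.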

Security

These rules are a bit different from the rest. In general Sonar tries to limit the amount of false positives as much as possible, but with these rules that is not the case. Security issues should be taken as warnings and reviewed on a case-by-case basis to determine whether they require any action. These rules are based on the advice of two main sources: the SANS Top 25 and the OWASP Top 10.

Coverage

Percentage of unit test coverage as reported by our continuous integration. Sonar doesn’t really have anything more interesting to say about this metric than the percentage itself, and the coverage is reported by tools outside of SonarCloud. Standardizing minimum coverage percentages is a tricky business and there’s little agreement on it. One thing most people do agree on is that we shouldn’t enforce arbitrary coverage minimums, and should instead take a more deliberate approach. What that approach looks like may depend on many things about your own organization.
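If you do decide to enforce minimums in CI, test runners usually support it directly. As a hedged sketch, this is what it looks like with Jest’s `coverageThreshold` option; the numbers and the `./src/billing/` path are purely illustrative, not a recommendation:

```javascript
// jest.config.js -- illustrative thresholds only; pick values deliberately.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    // The build fails if global coverage drops below these percentages.
    global: {
      branches: 70,
      functions: 75,
      lines: 80,
      statements: 80,
    },
    // Stricter expectations for code we consider critical (hypothetical path).
    "./src/billing/": {
      branches: 90,
    },
  },
};
```

Per-directory thresholds like the second entry are one way to be deliberate about coverage instead of applying a single arbitrary number everywhere.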

Leak period

A cool feature the tool offers is the ability to see the analysis of only the code added during the “leak period”, an arbitrary time frame, as opposed to the complete lifetime of a repository. This simple feature lets us make sure that, regardless of the overall quality of our code, we do not keep adding technical debt to it. I find this particularly useful since it’s common to have projects with so much technical debt that tackling it feels like a daunting task, and no steps are taken to resolve any of it. With this workflow you can at least stop adding problems, and maybe in the future you’ll find the time to fix the existing ones.

We’ve got work to do!

Git integration

A good way to make sure developers are aware of the static code analysis report is to integrate Sonar with your git based repository. In our case we use Bitbucket, and the integration was fairly straightforward. This way the results of the analysis reach developers as soon as they create a pull request, before the code is merged into a main branch. This is useful for both the author and the code reviewers, and allows us to break the cycle of adding technical debt and then fixing it.

These are, of course, not all the features this tool offers, but they are a good starting point if you’re interested in learning more. For a more in-depth description you can refer to their documentation.

At Devgurus, we always invest in technology and offer the most innovative technologies to our clients. If you are interested in hearing more success stories, don’t hesitate to contact us: support@devgurus.io.

Follow us on Twitter and LinkedIn.