Continuous code quality and automated code review tools

Javier Ortiz Saorin
Published in Devgurus · Nov 15, 2017

SonarQube dashboard

Static code analysis is an effective way to get a good overview of a project’s code quality and to predict potential issues before they arise. The term ‘code quality’ is a bit vague in general, but in our context we can understand it as everything related to code consistency, readability, performance, test coverage, vulnerabilities…

This analysis can easily expose the areas of code that can be improved in terms of quality and, even better, we can integrate it into the development workflow and tackle these code quality issues in the early stages of development, before they even reach the main branches.

The idea is to add another stage to our Continuous Integration process, so that anytime we want to merge new code into the main branch via a Pull Request, our CI server (or a 3rd party service) runs this code quality analysis, attaches the result to the Pull Request and makes it available to the committer and the code reviewers.

At this point, we could define code quality goals per project that, if not met, cause the Pull Request to be marked as “not passed”. These goals should be set and agreed on by everybody involved in the development (developers, QA team, project manager…). For example, we could define that the new code has to have at least 80% test coverage and no code quality issues, otherwise the Pull Request is not successful (don’t get me wrong, unsuccessful doesn’t mean declined, but code reviewers should take this analysis into account when deciding whether or not to approve the Pull Request). And all of this automated!

Pull Request integration with Github

Furthermore, this analysis introduces a degree of objectivity into the code review process, reducing the number of comments and discussions between developers and, therefore, the time spent on code review.

Automated code review tools

So, having explained the Continuous Code Quality process, we need to choose the tool responsible for running the analysis based on a set of rules and thresholds. One thing to point out is that these tools don’t run your tests, so test coverage has to be provided externally: you have to run your tests (usually on your CI server) and send the coverage produced by your test reporter in an expected format (e.g. ‘lcov’ for JavaScript).
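As an illustration, here is a minimal sketch of that coverage step assuming Jest as the test runner (an assumption on my part; any runner with an lcov reporter works the same way). Running the tests with this configuration writes coverage/lcov.info, which is the file you then hand to the code review tool.

```javascript
// jest.config.js: a minimal sketch, assuming Jest as the test runner.
module.exports = {
  collectCoverage: true,
  // 'lcov' writes coverage/lcov.info, the format these tools usually expect
  coverageReporters: ['lcov', 'text-summary'],
  coverageDirectory: 'coverage',
};
```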

I have spent a few days testing the best-known ‘automated code review tools’ (this is how they are called) and I will give you a short summary of my personal experience with each one and the pros/cons I found. But before digging in, I would like to describe the requirements we were looking for at our company (obviously, they probably aren’t the same as yours).

Our basic requirements:

  • Full support for JavaScript, since it is the main programming language in our projects. Our main stack is Node.js in the backend; in the frontend we use Angular for older projects and React/Vue.js for new ones (depending on the particular project).
  • Integration with Bitbucket Cloud (our VCS service) in order to add inline comments and code quality checks in the Pull Requests
  • Good static code analysis with an extensive set of rules
  • Cloud-hosted. We want to focus on software development, not spend time maintaining a server/tool
  • Define code quality goals and fine-tune thresholds for each code quality measure (e.g. test coverage over 80%, no more than one critical issue)
  • Well documented

Nice to have:

  • Easy integration with Codeship (our CI service)
  • Integration with IDE/Text editors
  • Hotspots. An easy way to find the places we should focus on because they are a potential source of bugs
  • Library for uploading the test coverage result with ease

Code Climate

I don’t know if it’s just me, but lately, when I check the GitHub repositories of the libraries/frameworks that we use, I often see the Code Climate badge on them.

It has a very friendly and usable UI. Even if you’ve never used this kind of tool, you can easily understand the different figures and charts provided. The first time you log in, you see a dashboard showing all your projects with a summary of the maintainability and test coverage of each one.

Maintainability is graded from A to F according to various measures (mainly the number of code smells and code duplications).
Test coverage is also graded from A to F based on the overall percentage.

Project summary

Out of the box, the code analysis is not very accurate the first time you run it, at least in JavaScript. For example, it doesn’t distinguish a normal function from a factory function (a way to create new objects in JavaScript with functions), so it gives a lot of ‘false positive’ errors stating that the factory function exceeds the maximum number of lines of code, since it considers it a ‘normal’ function (there is no way to exclude factory functions from this check while keeping it for the rest of the functions, or at least I haven’t found one).
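To show the kind of code that trips this check, here is a minimal factory function sketch (the names are invented for the example): a single function that builds and returns an object can easily grow past a line-count limit even though it is perfectly idiomatic JavaScript.

```javascript
// A factory function: it builds and returns a new object on every call.
// To a plain line-count rule it just looks like one long 'normal' function.
function createUser(name, email) {
  let loginCount = 0; // private state kept in the closure

  return {
    name,
    email,
    login() {
      loginCount += 1;
      return `Welcome back, ${name}`;
    },
    stats() {
      return { loginCount };
    },
  };
}

const user = createUser('Ada', 'ada@example.com');
console.log(user.login()); // Welcome back, Ada
```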

By default, it doesn’t have more rules than the ones related to complexity (method count, file length, cognitive complexity, etc.) and duplicated code. Even though you can add more analysis plugins with more rules than the ones given out of the box, it’s worth mentioning that for JavaScript, in particular, the only two engines are ESLint and Node Security, and since ESLint is something you can easily integrate into your workflow and validate via your CI server, I wouldn’t consider this an asset of Code Climate.
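For reference, this is roughly what running ESLint yourself looks like; the rules and limits below are ones I picked purely for illustration, not Code Climate’s defaults.

```javascript
// .eslintrc.js: a minimal ESLint configuration sketch.
// The specific rules and thresholds are only illustrative.
module.exports = {
  env: {
    node: true,
    es6: true,
  },
  extends: 'eslint:recommended',
  rules: {
    complexity: ['warn', 10],        // cyclomatic complexity per function
    'max-statements': ['warn', 20],  // rough proxy for function length
    'max-depth': ['warn', 3],        // block nesting depth
    'no-unused-vars': 'error',
  },
};
```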

Regarding test coverage, it displays the coverage per file quite well, and you can even sort the files, so it’s easy to see which ones have poor test coverage.

It also provides different charts that show trends in your code (e.g. whether technical debt or test coverage is increasing or decreasing). I would highlight the one that plots Maintainability against Churn (files that change frequently). The files in the top right corner, that is, files with high technical debt and a high frequency of changes, are the most likely to generate bugs.

Maintainability vs Churn

It states that there is an integration with several IDEs/text editors such as Atom and Vim, but I haven’t tested it.

One thing to point out is that it doesn’t integrate with Bitbucket Cloud for Pull Requests, so it would be great if the Code Climate team provided this feature in the near future.

Additionally, it provides a library called cc-test-reporter for uploading your test coverage result. It’s simple and easy to use. It works like a charm.

Codebeat

This tool takes a pretty similar approach to Code Climate in terms of grading projects. The main difference is that Codebeat uses a ‘4.0 scale’ system instead of A to F grades.

Projects and their grades

Codebeat uses its own algorithm for analysing complexity (Code Climate uses well-known engines instead) and I would say it works pretty well by default.

Examples of things that are analysed, in addition to the common ones such as the number of functions, total lines of code, etc., are:

  • The depth of block nesting which normally means very complex functions.
  • The arity (number of arguments) of functions
  • Cyclomatic complexity
  • Number of returns within a function
  • Assignment Branch Condition (ABC), which considers the number of Assignments (=, +=, ++, --, …), the number of Branches (calls to other functions or class methods) and the number of Conditions (==, >=, <=, if, else, …) within a function (see the sketch below)
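As a rough illustration of how an ABC score adds up, here is a hand-counted sketch. The function names are invented, the tally is approximate and exact counting rules vary between tools; a common way to combine the three counts is the magnitude sqrt(A² + B² + C²).

```javascript
// Helper used by the example below.
function calculateShipping(order) {
  return order.items.length > 3 ? 0 : 5; // free shipping over 3 items
}

// Approximate ABC tally for applyDiscount (illustration only).
function applyDiscount(order, customer) {
  let total = order.subtotal;                 // Assignment 1

  if (customer.isVip && total >= 100) {       // Conditions: if, >=
    total -= total * 0.1;                     // Assignment 2
  }

  const shipping = calculateShipping(order);  // Assignment 3, Branch 1 (call)
  total += shipping;                          // Assignment 4

  return Math.round(total);                   // Branch 2 (method call)
}

// A = 4, B = 2, C = 2  =>  ABC ≈ sqrt(16 + 4 + 4) ≈ 4.9
console.log(applyDiscount({ subtotal: 120, items: [1, 2, 3, 4] }, { isVip: true })); // 108
```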

The algorithm for duplicated code is also pretty accurate. It differentiates between ‘similar’ code (same structure but different values) and ‘identical’ code (same structure and same values).
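A quick, made-up illustration of that distinction:

```javascript
// 'Identical' duplication: same structure and the same values.
function priceWithVatA(price) {
  return price * 1.21;
}
function priceWithVatB(price) {
  return price * 1.21;
}

// 'Similar' duplication: same structure but different values.
function priceWithReducedVat(price) {
  return price * 1.1;
}
```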

One thing that I really like is the ‘Quick Wins’ section, which lists the top 5 issues penalising your code quality grade, so you are likely to find a potential source of bugs or unmaintainable code there.

Quick Wins

But there are also things I don’t like. The most important one is that you can’t configure the thresholds for considering a Pull Request successful or not. Basically, if there is a new complexity issue, code duplication or a decrease in your test coverage, no matter how small, it will mark the Pull Request as failed. The other tools provide this kind of configuration, and it would be nice if Codebeat added it.

The other drawback is that its algorithm is closed and is what it is, which means you can’t add more plugins or rules. This is fine if the default algorithm works well for you, but if not, you can’t extend it.

Codacy

It has the best UI of all the tools analysed, with a very clean interface and good UX. It’s easy to find the most important information and easy to configure the main parameters.

After using it for a few days, you feel really comfortable with the tool and don’t get stuck looking for the information you need. You can tell that the Codacy team has invested in this area.

The code quality measures are grouped into 8 categories: code complexity, compatibility, error-prone, security, code style, documentation, performance and unused code. So at first it seems to provide a more detailed analysis than the other tools (although that’s not completely true for JavaScript).

8 code quality categories: code complexity, compatibility, error-prone, security, code style, documentation, performance and unused code

It also allows you to define goals for your projects, either per file (e.g. you want to reduce the number of issues or increase the grade of a particular file) or per category (e.g. improve security), and it recommends the steps to follow or the issues you have to tackle to accomplish these goals.

Regarding Pull Requests, you can configure multiple thresholds, giving the most advanced configuration you can find among these tools. It has the best integration with your VCS (i.e. GitHub, Bitbucket or GitLab).

Code quality thresholds configuration

With that being said, what’s wrong with Codacy if it seems to have everything we need?

Well, basically, its static code analysis for JavaScript is not as good as the other tools’. While the other tools reported several issues related to complexity in our projects, Codacy gave us no issues in almost all code quality categories, for all projects.

This has to do with the fact that, in order to check for code quality issues in JavaScript, it uses ESLint, which is something we had already included in our workflow, so we didn’t expect issues coming from it. It also has other rule engines such as PMD, but they seem to be outdated (e.g. the PMD version installed is not capable of reading ES6 properly, so we had a lot of ‘false positive’ issues).

It’s worth mentioning that it includes Hadolint for analysing your Dockerfile. 10 points here!

Duplicated code analysis works pretty well, but I don’t like that it isn’t considered when grading your project, so it seems Codacy doesn’t give it the importance it deserves.

In summary, Codacy has everything you would expect from a code review tool… but it wasn’t right for us, since it didn’t find any code quality issues in our JavaScript code (our code was perfect xD). However, I think Codacy could be the right choice for you if the code analysis works better for your programming language.

SonarQube

Every programmer has heard about this tool at least once in their life. I remember, while working at Capgemini, someone on the team saying: “Let’s check Sonar and fix the errors”. Ohh! Great memories :)

Project overview

It’s by far the most powerful code quality tool, with a lot of measures and filters, but that comes at the cost of a more complicated UI and configuration. It has so much information that you can easily find yourself lost among the figures and charts.
The set of rules is huge; for JavaScript, for example, it includes 186 rules of different types: code smells, bugs, vulnerabilities, etc.

Set of rules for JavaScript

The main difference between SonarQube and the other tools is that the code analysis runs externally on your CI server and the result is sent to SonarQube. This analysis is then processed by SonarQube and stored in a database in order to be served. That means extra effort in configuring your CI server.

SonarQube architecture and integration
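For a Node.js project, one way to run that external analysis from a CI step is the sonarqube-scanner npm package. The snippet below is only a sketch (the exact API varies by version), and the project key, paths and environment variable names are placeholders you would replace with your own.

```javascript
// analyse.js: a sketch of triggering a SonarQube analysis from a CI step.
// The project key, paths and environment variable names are placeholders.
const scanner = require('sonarqube-scanner');

scanner(
  {
    serverUrl: process.env.SONAR_HOST_URL, // your SonarQube/SonarCloud URL
    token: process.env.SONAR_TOKEN,        // analysis token stored as a CI secret
    options: {
      'sonar.projectKey': 'my-project',
      'sonar.sources': 'src',
      // coverage generated beforehand by your test runner
      'sonar.javascript.lcov.reportPaths': 'coverage/lcov.info',
    },
  },
  () => process.exit()
);
```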

Furthermore, you can configure Quality Profiles, which are the sets of rules you want applied in the analysis, and Quality Gates, which are the code quality thresholds. This last feature differs from Codacy in that SonarQube warns you about unmet Quality Gates afterwards, once your code is merged, and not in the Pull Request itself.

Nevertheless, the code issues found in the analysis can be added to your Pull Requests using a plugin. This plugin is something you have to configure on your CI server, so, again, extra effort compared with the other tools.

My overall perception of SonarQube is that it’s powerful but requires more effort to configure and integrate into your workflow (although if you use Travis as your CI server there is an add-on that makes this integration effortless).

In my opinion, it should apply the Quality Gates at the Pull Request stage. It doesn’t make sense to define Quality Gates and only be told that your project doesn’t fulfil the code quality thresholds once your code is already in the main branch.

And finally, one point that could make SonarQube your choice is that you can download it for free (thanks to its LGPL license) and host it yourself, or you can use their cloud service, SonarCloud.

Our choice

Honestly, we didn’t find the perfect tool for us based on our requirements. We finally had to weigh the pros and cons and decide which requirements were mandatory and which weren’t.

Having the code analysis information in the Pull Request is something we really wanted, because we believe in the Continuous Code Quality process. That made us discard Code Climate, since it doesn’t have this integration with Bitbucket Cloud (our VCS).

Codacy didn’t give us a good static code analysis for our JavaScript code. We already use ESLint, so we would have liked a different rules engine and different algorithms to assess our code quality. The main goal of a code review tool is actually reviewing, and we didn’t get that with Codacy, so we were forced to discard this option too.

Thus, we finally ended up with two options: Codebeat and SonarQube.

Neither of them has the feature we like of marking the Pull Request as unsuccessful when the defined thresholds aren’t met. However, Codebeat gave us a very good static code analysis out of the box, and we found that there wasn’t much difference from the one provided by SonarQube.

Since we didn’t want to host SonarQube ourselves (an asset for other companies, but not for us), and considering the time required to configure SonarQube and integrate it into Codeship (our CI server), our Devgurus QA Lead and I finally decided to move forward with Codebeat, even though SonarQube would have been a good choice too. Time will tell whether we chose well or not.

I hope this article helps convince you of the benefits of Continuous Code Quality and, if so, gives you a good overview of the different code review tools on the market.

At Devgurus, we always invest in technology and offer the most innovative technologies to our clients. If you are interested in hearing more success stories, don’t hesitate to contact us: support@devgurus.io
