How SonarQube made me cry?

Wassa Team · Published in Wassa · Jul 11, 2017 · 7 min read

Discover SonarQube

Sonar at Wassa

As Wassa grows, our needs for code reusability, maintainability and reliability are becoming stronger. Let's say it frankly: a dedicated tool became necessary, so we took a look at the tools on the market and decided to use SonarQube.
We then decided to share this feedback with our followers. Please enjoy.

What is SonarQube?

SonarQube is a tool that measures code quality. We talk about static analysis because it only looks at the code without running it.
It will whip you until you get readable, maintainable, beautiful… let's say perfect code.

You can follow who has written a bug and who is supposed to correct it; SonarQube identifies duplicated code, evaluates the documentation level, helps you follow best practices, estimates comment and unit test coverage, flags excessive complexity, analyses the architecture…
After that, SonarQube generates a beautiful (or, most of the time, awful) report about quality and its evolution over time.

SonarQube official site
SonarQube on Wikipedia

Install SonarQube

Installing SonarQube is not the main purpose of this article. One reason is that we chose to keep all the default settings, to start faster and understand the product.
But you should be able to find help on the Internet. Here is what we did:

Configure the first project

Since we chose not to use other tools in this first step of building our continuous integration system, and wanted to start with static code analysis alone, we do not have much tooling around SonarQube. First surprise: there is no UI to help with that. There is only a report management interface and many configuration pages, but when creating a project, nothing helps you set up the source locations (for example).
Too bad, we will have to get our hands dirty, update configuration files and run the command line to make SonarQube analyse something. That's the game.

We followed this tutorial (step 5).
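
To give a concrete idea of what "getting our hands dirty" means, here is a minimal sketch of a scanner configuration placed in a sonar-project.properties file at the root of the project. The project key, name and source folder are hypothetical examples, not our actual settings:

    # sonar-project.properties: minimal analysis configuration (hypothetical values)
    sonar.projectKey=wassa:my-first-project
    sonar.projectName=My First Project
    sonar.projectVersion=1.0

    # Folder containing the sources to analyse, relative to this file
    sonar.sources=src

    # Encoding of the source files
    sonar.sourceEncoding=UTF-8

    # Where the SonarQube server listens (we kept the default settings)
    sonar.host.url=http://localhost:9000

The analysis itself is then launched from the project root with the command-line scanner, which pushes its results to the server:

    sonar-scanner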

First project analyzed

The first analysis hurts!

It's true. It was expected, but it hurts anyway. Developers were frustrated, leads were ashamed, managers were disappointed. Let's take a break and look at our first report.

How to read the reports?

Everything is measured and summarized in a global score shown in an overview. In fact, we get several scores in various categories. At the finest granularity there are the issues; each one has its own tiny report, which contains several useful pieces of information:

  • The category (maintainability, reliability, security…)
  • The type (bug, vulnerability, code smell, duplication, complexity…)
  • The severity (critical, major, minor, info…)
  • The correction state (open at this point)
  • Who created the issue (they had better be ready for the whip)
  • Since when it has been active
  • How much time it takes to correct = the effort

OK, we got an E for the reliability score, but how much work is needed to get an A?

They have thought about that: you need to look at the technical debt! The technical debt is the amount of work needed to correct all the issues. It is measured simply by summing the effort of every issue; for example, 100 issues at 5 minutes each plus 10 issues at one hour add up to roughly 18 hours of debt. Elegant!

After calmly crying for a while and carefully studying the report… guess what?!

We wisely decided to improve our quality.

How to correct

Simple: take the items one by one and correct them!
For each issue, you have several possibilities:

  • Assign it to a developer who should correct it.
  • Mark it as a false positive
  • Mark it as ignored (it will not be corrected)
  • Mark it as done
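
As a side note, the same actions can also be scripted against SonarQube's web API instead of going through the UI. This is only a sketch: the issue key and login below are made up, the credentials are the server defaults, and we assume the standard api/issues endpoints of recent SonarQube versions:

    # Assign an issue to a developer (issue key and login are hypothetical)
    curl -u admin:admin -X POST \
      "http://localhost:9000/api/issues/assign?issue=AVxyz123&assignee=jdupont"

    # Flag an issue as a false positive
    curl -u admin:admin -X POST \
      "http://localhost:9000/api/issues/do_transition?issue=AVxyz123&transition=falsepositive"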

The issue report has a description explaining how and why the code is a problem, and generally suggests a way to correct it. Thus advised, the developer improves or refactors the code depending on the kind of issue; most of the time, simply following the indications provided by the issue itself is enough to solve the problem. We were surprised to see that the evaluations were accurate. Thanks to the community, I guess. In fact, when the estimated effort was 5 minutes, it generally took 5 minutes to make a patch.

Sometimes there were items that were more difficult to correct, such as complexity reduction. It was quite hard to fix errors like "the complexity of the method is 48, the maximum is 15". Method complexity comes from loops inside ifs inside loops inside an inline anonymous method inside an if inside a loop inside a callback. But once more, the evaluation of the work time was accurate and reliable (unlike our code). A sketch of this kind of refactoring is shown below.
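
As an illustration, here is a hypothetical Java snippet (not code from our projects, and the User class with isActive() and getEmail() is assumed) showing the usual cure: extract nested blocks into small named methods and replace nested conditions with early returns, so that each individual method stays under the complexity threshold.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class EmailCollector {

        // Before: deeply nested conditions and loops inflate the complexity of one method.
        List<String> activeEmails(List<User> users) {
            List<String> result = new ArrayList<>();
            if (users != null) {
                for (User user : users) {
                    if (user != null) {
                        if (user.isActive()) {
                            String email = user.getEmail();
                            if (email != null && !email.isEmpty()) {
                                result.add(email.toLowerCase());
                            }
                        }
                    }
                }
            }
            return result;
        }

        // After: an early return and a small extracted method keep each unit simple,
        // so the complexity SonarQube reports per method drops sharply.
        List<String> activeEmailsRefactored(List<User> users) {
            if (users == null) {
                return Collections.emptyList();
            }
            List<String> result = new ArrayList<>();
            for (User user : users) {
                if (hasUsableEmail(user)) {
                    result.add(user.getEmail().toLowerCase());
                }
            }
            return result;
        }

        private boolean hasUsableEmail(User user) {
            return user != null
                && user.isActive()
                && user.getEmail() != null
                && !user.getEmail().isEmpty();
        }
    }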

A few issues were simply ignored because correcting them would have meant heavy refactoring or a deep architecture upgrade. For these few issues, the estimated technical debt was stratospheric (days rather than minutes), but the fault lay with our bad architecture, not with the evaluation itself.

So, in the end, the whole technical debt evaluation is quite precise.

Get an A?

Yes, it is possible. After a few loops of report, correct, report, correct, report, correct (I could do it all night), we were able to reach the grail: an A score! Ta-da!

BUT IT'S ALIVE!
I heard you scream… please relax. The project is alive, it's a fact, and it will continue to evolve, mutate and grow. Developers will continue to use bad practices, write bugs (it's not a bug, it's a feature), forget comments and… simply do their job, after all!
Yes, that's true. But maintaining an A score is easier than you think, and especially easier than upgrading from E to A. The worst is behind us.

The technical debt grows slowly, and each commit should trigger a new analysis. Each analysis should be handled by the developers concerned. In parallel, developers learn from each issue, pick up better practices and adopt good habits, making the code better and better and SonarQube happier and happier.

Continuous integration

How mature is our project?

Is my maintainability stable over time?

Are my critical issues solved fast enough?

How many new issues have I had since last month?

Is my unit test coverage still correct?

SonarQube takes snapshots over time, so you can play the "spot the differences" game between snapshots taken at specific intervals. In addition, SonarQube can evaluate how the project lives: it proposes many time-based reports that show the curves of our project's lifetime. Since it is a measurement tool, it can provide a history of every measure it has taken, and the underlying database provides everything needed to filter, organise and query over many criteria, as you can imagine.
From this starting point, you can monitor code quality over time and hope to see the curves converge towards the best.
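
That history is also accessible outside the web interface. As a sketch, assuming the standard api/measures/search_history web service of recent SonarQube versions (the project key and metric list below are hypothetical, and the credentials are the server defaults), the evolution of a few measures can be pulled with a simple HTTP call:

    # History of technical debt (sqale_index), bugs and coverage for one project
    curl -u admin:admin \
      "http://localhost:9000/api/measures/search_history?component=wassa:my-first-project&metrics=sqale_index,bugs,coverage"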

We will probably write a further article in a few months to show how SonarQube has changed things for us, once we have more history on our project quality over a longer period. Explaining progress with such an indicator looks useful to us, and it could be used as a KPI.

Please see a few screenshots from various projects showing beautiful curves (even if our history is quite short).

What's next?

At Wassa we have no continuous integration environment yet, but we need this industrialisation to improve our efficiency. Today, our sysadmin still runs the analysis manually from the command line, which is quite annoying and not user friendly. Tomorrow we will automate the whole process and connect SonarQube to GitLab, so that an analysis runs after each commit or every night. For this job, we plan to use another very well known tool named Jenkins (a sketch of such a pipeline is shown below). It looks easy to integrate with our current technical environment, and in particular with SonarQube and the Elastic Stack (article coming soon).
Furthermore, we could connect many other tools such as Ant, Gradle, Jira, Artifactory…
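
Here is a minimal sketch of what the Jenkins side could look like, written as a declarative pipeline. It assumes the SonarQube Scanner plugin for Jenkins is installed and that a SonarQube server named "SonarQube" is configured in Jenkins; the GitLab repository URL is a made-up placeholder:

    // Jenkinsfile: run a SonarQube analysis on every change (sketch, hypothetical URLs)
    pipeline {
        agent any
        triggers {
            // Poll GitLab every 15 minutes; a webhook trigger would work too
            pollSCM('H/15 * * * *')
        }
        stages {
            stage('Checkout') {
                steps {
                    git 'https://gitlab.example.com/wassa/my-first-project.git'
                }
            }
            stage('SonarQube analysis') {
                steps {
                    // Injects the server URL and token configured in Jenkins
                    withSonarQubeEnv('SonarQube') {
                        sh 'sonar-scanner'
                    }
                }
            }
        }
    }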

But that’s another story, for future blog entries.

Do you want to know more about Wassa?

Wassa is an innovative digital agency expert in Indoor Location and Computer Vision.

Whether you are looking to help your customers find their way in a building, enhance the user experience of your products, collect data from your customers or analyze human traffic and behavior in a location, our Innovation Lab brings scientific expertise to design the solution best adapted to your goals.

