I am always thinking of ways to improve the codebase of the GumGum Sports platform. Because of this, I’ve developed a habit of jotting down areas of improvement for this Java code. This journal tends to grow faster than the rate at which I actually address these concerns. In a similar light, I find that code smells which slip through the cracks during the review process tend to linger in the codebase for some time. It’s difficult to balance time between addressing technical debt and building the features that move the product forward. But the codebase isn’t doomed to suffer for this fallibility. As with many of our human fallibilities, process, whether automated or otherwise, can be just the remedy. In this case, augmenting the build process with static code analysis is a crucial step toward resolving the issue.
I recently attended a meetup focused on static code analysis tools for Java and the code-quality gains they yield. I found the insights very useful, so I decided to experiment with the tools that were highlighted: PMD, CheckStyle, Jacoco, FindBugs, and SonarQube. Below, I detail my experience utilizing each of them as part of the build process for the Java-based Sports API project.
PMD — I was able to get feedback from this tool very quickly, as configuration was a breeze. It took just a few lines of Groovy code in the Gradle build script for the tool to start identifying problematic lines of code, such as improper use of BigDecimal, unused constructor parameters, or variables that should be declared final. These findings are printed to the console, and the presence of any “violation” forces the build to fail. Failing the build on this condition means the team strictly adheres to the standard of code quality the tool enforces. For what I’m trying to accomplish, this behavior is ideal.
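To illustrate, the Gradle wiring can be as small as the sketch below. The tool version and ruleset selection here are assumptions, not the exact configuration used on Sports API:

```groovy
// build.gradle — a minimal sketch of enabling PMD in a Gradle build
apply plugin: 'pmd'

pmd {
    toolVersion = '6.0.1'       // assumed PMD version; pin to whatever your team uses
    ignoreFailures = false      // any violation fails the build
    consoleOutput = true        // print violations to the console
    ruleSets = [                // example built-in ruleset categories
        'category/java/bestpractices.xml',
        'category/java/errorprone.xml'
    ]
}
```

With `ignoreFailures` left at `false`, `./gradlew check` breaks the build on the first violation, which is exactly the enforcement behavior described above.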
PMD comes equipped with a collection of predefined rulesets, plus the flexibility to create your own. Rulesets are defined in a straightforward XML schema, and it’s easy to exclude rules you may disagree with. PMD picks up on code-quality issues related to best practices, code style, design, documentation, error-proneness, concurrency, performance, and security.
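A custom ruleset that pulls in a built-in category while excluding a contested rule might look like the following sketch (the ruleset name and excluded rule are illustrative examples):

```xml
<?xml version="1.0"?>
<!-- pmd-ruleset.xml: example custom ruleset referencing a built-in category -->
<ruleset name="team-rules"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>Best-practices rules, minus one the team disagrees with</description>
    <rule ref="category/java/bestpractices.xml">
        <!-- opt out of a single rule from the category -->
        <exclude name="JUnitAssertionsShouldIncludeMessage"/>
    </rule>
</ruleset>
```

Pointing the Gradle `pmd` extension at this file via `ruleSetFiles` swaps it in for the defaults.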
CheckStyle — This tool doesn’t do much to scrutinize the programmatic approach expressed in the code. As the name suggests, CheckStyle is more concerned with style and formatting. Its checks fall into categories such as naming conventions, size/length conventions, and javadoc/comments. One example guideline checks whether curly brackets belong on the same line or on the succeeding line by themselves.
CheckStyle is already part of our build process, and I think it has really contributed to the “hygiene” of our codebase. As with PMD, the rules are defined in an easily configured XML file.
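The Gradle side of that setup can be sketched roughly as follows; the version and config path are assumptions (the path shown is the plugin’s conventional default):

```groovy
// build.gradle — a sketch of the Checkstyle plugin configuration
apply plugin: 'checkstyle'

checkstyle {
    toolVersion = '8.12'                                   // assumed Checkstyle version
    configFile = file('config/checkstyle/checkstyle.xml')  // XML file holding the rules
    ignoreFailures = false                                 // style violations fail the build
}
```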
Jacoco — Another tool already utilized by Sports API, Jacoco helps ensure that the codebase maintains a threshold of code coverage. The minimum threshold is configurable in the build script. Jacoco also produces a handy HTML report which breaks down code coverage across classes and packages. In most circumstances, it’s not feasible to impose a high coverage threshold out of the box. So, as the team works toward better coverage, the HTML report is crucial for understanding which components pose the highest risk of regression.
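A threshold check plus the HTML report can be wired up along these lines; the 60% minimum is an assumed starting point, not the project’s actual number:

```groovy
// build.gradle — a sketch of enforcing a coverage floor with JaCoCo
apply plugin: 'jacoco'

jacocoTestReport {
    reports {
        html.enabled = true   // browsable per-class/per-package coverage breakdown
    }
}

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.60   // fail verification below 60% coverage (assumed floor)
            }
        }
    }
}

// make the standard check task fail when coverage drops below the threshold
check.dependsOn jacocoTestCoverageVerification
```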
SonarQube — This tool takes a little more effort to set up, as SonarQube relies on a dedicated webapp to display its findings. Once that webapp is installed and running, it’s easy to add the right plugin to your build script and begin analysis. Upon opening the webapp UI, I was greeted with roughly 300 code-quality violations. Despite this, Gradle built successfully. By design, SonarQube will not break your build simply because code-quality flaws were found. You can configure what is called a Quality Gate, but that only fails the build if newly committed code contains issues. Since my mission is to enforce a higher standard of code quality at build time, this isn’t the right tool for the job. That’s not to say SonarQube doesn’t offer valuable insight: it provides its own code-coverage analysis, along with detailed findings on code smells, bugs, and vulnerabilities. Each issue is assigned a severity, and even indicates when it was introduced to the codebase, based on git metadata.
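For reference, hooking a Gradle project up to a running SonarQube instance can look like the sketch below. The plugin version, server URL, and project key are placeholders:

```groovy
// build.gradle — a sketch of attaching the SonarQube scanner plugin
plugins {
    id 'org.sonarqube' version '2.6.2'   // assumed plugin version
}

sonarqube {
    properties {
        property 'sonar.host.url', 'http://localhost:9000'  // your SonarQube webapp
        property 'sonar.projectKey', 'sports-api'           // placeholder project key
    }
}
```

Running `./gradlew sonarqube` then pushes the analysis to the webapp; note that, as described above, the task succeeding says nothing about the number of violations found.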
FindBugs — Unfortunately, I found this tool to lack the ease of use the previously mentioned tools provide. Similar to SonarQube, violations are not printed to the console. Unlike SonarQube, however, there is no easy-to-use UI, and it doesn’t even produce an HTML report like Jacoco does. All code-quality violations are exported to an XML file; with the other tools available, requiring the user to write a custom XML parser is a non-starter for many, myself included. That said, because FindBugs analyzes bytecode (as opposed to raw source code), it can find flaws that would otherwise go unnoticed by the other tools, including concurrency antipatterns, invalid regex patterns, and poorly implemented equals()/hashCode() methods. With this in mind, I will not completely write off FindBugs. But, for now, I have plenty of code smells found by PMD and SonarQube to address.
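For completeness, the Gradle wiring I experimented with looked roughly like this; the effort and report settings are assumptions about a reasonable starting configuration:

```groovy
// build.gradle — a sketch of the FindBugs plugin configuration
apply plugin: 'findbugs'

findbugs {
    toolVersion = '3.0.1'   // the final FindBugs release
    ignoreFailures = false  // detected bugs fail the build
    effort = 'max'          // deepest bytecode analysis
    reportLevel = 'medium'  // report medium-priority findings and up
}

tasks.withType(FindBugs) {
    reports {
        xml.enabled = true   // findings land in build/reports/findbugs as XML
    }
}
```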
One of the most essential benefits of static code analysis is how it encourages developers to cast aside personal style preferences and adhere to a team-instated convention. This leads to more effective pull requests, with less nitpicking involved. Most IDEs perform this type of analysis, surfacing findings as suggestions or warnings. But across a team, developers may use entirely different IDEs, different versions of the same IDE, or even the same version of the same IDE with different warnings suppressed or enabled. Adding code analysis tools to your build script gets everyone on the same page. If your project doesn’t already utilize one of the aforementioned tools, I strongly encourage you to give one a try!
This blog was inspired by a talk presented by Philip Yurchuk, hosted by South Bay JVM User Group.