Govern and Empower — Using static analysis tools successfully

What are the success criteria for applying static analysis tools to manage quality in software development at scale?

Joost Visser
Software Improvement Group
Dec 22, 2016 · 5 min read


Static analysis tools for measuring software quality hold great promise: they can provide governance over software development and empower developers to create their best products. However, there are a number of pitfalls that need to be avoided to cash in on that promise.

There are pitfalls at the developer level and at the governance level.

Pitfalls on the developer level

In a poll conducted by O’Reilly Media and my colleagues at the Software Improvement Group, we found that virtually all developers recognise the importance of code quality and code quality tools, but only 35% use them regularly.

So why do developers not use these tools? Or not effectively?

This question is addressed by a recent survey by Maria Christakis and Christian Bird (both from Microsoft Research). They found the following top pain points for developers when using program analysers:

1. Wrong checks are on by default

2. Bad warning messages

3. Too many false positives

4. Too slow

5. No suggested fixes

In other words, the tools are simply not giving the developers the information they need. At least not out of the box.

Pitfalls on the governance level

Not only developers, but also their managers struggle to get the most out of static analysis tools. At the Software Improvement Group, we help our clients with software governance issues on a daily basis, and these are the main causes we observe for tools failing to be used effectively.

  1. Underestimating the configuration effort and required knowledge. Most tools do not work properly out of the box; they need configuration, not just to run in your own environment, but also to tailor their behaviour to your code base and your information needs. That customisation is typically not a trivial task, and it may require hiring external specialists to get it right. Organisations that start using tools typically have not included time and money for the required configuration in their planning and budgets. Tool implementation programmes regularly go over budget and time, or are abandoned altogether, because of such underestimations.
  2. Disconnect between technical measurements and organisational goals. When you do get the tools to work, properly configured and all, you typically get data, not information. More precisely, well-configured tools will provide developers with useful, actionable insights to apply local optimisations. But your software landscape needs to be optimised at the organisational level. Without a clear connection between the technical measurements and the organisational goals, the tools will not support high-level decision making: the developers get insights, but the larger organisation is left in the dark. (A minimal sketch of what such a connection could look like follows this list.)
  3. Forcing tools onto self-organising teams. The realisation has taken hold of the industry that software development is the work of creative teams that are most effective with a high degree of self-organisation. This is the trend of Agile software development. With self-organisation comes the autonomy for each team to make most of its own choices about which tools to use. Trying to force particular tools onto these teams by decisions from the top will not get those tools adopted; you will just pay for licences that sit idle. To ensure that teams actually adopt the tools you buy for them, they need to be involved in the selection, and you need to convince them of the benefits of the tools for their own work as well as for meeting organisational goals.
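
To make the second pitfall concrete, here is a minimal sketch of what connecting technical measurements to organisational reporting could look like. The data structures, names, and numbers are illustrative assumptions, not the output format of any particular tool; the point is that file-level findings, which developers act on, can be rolled up into a per-system figure that management can relate to organisational goals.

    # Minimal, illustrative sketch (hypothetical data model, not any tool's API):
    # roll file-level analysis findings up into a per-system compliance figure.
    from dataclasses import dataclass


    @dataclass
    class FileFinding:
        system: str      # which system in the landscape the file belongs to
        path: str        # file path within that system
        violations: int  # definition-of-done violations reported by the analyser
        lines: int       # file size, used to weight the roll-up


    def compliance_by_system(findings: list[FileFinding]) -> dict[str, float]:
        """Aggregate raw, file-level tool output into one comparable number per
        system: the percentage of lines that live in violation-free files."""
        totals: dict[str, list[int]] = {}
        for finding in findings:
            clean, total = totals.setdefault(finding.system, [0, 0])
            clean += finding.lines if finding.violations == 0 else 0
            total += finding.lines
            totals[finding.system] = [clean, total]
        return {system: round(100.0 * clean / total, 1)
                for system, (clean, total) in totals.items()}


    if __name__ == "__main__":
        sample = [
            FileFinding("billing", "src/Invoice.java", violations=0, lines=120),
            FileFinding("billing", "src/Tax.java", violations=3, lines=80),
            FileFinding("portal", "app/login.ts", violations=0, lines=60),
        ]
        # Developers act on the individual findings; management tracks the roll-up.
        print(compliance_by_system(sample))  # {'billing': 60.0, 'portal': 100.0}

However the indicator is defined, the essential property is that it is computed the same way for every system, so results are comparable across the landscape and can be discussed in terms of goals rather than raw tool output.
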
Success criteria to avoid pitfalls at developer and governance level.

Success Criteria

Given the above pitfalls, what do you need to put in place to get the most out of an organisation-wide implementation of static analysis tools? Here are five criteria that can make the difference.

  1. Provide a simple definition of done. Tools tend to compete on the sheer variety of checks they offer, which is not helpful here. You need to select a limited set of checks that everyone can embrace as their definition of done for quality. For anything that is organisation-wide, simplicity is key.
    (For an example of a simple definition of done, see the 10 guidelines for future-proof code presented in “Building Maintainable Software”; a sketch of a single such check follows this list.)
  2. Avoid project-specific, developer-specific configurations. If everyone gets to configure and optimise the tool for their own use, shared use is defeated. The definition of done should be company-wide or, ideally, industry-wide. Any project- or developer-specific configuration reduces comparability, and with it sharing and accountability.
    (The definition of done provided by “Building Maintainable Software” is even industry-wide, which allows inter-organisational benchmarking.)
  3. Demand technology-independence. Tools that specialise in a small number of programming languages at the expense of covering others are less suitable for organisation-wide use. They have their place as specialised instruments for specific pockets of the organisation, but a broad, technology-independent definition of done is still needed.
  4. Take measurements as the beginning of a discussion, not as the end of one. If you expect measurement results to end all discussions, you have the wrong expectation. Measurements should end discussions that are not fruitful (e.g. “Is this method complex?” or “Is this class evil?”) and spark discussions that are useful (“Are we using the right approach?” or “Where should we invest more effort?”).
  5. Coach teams on how to fit the tools into their way of working. Having a tool is one thing; using it productively is another. That requires the right mindset, the right habits, and some skills, none of which are created overnight by installing a tool. It also requires some initial training and regular coaching, which help teams see the bigger picture: how the tools can help them optimise their own way of working, and how they can better contribute to organisational goals.
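
As an illustration of how small such a definition of done can be, here is a minimal sketch of a single check, written in Python purely for illustration and not taken from any of the tools mentioned above. It is inspired by the “write short units of code” guideline from “Building Maintainable Software” and flags any function longer than 15 lines; the threshold constant and helper function name are assumptions made for this sketch. The point is the shape, not the implementation: one unambiguous rule, one shared number, no per-project knobs.

    # Minimal, illustrative definition-of-done check: flag Python functions
    # longer than a fixed threshold (inspired by the "write short units of
    # code" guideline; the names used here are made up for this sketch).
    import ast
    import sys

    MAX_UNIT_LENGTH = 15  # the shared, organisation-wide threshold


    def long_units(source: str, filename: str) -> list[str]:
        """Return one warning per function or method that spans too many lines."""
        warnings = []
        tree = ast.parse(source, filename=filename)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_UNIT_LENGTH:
                    warnings.append(
                        f"{filename}:{node.lineno}: {node.name} is {length} lines "
                        f"(limit {MAX_UNIT_LENGTH}); consider splitting it up"
                    )
        return warnings


    if __name__ == "__main__":
        # Usage: python check_unit_length.py file1.py file2.py ...
        for path in sys.argv[1:]:
            with open(path, encoding="utf-8") as handle:
                for warning in long_units(handle.read(), path):
                    print(warning)

A check like this can run in a pre-commit hook or a build pipeline, and because the rule and the threshold are the same everywhere, its results stay comparable across teams.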

For the past 15 years, we have helped countless customers get the most out of static analysis tools through our tool-based consultancy practice at the Software Improvement Group. Whether those tools are ones we have developed ourselves for our customers, tools from specialised vendors, or open-source tools, we have found that the success factors above are essential to get right.

A competitive advantage

Organisations that do get it right gain an important competitive advantage. As software plays an increasingly pervasive role in society and business, effective use of tools for measuring and managing software quality becomes a key to success. Contact us and let’s talk about static code analysis in your organisation.

Joost Visser is CTO at the Software Improvement Group, Professor of Large-scale Software Systems at Radboud University, and author of O’Reilly books “Building Maintainable Software” and “Building Software Teams”.
