DeepCode AI Code Review Vs. Other Static Analysis Tools

DeepCode automates as much of the code review process as possible

As a DeepCode team member, I am proud to work on a team that puts customer needs and betterment first as its single most important priority. In light of this core value, I am lucky to speak with many individual developers and enterprise customers. Just last week, four developers asked me the same question: what makes the DeepCode AI Code Review service unique? I decided to start writing up answers to the common questions we hear. Below is our answer to the question of how the DeepCode AI Code Review service compares to other static analysis tools.

The three main differentiators for DeepCode are Coverage, Strategic Focus, and Advanced Technologies.

Code Review is an overloaded term, heavily abused in the industry. Typically, it is the process in which another person looks at the code and reports whether everything seems right with it. Our Strategic Focus for the AI Code Review service is very simple: do as much of the code review as possible with machines.

However, plenty of tools claim to be doing code review by running simple static analysis tools (such as Lint) that find a small set of annoying and repetitive stylistic, formatting, and minor issues. We strongly believe that to get a good code review, automated tools need to provide a much more insightful and wider range of suggestions. So, when designing DeepCode, we focused on these major areas:

On Coverage: DeepCode detects a considerably larger range of defects than any existing static analysis tool:

  • Only 10% of the sampled defects/rules from existing static analysis tools overlap with the large range of defects and suggestions DeepCode can detect.
  • DeepCode can very quickly encode checkers/rules/patterns from other tools if/when required, or integrate with existing Lint tools, but we believe you will not want them.
  • We have working integrations with GitHub and GitLab; Bitbucket is coming soon.

DeepCode’s key Technology Differentiators are the unique combination of Program Analysis, AI/ML representations, and Big Code, which together offer:

  • Never-ending learning from the collective brain of the development community
  • Language independent platform enabling the addition of a new, or even custom languages, within weeks
  • Speed: immediate results. No compilation or lengthy processing is needed: our average time to analyze a large repository is approximately 5 seconds, compared to the overnight runs offered by incumbent analyzers and linters.

Of course, DeepCode will not find all the bugs in a program. In fact, it is far from that, and code review in general does not guarantee the absence of bugs. Below we summarize how we position DeepCode in comparison to Lint tools and other static analysis tools.

DeepCode’s large suggestion coverage contrasts with specialized checkers that focus on only a few challenging problems

The AI Code Review service can detect issues in any category, as soon as some open source repository has fixed the same issue (possibly in a different context). In a number of cases, we automatically identify a specific category for an issue (as opposed to manually specifying it), but many defects cannot be categorized automatically. Some commonly categorized examples are:

  • Security issues (e.g. fixing a misconfigured cipher)
  • API misuse (a general category; issues here can lead to user-facing problems, performance problems, or just ugly code)
  • Subtle bugs (e.g. needing to use LinkedHashMap instead of HashMap to preserve order)
  • Coding style (e.g. switching from callback functions to lambdas)
  • Many, many others …
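The "subtle bugs" category above can be illustrated with a minimal Java sketch (a hypothetical example for illustration, not actual DeepCode output): `HashMap` makes no guarantee about iteration order, so code that silently relies on insertion order needs `LinkedHashMap` instead.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrder {
    public static void main(String[] args) {
        // Bug: HashMap does not guarantee any iteration order, so code
        // that prints or processes entries "in the order they were added"
        // works only by accident, if at all.
        Map<String, Integer> unordered = new HashMap<>();
        // Fix: LinkedHashMap preserves insertion order.
        Map<String, Integer> ordered = new LinkedHashMap<>();
        for (String step : new String[] {"parse", "analyze", "report"}) {
            unordered.put(step, step.length());
            ordered.put(step, step.length());
        }
        // Iterating `ordered` always yields: parse, analyze, report;
        // iterating `unordered` may yield the keys in any order.
        System.out.println(String.join(",", ordered.keySet()));
    }
}
```

The defect is "subtle" precisely because both maps hold identical entries and the program often appears correct in testing; only the iteration-order contract differs.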

In an effort to further expand our areas of novel coverage, we commission, participate in, and collaborate on research with teams around the world, so we can incorporate the latest tangible developments into our platform and keep expanding DeepCode’s coverage.