A Perfect Union, The Convergence of Software Performance and Security: The Imperative for Automatically Scanning Codebases for Vulnerabilities Before Releasing Code

Edward Koeller
Dec 31, 2019 · 5 min read

The importance of automating unit, integration, and performance testing, as well as the packaging and delivery of software, is well known and championed throughout the software development community. Being able to click a button and release the latest functioning code to consumers has been, and remains, the gold standard for code delivery. For some time there has been parallel thinking in the cybersecurity community around the need to ensure robust code and reduce risk to business users and consumers of web applications. Integration and delivery pipelines must do more than run unit tests and package a binary for consumption; they should be leveraged to ensure due care. Due care is the legal principle that creators must be proactive in preventing harm to consumers or the public. In software terms, it means proactively preventing and detecting vulnerabilities during the design and development phases, reducing the risk of legal action against your company should a vulnerability you introduced be exploited and cause loss. Pipelines should be able to test and verify that the code you are releasing is free of known vulnerabilities and easily detectable exploits.

According to OWASP’s latest top 10 risks, the most common vulnerabilities are injection flaws (including database injection), XML processing errors, insecure deserialization, dependencies with known vulnerabilities, insufficient logging and monitoring, buffer overflows, and many other detectable errors. Certainly, there are errors that are harder to detect and prevent, but the goal is to automate the detection of the most common and detectable vulnerabilities as part of a reasonable, due-care-minded approach to releasing software and reducing risk. Beyond OWASP, NIST 800-53 offers guidance on evaluating software prior to release to help minimize the number and severity of risks.
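
To make the injection class concrete, here is a minimal, self-contained Python sketch (my illustration, not from any of the reports cited) contrasting a query built by string concatenation with a parameterized one; this is exactly the kind of pattern automated scanners are built to flag.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_vulnerable(name: str):
    # String concatenation lets attacker-controlled input rewrite the query;
    # name = "x' OR '1'='1" turns the WHERE clause into a tautology.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data,
    # which is the fix static analyzers typically push you toward.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("x' OR '1'='1"))  # leaks every row in the table
print(find_user_safe("x' OR '1'='1"))        # returns nothing
```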

According to Verizon’s 2018 Data Breach Investigations Report (DBIR), over 48% of data breaches used code vulnerabilities or SQL injection to breach databases. The report also notes that a product’s line count bears no relation to how many vulnerabilities it may contain: small microservices can have just as many exploits as large, enterprise-spanning services. Performed manually, finding buffer overflows, database injections, and other common vulnerabilities across thousands or even millions of lines of code is very difficult, making it a less than practical approach. Most security code reviews therefore focus on core logic, commonly used functions, or functions deemed complex by their McCabe cyclomatic complexity. OWASP’s latest code review guide estimates, as a baseline, that a security-focused review covers roughly 250 lines of code per hour; the figure may be higher or lower at a given company. That estimate doesn’t include fixing the vulnerabilities, bugs, buffer overflows, or logic errors that can lead to an exploit, nor does it account for why the security review is being conducted in the first place. Hopefully it isn’t because a known exploit has already made it into production.
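
A quick back-of-the-envelope calculation against OWASP’s 250-lines-per-hour baseline shows why manual-only review doesn’t scale; the codebase sizes below are illustrative, not taken from any of the reports.

```python
LINES_PER_HOUR = 250  # OWASP code review guide baseline cited above

# Illustrative codebase sizes (hypothetical, for scale only)
for lines in (2_000, 50_000, 1_000_000):
    hours = lines / LINES_PER_HOUR
    print(f"{lines:>9,} lines -> ~{hours:,.0f} review hours (~{hours / 8:,.0f} working days)")
```

Even a modest 2,000-line change works out to a full working day of review before a single finding is fixed.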

Integrating vulnerability scanners, security scanners, and static code analysis jobs into the pipeline early in the SDLC significantly reduces the manual security review burden by surfacing issues much sooner. Rapid7 recommends this approach because it cuts the time spent in manual reviews of security-specific issues. Instead of reviewing thousands of lines of code over the course of days, an automated analysis runs before the codebase grows to thousands or millions of lines, blocking the release and revealing issues to the developer before anything reaches production. The time spent on reviews drops drastically, and the risk and cost of exploits in a production release fall with it. A report from IBM states that the design phase is the least expensive point in the SDLC at which to fix a vulnerability: one found in the testing phase can cost 15 times as much to fix as one found during design, and one found in production can cost 100 times as much.
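
As a minimal sketch of such a pipeline gate, the script below runs two open-source scanners and fails the build if either reports findings. It assumes the tools bandit (static analysis of Python source) and pip-audit (known-vulnerable dependency check) are installed; the paths and flags are illustrative, and you would swap in whichever scanners your pipeline standardizes on.

```python
#!/usr/bin/env python3
"""Illustrative CI gate: block the release if security scans report findings."""
import subprocess
import sys

# Each entry: (description, command). Paths and flags are illustrative.
SCANS = [
    ("static analysis of application code", ["bandit", "-r", "src/"]),
    ("known vulnerabilities in dependencies", ["pip-audit", "-r", "requirements.txt"]),
]

def main() -> int:
    failures = []
    for description, command in SCANS:
        print(f"Running {description}: {' '.join(command)}")
        result = subprocess.run(command)
        # Both tools exit non-zero when they report findings.
        if result.returncode != 0:
            failures.append(description)
    if failures:
        print("Blocking release; findings in: " + ", ".join(failures))
        return 1
    print("No findings; release can proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wired in as an early pipeline stage, a non-zero exit code stops the build, so vulnerable code never reaches the packaging or deployment steps.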

When developing software, the goal is to minimize the time from concept to production release. Having to perform a manual, day-long security review on the 2,000 lines of new code behind that feature a customer requested drags down that key metric of a developer’s workflow. The benefits of adding these automated checks to the pipeline of a new greenfield application compound quickly. The harder task is adding the automated scans to an existing workflow and pipeline, since they will likely surface many issues that the developer does not consider significant.

When adding these automated scans and analysis programs to the pipeline, the goal should be to start small and gradually extend the scope to larger portions of the business logic. At several of my positions I have added these scanners and static analysis tools to the core business logic of many codebases. An application usually has several abstraction layers; we would always start with the layer carrying the highest risk score, whether that was the outer layer, the logic layer, or the database layer. We applied due care in deciding which services needed these automated tools and which layers to work on first. This always required fixing several issues in the starting layer: known exploits in third-party libraries, logic errors leading to buffer overflows, and SQL injection flaws. We would then expand the risk scope to the other layers while fixing the warnings and errors the new pipeline tools found. The key outcomes were that the errors got fixed, that an automated check would catch the same errors in the future even if a manual review missed them, and that as new detection capabilities were added to the scanners and analyzers, the codebase would be scanned with them without any further changes to the pipelines.
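
One common way to roll scanners out incrementally on an existing codebase (my illustration of the general technique, not the author’s specific tooling) is a baseline file: record the findings that already exist, then fail the pipeline only on findings that are new relative to that baseline, burning the backlog down layer by layer. The finding format and file names below are hypothetical.

```python
"""Sketch of incremental adoption via a recorded findings baseline."""
import json
import sys
from pathlib import Path

BASELINE_FILE = Path("scan_baseline.json")  # hypothetical baseline of accepted legacy findings

def load_findings(report_path: str) -> set:
    # Assumes the scanner emits a JSON list of findings with stable identifiers.
    findings = json.loads(Path(report_path).read_text())
    return {f"{f['rule_id']}:{f['file']}:{f['line']}" for f in findings}

def main(report_path: str) -> int:
    current = load_findings(report_path)
    baseline = set(json.loads(BASELINE_FILE.read_text())) if BASELINE_FILE.exists() else set()
    new_findings = current - baseline
    if new_findings:
        print(f"{len(new_findings)} new finding(s) not in the baseline; blocking the build.")
        for finding in sorted(new_findings):
            print("  " + finding)
        return 1
    print("No new findings beyond the recorded baseline.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

As each layer is cleaned up, its entries are removed from the baseline so regressions there start blocking builds too.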

There are many products that perform static analysis of codebases and produce metrics to baseline and track where a codebase stands relative to others within a company. Whether you need to check for security vulnerabilities, analyze code metrics and complexity, detect SQL injection, or catch common authentication missteps, there are many options for each. Free tools exist for all of these tasks, and paid products can run well over $100,000 annually depending on the size of the company or its codebases.

Depending on the situation, the cost of these tools can far outweigh the potential losses from a data breach or other issues arising from a known exploit in your codebase; it’s a business decision. Following the principles of due care helps ensure preventative measures are taken, while automating as much as possible lets developers keep moving quickly. Automating as much of the security review as possible before releasing code strikes the right balance between high velocity and making codebases stronger and more resilient to security vulnerabilities. Being able to block potentially vulnerable binaries from reaching production releases is also a big win for the company’s risk management function.
