In the previous article we discussed why the default behavior of OWASP Dependency Check does not suit our needs, and how we can mitigate the issues by building a Dockerized image of the National Vulnerability Database (NVD). Most importantly, we demonstrated how you can easily build such an image yourself using the gradle-docker-plugin.
In this article we build on that work by integrating OWASP Dependency Check with the Dockerized version of the NVD. Our implementation must not (and does not) create any additional configuration overhead for the application developers using it, because if it did, adoption of this approach would suffer. Additionally, since this integration will be part of all application builds, we must be able to bundle it as a separate module.
Custom Gradle plugin
Zero additional configuration and re-usability are easily achieved in the Gradle world by writing a custom plugin. In our case the plugin will be a wrapper around the original Dependency Check Gradle plugin, extending its functionality with start/stop logic for the NVD container.
In this article we will, for the sake of simplicity, use the buildSrc style of plugin, meaning that the plugin is bundled directly with the project using it. In a real-world scenario, the plugin would live in a separate project and be distributed as a jar dependency. If you want to convert the plugin to a standalone project for use in production, take a look at this Gradle documentation page.
Now let’s dive into the code. We will walk through the implementation from several perspectives, starting from the top (the application developer’s perspective) and working our way down to the low-level logic.
Application developer’s perspective
As promised, there is exactly zero additional configuration from the perspective of the end user of our plugin (the application developer). The developer only needs to apply the plugin to their project (line 2). All remaining complexity is hidden within the implementation.
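In a build.gradle this could look like the following fragment. Note that the plugin id below is a hypothetical name (the actual id depends on how the plugin is registered in buildSrc); line 2 is the only line the developer has to add:

```groovy
plugins {
    id 'nvd-container-dependency-check' // hypothetical id of our buildSrc plugin
}
```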
Now, let’s dive in and see what lies one layer deeper, in the NvdContainerDepCheckPlugin class itself.
Here we add two tasks, one for starting our container and one for stopping it. We also apply the original DependencyCheck plugin and instruct it to use the database from our Docker container (running at a randomized port). Finally, we specify the order of execution of our tasks: startContainer → dependencyCheckAnalyze → stopContainer.
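The "randomized port" can be obtained by simply asking the operating system for a free ephemeral port before the container is created. A minimal sketch (the `PortFinder` class and `findFreePort` name are ours, not from the original plugin):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortFinder {
    /**
     * Ask the OS for a free ephemeral port by binding a socket to port 0
     * and immediately releasing it. The returned port can then be used both
     * for the container port binding and for the Dependency Check
     * configuration (the H2 JDBC URL).
     */
    public static int findFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new IllegalStateException("No free port available", e);
        }
    }
}
```

There is a small race window between releasing the socket and the container binding the port, which is acceptable for a build-time tool.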
The tasks are just wrappers for the logic implemented in the NvdContainer class; no real logic is embedded here. They exist only to define the order in which the start/analyze/stop actions are executed.
The real meat of our plugin is the NvdContainer class, which handles the actual creation and destruction of the NVD database container.
The core of the logic uses the docker-java library. Please note that we are using localhost:2375 for communication with the daemon, because we run in a mixed Windows-Linux environment. If you are in a POSIX/Linux-only environment, use the Unix socket instead; it’s cleaner and more secure.
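docker-java’s default client configuration resolves the daemon address from the standard DOCKER_HOST variable (among other sources), so the two setups can be sketched as the following config fragment; the TCP variant assumes the daemon has been configured to expose its API on port 2375:

```shell
# Mixed Windows-Linux environment: talk to the daemon over TCP
export DOCKER_HOST=tcp://localhost:2375

# POSIX/Linux-only environment: prefer the local Unix socket
export DOCKER_HOST=unix:///var/run/docker.sock
```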
In the start method, we create a container from our image and configure it to expose H2 on an available port provided by the NvdContainerDepCheckPlugin. We tag the container in a way that allows for its removal later. Once the container is started, we need to wait for the database to start up (waitForContainerStart). To verify that the DB is operational, we use a simple validation query that returns the date of the last NVD database synchronization. Once this step succeeds, we print the date to the build log, as it is useful debugging information.
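The waiting step reduces to a small generic polling helper. Here is a sketch under the assumption that the real readiness check opens a JDBC connection to the containerized H2 database and runs the validation query; the `StartupWaiter` class, the `waitFor` name, and its parameters are our own, not the original plugin’s API:

```java
import java.util.function.BooleanSupplier;

public class StartupWaiter {
    /**
     * Polls the given readiness check until it succeeds or the timeout
     * elapses. In the real plugin the check would run the validation query
     * that returns the NVD synchronization date against the containerized
     * H2 database.
     */
    public static boolean waitFor(BooleanSupplier ready, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (ready.getAsBoolean()) {
                return true;              // DB answered the validation query
            }
            try {
                Thread.sleep(pollMillis); // back off before the next attempt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                     // container never became ready
    }
}
```

A hard deadline like this also gives the build a clear failure mode when the container image is broken, instead of hanging forever.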
The stop method iterates over all running containers and stops every one carrying the tag set by the start method. We iterate over all containers as a fail-safe measure: a previous build may have been killed mid-run, leaving its container behind. This tweak makes sure we also clean up these forgotten instances.
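The selection logic of the stop method boils down to filtering containers by the label (the "tag") that the start method attached. A pure sketch of that filter; the label key and value are hypothetical, and in the real implementation the input would come from docker-java’s container listing rather than a plain map:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class NvdContainerFilter {
    // Hypothetical label used to mark containers created by this plugin.
    static final String LABEL_KEY = "owner";
    static final String LABEL_VALUE = "nvd-dep-check-plugin";

    /**
     * Given container ids mapped to their labels (as reported by the Docker
     * daemon), return the ids of every container our plugin started,
     * including leftovers from builds that were killed mid-run.
     */
    public static List<String> containersToStop(Map<String, Map<String, String>> containers) {
        List<String> toStop = new ArrayList<>();
        for (Map.Entry<String, Map<String, String>> entry : containers.entrySet()) {
            if (LABEL_VALUE.equals(entry.getValue().get(LABEL_KEY))) {
                toStop.add(entry.getKey());
            }
        }
        return toStop;
    }
}
```

Matching on a dedicated label rather than on the container name keeps the cleanup robust even when several builds run on the same host.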
Alternatively, we could give the NVD container a time-to-live and let it shut itself down after a pre-defined period; this would be part of the container entry point. An even better solution, time permitting, relies on a Ryuk container, which makes sure all the containers are purged automatically even if the controlling process (the build) goes down.
Now, when we execute the build of the sample project, the startImage task takes approximately 7 seconds (with the image already downloaded), and stopping takes 1.2 seconds. The analysis takes the same amount of time as with the default setup (7 seconds for the sample project). The only time that varies is the time required to download the image itself (400MB), which takes approximately 15 seconds (including extraction time) on our corporate network. This means that on the sample project the current overhead for analysis ranges from 15 seconds with a hot start (image already present) to 30 seconds with a cold start (no shared image layers available on the system). This is a significant improvement over the default behavior, where the overhead ranges from 7 seconds (counting only the analysis, as the DB is cached and its start is negligible) to 3–7 minutes (download of the NVD dataset and its import into the H2 DB).
In this article and the previous one we demonstrated some of the issues with deploying OWASP Dependency Check in a real-world environment, and we learned how to address them by constructing a National Vulnerability Database Docker image and integrating it into the analysis pipeline using a custom Gradle plugin. This approach greatly reduces the build times associated with dependency analysis, while allowing developers to work in offline mode.
Get access to the full source code for the plugin and its usage in a showcase project in this GitHub repository.