From Black Duck to Grafana

Ciorceri Sorin
Globant
Aug 31, 2021

Introduction

Black Duck is a solution for managing security, license compliance, and code quality risks that come from the use of open source in applications and containers.

It has a nice web interface and tools that are easy to integrate with a CI system, but it is missing an important thing: a history of how the scanned project's vulnerabilities evolve over time.

Grafana can step in to help us achieve this, and by following the three steps below you can reproduce the setup in your own environment.

This article will describe how to complete that vulnerability history using Grafana.

This article covers the following points:

  1. Black Duck: export the Black Duck scan results into something easy to parse and use; in our case, CSV format.
  2. Elasticsearch: install an Elasticsearch instance and import those results into it.
  3. Grafana: install Grafana and use the data stored inside Elasticsearch to generate the desired graphs.
  4. References

If you want to simplify the process, I recommend having Docker installed on your environment. In our example we use a Windows machine, but the process is almost identical on other operating systems.

Part 1: Black Duck

In the GitHub repository linked below you can find the Python code that exports the Black Duck scan results into CSV format.

Before using it, go to the Black Duck web interface -> “My Access Tokens” and generate a new token with “Read & Write Access”.

Clone the following GitHub repository: https://github.com/ciorceri/BlackDuckReporting

Follow the instructions in the README.md to set up the environment, for example as sketched below.
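A minimal setup sketch (the dependency-install step assumes the repository ships a standard requirements.txt; the README is authoritative):

git clone https://github.com/ciorceri/BlackDuckReporting
cd BlackDuckReporting
pip install -r requirements.txt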

Usage of the tool:

usage: main.py [-h] -b BASEURL -t TOKEN [-v VERSION] [-f FILTERPROJECT]

BlackDuck reporting

optional arguments:
  -h, --help            show this help message and exit
  -b BASEURL, --baseurl BASEURL
                        Url to BlackDuck server
  -t TOKEN, --token TOKEN
                        Token used to connect to BlackDuck server
  -f FILTERPROJECT, --filterproject FILTERPROJECT
                        Filter project names
  -v VERSION, --version VERSION
                        Project version to scan

The -b/--baseurl and -t/--token parameters are required: the base URL will have the format “https://your_name.app.blackduck.com”, and the token is a BASE64 string.

Using the -f/--filterproject parameter you can filter the projects that the report will be generated for. If it is not provided, all projects will be added to the report.

The -v/--version parameter is optional and restricts the report to the specified version of the scan. If no version is specified at scan time, Black Duck by default detects the version from the git tag/branch name. In my case the default version is ‘master’ (the same as the branch name of the scanned repository). An example invocation is shown below.
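For example (the base URL, token, and project name below are placeholders):

python main.py -b https://your_name.app.blackduck.com -t <your_token> -f nodegoat -v master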

The output of the report is in .csv format, which is easy to parse and use later with Elasticsearch.

Part 2: Elasticsearch install
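If you don't have an Elasticsearch instance running yet, here is a minimal single-node Docker sketch based on the official guide from Part 4 (the version tag is only an example; adjust it to your needs):

docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.14.0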

To test that Elasticsearch is installed & running, I recommend also installing ElasticHQ (or any other similar tool) using this guide: http://docs.elastichq.org/installation.html#running-with-docker.

ElasticHQ serves as a monitoring and management platform for Elasticsearch clusters.
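Following that guide, ElasticHQ itself can run as a container (a sketch; see the guide above for the current image name):

docker run -d -p 5000:5000 elastichq/elasticsearch-hq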

Open the ElasticHQ web UI and connect to your instance of Elasticsearch.

Note: you cannot connect using “localhost:9200”, since inside the ElasticHQ container localhost refers to the container itself; use the IP address assigned to your machine by your router instead.

We can see that we have one node and one index present, named .elastichq.

To import the Black Duck scan results into Elasticsearch we have to run a simple Python 3 script:

import csv

from elasticsearch import Elasticsearch, helpers


def main():
    # Connect to the local Elasticsearch instance
    es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
    # Each CSV row becomes one document in the 'nodegoat' index;
    # the CSV contains a Timestamp column that Grafana will use later
    with open('black_duck_nodegoat_scan.csv') as f:
        reader = csv.DictReader(f)
        helpers.bulk(es, reader, index='nodegoat')


if __name__ == '__main__':
    main()

After the import, a new index named “nodegoat” appears in the ElasticHQ interface.
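If you prefer to verify from Python instead of ElasticHQ, a quick document count against the same local instance (a minimal sketch):

from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
# Print how many scan-result documents landed in the 'nodegoat' index
print(es.count(index='nodegoat')['count'])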

We are all good: the scan results are uploaded to Elasticsearch, and all that is left is to use this data inside Grafana.

Part 3: Grafana install

It is not important which Grafana image you use; in our case I've used the Ubuntu one, started as sketched below.
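A minimal Docker sketch based on the official guide from Part 4 (the Ubuntu-variant tag is an example; the plain grafana/grafana image works as well):

docker run -d --name grafana -p 3000:3000 grafana/grafana:latest-ubuntu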

Open the Grafana web interface at http://127.0.0.1:3000/, log in using admin:admin, go to Configuration -> Data Sources, and add the “Elasticsearch” data source.

Change the configuration:

HTTP -> URL : http://10.18.21.47:9200 (use your IP Address assigned to your machine by your router)
Elasticsearch details -> Index name : nodegoat
Elasticsearch details -> Pattern : Daily/Weekly
Elasticsearch details -> Time field name : Timestamp
Elasticsearch details -> Version : 7.0+

Go to Grafana -> Create -> Dashboard and build your custom Grafana dashboards using the Lucene query language. A few examples below:
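For example, assuming the CSV export contains fields such as project and severity (these names are illustrative; check your own export's header for the real column names), panel queries might look like:

severity:HIGH
project:nodegoat AND severity:CRITICAL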

After configuring the Grafana dashboard you should have some graphs containing the historical data.

Part 4: References

Grafana official guide: https://grafana.com/docs/grafana/latest/installation/docker/

Elastic official guide: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
