Leveraging secure multi-architecture application development to remediate Log4j-type issues

Grzegorz Smolko
AI+ Enterprise Engineering
7 min read · Mar 5, 2022

By Grzegorz Smolko and Greg Hintermeister

Recent Log4j vulnerabilities show how important it is to know which libraries your applications use and whether they are safe. In this article we will show how you can minimize such threats by introducing the following activities:

  1. Scanning legacy JEE applications using Transformation Advisor
  2. Running dependency checking and image scanning in your application development process
  3. Building and deploying multi-architecture images (in this case we will create an image that can run on OpenShift installed on x86 and s390x)
  4. Centralizing observability across all application components

Let’s get started.

Scan legacy applications

The first activity is to scan your legacy applications using Transformation Advisor (TA) to see what it takes to containerize them. The benefit of a tool like TA is that it scans the binary files (.EAR files and .WAR files) to determine what libraries are present, what technology is used, and what legacy implementations need to change in order to containerize them.

For example, after scanning with TA, you can see the composition of each .EAR file. In this example, it shows that log4j-1.2.4.jar is in the build.

Application dependencies

Another advantage of using TA to scan is that it shows all legacy technologies and how to change them so that the application can run in a container. This example shows how JAX-RPC should be replaced with JAX-WS.

JAX-WS modernization

Once these suggestions are implemented, you can containerize your application.

Why is that important? Because you have now unlocked the ability to develop and deploy much more quickly, check for dependencies, scan for vulnerabilities, and even develop cross-architecture solutions.

Develop and Deploy

Now that the application is containerized and has an updated CI/CD pipeline, let’s see how you can use that to more quickly identify issues and make changes.

Dependency checking

Committing new code usually triggers your application's CI/CD pipelines, which build and deploy a new version of the application. But what if a newly added feature introduces a dependency vulnerability, or your old code becomes vulnerable? Can you detect it early in the cycle?

Yes, you can. One method is to integrate dependency checking into your DevSecOps process. In this example, we will integrate it by creating a GitHub Actions workflow that does this checking, but something similar could be achieved using, for example, Tekton pipelines.

The GitHub Actions step that does the checking is quite simple; it utilizes Snyk.io to perform the check:

- name: Run Snyk to check for vulnerabilities
  uses: snyk/actions/maven@master
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
  with:
    args: --sarif-file-output=snyk.sarif --severity-threshold=high
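
This step runs as part of a workflow that is triggered on pull requests. A minimal trigger definition might look like the following sketch (the branch name is an assumption; adapt it to your repository):

# Run the checks on every pull request targeting the master branch
on:
  pull_request:
    branches: [ master ]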

Because the CI workflow runs when a pull request with incoming changes is created or edited, you can see right away whether any of the dependencies have vulnerabilities. Another benefit of this approach is that you are protecting your master branch from direct modifications. This is how it looks on code that has issues:

Pull request with failed check

You can drill down to see the details:

List of found issues

Additionally, we integrate this check with GitHub Code Scanning by adding another action to the workflow:

- name: Upload result to GitHub Code Scanning
  if: always()
  uses: github/codeql-action/upload-sarif@v1
  with:
    sarif_file: snyk.sarif
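
Note that uploading SARIF results requires the workflow's GITHUB_TOKEN to have write access to code-scanning alerts. Depending on your repository's default token permissions, you may need a permissions block at the workflow or job level, along these lines (an assumption to verify against your setup):

# Allow the workflow token to upload code-scanning results
permissions:
  contents: read
  security-events: write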

Uploading the results allows you to see the checks directly on the pull request's ‘Checks’ tab:

Output via GitHub integration

You can again drill down to the details page, which gives you information about the affected file, a description of the issue, and possible workarounds and remediation.

Issue details

By running this workflow, you learn that you have a vulnerable dependency and what to do to fix it.

Image scanning

Another step that you can take is to ensure that the image you are creating does not contain known vulnerabilities. Some companies have dedicated teams for this, but as a start, it is simple and easy to add one more step to your pipeline that performs this check.

You can implement this using an action that utilizes Trivy:

- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: '${{ env.APP_NAME }}:${{ github.sha }}'
    format: 'template'
    template: '@/contrib/sarif.tpl'
    output: 'trivy-results.sarif'
    exit-code: '1'
    ignore-unfixed: 'true'
    severity: 'HIGH,CRITICAL'

- name: Upload Trivy scan results to GitHub Security tab
  uses: github/codeql-action/upload-sarif@v1
  if: always()
  with:
    sarif_file: 'trivy-results.sarif'

Again, the result is uploaded to GitHub Code Scanning, where you can see which system packages have issues:

Image scanning results

These two simple checks ensure that there are no known vulnerabilities in the application dependencies or system packages of the image you are building.

Hybrid cloud in action: building a multi-architecture image

In the hybrid cloud era, your containers can run on various cloud providers and in various environments. These environments may use different CPU architectures: x86, Power, s390x, and ARM, to name a few. Unfortunately, by default, a container image can run only on a single architecture, the one it was built for.

There are several approaches to work around this problem. Here you will learn how to solve it using Docker Buildx and the QEMU emulator to build an image that runs on both x86 and s390x.

In your pipeline, you need to replace the steps that build the image. First, you set up Buildx and QEMU, defining which architectures you want to use:

# Set up Buildx
- name: Docker Setup Buildx
  uses: docker/setup-buildx-action@v1
# Set up QEMU for multi-platform builds
- name: Docker Setup QEMU
  uses: docker/setup-qemu-action@v1.2.0
  with:
    platforms: amd64,s390x

Then you modify your build step with the additional platforms:

- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    platforms: linux/amd64,linux/s390x
    push: true
    tags: ${{ env.REGISTRY }}/${{ env.REGISTRY_NAMESPACE }}/${{ env.IMAGE_NAME }}:${{ env.GITHUB_SHA }}
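
The registry-related values above come from workflow-level environment variables. A hypothetical definition, using Quay.io as in this article, might look like this (the namespace and image name are placeholders):

# Placeholder values; adapt to your registry and project
env:
  REGISTRY: quay.io
  REGISTRY_NAMESPACE: my-namespace
  IMAGE_NAME: stock-trader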

Once the action has executed successfully, you should see in your image registry (Quay.io in this case) that the image contains manifests for multiple architectures:

Image in the repository

You have successfully built an image that can run on both x86 and s390x. Now it's time to deploy it.

Deployment in hybrid cloud

To deploy the application to multiple OpenShift clusters running on x86 and s390x, you can, for example, use Red Hat OpenShift GitOps, which uses Argo CD to maintain continuous integration and continuous deployment (CI/CD) of applications. A current limitation of OpenShift GitOps is that it can be deployed only on x86 platforms, but it can deploy your applications to any architecture available for OCP clusters.
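
As an illustration, a minimal Argo CD Application manifest targeting one of the clusters might look like the following sketch; the repository URL, path, cluster API endpoint, and namespaces are hypothetical:

# Hypothetical Argo CD Application deploying the s390x overlay to a remote cluster
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: stock-trader-s390x
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/stock-trader-gitops.git
    targetRevision: main
    path: overlays/s390x
  destination:
    server: https://api.s390x-cluster.example.com:6443
    namespace: stock-trader
  syncPolicy:
    automated:
      prune: true
      selfHeal: true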

Argo CD gives you quick and easy insight into the applications deployed to the various clusters:

Applications in Argo CD

You have successfully built and deployed a multi-architecture application with no known security vulnerabilities in its dependencies.

Observability

Now that you have your applications running across Dev, Test, and Production environments, including OpenShift clusters running on x86 VMs and IBM Z, you will want a way to centrally observe the platforms and applications. One such tool is Instana.
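
To feed data from each cluster into Instana, you typically deploy the Instana agent to every cluster, for example through its Kubernetes operator. A sketch of the corresponding custom resource is shown below; all values are placeholders, and the field names follow the instana-agent operator, so verify them against the operator documentation for your Instana version:

# Hypothetical InstanaAgent custom resource; replace all values with your own
apiVersion: instana.io/v1
kind: InstanaAgent
metadata:
  name: instana-agent
  namespace: instana-agent
spec:
  zone:
    name: ocp-x86-dev          # placeholder zone shown in the Instana UI
  cluster:
    name: stock-trader-x86     # placeholder cluster name
  agent:
    key: <your-agent-key>      # placeholder; your Instana agent key
    endpointHost: ingress-red-saas.instana.io   # example SaaS endpoint
    endpointPort: "443"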

While Instana can monitor multiple clusters, what is most compelling is the ability to group application instances into a single Application View, so that from one place you can see everything that is happening across clusters.

Instana dashboard

In this example, we’ve gathered all content in the “stock trader” namespace from each of the clusters to monitor calls, latency, and processing time. Further, we can start analyzing calls, drilling down to individual calls and even the stack trace, as shown in the image below.

Analyzing call details

Summary

Enterprises need, more than ever, the ability to speed up deployment, innovation, and in some cases, remediation of issues as new vulnerabilities are found.

As you saw in this article, it starts with understanding how you can introduce containers, security, and observability into your development and operations practices. You saw how you can scan for dependencies and vulnerabilities. You then saw how a single code change, once scanned, could be containerized and deployed across multiple locations and architectures. Finally, you saw that with this new flexibility, you can bring all those locations into one view to better manage the application.
