Securing Deployments through a DevSecOps Approach on CI/CD Pipelines on the AWS Cloud

Davide Andriano
Published in Storm Reply
Feb 8, 2024 · 7 min read


This article gives a general overview of the DevSecOps approach, focusing mainly on key security concepts, technologies, and tools, with the aim of building both general and individual security awareness, and it walks through an example of a potential implementation of a DevSecOps CI/CD pipeline in the AWS Cloud using the AWS CodePipeline service.

DevSecOps

DevSecOps is the natural, security-focused evolution of the DevOps concept.

In today’s landscape, security is becoming more and more important due to the increase in security threats and malicious attacks targeting organizations worldwide, causing data breaches and disruptions of business services that lead to economic and reputational losses, or, more generally, losses of various kinds.

But what does DevSecOps really mean?

DevSecOps is a shift-left mindset in which security becomes a shared responsibility of both developers and IT operations specialists, who must keep security requirements in mind. It is a methodology that brings the security-by-design concept into the entire application lifecycle, starting from the planning phase, ensuring that security is considered and tested at every stage so that issues can be identified and fixed quickly.

A DevSecOps pipeline takes advantage of automation: automated tests speed up vulnerability detection, making it possible to patch and release secure code and thereby reduce the application’s attack surface.

Before moving on to the example and the implementation of a real use case, it’s important to understand how such a pipeline is built and which tools can be used, along with their purpose.

As can be seen in the image above, a DevSecOps pipeline consists of various stages, each with its own security checks.

1. Plan: the initial stage, in which the application requirements are collected; based on this output, threat modeling is performed to evaluate potential risks and to understand the impact these threats might have on the system, so that the appropriate security controls can be applied.

An example of a tool that can be used here is OWASP Threat Dragon.

2. Code: the phase in which developers write code inside a version-controlled repository, such as one managed by Git; here it is important not to introduce vulnerabilities or secrets into the code and to follow secure-coding best practices.

It’s crucial to use secrets-management tools to store sensitive information such as passwords, certificates, and tokens, and to use pre-commit hooks to prevent these from being committed.

A secrets-management tool in the AWS suite is AWS Secrets Manager.

Some tools for pre-commit hooks are git-secrets and detect-secrets.
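As an illustration (not taken from the article), a minimal .pre-commit-config.yaml wiring detect-secrets into a repository might look like the following; the pinned revision and baseline file name are assumptions based on the tool’s documented defaults.

```yaml
# Minimal sketch of a .pre-commit-config.yaml using Yelp's detect-secrets.
# The rev tag is an example; pin it to the version you actually use.
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        # Compare staged changes against a previously generated baseline of known findings
        args: ["--baseline", ".secrets.baseline"]
```

The baseline is generated once with "detect-secrets scan > .secrets.baseline"; after that, the hook blocks any commit that introduces new candidate secrets.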

3. Build: this is the phase we will mainly focus on in this article; it builds the code and creates images/artifacts. During this phase, emphasis must be placed on static analysis tools that check whether misconfigurations, vulnerable third-party components, other vulnerabilities, or secrets are present inside the code. Some possible checks, with a sample build stage sketched after the list, are:

- Software Composition Analysis (SCA) identifies known vulnerabilities in third-party components, license risks, and out-of-date libraries.

Examples of SCA tools are OWASP Dependency-Check and Snyk Open Source.

- Static Application Security Testing (SAST) relies on white-box testing: automated tools perform a security code review to find vulnerabilities inside the code.

Examples of SAST tools are Snyk Code, SonarQube, Bandit, and Graudit.

- Secret scanning identifies sensitive information pushed into the code.

- Dockerfile linting identifies vulnerabilities or misconfigurations inside the Dockerfile.
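A sketch of how such checks could be wired into a build stage is shown below; it assumes the build image already provides the Snyk CLI, hadolint, and OWASP Dependency-Check, that a SNYK_TOKEN environment variable is available, and that paths and thresholds are illustrative rather than the article’s actual configuration.

```yaml
# Illustrative CodeBuild buildspec fragment for the build-stage checks.
version: 0.2
phases:
  build:
    commands:
      - snyk test --severity-threshold=high                        # SCA on third-party dependencies
      - hadolint Dockerfile                                        # Dockerfile linting
      - dependency-check.sh --scan . --format HTML --out reports/  # OWASP Dependency-Check report
```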

4. Test: in this phase the output of the build phase can be used to deploy the application in a staging environment as close as possible to the production one, where it is possible to perform Dynamic Application Security Testing (DAST). DAST relies on black/grey-box testing of the running application to check its behavior and to reduce the false-positive findings produced by the SAST analysis.

Some examples of DAST tools are: OWASP ZAP, Arachni, Checkmarx DAST, StackHawk.
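As a hedged sketch (the container image tag and staging URL are placeholders, and Docker support must be enabled in the CodeBuild environment), a DAST step based on the ZAP baseline scan could look like this:

```yaml
# Illustrative buildspec fragment for a DAST stage using the ZAP baseline scan
# against a placeholder staging URL.
version: 0.2
phases:
  build:
    commands:
      - docker run --rm -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://staging.example.com
```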

5. Release: Before releasing the application, some other tests can be performed, such as penetration testing and vulnerability assessment.

6. Deploy: the final stage, where the output of the build phase is shipped to production. Security is still relevant here: the application must be protected from unknown external threats, so monitoring and logging tools, which are essential for detecting and responding to security incidents in near real time, must be integrated along with firewalls and RASP tools (such as Falco).

Some examples of tools in the AWS suite are AWS Shield, AWS WAF (Web Application Firewall), Amazon CloudWatch, and Amazon GuardDuty.
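As one small, purely illustrative example (resource names and retention are placeholders, not part of the article), a CloudFormation fragment enabling account-level threat detection and a log group for application logs might look like this:

```yaml
# Illustrative CloudFormation fragment for part of the monitoring/detection setup.
Resources:
  ThreatDetector:
    Type: AWS::GuardDuty::Detector   # continuous threat detection for the account
    Properties:
      Enable: true
  AppLogGroup:
    Type: AWS::Logs::LogGroup        # CloudWatch Logs group for application logs
    Properties:
      LogGroupName: /devsecops/app   # placeholder name
      RetentionInDays: 90
```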

DevSecOps CI/CD Pipeline on AWS

Now that we know what DevSecOps is and how it operates, we can move a bit deeper into its implementation in a CI/CD pipeline on AWS.

Here is a high-level view of the infrastructure of our CI/CD pipeline and the components used.

The starting point of this infrastructure is the DevSecOps team, composed of developers, IT operations, and security experts working as a joint team because, as previously mentioned, security is a shared responsibility that must be taken into account by all of them.

The trigger of the CodePipeline is a push event made by the team to the CodeCommit repository.

The EventBridge service, used to capture events and react to them, is configured with two rules (an illustrative sketch of their event patterns follows the list):

1. The first rule watches the repository for push events and triggers the pipeline.

2. The second rule inspects the pipeline and catches “Failure” events across its stages, speeding up feedback to the team by delivering notifications through the SNS service.
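A hedged sketch of what the two event patterns could look like, written in CloudFormation-style YAML (the branch name is a placeholder, and the usual filters on the repository and pipeline ARNs are omitted for brevity):

```yaml
# Rule 1 (illustrative): react to pushes on the main branch of the CodeCommit repository.
PushEventPattern:
  source:
    - aws.codecommit
  detail-type:
    - CodeCommit Repository State Change
  detail:
    referenceType:
      - branch
    referenceName:
      - main
    event:
      - referenceCreated
      - referenceUpdated

# Rule 2 (illustrative): catch failed stage executions of the pipeline and notify an SNS topic target.
FailureEventPattern:
  source:
    - aws.codepipeline
  detail-type:
    - CodePipeline Stage Execution State Change
  detail:
    state:
      - FAILED
```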

The CodePipeline can be made up of several stages, each performing one or more tasks. The triggering event just described makes the pipeline fetch the code from the repository, performing (in our use case) a full clone of the code together with the whole Git history, and starts the first CodeBuild phase: our secrets-scanning stage, which looks for secrets embedded in the code or in the Git history.
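To illustrate the full-clone behaviour (this is a sketch rather than the article’s actual template; repository and artifact names are placeholders), the CodeCommit source action can be configured to hand CodeBuild a full Git clone:

```yaml
# Illustrative CodePipeline source stage (CloudFormation-style). Setting
# OutputArtifactFormat to CODEBUILD_CLONE_REF gives the downstream CodeBuild
# project a full clone with Git history, which the secrets-scanning stage needs.
- Name: Source
  Actions:
    - Name: FetchSource
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: CodeCommit
        Version: "1"
      Configuration:
        RepositoryName: devsecops-demo-repo   # placeholder
        BranchName: main
        PollForSourceChanges: false           # EventBridge triggers the pipeline instead
        OutputArtifactFormat: CODEBUILD_CLONE_REF
      OutputArtifacts:
        - Name: SourceOutput
```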

For all of this to work, the IAM roles assumed by the services involved must be carefully configured, attaching only the policies needed to perform the required operations, following the need-to-know principle and resulting in least privilege.

Such a policy defines which actions the CodePipeline role is allowed to perform and on which CodeBuild resources.
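A minimal sketch of such a policy, shown here in CloudFormation-style YAML (the region, account ID, and CodeBuild project names are placeholders, not the article’s actual values):

```yaml
# Illustrative inline policy for the CodePipeline service role.
Version: "2012-10-17"
Statement:
  - Sid: AllowStartAndTrackBuilds
    Effect: Allow
    Action:
      - codebuild:StartBuild       # start the CodeBuild projects used by the pipeline stages
      - codebuild:BatchGetBuilds   # poll build status so the pipeline knows when a stage finishes
    Resource:
      - arn:aws:codebuild:eu-west-1:123456789012:project/devsecops-secrets-scan
      - arn:aws:codebuild:eu-west-1:123456789012:project/devsecops-sca
      - arn:aws:codebuild:eu-west-1:123456789012:project/devsecops-sast
```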

Once this phase is completed, the second stage of the pipeline performs the Software Composition Analysis to check whether vulnerabilities in third-party components are present in the code.

Then we get, with particular focus, to the SAST analysis stage, in which the code is inspected for vulnerabilities.

In our use case we are using the Snyk Code CLI tool.

In order to use it, once logged in to the Snyk web console, it’s necessary to enable Snyk Code through the toggle button, as shown.

Another step required to use it inside CodeBuild is to create an API token, so that CodeBuild can correctly authenticate to Snyk and scan the repository.

Once the token has been created, it can be copied and saved, encrypted, inside Parameter Store, a capability of the AWS Systems Manager service, through the AWS Console.

At this point everything is correctly set up: during the build, CodeBuild can retrieve the Snyk token from Parameter Store through the AWS CLI, have it decrypted, and use it to authenticate to Snyk. A short snippet of the CodeBuild buildspec handles this authentication.
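A minimal sketch of such a snippet, assuming the token is stored under a placeholder parameter name (/devsecops/snyk-token) and that the CodeBuild role is allowed to read and decrypt it:

```yaml
# Illustrative CodeBuild buildspec fragment: fetch the Snyk token from Parameter Store
# and authenticate the CLI. The parameter name is a placeholder; the CodeBuild role
# needs ssm:GetParameter and, for a SecureString, kms:Decrypt.
version: 0.2
phases:
  install:
    commands:
      - npm install -g snyk
  pre_build:
    commands:
      # Retrieve and decrypt the SecureString parameter holding the Snyk API token
      - SNYK_TOKEN=$(aws ssm get-parameter --name /devsecops/snyk-token --with-decryption --query Parameter.Value --output text)
      - snyk auth "$SNYK_TOKEN"
```

Equivalently, the buildspec env/parameter-store mapping could inject the decrypted value automatically; the explicit CLI call above mirrors the approach described in the article.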

Running these commands in CodeBuild produces log output confirming the successful authentication to Snyk.

Now it’s possible to use the Snyk Code CLI to test the repository; its output lists the potential vulnerabilities found in the codebase.
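A sketch of the corresponding build command (the severity threshold is an illustrative choice, not necessarily the one used in the article):

```yaml
# Illustrative buildspec fragment for the SAST stage: scan the checked-out repository
# with Snyk Code and fail the build on high-severity (or worse) findings.
version: 0.2
phases:
  build:
    commands:
      - snyk code test --severity-threshold=high
```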

Conclusion

We saw together what the DevSecOps principles are, why they matter, and how they can help organizations produce more secure and robust code, along with an enterprise use case in a CI/CD pipeline on AWS.

In your implementation, the stages and tools used may vary based on the input provided and the outcome expected, so it’s important to carry out an accurate analysis of the real potential threats to your system and to react to them.

I want to express my deepest gratitude for your time, curiosity, and engagement.

It is my sincere hope that the insights shared here will be helpful in your DevSecOps journey.
