Using DevSecOps to Strengthen Security on AWS

Spiros P
Published in Reblaze Blog
Jul 4, 2019

Organizations use automation to increase efficiency, quality, and repeatability. They are looking to scale, stay dynamic, and adapt to emerging technologies. But DevOps alone is not enough; DevSecOps is just as important, integrating security into each element of your applications and infrastructure.

We previously covered how DevSecOps is necessary to meet the current challenges of building in the cloud. In this article, three concepts are discussed and applied to AWS specifically:

  • DevSecOps at the Infrastructure: Maintaining security configuration through code. This means using Infrastructure as Code (IaC) instead of manually configuring servers and networks.
  • DevSecOps Through Continuous Integration/Continuous Delivery (CI/CD): Security is implemented in CI/CD pipelines. As builds and deployments are run, scanning, remediation, hardening, and patching are also conducted as part of the process.
  • DevSecOps for Applications: Building automation for security within applications. Rather than post-facto testing for security vulnerabilities on web and app endpoints, tools can be used during the development process to ensure hardened applications function properly prior to deploying to production.

Now let’s dive into securing AWS infrastructure through DevSecOps, and explore how to design airtight security.

DevSecOps at the Infrastructure

Perimeter security at the ingress of your AWS environment is critical. Companies that have moved away from dedicated firewall teams and appliances now use IaC as the change-management mechanism for ports and protocols. IaC is a great first step toward closing the gaps, but sometimes it isn’t enough (more on that later). DevSecOps has become a big part of automatically keeping external threats to a minimum. There are multiple items to consider when securing the perimeter, such as:

Network Access Control Lists (NACLs)

Depending on how flat your subnets are, NACLs may or may not help. If your services, stacks, layers, functions, etc. are in their own subnets, it’s best to close unneeded ports between them. For one, it’s faster to eliminate unnecessary traffic, because packets are dropped before they reach your security groups (which require evaluation time) and then your EC2 resources. Just remember that NACLs are stateless: if you allow inbound TCP connections, you’ll also need to open the ephemeral ports for return traffic. Your DevSecOps engineers should audit your IaC to ensure its NACLs allow only necessary traffic and are fully under code control (especially rule exceptions, which should never live outside of code).
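
As a rough illustration, here is a minimal boto3 sketch (Python) of managing NACL entries in code rather than by hand; the NACL ID, subnet CIDR, and rule numbers are hypothetical placeholders, and in practice this would normally be expressed in your IaC tool:

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # hypothetical NACL ID

# Inbound: allow only HTTPS from the application subnet (hypothetical CIDR).
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",            # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="10.0.1.0/24",
    PortRange={"From": 443, "To": 443},
)

# Outbound: NACLs are stateless, so return traffic needs the ephemeral port range.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,
    CidrBlock="10.0.1.0/24",
    PortRange={"From": 1024, "To": 65535},
)
```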

Security Group Trust Model

Poor security group design can lead to many problems: accidental denial of service, overly complicated rules for determining access, or, worst of all, allowing all access and rendering the security group worthless. Security groups are one of the easiest and most efficient ways to create layers of trust. For example, you can create a security group on the database that trusts the upstream application server’s security group, which in turn trusts the load balancer layer. This ensures only trusted traffic from a trusted path is allowed. Adding an additional security group for remote access (SSH/RDP) is sometimes necessary, but it is best added on request rather than leaving these ports open on all EC2 resources and inviting brute-force attacks.
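
To make the trust chain concrete, here is a small boto3 sketch (Python); the VPC ID, ports, and group names are hypothetical, and the same pattern is just as easy to express in CloudFormation or Terraform:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC

def create_sg(name, description):
    return ec2.create_security_group(
        GroupName=name, Description=description, VpcId=vpc_id
    )["GroupId"]

lb_sg = create_sg("lb-sg", "Load balancer layer")
app_sg = create_sg("app-sg", "Application layer")
db_sg = create_sg("db-sg", "Database layer")

def trust(group_id, source_group_id, port):
    """Allow TCP on `port` only from the upstream security group."""
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": source_group_id}],
        }],
    )

trust(app_sg, lb_sg, 8080)   # app layer trusts only the load balancer
trust(db_sg, app_sg, 5432)   # database trusts only the app layer
```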

AWS Config

Some IaC tools, such as AWS CloudFormation, HashiCorp Terraform, and Red Hat Ansible, only evaluate and correct configuration drift when they are executed. This means that if someone purposely or inadvertently opens access from the perimeter, it won’t be closed until the next run of your IaC tool.

This is where AWS Config can help. AWS Config lets you watch for changes and trigger an IaC run to fix the drift, and it sends an Amazon SNS notification when a watched change occurs. This is a reactive DevSecOps method of closing issues, but it’s a fast and automated one. AWS Config is where your DevSecOps engineers can shine, not only through proactive monitoring in code but also through automated remediation. This ensures changes go through proper (approved) channels, and AWS Config alerts you when they don’t.
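
As a sketch of codifying such a watch, the boto3 snippet below (Python) enables the AWS-managed restricted-SSH rule; the rule name is illustrative, and in practice you would also wire the rule to an SNS topic or a remediation action:

```python
import boto3

config = boto3.client("config")

# Flag any security group that allows unrestricted inbound SSH (0.0.0.0/0 on port 22).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",               # illustrative name
        "Description": "Detect security groups open to the world on port 22",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",  # AWS managed rule
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
    }
)
```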

AWS WAF and AWS Shield

Watching for and protecting your perimeter from application attacks is more difficult than just locking down ports and closing security groups. AWS WAF and AWS Shield provide codifiable tools to restrict access to your environment and help prevent many well-known attacks, such as DDoS. These are excellent tools for basic protection, but they are not full-featured security offerings.
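
As an example of treating this protection as code, the sketch below (Python, boto3) creates a WAF web ACL that attaches the AWS-managed common rule set; the ACL name and scope are hypothetical, and the same definition can live in your IaC templates:

```python
import boto3

wafv2 = boto3.client("wafv2")

wafv2.create_web_acl(
    Name="edge-acl",                 # hypothetical name
    Scope="REGIONAL",                # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-common-rules",
        "Priority": 0,
        "OverrideAction": {"None": {}},
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet",
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "aws-common-rules",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "edge-acl",
    },
)
```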

DevSecOps Through CI/CD

The popularity of CI/CD has skyrocketed in recent years. Ensuring security is an integrated part of those recurring builds and deployments has become just as critical. The areas covered in this section focus on ensuring a secure baseline is released for operating systems, as well as preparing for managing secrets without exposing them accidentally through code.

Golden AMIs and Encryption

One best practice is to keep a secure, configuration-controlled Amazon Machine Image (AMI). This ensures the secure baseline is not only known, but also that each time a server is launched, it meets a minimum required security configuration. Your DevSecOps engineers can ensure compliance and security by using scanning tools like Tenable Nessus, and then remediating discovered vulnerabilities through automation.
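
A minimal boto3 sketch (Python) of cutting a golden AMI from a scanned and remediated instance; the instance ID and naming convention are hypothetical:

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

# Instance that has already been scanned, remediated, and hardened (hypothetical ID).
hardened_instance_id = "i-0123456789abcdef0"

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
image = ec2.create_image(
    InstanceId=hardened_instance_id,
    Name=f"golden-base-{stamp}",     # hypothetical naming convention
    Description="Scanned and hardened baseline image",
    NoReboot=False,                  # reboot for a consistent filesystem snapshot
)
print("Golden AMI:", image["ImageId"])
```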

Encryption on All Volumes (Including Root)

AWS makes it easy to create your own KMS encryption keys. As a good practice, create separate keys per service and environment. Keys can be set to rotate automatically every year, and AWS maintains the history of key material so older data can still be decrypted.
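
For example, a short boto3 sketch (Python) of creating a per-service key and enabling annual rotation; the alias is a hypothetical naming convention:

```python
import boto3

kms = boto3.client("kms")

# Create a dedicated key for one service/environment.
key = kms.create_key(Description="prod RDS volume encryption")
key_id = key["KeyMetadata"]["KeyId"]

# Friendly alias (hypothetical naming convention) and yearly automatic rotation.
kms.create_alias(AliasName="alias/prod-rds", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)
```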

Whether you start from an AWS Marketplace AMI or one of your own custom AMIs, you can encrypt the root Amazon EBS volume by copying the AMI and choosing to encrypt it with a key. You can even copy that encrypted AMI between accounts. All additional Amazon EBS volumes should be encrypted by default as well (especially since EBS encryption is free and has negligible performance impact).
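
A brief boto3 sketch (Python) of both steps; the source AMI, region, and key alias are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copy an unencrypted AMI into an encrypted one using a custom KMS key.
copy = ec2.copy_image(
    Name="golden-base-encrypted",
    SourceImageId="ami-0123456789abcdef0",   # hypothetical source AMI
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="alias/prod-rds",               # hypothetical key alias
)
print("Encrypted AMI:", copy["ImageId"])

# Make every new EBS volume in this account/region encrypted by default.
ec2.enable_ebs_encryption_by_default()
```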

Encryption on Amazon S3

Like Amazon EBS volumes, Amazon S3 buckets can be encrypted with either the default AWS-managed key or a custom KMS key. Enabling encryption on each bucket protects its contents via server-side encryption (SSE). Amazon S3 also supports client-side encryption (CSE), but this relies on the application or sender to encrypt the data and can be forgotten. At a minimum, when creating an Amazon S3 bucket and configuring its settings through your IaC tool of choice, turn on default encryption and assign the bucket a custom KMS key.
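
As a sketch, setting default SSE-KMS encryption on a bucket with boto3 (Python); the bucket name and key alias are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-prod-data",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/prod-s3",  # hypothetical key alias
            },
            "BucketKeyEnabled": True,  # reduces KMS request costs
        }]
    },
)
```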

Patched and Hardened Instances

DevSecOps engineers should be involved in any automation that concerns patching and hardening to meet security standards such as HIPAA, GDPR, CIS, or SOC. Patching can be straightforward: operating system vendors publish marketplace AMIs that are frequently updated with patches. Querying for the most recent AMI at server creation time is good, but taking a marketplace AMI, patching it completely, applying hardening scripts, and then cutting a new AMI is even better. That way you know your secure baseline, and it’s much faster to bring hardened instances online. This is especially true for Microsoft Windows servers, considering the time required to patch and reboot.
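
A small boto3 sketch (Python) of querying for the most recent vendor AMI before launch; the owner and name filter shown here target Amazon Linux 2 and are only an example:

```python
import boto3

ec2 = boto3.client("ec2")

images = ec2.describe_images(
    Owners=["amazon"],  # vendor-owned images only
    Filters=[
        {"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]},  # example filter
        {"Name": "state", "Values": ["available"]},
    ],
)["Images"]

# Pick the newest image by creation date, then feed it into your launch automation.
latest = max(images, key=lambda img: img["CreationDate"])
print(latest["ImageId"], latest["Name"])
```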

Using AWS Secrets Manager and AWS Systems Manager Parameter Store

If you’re using Amazon’s database services, such as Amazon Relational Database Service (Amazon RDS), Amazon DynamoDB, or Amazon Redshift, taking advantage of AWS Secrets Manager is a good way to have your database passwords auto-rotated and maintained in an encrypted location. Additional coding through AWS Lambda can extend this service to other secret types, such as API keys and OAuth tokens.
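
Retrieving such a secret at runtime is a one-call operation; here is a minimal boto3 sketch (Python) with a hypothetical secret name:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current version of a rotated database credential (hypothetical name).
response = secrets.get_secret_value(SecretId="prod/rds/app-db")
credentials = json.loads(response["SecretString"])

username = credentials["username"]
password = credentials["password"]
```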

A far cheaper alternative is AWS Systems Manager Parameter Store, which can hold passwords, keys, and other data encrypted with KMS. Secrets should never be stored in code, and some IaC tools will expose secrets if they aren’t handled properly. Also, rather than hard-coding sensitive variables in jobs or functions, use the AWS CLI or AWS SDK to pull these secrets from Parameter Store dynamically.
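
For instance, a short boto3 sketch (Python) of writing and reading a SecureString parameter; the parameter path and key alias are hypothetical:

```python
import boto3

ssm = boto3.client("ssm")

# Store the secret once, encrypted with a custom KMS key (hypothetical path and alias).
ssm.put_parameter(
    Name="/prod/api/signing-key",
    Value="s3cr3t-value",
    Type="SecureString",
    KeyId="alias/prod-params",
    Overwrite=True,
)

# At runtime, jobs and functions pull and decrypt it instead of hard-coding it.
value = ssm.get_parameter(
    Name="/prod/api/signing-key", WithDecryption=True
)["Parameter"]["Value"]
```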

DevSecOps for Applications

There are many tactics for securing applications through automation. Here, we will cover two architecture methods that can be used to increase application repeatability and durability while reducing the threat area and chance for compromise.

Immutable Servers

Once your servers are patched and hardened, the next step is to throw away the key. By doing this, your DevSecOps automation will remove any SSH keys from Linux servers and scramble administrator passwords for Microsoft Windows servers. This will greatly reduce login exploits.

Note that achieving this requires a lot of work and an automation framework to support managing an environment without issues. Having all logs piped from servers to ELK stacks, Datadog, Graphite, or other logging and monitoring solutions is a must. So is ensuring that the configured applications are operating statelessly. Once you throw away the keys, there should be no backdoors or access available. If there is a server issue, it should be terminated, and another should be launched in an auto-scaling group to replace it.

Lastly, removing ports 22 (SSH) and 3389 (RDP) from security groups and NACLs eliminates this attack vector both externally and internally. Regardless, no edge resources (outside of VPN servers, jump boxes, or bastion hosts) should have login ports open externally.
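
As a sketch of enforcing this in code, the boto3 snippet below (Python) revokes world-open SSH and RDP rules from a security group; the group ID is hypothetical, and revocation only takes effect when an entry matches an existing rule exactly:

```python
import boto3

ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"  # hypothetical security group

# Remove any world-open SSH/RDP rules from the group.
ec2.revoke_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```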

Containers

At this point, your perimeter, operating system, and servers are locked down through automation. There are further measures you can take to harden your application, but one immediate improvement for isolating workloads is to use containers. Containers allow for greater stability and better resource utilization, in addition to their security benefits.

When adopting containers, AWS provides features that eliminate the need to harden the servers yourself. Using Amazon ECS with AWS Fargate, or Amazon EKS (Kubernetes), gives you ways to deploy containers without worrying about the underlying system. This significantly reduces your security automation footprint to the container builds in your CI/CD pipelines. There are also best practices for hardening containers that should be part of your DevSecOps automation.
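
To illustrate, here is a minimal boto3 sketch (Python) of registering and running a task on Fargate so no EC2 host needs hardening; the cluster name, image URI, IAM role, subnet, and security group IDs are all hypothetical placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Register a small task definition with basic container hardening settings.
ecs.register_task_definition(
    family="hardened-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[{
        "name": "api",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/api:1.0.0",   # hypothetical
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "readonlyRootFilesystem": True,   # container hardening: immutable filesystem
        "user": "1000",                   # run as a non-root user
    }],
)

# Launch it on Fargate; AWS manages and patches the underlying hosts.
ecs.run_task(
    cluster="prod",
    launchType="FARGATE",
    taskDefinition="hardened-api",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }},
)
```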

Summary

When it comes to securing infrastructure and applications, DevSecOps plays a critical role in the automation solution of any business. DevSecOps engineers can use IaC tools to maintain a security perimeter, configure a secure baseline, approve security changes, control security tools in CI/CD pipelines, write code that closes vulnerabilities in the cloud, and isolate application workloads. DevSecOps engineers should consider reducing their threat boundary by offloading the responsibility for servers if containers are a possibility.

Also, for a robust security posture, protection must be provided not only for the infrastructure, but also for your web applications and APIs. Incoming web traffic must still be scrubbed, and the security solution that does this must run natively on AWS, be cloud-native and able to support your CI/CD and DevSecOps practices, and have other necessary features as well. To protect your web assets with a comprehensive web security solution that fulfills all these requirements, consider Reblaze.

Get more information or request a demo here.
