5 Simple AWS Security Tips

Ryan Baker
Published in Singularity
4 min read · Dec 2, 2022


If you’re building a business in the AWS cloud that needs to be secure (i.e. any business these days), there are some easy, but non-obvious things you can do to make sure your security is buttoned down and ready to sell confidently to enterprise customers.

Here are five simple things to do to make sure you are not opening yourself to any unnecessary risks without having to change much (or any!) of your application code.

1. Drop the “NET_RAW” capability in your ECS tasks

As with all things in AWS, networking is difficult to manage and easy to get wrong. This is one of those settings that can leave you very vulnerable to an attack but is easy to patch. The NET_RAW capability is granted by default in your task definitions, so you need to explicitly drop it.

What it does: this ensures that containers running in the same Docker network cannot craft raw packets and send them to each other. If one of your containers is taken over by a bad actor, you want to make sure it cannot use raw packets to attack another container on the network.

How to do it: simply add the following line to your container definitions inside of your task definition:

...
"linuxParameters": {
    "capabilities": {
        "drop": ["NET_RAW"]
    }
},
...

2. Docker containers should not use the root user

In the same vein, you should limit the permissions of a container in case it is compromised: for example, remove its ability to install packages via the container’s package manager. Taking away the ability to run arbitrary privileged commands limits the attack vectors available to an intruder.

What it does: this ensures that your containers cannot run arbitrary privileged commands if they become compromised. Just as you would not give root access to any instance in your cloud, you should not give root access to any container in your fleet.

How to do it: first, you need to make sure a non-root user exists in the image. Add the following couple of lines to your Dockerfile when building the image:

FROM python:3.11

...
# your logic
...


RUN groupadd -r your_username && useradd --no-log-init -r -g your_username your_username
RUN mkdir -p /home/your_username && chown -R your_username /home/your_username && chown -R your_username $APP_DIR
# assumes APP_DIR is where your application code is

Then, you need to tell the container definition in the task definition to use that user when running the container:

...
"user": "your_username",
...
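If you manage your infrastructure with Terraform, the task-definition settings from tips 1 and 2 can be sketched together. This is a minimal sketch, not a complete task definition — the resource name, family, and image URI are placeholders:

```hcl
# Sketch: ECS task definition applying tips 1 and 2.
# "app" and the image URI are placeholders for your own values.
resource "aws_ecs_task_definition" "app" {
  family = "app"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "your-registry/your-image:latest"
      essential = true

      # Tip 2: run as the non-root user created in the Dockerfile
      user = "your_username"

      # Tip 1: forbid crafting raw packets inside the container
      linuxParameters = {
        capabilities = {
          drop = ["NET_RAW"]
        }
      }
    }
  ])
}
```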

3. SQS queues should have server-side encryption

You will never be as secure as AWS itself. That said, there are options to make your own AWS resources more secure than they are by default. One of those options is telling SQS to encrypt messages at rest.

What it does: this ensures that while messages sit on the queue (i.e. between the queue receiving a message and delivering it to a consumer), they are encrypted at rest, with negligible performance impact. Encryption at rest is an important checkbox for SOC2 compliance.

How to do it: in the AWS console, find the queue you want to encrypt, click “Edit”, then enable server-side encryption under the “Encryption” section.

SSE-KMS gives you the most control, but SQS-managed SSE (SSE-SQS) is the simplest option that checks the “encrypted at rest” box

Alternatively, if you use Terraform, check the docs for enabling SSE on your queue.
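For reference, a minimal Terraform sketch of both encryption options (the queue names are placeholders):

```hcl
# Simplest option: SQS-managed server-side encryption (SSE-SQS)
resource "aws_sqs_queue" "example" {
  name                    = "example-queue"
  sqs_managed_sse_enabled = true
}

# More control: SSE-KMS, here using the AWS-managed key for SQS
resource "aws_sqs_queue" "example_kms" {
  name              = "example-queue-kms"
  kms_master_key_id = "alias/aws/sqs"
}
```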

4. S3 bucket server-side encryption should be enabled

For the same reason you encrypt the SQS queues, you also want to enable encryption of the S3 buckets.

What it does: this ensures that new objects in the bucket are encrypted before being stored on disk and decrypted when being retrieved from disk. Encryption at rest is an important checkbox for SOC2 compliance.

How to do it: in the AWS console, find the bucket you want to have SSE. Click on the bucket name, click on the “Properties” tab, scroll down until you see “Default encryption”, then click “Edit” and enable server-side encryption.

SSE-KMS gives you the most control, but SSE-S3 (AES-256) is the simplest option that checks the “encrypted at rest” box

Alternatively, if you use Terraform, check the docs for enabling SSE on your bucket.
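For reference, a minimal Terraform sketch of default bucket encryption (the bucket name is a placeholder):

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

# Default encryption with SSE-S3; swap sse_algorithm for "aws:kms"
# and add kms_master_key_id to use SSE-KMS instead.
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```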

5. RDS instances should be encrypted

For the same reason you encrypt the SQS queues, you also want to enable encryption of the RDS instances.

What it does: this ensures that the contents of the RDS instance’s disk are encrypted before being written to disk and decrypted when being read from disk. Encryption at rest is an important checkbox for SOC2 compliance.

How to do it: unfortunately, if you did not enable encryption when you created the RDS instance, there is no way to modify the instance to enable it later. AWS has a guide on migrating to an encrypted database if you have an unencrypted instance running. If your application can tolerate downtime, that’s a good path to take. Otherwise, you will have to use a dual-write approach, where your application code writes to both databases and switches reads over to the encrypted one once it has caught up.
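For any new instance, it is one flag at creation time. A minimal Terraform sketch (the identifier, engine, and `var.db_password` variable are placeholder assumptions):

```hcl
# Encryption must be set when the instance is created;
# it cannot be toggled on an existing instance.
resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  password          = var.db_password  # assumed variable
  storage_encrypted = true             # uses the default aws/rds KMS key
}
```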

In this post, we’ve covered 5 simple, but non-obvious, security holes you can easily close. We learned about these holes by undergoing a third-party audit, but thought that everybody would benefit from knowing about them. Interested in more of what we’re doing? Think we’re amateurs and want to show us a thing or two? Stay up to date with when we’re hiring⚡️
