Enforcing AWS S3 Security Best Practices Using Terraform & Sentinel

Yulei Liu
HashiCorp Solutions Engineering Blog
4 min read · Apr 3, 2020

We often hear news of apps and companies suffering data breaches because of insecure AWS S3 buckets. Although AWS published its S3 security best practices a few years ago, these breaches still happen and make headlines. One reason is that S3 buckets created through the Amazon S3 Management Console are subject to human error.

HashiCorp’s Terraform uses infrastructure as code to avoid such human error. Terraform Cloud/Enterprise uses Sentinel to implement governance as code, ensuring that provisioned resources comply with corporate security and operational requirements.

Terraform Enterprise/Cloud provides a secure and reliable infrastructure-as-code pipeline with preventative guardrails that ensure resources generated through it stay in line with your organization’s security and operational guidelines. The diagram below illustrates this idea:

This blog shows you how to use Terraform Enterprise/Cloud to govern the AWS S3 provisioning process and ensure that all S3 buckets provisioned by Terraform comply with AWS’s published S3 security best practices. It is aimed at DevOps engineers with a security mindset.

Prerequisites

  1. A Terraform Enterprise/Cloud account. If you don’t have one, you can apply for a trial Terraform Cloud account here.
  2. An AWS account to provision S3 buckets. (Provisioning S3 buckets by itself won’t cost you any money.)
  3. A GitHub account.

Steps

  1. Once you have a free Terraform Cloud account, create an organization, a VCS connection, a repository in your VCS for your test S3 code, and a workspace connected to that repository and configured with your AWS credentials.
  2. You can start a trial plan that includes Terraform Cloud Governance in the Plan and Billing screen of your organization.

  3. Connect a policy set to a fork of this repository and select the workspaces that you want to govern.

You can find instructions for connecting Terraform Cloud and Terraform Enterprise organizations to VCS repositories here.

Connecting a policy set

You can also edit sentinel.hcl in your own fork to select the policies you want to use and their enforcement levels.
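
For example, to make the public read-write check blocking while demoting the versioning check to a warning, your fork’s sentinel.hcl could contain something like this (a sketch; pick the levels that match your own risk tolerance):

policy "disallow-s3-acl-public-read-write" {
  enforcement_level = "hard-mandatory" // a failure always blocks the run
}
policy "enforce-s3-versioning-enabled-true" {
  enforcement_level = "advisory" // logs a warning but never blocks
}

Sentinel supports three enforcement levels: advisory, soft-mandatory (a user with sufficient permissions can override a failure), and hard-mandatory (a failure can never be overridden).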

  4. Trigger a plan or run of your workspace and see the result.
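
If you are building the test repository from scratch, a minimal configuration like the following is enough for the example below to plan (a sketch; the region is an assumption, and the AWS credentials come from the environment variables you set on the workspace):

terraform {
  required_version = ">= 0.12"
}

provider "aws" {
  region = "us-east-1" // any region works; S3 bucket names are global
}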

For example, running a plan against this Terraform code:

resource "aws_s3_bucket" "bucket-public-read-write-acl" {
bucket = "bucket-public-read-write-acl"
acl = "public-read-write"
tags = {
owner = "yulei"
}
}

will result in this:

Policy check failed

Detailed Explanation

In the example above, we try to create an AWS S3 bucket with the acl property set to one of the canned ACLs, "public-read-write". When we run a plan, Terraform Cloud sends the plan output to Sentinel for policy checking. In our policy set configuration file, sentinel.hcl, we have specified seven policies, all set to soft-mandatory:

policy "allow-s3-private-only" {
  enforcement_level = "soft-mandatory"
}
policy "disallow-s3-acl-public-read-write" {
  enforcement_level = "soft-mandatory"
}
policy "disallow-s3-acl-public-read" {
  enforcement_level = "soft-mandatory"
}
policy "enforce-s3-versioning-mfa-delete-enabled-true" {
  enforcement_level = "soft-mandatory"
}
policy "enforce-s3-versioning-enabled-true" {
  enforcement_level = "soft-mandatory"
}
policy "enforce-s3-server-side-encryption-enabled-true" {
  enforcement_level = "soft-mandatory"
}
policy "enforce-s3-logging-true" {
  enforcement_level = "soft-mandatory"
}

Any new or updated resource will be checked before the plan can be applied. This preventative behavior is better than finding non-compliant resources after the fact.
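
To see all seven policies pass, you can flip the example around. A bucket like the following should satisfy each check (a sketch written against the pre-4.0 AWS provider syntax used in this post; the bucket name and the pre-existing logging target are assumptions):

resource "aws_s3_bucket" "compliant-bucket" {
  bucket = "compliant-bucket-yulei" // hypothetical globally unique name
  acl    = "private"                // satisfies the ACL policies

  versioning {
    enabled    = true // enforce-s3-versioning-enabled-true
    mfa_delete = true // enforce-s3-versioning-mfa-delete-enabled-true; applying this also requires the root account’s MFA device
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms" // enforce-s3-server-side-encryption-enabled-true
      }
    }
  }

  logging {
    target_bucket = "my-existing-log-bucket" // assumed to already exist; enforce-s3-logging-true
    target_prefix = "log/"
  }

  tags = {
    owner = "yulei"
  }
}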

Let’s do a detailed review of one of the policies, disallow-s3-acl-public-read-write.sentinel:

import "tfplan/v2" as tfplan

This statement allows the policy to use the Terraform Sentinel tfplan/v2 import, which contains data from the plan.
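
Each element of tfplan.resource_changes is keyed by the resource address and describes one planned change. For our example bucket, the fields this policy reads would look roughly like this (an illustrative sketch, not the full schema):

tfplan.resource_changes["aws_s3_bucket.bucket-public-read-write-acl"] = {
  "type": "aws_s3_bucket",
  "mode": "managed",
  "change": {
    "actions": ["create"],
    "after": {
      "acl":    "public-read-write",
      "bucket": "bucket-public-read-write-acl",
      // ...the other planned attribute values
    },
  },
}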

// find all aws_s3_bucket resources that have acl = "public-read-write"
violatingS3Buckets = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_s3_bucket" and
  rc.mode is "managed" and
  (rc.change.actions contains "create" or
    rc.change.actions contains "update") and
  rc.change.after.acl in ["public-read-write"]
}

The filter expression selects a subset of all resources referenced in the plan. In our example, it finds every aws_s3_bucket resource that will be created or updated and whose ACL property is explicitly set to "public-read-write".
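
Because the last condition tests the ACL against a list with the in operator, widening the same filter to catch both public canned ACLs is a one-line change (a sketch; the sample repository instead handles "public-read" in its separate disallow-s3-acl-public-read policy):

violatingS3Buckets = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_s3_bucket" and
  rc.mode is "managed" and
  (rc.change.actions contains "create" or
    rc.change.actions contains "update") and
  rc.change.after.acl in ["public-read-write", "public-read"]
}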

// print out the address of each non-compliant bucket
for violatingS3Buckets as address, bucket {
  print(address + "'s acl is : " + bucket.change.after.acl +
    ", this is not compliant.")
}

The above for loop prints the address of each violating S3 bucket.
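
For the plan in this example, the printed message would read:

aws_s3_bucket.bucket-public-read-write-acl's acl is : public-read-write, this is not compliant.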

main = rule {
  length(violatingS3Buckets) == 0
}

The last statement, the main rule, validates that the number of violating S3 buckets is zero; this rule determines whether the policy passes or fails.
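
Before wiring a policy into Terraform Cloud, you can also exercise it locally with the Sentinel CLI. A minimal failing test case might look like this (a sketch; it assumes a hand-written or generated tfplan/v2 mock file named mock-tfplan-fail.sentinel):

// test/disallow-s3-acl-public-read-write/fail.hcl
mock "tfplan/v2" {
  module {
    source = "mock-tfplan-fail.sentinel"
  }
}

test {
  rules = {
    main = false // with a public-read-write bucket in the mock, main must fail
  }
}

Running sentinel test from the policy directory then evaluates every case under test/<policy-name>/.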

This example is simple yet powerful. All code and examples from this blog can be found in this repository. Please feel free to collaborate with me there and make your S3 buckets more secure.
