Leveraging AWS for Incident Response: Part 2

Tstillz
Nov 30, 2018 · 12 min read

In my previous post (https://medium.com/@tstillz17/leveraging-aws-for-incident-response-part-1-2963bb31bc05) we covered how AWS resources such as S3 can be used to quickly spin up storage, lock down access to that storage and provision users in the AWS console. In this post, we’re going to cover how to automate this process. Before we begin, let’s review some common issues with the manual process of using the AWS console to provision and manage AWS resources:

  • Time to provision: If you’re new to AWS, using the AWS console to provision the S3 bucket, bucket policy and IAM user account with programmatic access may take ~30 minutes; for those who are more familiar, ~10 minutes.
  • Standardization: When using the AWS console, simple copy/paste errors may occur, which can expose the bucket to the wrong customer (or even to the public). First, ensure bucket names are consistent for all customers. A defined naming convention should be used that is unique for each engagement, as one customer can have multiple engagements; entering names manually is subject to human error. Second, ensure the right policy is assigned to the right customer bucket, with the proper permissions; entering policy permissions manually is also subject to human error. Third, ensure access keys are given to the right customer. Copy/pasting credentials with the wrong bucket path may expose the data to the wrong customer.
  • Scale: As you start to scale out your response, it can become difficult to manage multiple customer buckets, policies and keys.
  • Deprovisioning: Logging into the AWS console manually to destroy resources, while ensuring you destroy the proper ones, is a time-consuming task that’s prone to error.
  • Data retention/life cycle: When dealing with sensitive data related to incident response, you may encounter requests to hold customer data for longer periods of time. You may wish to apply longer retention to these customers’ buckets while others can be destroyed completely.
  • Logging: It’s important to know who has accessed objects in a customer bucket, when and how. Logging can be configured on an S3 bucket and delivered to the customer upon request after the investigation, if required.

To solve these issues (and many others), we can leverage a free application called Terraform.

About Terraform

Per the HashiCorp website, “HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure.” This approach is commonly called “Infrastructure as Code”. Using Terraform allows your IR team to quickly write your infrastructure as code, then review, plan and deploy without the need to log into AWS. Terraform works with many cloud providers, such as AWS, Azure and Google Cloud, to name a few. A full list of providers can be found here: https://www.terraform.io/docs/providers/.

Two key parts to Terraform we will discuss are:

  • Modules
  • S3 encrypted state storage backend

Modules

To take advantage of code reuse, Terraform uses modules, which can be imported into your code base and help keep it organized. For our use case, we will create an S3 module that defines a customer S3 bucket, and an IAM module that creates a customer user account and defines what permissions to assign to that user. Once these modules are defined, you can reuse them across all customers, ensuring that bucket names are consistent, permissions are correct, keys and bucket paths are given to the right customer, and the proper data retention and logging are set up on the bucket.

Terraform backend

To ensure the state of your AWS infrastructure is saved, Terraform uses a .tfstate file. This file holds the state of all AWS resources and their metadata, which may contain keys and passwords. The state file is used to track changes to your environment when performing operations such as “terraform apply” or “terraform destroy”. Potential changes can be reviewed before committing them to AWS using the command “terraform plan”. To secure this file, we will use an encrypted S3 backend to prevent any direct access to or viewing of it.

Getting Started with Terraform

To begin, we need to download terraform: https://www.terraform.io/downloads.html. Once terraform is downloaded, you can begin using it immediately.

It’s important to keep Terraform updated. I can’t tell you the number of times a simple update fixed a Terraform error.

Checking your Terraform version is as simple as running terraform -v from your command prompt. If Terraform is out of date, you’ll see the following standard output from your terminal.

$ terraform -v
Terraform v0.11.7

Your version of Terraform is out of date! The latest version
is 0.11.10.

After you have the latest version of Terraform, we need to configure Terraform to talk to AWS and use the S3 backend. To keep things simple, we will create an IAM account with the AdministratorAccess managed policy attached. This can be done in the AWS console by navigating to the IAM section and clicking the Add User button. At the new user prompt, type in the name of the user, select Programmatic access, then click Next: Permissions.

At the permissions section, we can simply choose an existing policy called AdministratorAccess, as outlined below:

After clicking through the remaining options, copy the Access Key ID and Secret Access Key shown on the last screen, as we will be using these in later steps. While we’re in the AWS console, also create an S3 bucket named terraform-dev-mytest-<date>. (We covered this in my prior post here: https://medium.com/@tstillz17/leveraging-aws-for-incident-response-part-1-2963bb31bc05)

Setting up AWS CLI

Now that we have our IAM account setup and S3 bucket created, we can now install and setup the AWS CLI. The AWS CLI bundled installer instructions can be found here: https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-bundle.html. Once installed, you should be able to run the command aws configure. If successful, you should see the following options below:

AWS Access Key ID:
AWS Secret Access Key:
Default region name [None]:
Default output format [None]:

Enter the Access Key ID and Secret Access Key we created earlier. For this demo, we can leave the region name and output format empty by simply pressing Enter. The reason we’re configuring the AWS CLI this way is that aws configure writes the credentials to ~/.aws/credentials (and settings to ~/.aws/config), which Terraform reads and uses when connecting to AWS.
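For reference, aws configure stores these values in an INI-style shared credentials file at ~/.aws/credentials. With placeholder keys, it looks roughly like this:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY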

Creating the base Terraform project

Now that we have Terraform and the AWS CLI configured, we can create our base Terraform project to automate the customer S3 bucket creation and locked-down IAM user that we set up manually in the prior blog post.

To begin, let’s create a project folder called terraform_dev on your workstation to hold our Terraform project. Inside this folder, let’s create the following files: main.tf and us-east-1.tf. I’ll explain each file below:

main.tf

Inside the main.tf, we will use the following code:

terraform {
  backend "s3" {
    bucket  = "terraform-dev-mytest-<date>"
    key     = "terraform-dev.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}

The code above tells Terraform to store our tfstate file in an S3 bucket called “terraform-dev-mytest-<date>” under the key “terraform-dev.tfstate”. We also set encrypt to true so the file’s contents are encrypted.

us-east-1.tf

With the main.tf file created, we can move on to the us-east-1.tf file. Since we haven’t set up our S3 and IAM customer modules yet, the only contents we will place into this file are as follows:

provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

This code tells Terraform to use the AWS provider with the region set to us-east-1. Having a Terraform file per region allows you to place customer data and resources in the proper region, whether for data privacy restrictions or for speed and optimization. With our two Terraform files created, we can now initialize the Terraform backend using the command terraform init. If successful, you will see output similar to the following:
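The exact wording varies between Terraform versions, but a successful init against the S3 backend ends with a message along these lines:

$ terraform init

Initializing the backend...
Initializing provider plugins...

Terraform has been successfully initialized!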

If you’d like an IDE to help with Terraform syntax and project structure, you can use IntelliJ’s GoLand. Just install the Terraform plugin called HashiCorp Terraform / HCL language support and restart the IDE. The plugin can be found under GoLand > Preferences > Plugins.

Creating the S3 module

With our backend initialized, we can proceed with creating our customer S3 module. This module will become our reusable template for deploying new, locked-down customer S3 buckets with enforced standards such as naming convention, encryption and destruction options. To begin, let’s create a folder called modules inside our project folder. From there, we will create another folder called customer_s3. Each module we build will contain three files:

  • <module_name>.tf
  • vars.tf
  • outputs.tf

Let’s take a look at the first file vars.tf below:

This file holds the variables that will be passed to our module. In this case, the customer alias will be passed from our main file us-east-1.tf as a parameter to the module. We will cover this more in later steps. The second file, outputs.tf, defines what values the module should return after it is used. This is valuable when one module depends on another, or for printing output to the console (such as the bucket ARN, or “Amazon Resource Name”, and user keys). The last file our module needs is the module code itself, held in <module_name>.tf, or in our case customer_s3.tf. This file defines the standard for how a customer bucket should be created, what server-side encryption to use and how the bucket should be destroyed, as outlined below:
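A minimal sketch of the three files might look like the following. The variable, output and resource names here are illustrative assumptions, and the empty aws provider block at the top of customer_s3.tf is the Terraform 0.11 “proxy” configuration that lets the calling file pass in an aliased, region-specific provider:

vars.tf

variable "customer_alias" {
  description = "Unique alias used to build the customer bucket name"
}

outputs.tf

output "bucket_arn" {
  value = "${aws_s3_bucket.customer_bucket.arn}"
}

customer_s3.tf

# Proxy provider configuration; the caller supplies the real aws.use1 provider.
provider "aws" {
  alias = "use1"
}

resource "aws_s3_bucket" "customer_bucket" {
  provider      = "aws.use1"
  bucket        = "ir-${var.customer_alias}"
  acl           = "private"
  force_destroy = true

  # Encrypt bucket contents at rest with SSE-S3 (AES256).
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

Here, force_destroy = true lets Terraform delete the bucket even if it still contains objects; whether you want that behavior depends on your data retention requirements for each customer.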

You may be wondering why each module has a provider block at the top. When performing incident response, you must be able to support the creation of buckets across regions. Allowing each module to take a provider as a parameter lets us pass in the provider for the appropriate region; we will show this in the next section. For this simple use case, we’re only using the bare minimum arguments for the aws_s3_bucket resource. You can view the other arguments and their definitions in the Terraform AWS provider documentation.

Creating the IAM module

Now that we have created a module that defines how our customer bucket will be created, we need another module that creates a customer user account and defines an IAM policy that limits the user’s access to only their S3 bucket, with only the permissions they need. To do this, we create another folder in our modules directory called customer_iam and add our new module files:

  • outputs.tf
  • vars.tf
  • customer_iam.tf
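As with the S3 module, the following is only a sketch. The resource names, the ir- naming convention, the customer_bucket_arn variable and the specific S3 actions granted are illustrative assumptions:

vars.tf

variable "customer_alias" {}
variable "customer_bucket_arn" {}

outputs.tf

output "access_key_id" {
  value = "${aws_iam_access_key.customer_key.id}"
}

output "secret_access_key" {
  value = "${aws_iam_access_key.customer_key.secret}"
}

customer_iam.tf

# Proxy provider configuration, as in the S3 module.
provider "aws" {
  alias = "use1"
}

# Customer IAM user with programmatic access only.
resource "aws_iam_user" "customer_user" {
  provider = "aws.use1"
  name     = "ir-${var.customer_alias}"
}

resource "aws_iam_access_key" "customer_key" {
  provider = "aws.use1"
  user     = "${aws_iam_user.customer_user.name}"
}

# Inline policy limiting the user to their own bucket.
resource "aws_iam_user_policy" "customer_policy" {
  provider = "aws.use1"
  name     = "ir-${var.customer_alias}-s3"
  user     = "${aws_iam_user.customer_user.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "${var.customer_bucket_arn}"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "${var.customer_bucket_arn}/*"
    }
  ]
}
EOF
}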

This is a very basic example; additional parameters for aws_iam_user, aws_iam_access_key and aws_iam_user_policy can be found in the Terraform AWS provider documentation.

Putting it all together

Now that both modules are created, we can put them to use. Let’s open up our us-east-1.tf file again and create our new customer bucket, user and policy, as outlined below:
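Using the hypothetical module, variable and output names from the sketches above, us-east-1.tf might end up looking something like this (in Terraform 0.11 the aliased provider is handed to each module through the providers map):

provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

module "customer1_s3" {
  source         = "./modules/customer_s3"
  customer_alias = "customer1"

  providers = {
    "aws.use1" = "aws.use1"
  }
}

module "customer1_iam" {
  source              = "./modules/customer_iam"
  customer_alias      = "customer1"
  customer_bucket_arn = "${module.customer1_s3.bucket_arn}"

  providers = {
    "aws.use1" = "aws.use1"
  }
}

# Print the values the customer needs once "terraform apply" completes.
output "customer1_bucket_arn" {
  value = "${module.customer1_s3.bucket_arn}"
}

output "customer1_access_key_id" {
  value = "${module.customer1_iam.access_key_id}"
}

output "customer1_secret_access_key" {
  value = "${module.customer1_iam.secret_access_key}"
}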

As you can see from the snippet above, we have defined our provider as aws using the region us-east-1. This leaves us free to add new Terraform files in the future for other regions, such as eu-west-3. In this file, we also import our new modules using the module syntax, pointing to each module’s path with the source parameter and supplying any arguments the module requires.

In the end, our project structure should look like the following:
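Assuming the file and folder names used so far, the layout of terraform_dev is roughly:

terraform_dev/
├── main.tf
├── us-east-1.tf
└── modules/
    ├── customer_s3/
    │   ├── customer_s3.tf
    │   ├── vars.tf
    │   └── outputs.tf
    └── customer_iam/
        ├── customer_iam.tf
        ├── vars.tf
        └── outputs.tf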

Now that the code is completed, we must tell Terraform to import our modules so they are recognized. To do this, you simply type in the Terraform command terraform get in the console. Your output should look similar to the image below:

Awesome, now that our modules are imported, we can do a dry run and see what our AWS infrastructure will look like before committing our changes. This can be accomplished by running the command terraform plan. Your output should look similar to the image below:

The important part of this output, aside from the module outputs, is the Plan: segment at the bottom, which shows Plan: 4 to add, 0 to change, 0 to destroy. It’s important to check these changes before moving forward and ensure you’re adding/removing the proper resources and parameters. For awareness, the console also color codes changes, as outlined below:

  • Green (Add)
  • Yellow (Change)
  • Red (Destroy)

If everything looks good, you can proceed with the next command, terraform apply, to let Terraform provision the new resources. terraform apply will do two things:

  1. Show you the same output as terraform plan for a last-chance review
  2. Ask for your confirmation before applying these changes to your infrastructure

If you agree with the changes, type yes to begin provisioning your new customer resources. Once completed, the final output will look like the following:

As stated above, review the output of your apply command and ensure the proper number of resources were created. Any errors will be shown in red. You will also see the following items in the outputs:

The outputs will contain your customer’s bucket ARN, access key ID and secret access key, which the customer can use to authenticate to their bucket using either the AWS CLI or tools like Cyberduck.
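Assuming the hypothetical output names from the earlier sketches, the tail of a successful apply looks roughly like this (all values below are placeholders):

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

customer1_access_key_id = AKIAIOSFODNN7EXAMPLE
customer1_bucket_arn = arn:aws:s3:::ir-customer1
customer1_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY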

As we saw in the last blog post, keeping track of all the customer buckets, policies and users at scale can become a tedious task.

You don’t want to delete the wrong bucket, policy or user account. The beauty of Terraform is that to destroy resources, you just delete or comment out the relevant code, then plan and apply your changes. Done! You should never have to log into the AWS console again. To try this out, comment out the following code in your us-east-1.tf file:
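Continuing with the hypothetical us-east-1.tf from earlier, the commented-out section would look like this (any outputs that reference these modules must be commented out as well, otherwise the plan will fail):

# module "customer1_s3" {
#   source         = "./modules/customer_s3"
#   customer_alias = "customer1"
#
#   providers = {
#     "aws.use1" = "aws.use1"
#   }
# }
#
# module "customer1_iam" {
#   source              = "./modules/customer_iam"
#   customer_alias      = "customer1"
#   customer_bucket_arn = "${module.customer1_s3.bucket_arn}"
#
#   providers = {
#     "aws.use1" = "aws.use1"
#   }
# }
#
# ... and the customer1_* output blocks.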

Now that the code is commented out, type in the command terraform plan to review the changes. You should see output stating that the customer bucket and IAM resources will be destroyed:
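With the four resources from this example in state, the end of the plan output reads roughly:

  - module.customer1_iam.aws_iam_access_key.customer_key
  - module.customer1_iam.aws_iam_user.customer_user
  - module.customer1_iam.aws_iam_user_policy.customer_policy
  - module.customer1_s3.aws_s3_bucket.customer_bucket

Plan: 0 to add, 0 to change, 4 to destroy.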

We can see during the planning process that this change will destroy four resources.

Once you confirm this change is correct, type in terraform apply and type yes in the console to proceed with the destruction operation.

Success! In seconds, we have destroyed all the customer resources.

Summary

In this post, we covered how to use Terraform to quickly spin up a new S3 bucket, IAM user and keys. Using Terraform also helps us ensure the proper policy is applied and bucket contents are encrypted at rest. While this example is very simple, we can build upon it to enable automated post processing of data (reading a log file, for example) using SQS and Lambda. Lastly, you should commit your new Terraform code to a version control system such as GitHub to ensure any changes to the Terraform code base are tracked. I hope you enjoyed this blog post, and stay tuned for Part 3, “Automated post processing with SQS and Lambda”. Also feel free to read up on more of my writing. Until then, happy hunting!
