Composing AWS Systems Manager configurations with Terraform — Part 1

Alain Reguera Delgado · Published in Globant · Jun 10, 2022

This article describes how to set up AWS Systems Manager (SSM) configurations using the terraform-aws-ssm module. If you are interested in composing SSM configurations for your AWS infrastructure with Terraform, this article is for you.

When you finish this article, you will be able to use the terraform-aws-ssm module to create your own SSM configurations.

The following sections will be reviewed in this article:

  • Module configuration
  • Infrastructure
  • Conclusion

Module configuration

The terraform-aws-ssm module expects a directory structure like the following:

.
├── ansible
│   ├── 00-application-configuration.yml
│   ├── 99-application-tests.yml
│   └── roles
│       ├── application-httpd
│       │   ├── handlers
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── index.html.j2
│       │       └── welcome.conf.j2
│       └── application-httpd-tests
│           └── tasks
│               └── main.yml
├── main.tf
├── README.md
└── variables.tf

The ansible directory

The ansible directory in this layout organizes Ansible playbooks and roles. Use this location to declare the desired state of your SSM-managed EC2 instances.

The terraform-aws-ssm module creates a private S3 bucket named ${var.name}-ssm and uploads the entire ansible directory to it for later use, when it applies the SSM associations. When you change the contents of the ansible directory, a new S3 object version is created the next time you run the terraform apply command, so the SSM service uses the updated files in its associations.
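The module's actual code is explored in Part 2, but the idea behind the upload can be sketched with a versioned private bucket plus one aws_s3_object per file under ansible/. The resource labels below are illustrative, not the module's own code:

resource "aws_s3_bucket" "ssm" {
  bucket = "${var.name}-ssm"
}

resource "aws_s3_bucket_versioning" "ssm" {
  bucket = aws_s3_bucket.ssm.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_object" "ansible" {
  # One object per file; etag forces a re-upload when file contents change.
  for_each = fileset("${path.module}/ansible", "**")

  bucket = aws_s3_bucket.ssm.id
  key    = "ansible/${each.value}"
  source = "${path.module}/ansible/${each.value}"
  etag   = filemd5("${path.module}/ansible/${each.value}")
}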

When you write Ansible playbooks in the ansible directory, keep in mind that they will be downloaded to the SSM-managed instance and applied there. You don't need to install the ansible command yourself, because the ${var.name}-ApplyAnsiblePlaybooks SSM document already takes care of that, but you must write your playbooks to run on localhost only, using a local connection. For example, consider the following playbook:

---
- name: Examples simple - Configure application
  hosts: localhost
  connection: local

  roles:
    - application-httpd
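The association that runs these playbooks behaves much like AWS's stock AWS-ApplyAnsiblePlaybooks document: it downloads the playbooks from the bucket and runs them locally with ansible-playbook. As a hypothetical sketch (parameter values and the resource label are illustrative; the module's own ${var.name}-ApplyAnsiblePlaybooks document works along similar lines), such an association could look like this:

resource "aws_ssm_association" "ansible" {
  name             = "AWS-ApplyAnsiblePlaybooks"
  association_name = "${var.name}-apply-ansible-playbooks"

  # Target instances by tag, fetch the playbooks from the module's
  # S3 bucket, and apply them on each instance.
  targets {
    key    = "tag:Name"
    values = [var.name]
  }

  parameters = {
    SourceType          = "S3"
    SourceInfo          = jsonencode({ path = "https://s3.amazonaws.com/${var.name}-ssm/ansible/" })
    InstallDependencies = "True"
    PlaybookFile        = "00-application-configuration.yml"
    Check               = "False"
    Verbose             = "-v"
  }
}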

The variables.tf file

By default, the terraform-aws-ssm module expects two input variables. These variables let you customize the SSM configuration you create, and deploy more than one configuration if you need to.

variable "name" {
type = string
description = "The project's name. This value is used to identify resources and tags."
}

variable "region" {
type = string
description = "The AWS region used by terraform provider."
}
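When you run Terraform, you can pass these variables on the command line (as the Deployment section does later) or keep them in a terraform.tfvars file. A minimal example, with hypothetical values:

# terraform.tfvars
name   = "MyProject"
region = "us-east-1"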

The main.tf file

The main.tf file has three main sections. The first one declares the Terraform providers and the version constraints that apply to them.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

The second section in the main.tf file calls the terraform-aws-ssm module and provides the information the module needs to set up the patch baseline, the maintenance window, and the association resources of an SSM configuration.

# ------------------------------------------------------------------
# SSM Configuration
# ------------------------------------------------------------------
module "ssm" {
  source = "../../"

  name = var.name

  operating_system                     = "AMAZON_LINUX_2"
  approved_patches_compliance_level    = "CRITICAL"
  approved_patches_enable_non_security = false

  approval_rules = [{
    approve_after_days  = 7
    compliance_level    = "CRITICAL"
    enable_non_security = false
    patch_filters = [
      { key = "PRODUCT", values = ["AmazonLinux2"] },
      { key = "CLASSIFICATION", values = ["Security", "Bugfix"] },
      { key = "SEVERITY", values = ["Critical", "Important"] }
    ]
  }]

  # Patch at 09:00 UTC every seventh day of the month, within a
  # one-hour window.
  maintenance_window = {
    enabled           = true
    schedule          = "cron(0 9 */7 * ?)"
    schedule_timezone = "UTC"
    cutoff            = 0
    duration          = 1
  }
}

This is all you need to deploy your first SSM configuration using the terraform-aws-ssm module. The remaining configuration sections in the main.tf file are dedicated to illustrating the deployment of SSM-managed EC2 instances.
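And because the module identifies everything it creates through the name variable, deploying a second, independent SSM configuration is just a matter of instantiating the module again with a different name. A hypothetical sketch (the ssm_backend label and name suffix are illustrative):

module "ssm_backend" {
  source = "../../"

  name = "${var.name}-backend"

  # ... its own patch baseline, approval rules, and maintenance window
}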

# ------------------------------------------------------------------
# VPC
# ------------------------------------------------------------------
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = var.name

  cidr = "10.0.0.0/16"

  azs             = ["${var.region}a"]
  public_subnets  = ["10.0.1.0/24"]
  private_subnets = ["10.0.21.0/24"]

  enable_nat_gateway = true
}
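The NAT gateway is what gives the instances in the private subnets the outbound access the SSM agent needs to reach the SSM service. If you prefer to keep that traffic off the internet entirely, VPC interface endpoints are a common alternative; a hypothetical sketch (in practice you would also attach a security group that allows HTTPS from the subnets):

resource "aws_vpc_endpoint" "ssm" {
  # ssm, ssmmessages, and ec2messages are the endpoints the agent uses.
  for_each = toset(["ssm", "ssmmessages", "ec2messages"])

  vpc_id              = module.vpc.vpc_id
  service_name        = "com.amazonaws.${var.region}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = module.vpc.private_subnets
  private_dns_enabled = true
}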

# ------------------------------------------------------------------
# Security Groups
# ------------------------------------------------------------------
module "security-group_http-80" {
  source  = "terraform-aws-modules/security-group/aws//modules/http-80"
  version = "~> 4.0"

  name        = "${var.name}-sg-http-80"
  description = "Allow http traffic from public subnets."

  vpc_id              = module.vpc.vpc_id
  ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
}

# ------------------------------------------------------------------
# Autoscaling
# ------------------------------------------------------------------
module "asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "~> 6.0"

  name = var.name

  min_size         = 1
  max_size         = 5
  desired_capacity = 1

  iam_instance_profile_name = module.ssm.iam_instance_profile_name
  security_groups           = [module.security-group_http-80.security_group_id]
  vpc_zone_identifier       = module.vpc.private_subnets

  launch_template_name        = var.name
  launch_template_description = "Launch template for ${var.name} autoscaling group."
  update_default_version      = true

  image_id          = "ami-0022f774911c1d690"
  instance_type     = "t2.micro"
  ebs_optimized     = false
  enable_monitoring = true

  instance_market_options = {
    market_type = "spot"
    spot_options = {
      max_price = "0.004"
    }
  }

  # The "Patch Group" tag registers the instances with the patch
  # baseline created by the terraform-aws-ssm module.
  tags = {
    "Name"        = var.name
    "Patch Group" = var.name
  }
}
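Note the "Patch Group" tag: it is what connects these instances to the module's patch baseline. Inside the module, that connection is typically expressed with an aws_ssm_patch_group registration along these lines (resource labels are illustrative, not the module's actual code):

resource "aws_ssm_patch_group" "this" {
  # Register the patch group name the instances carry in their
  # "Patch Group" tag against the baseline.
  baseline_id = aws_ssm_patch_baseline.this.id
  patch_group = var.name
}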

Communication between the SSM agent and the SSM service starts when a new EC2 instance is deployed and the agent contacts the service to register itself. By default, this communication is denied; you need to allow it by creating an EC2 instance profile with the necessary permissions and then writing your EC2 instance deployment code to use it. Once an EC2 instance is registered in SSM, its agent is ready to execute, locally in the operating system, all the actions the SSM service has been configured to run.
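What "the necessary permissions" look like is not shown in this article's code, but attaching the AWS-managed AmazonSSMManagedInstanceCore policy to an EC2 role is the usual minimum. A hypothetical sketch of that shape (resource labels are illustrative, not the module's actual code):

data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ssm_managed_instance" {
  name               = "${var.name}-ssm-managed-instance"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  # AWS-managed policy granting the agent its core SSM permissions.
  role       = aws_iam_role.ssm_managed_instance.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ssm_managed_instance" {
  name = "${var.name}-ssm-managed-instance"
  role = aws_iam_role.ssm_managed_instance.name
}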

The terraform-aws-ssm module creates an EC2 instance profile named ${var.name}-ssm-managed-instance with all the permissions an EC2 instance needs to interact with the SSM service. However, it doesn't configure the EC2 instance profile name in the code that deploys your EC2 instances; you need to establish that relationship yourself when you write your own EC2 instance deployment code.

For instance, in the examples/simple/main.tf file, the iam_instance_profile_name attribute references the iam_instance_profile_name output of the ssm module defined earlier, establishing the relationship between the EC2 instance profile created by the terraform-aws-ssm module and the EC2 instance deployment code written for this article.

module "asg" {
source = "terraform-aws-modules/autoscaling/aws"
version = "~> 6.0"
name = var.name # ... iam_instance_profile_name = module.ssm.iam_instantace_profile_name # ...}

Infrastructure

This section describes the infrastructure's desired state and how you can deploy it.

Desired state

The infrastructure's desired state is a list of conditions that you want to be met. For example, the desired state of the examples/simple/ infrastructure is the following:

  • All EC2 instances must be installed and configured using Ansible playbooks so that HTTP requests to http://localhost/ receive the Hello, World! response.
  • All EC2 instances must automatically install operating system patches classified as "Security" with a severity of "Critical" or "Important", as well as patches classified as "Bugfix"; patches are auto-approved seven days after release. System reboots caused by patching must also happen without human intervention, in a coordinated, progressive, and predictable way.

Deployment

  1. Clone the repository:
    git clone https://github.com/areguera/terraform-aws-ssm
  2. Change working directory into the simple configuration example directory.
    cd terraform-aws-ssm/examples/simple/
  3. Initialize terraform provider and modules:
    terraform init
  4. Customize the example to set up your own SSM configuration. The main locations to change are the examples/simple/main.tf file, to modify the patch baseline definition, the maintenance window schedule, and the auto-scaling group capacity, and the examples/simple/ansible/ directory structure, to modify the infrastructure's desired state.
  5. Check terraform deployment plan:
    terraform plan -var name=MyProject -var region=us-east-1
  6. Apply terraform deployment plan:
    terraform apply -var name=MyProject -var region=us-east-1

Iterate over steps 4 to 6 as many times as you need.

Conclusion

This is the first part of a two-part article. In this first part you learned the basic configuration the terraform-aws-ssm module requires to provision SSM configurations in a consistent way, considering key aspects like IAM permissions, the patch baseline, the patch group assignment to EC2 instances, and the associations that automatically patch instances and guarantee their desired state using Ansible.

The second part of this article, "Composing AWS Systems Manager configurations with Terraform — Part 2", explains the terraform-aws-ssm module's code step by step and closes the topic by sharing final thoughts about composing SSM configurations using infrastructure as code.
