Deploying a Windows 2016 server AMI on AWS with Packer and Terraform. Part 2

Bruce Dominguez
Jan 17, 2019



In my previous article, we conquered WinRM to build our golden AMI with Packer, installed our custom applications using Chocolatey and overcame connectivity issues with RDP. I now had a working AMI in AWS that I could use to spawn multiple copies on demand. It was time to automate the provisioning of the server so that it could be predictably provisioned and destroyed cleanly. My requirement was to keep the infrastructure configuration as code in source control, so I opted to use Terraform from HashiCorp. I won’t go into too much detail on the benefits of using Terraform, but it is a very powerful tool to deploy, tear down and codify your infrastructure.

To deploy my infrastructure, I broke my Terraform configuration up into three files:

Main.tf — This holds the meat and potatoes of what I was building: my server and its configuration.

Variables.tf — This holds all the variables used by Main.tf, allowing me to change a value in one place instead of sifting through my main.tf (a minimal sketch follows this list).

Output.tf — This is where I specify any outputs that I need (more on this later).
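As a taste of what ends up in Variables.tf, here is a minimal sketch; the defaults below are placeholders rather than the real values from my repo:

variable "aws_region" {
  default = "ap-southeast-2"   # placeholder region
}

variable "aws_profile" {
  default = "Dev"              # AWS CLI named profile
}

variable "name_tag" {
  description = "Name tag used to look up the VPC and security group"
}

variable "key_name" {
  description = "Base name for the generated key pair"
}

…and so on for instance, iam_role, volume_type and volume_size.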

You can check out my terraform scripts here.

So to kick off my Main.tf file, I first need to let Terraform know that I want to provision to AWS. To do this, just add the below:

provider "aws" {
  region  = "${var.aws_region}"
  profile = "${var.aws_profile}"
}

I use the variable ${var.aws_region} to reference my AWS region in variables.tf and ${var.aws_profile} to reference my AWS CLI named profile. Using a CLI named profile makes it easier to work with multiple AWS accounts, e.g. Dev, Prod etc. This is easily set up using aws configure --profile Prod. Check here for more info.

Next, I need to dynamically identify the correct subnet to deploy to, using tags I have set up in my AWS VPC.

# --- Get VPC ID ---
data "aws_vpc" "selected" {
  tags = {
    Name = "${var.name_tag}"
  }
}

# --- Get Public Subnet List
data "aws_subnet_ids" "selected" {
  vpc_id = "${data.aws_vpc.selected.id}"
  tags = {
    Tier = "public"
  }
}

I use the data "aws_vpc" resource in Terraform to identify whether I am targeting my Prod or Staging environment, then good old interpolation syntax in the data "aws_subnet_ids" resource to pull out the public subnets of that VPC.
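If you want to sanity-check that the lookup matches what you expect, a quick throwaway output (purely for debugging, and safe to delete afterwards) prints the matched subnet IDs on apply:

output "public_subnet_ids" {
  value = "${data.aws_subnet_ids.selected.ids}"
}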

I also use another data resource to identify an already existing security group that I had defined. Of course, you can always create a specific security group for this, which I ended up doing to keep everything contained for my project (see the sketch after the next code block).

data "aws_security_group" "selected" {
  tags = {
    Name = "${var.name_tag}*"
  }
}
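If you do go down the route of a dedicated security group, a minimal sketch for RDP access might look like the below; the resource name and the CIDR range are assumptions, so lock the source range down to your own network:

# --- Dedicated security group: RDP in, everything out
resource "aws_security_group" "rdp" {
  name   = "${var.name_tag}-rdp"
  vpc_id = "${data.aws_vpc.selected.id}"

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]   # assumption: replace with your own range
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}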

Now, before I define my EC2 instance, I need to do a few more steps. I needed to make sure I could RDP to my instance, which I solved in Part 1 with a userdata script that the EC2 instance runs at boot.

data "template_file" "user_data" {
  # read the script contents so the rendered user_data is the actual PowerShell
  template = "${file("${path.module}/scripts/user_data.ps1")}"
}

The fourth problem — Dynamically create and store Key pairs on S3.

As part of this project I also wanted to dynamically create the instance key pairs, register them and store them on S3 for later use. This took a bit of head scratching and some googling to find a solution that worked cleanly, mainly for the storing-on-S3 part. During the research (googling) I came across a great Terraform module by the guys at Cloud Posse that was perfect for this.

module "ssh_key_pair" {
  source                = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=master"
  namespace             = "example"
  stage                 = "dev"
  name                  = "${var.key_name}"
  ssh_public_key_path   = "${path.module}/secret"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"
}

Check out the module at the Cloud Posse GitHub repo here.

Now to store my new shiny keys in my S3 bucket. I found some solutions for this problem, however they seemed too convoluted for what is essentially just a file copy. So I decided to use Terraform's local-exec provisioner and kick off the copying of the keys to my S3 bucket with the AWS CLI.

# --- Copy ssh keys to S3 Bucket
provisioner "local-exec" {
  command = "aws s3 cp ${path.module}/secret s3://PATHTOKEYPAIR/ --recursive"
}

# --- Deletes keys on destroy
provisioner "local-exec" {
  when    = "destroy"
  command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pem"
}

provisioner "local-exec" {
  when    = "destroy"
  command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pub"
}

The first provisioner copies both keys from the path specified in the ssh_public_key_path argument of the "ssh_key_pair" module to my S3 bucket using AWS CLI commands.

The last two provisioners remove the keys when a Terraform destroy is performed, which is what the when = "destroy" argument does. NOTE: Don't forget to add these provisioners to your aws_instance resource, as sketched below.
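To make that note concrete, the provisioners end up nested inside the aws_instance resource that is defined in the next section, roughly like this (trimmed down to just the relevant parts):

resource "aws_instance" "this" {
  # ...all the arguments shown in the next section...

  # --- Copy ssh keys to S3 Bucket
  provisioner "local-exec" {
    command = "aws s3 cp ${path.module}/secret s3://PATHTOKEYPAIR/ --recursive"
  }

  # --- Deletes keys on destroy
  provisioner "local-exec" {
    when    = "destroy"
    command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pem"
  }
}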

Now it's time to configure the EC2 instance with the AMI we created with our Packer script in Part 1. To do this, we first must find that AMI using the Terraform data "aws_ami" resource, filtering down to our image.

data "aws_ami" "Windows_2016" {
  filter {
    name   = "is-public"
    values = ["false"]
  }

  filter {
    name   = "name"
    values = ["windows2016Server*"]
  }

  most_recent = true
}

With the AMI defined, we can reference it when we create the EC2 instance via ${data.aws_ami.Windows_2016.image_id}, where image_id is an attribute of the data source. With a few variables thrown in, I had the Windows 2016 server ready to deploy.

resource "aws_instance" "this" {
  ami                  = "${data.aws_ami.Windows_2016.image_id}"
  instance_type        = "${var.instance}"
  key_name             = "${module.ssh_key_pair.key_name}"
  subnet_id            = "${data.aws_subnet_ids.selected.ids[01]}"
  security_groups      = ["${data.aws_security_group.selected.id}"]
  user_data            = "${data.template_file.user_data.rendered}"
  iam_instance_profile = "${var.iam_role}"
  get_password_data    = "true"

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Name" = "NEW_windows2016"
    "Role" = "Dev"
  }
}
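A small optional extra in Output.tf: an output along these lines (assuming the subnet assigns the instance a public IP) surfaces the address to RDP to, saving a trip to the console:

output "Public_IP" {
  value = "${aws_instance.this.public_ip}"
}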

The fifth problem — Automating the decryption of the Admin password.

Having to log into the AWS console to decrypt the admin password every time I want to RDP on to the server would be an absolute pain, and not very "DevOps-ey". So I needed a way to automate decrypting the password and presenting it to me as an output. To do this I first needed to get the encrypted password from the server; Terraform can do this via the get_password_data argument, set to true above. However, a Base64-encoded, encrypted password was useless and a nightmare to type. Terraform to the rescue again! Using rsadecrypt as part of the output in my Output.tf decrypts the generated password and presents it in a human-readable format.

output "Administrator_Password" {
  value = "${rsadecrypt(aws_instance.this.password_data, file("${module.ssh_key_pair.private_key_filename}"))}"
}

Now when I run Terraform Apply I get:

Administrator_Password = XXADMIN PASSWORDXXX

Success! Now I have my admin password as an output when I run Terraform apply, all without having to log in to the AWS console.
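If I need the password again later, running terraform output Administrator_Password prints the same value straight from the state file, with no new apply required.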

Conclusion

Automating the build of a Windows 2016 server is not as straightforward as I initially thought. Having overcome the build issues in Packer (problems 1, 2 and 3) and the deployment challenges with Terraform (problems 4 and 5), I can now move on to integrating the solution into a CI process. Hopefully this article assists anyone in the same situation I was in. Thanks for reading!

Next Steps… Terratest and BDD

My next project will be to write a BDD framework that sits on top of my Terratest scripts. Terratest is a Go library written by the awesome folks at Gruntwork that helps you write automated tests for your infrastructure code. The Terraform and Packer scripts are still code and still need to be validated after every update before they are released into Prod, and adding these tests to our CI server will provide that feedback early and often.
