Infrastructure as Code: Automating AWS EC2 Virtual Machines Using SaltStack

This tutorial is taken from the SaltStack For DevOps book, which I published after long hours of work and self-learning. You can visit the website for more information.


Infrastructure management and provisioning are moving toward the next big thing: Infrastructure as Code.

Infrastructure as Code (IaC) is the use of definition and configuration files to create, start, stop, delete, terminate, and restart virtual or bare-metal machines. Organizations that master IaC can reduce the cost and time of infrastructure management and focus more on product development.

With the rise of the DevOps movement, enabling Continuous Configuration Automation is becoming a key step in the life cycle of a product.

This post is an example of how Infrastructure as Code can work.

I am using SaltStack and Amazon Web Services (AWS) EC2 virtual machines, so I assume you are familiar with:

  • AWS
  • SaltStack
  • System administration

If you are discovering SaltStack, you can start with this tutorial.

Working with EC2 virtual machines is quite similar to working with Linode VMs; I mention this because my first experience provisioning infrastructure with code and configuration files was with Linode virtual machines.

If you are just getting started, I recommend reading the tutorial linked above first, since it is simpler.


AWS Provider Configuration

The first thing we are going to do is check our key pairs.

If you have multiple key pairs, you are going to use one of them in the Salt Cloud provider configuration, so you need to find its key name.

You can use this link to list all of your key pairs:

https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#KeyPairs:sort=keyName

If you are using a different region, replace eu-west-1 with the region you are working in. In the next example we are using us-east-1:

https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#KeyPairs:sort=keyName

To check which key name corresponds to the key pair you are using, run this command (after installing the EC2 command line tools):

ec2-fingerprint-key kp.pem

This prints the fingerprint of your local key file, so you can match it against the fingerprints shown in the console and confirm the name of your key.
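If you prefer the newer AWS CLI over the legacy EC2 command line tools, a roughly equivalent check (assuming the CLI is installed and configured with credentials for the same account) is to list the key pairs together with their fingerprints:

aws ec2 describe-key-pairs --region eu-west-1 --query 'KeyPairs[*].[KeyName,KeyFingerprint]' --output table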

In this example our key is called kp and it is located under:

/etc/salt/kp.pem
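Since Salt Cloud reads this key as root, make sure the file has the right ownership and permissions (the provider configuration below also reminds you of this):

sudo chown root:root /etc/salt/kp.pem
sudo chmod 0400 /etc/salt/kp.pem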

Now get the security credentials used to access your EC2 instances; you can find your access key ID here:

https://console.aws.amazon.com/iam/home?#security_credential

You cannot retrieve the secret access key from that page; normally you would have stored it somewhere safe when it was created. If you have lost it, you should generate a new access key pair (access key ID and secret access key).

You should also know the name of the security group you want to use and the ssh_username for your AMI:

  • Amazon Linux > ec2-user
  • RHEL > ec2-user
  • CentOS > ec2-user
  • Ubuntu > ubuntu
  • etc.

Another thing to set up is ssh_interface, which can have two different values:

  • private_ips > use when the salt-cloud command is run from inside EC2 (for example, from a Salt master that is itself an EC2 instance)
  • public_ips > use when the salt-cloud command is run from outside EC2

This is a generic example of a provider configuration using private_ips:

ec2-private:
  # Set up the location of the salt master
  minion:
    master: saltmaster.myhost.com
  # Set up grains information, which will be common for all nodes using this provider
  grains:
    env: test
  # Specify whether to use public or private IP for the deploy script
  ssh_interface: private_ips
  # Set the EC2 access credentials
  id: 'use-instance-role-credentials'
  key: 'use-instance-role-credentials'
  # Make sure this key is owned by root with permissions 0400
  private_key: /etc/salt/test_key.pem
  keyname: test_key
  securitygroup: default
  # Set your default region and availability zone (optional)
  location: eu-west-1
  availability_zone: eu-west-1c
  # Salt Cloud will use this user name to deploy; it depends on your AMI
  # Amazon Linux > ec2-user
  # RHEL > ec2-user
  # CentOS > ec2-user
  # Ubuntu > ubuntu
  ssh_username: ubuntu
  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'
  provider: ec2

With public_ips, we will have something similar to this configuration:

ec2-public:
  minion:
    master: saltmaster.myhost.com
  ssh_interface: public_ips
  id: 'use-instance-role-credentials'
  key: 'use-instance-role-credentials'
  private_key: /etc/salt/test_key.pem
  keyname: test_key
  securitygroup: default
  location: eu-west-1
  availability_zone: eu-west-1c
  ssh_username: ubuntu
  iam_profile: 'my other profile name'
  provider: ec2

Please note two things:

  • Previously, the suggested provider for AWS EC2 was the aws provider. This has been deprecated in favor of the ec2 provider.
  • The provider parameter in cloud provider definitions was renamed to driver (since version 2015.8.0).
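Provider configurations like the two above typically live in /etc/salt/cloud.providers or in a file under /etc/salt/cloud.providers.d/ (for example /etc/salt/cloud.providers.d/ec2.conf, a file name chosen here for illustration). Once saved, you can check that Salt Cloud sees them:

salt-cloud --list-providers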

AWS Profile Configuration

Let's set up a profile to provide more EC2-specific configuration options.

In the profile configuration we should provide the provider, the image ID, the instance size, and the ssh_username, which is ubuntu since our image is based on Ubuntu.

provider: ec2-private
image: ami-a609b6d5
size: t2.micro
ssh_username: ubuntu
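If you are not sure which image ID or instance size to use, Salt Cloud can query the provider for the available values (these commands assume the ec2-private provider defined earlier):

salt-cloud --list-images ec2-private
salt-cloud --list-sizes ec2-private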

If we want to add a volume (10 GB), we can do it like this:

volumes:
- { size: 10, device: /dev/sdf }

Suppose we want to add two more volumes and choose their IOPS (input/output operations per second); we could use a configuration similar to this one:

volumes:
- { size: 10, device: /dev/sdf }
- { size: 300, device: /dev/sdg, type: io1, iops: 3000 }
- { size: 300, device: /dev/sdh, type: io1, iops: 3000 }

Note that to use an EBS-optimized EC2 instance we should declare it:

ebs_optimized: True

We can also add tags, which will be applied to all EC2 instances created using this profile:

tag: {'env': 'test', 'role': 'redis'}

We can force grains synchronization after installation by adding:

sync_after_install: grains

One thing I usually automate is applying my own configuration, like my .vimrc file; you can automate things like this by adding a script that will be executed on the new instance:

script: /etc/salt/cloud.deploy.d/configure_vim.sh
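The script name above is just an example; a minimal configure_vim.sh could look something like the following sketch (entirely hypothetical, and it assumes the deploy user is ubuntu as configured earlier; adapt it to your own dotfiles):

#!/bin/bash
# Hypothetical example: drop a basic .vimrc into the deploy user's home directory
cat > /home/ubuntu/.vimrc <<'EOF'
syntax on
set number
set expandtab shiftwidth=4 tabstop=4
EOF
chown ubuntu:ubuntu /home/ubuntu/.vimrc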

Network configuration is also available through Salt Cloud. Here is an example where the primary IP address is the private address and the EC2 instance gets an auto-assigned public IP (not an Elastic IP), with a subnet ID and a security group ID:

network_interfaces:
  - DeviceIndex: 0
    PrivateIpAddresses:
      - Primary: True
    AssociatePublicIpAddress: True
    SubnetId: subnet-142f4bdd
    SecurityGroupId:
      - sg-750af531

If you prefer the EIP (Elastic IP):

allocate_new_eips: True

If we want to delete the root volume when we destroy our EC2 instance:

del_root_vol_on_destroy: True

If we want to delete all non-root EBS volumes when the instance is terminated:

del_all_vol_on_destroy: True

Now we have a complete, functional EC2 profile:

base_ec2_private:
  provider: ec2-private
  image: ami-a609b6d5
  size: t2.micro
  ssh_username: ubuntu
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 300, device: /dev/sdg, type: io1, iops: 3000 }
    - { size: 300, device: /dev/sdh, type: io1, iops: 3000 }
  tag: {'env': 'test', 'role': 'redis'}
  sync_after_install: grains
  script: /etc/salt/cloud.deploy.d/configure_vim.sh
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      # auto assign public ip (not EIP)
      AssociatePublicIpAddress: True
      SubnetId: subnet-813d4bbf
      SecurityGroupId:
        - sg-750af531
  del_root_vol_on_destroy: True
  del_all_vol_on_destroy: True
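Profiles are usually saved in /etc/salt/cloud.profiles or in a file under /etc/salt/cloud.profiles.d/. Once saved, you can confirm that Salt Cloud picks them up:

salt-cloud --list-profiles all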

When we want to create a profile similar to the previous one but change only one or two options, we can use extends, as in the following example:

base_ec2_private:
  provider: ec2-private
  image: ami-a609b6d5
  size: t2.micro
  ssh_username: ubuntu
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 300, device: /dev/sdg, type: io1, iops: 3000 }
    - { size: 300, device: /dev/sdh, type: io1, iops: 3000 }
  tag: {'env': 'test', 'role': 'redis'}
  sync_after_install: grains
  script: /etc/salt/cloud.deploy.d/configure_vim.sh
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      AssociatePublicIpAddress: True
      SubnetId: subnet-813d4bbf
      SecurityGroupId:
        - sg-750af531
  del_root_vol_on_destroy: True
  del_all_vol_on_destroy: True

base_ec2_public:
  provider: ec2-public
  extends: base_ec2_private

Using Salt Cloud To Automate AWS EC2 Creation

Starting a private EC2 instance can be done like this:

salt-cloud -p base_ec2_private private_minion

and launching a public one can be done using this command:

salt-cloud -p base_ec2_public public_minion
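Once the deployment finishes, the new machine should show up as a minion on the master, and it can be destroyed again when it is no longer needed (which is when the del_root_vol_on_destroy and del_all_vol_on_destroy options above come into play). Something along these lines:

salt 'private_minion' test.ping
salt-cloud -a show_instance private_minion
salt-cloud -d private_minion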

Just like we did with Salt Cloud maps and Linode (refer to the tutorial linked above), nothing is different when using them with AWS:

base_ec2_private:
  - redis
  - mysql
base_ec2_public:
  - web_1
  - web_2

and then we can start all of our EC2 instances in parallel using:

salt-cloud -m /etc/salt/cloud.map.app -P
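Map files can also override profile settings per instance, for example to set different grains on each machine. Here is a short sketch, assuming the profiles defined above:

base_ec2_private:
  - redis:
      grains:
        role: redis
  - mysql:
      grains:
        role: mysql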

Salt Cloud allows getting, setting, and deleting tags after launching the EC2 instance, using the instance name (or the instance ID):

salt-cloud -a get_tags ec2_minion
salt-cloud -a set_tags ec2_minion tag1=value1 tag2=value2
salt-cloud -a del_tags ec2_minion tag1,tag2

It also allows renaming the machine:

salt-cloud -a rename ec2_minion newname=ec2_my_minion

To enable termination protection, use the following command:

salt-cloud -a enable_term_protect ec2_minion

Other options are available:

salt-cloud -a show_term_protect ec2_minion
salt-cloud -a disable_term_protect ec2_minion

Using Salt Cloud from the command line also allows creating volumes with specific options, like choosing a snapshot to create the volume from.

Creating a simple volume in a specific zone:

salt-cloud -f create_volume ec2 zone=eu-west-1c

Adding size:

salt-cloud -f create_volume ec2 zone=eu-west-1c size=100

Choosing a snapshot:

salt-cloud -f create_volume ec2 zone=eu-west-1c snapshot=snapshot_id

Selecting the type (standard, gp2, io1, etc.):

salt-cloud -f create_volume ec2 size=100 type=standard
salt-cloud -f create_volume ec2 size=100 type=gp2
salt-cloud -f create_volume ec2 size=200 type=io1 iops=2000

Detaching a volume and then deleting it:

salt-cloud -a detach_volume ec2_minion volume_id=vol_id
salt-cloud -f delete_volume ec2 volume_id=vol_id
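The reverse operation, attaching an existing volume to an instance, is also available (the device name below is just an example):

salt-cloud -a attach_volume ec2_minion volume_id=vol_id device=/dev/sdj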

This tutorial is taken from the SaltStack For DevOps book, which I published after long hours of work and self-learning. You can visit the website for more information.

Connect Deeper

If this article resonated with you, please subscribe to DevOpsLinks: an online community of diverse and passionate DevOps engineers, sysadmins, and developers from all over the world.

You can find me on Twitter, Clarity, or my blog, and you can also check out my books: SaltStack For DevOps, The Jumpstart Up & Painless Docker.

If you liked this post, please recommend it and share it with your followers.