Shell scripts for automating AWS EBS mounting and tagging

AWS EBS Mounting

EBS (Elastic Block Store) volumes can be attached to EC2 instances as separate volumes to expand storage capacity. An EBS volume can also be used as the root volume, since it is capable of acting as the boot volume of an EC2 instance. EBS volumes are detached and move into the available state when an EC2 instance is terminated. However, data stored in an EBS volume is not destroyed when the volume is detached from an EC2 instance.

If we need to use an EBS volume with an EC2 instance, the following two steps are required.

  1. Attach EBS volume to an EC2 instance
  2. Mount attached EBS volume to an EC2 instance file system
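In CLI terms, the two steps map to an `attach-volume` call and a `mount` command. The sketch below only builds the commands as strings so it can be read as a dry run; the volume ID, instance ID, device name, and mount point are placeholders, not values from a real account:

```shell
# Placeholder identifiers -- substitute your own volume and instance IDs
volume_id="vol-0123456789abcdef0"
instance_id="i-0123456789abcdef0"
device="/dev/xvdk"
mount_point="/mnt/data"

# Step 1: attach the EBS volume to the instance (requires AWS CLI credentials)
attach_cmd="aws ec2 attach-volume --volume-id $volume_id --instance-id $instance_id --device $device"

# Step 2: mount the attached device into the file system (run on the instance as root)
mount_cmd="mount $device $mount_point"

echo "$attach_cmd"
echo "$mount_cmd"
```

Note that the attachment is an AWS API operation while the mount is an OS-level operation on the instance itself, which is why both steps are needed.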

Consider an HA (High Availability) setup: nodes can be lost at any time, but the service cannot become unavailable. In such a setup with two or more nodes, mounted EBS volumes are detached when an EC2 instance is terminated. Although an Auto Scaling group is responsible for maintaining the minimum node count in the setup, it is not able to attach and mount a detached EBS volume that is left in the available state after the instance is terminated. The shell script below automates that scenario. It should be run from the EC2 instance to which the EBS volume needs to be mounted. The script checks the Availability Zone of the instance and the volume tags (which can be adjusted to your requirements), then attaches and mounts the matching volume to the EC2 instance.

#getting instance availability zone from the EC2 instance metadata service
abzone=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)

#getting available ebs volume-id
ebsvolume=$(/var/awslogs/bin/aws ec2 describe-volumes --filters Name=tag-value,Values=project Name=tag-value,Values=environment Name=tag-value,Values=product Name=availability-zone,Values=$abzone --query 'Volumes[*].[VolumeId, State==`available`]' --output text | grep True | awk '{print $1}' | head -n 1)

#check if there is an available ebs volume
if [ -n "$ebsvolume" ]; then
    #getting instance id from the EC2 instance metadata service
    instanceid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

    #attaching ebs volume
    /var/awslogs/bin/aws ec2 attach-volume --volume-id $ebsvolume --instance-id $instanceid --device /dev/xvdk

    #wait for the attachment to complete
    sleep 10

    #mount ebs volume to /mnt/data
    mount /dev/xvdk /mnt/data
fi
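The `--query` expression in the script prints each volume ID together with `True` or `False` depending on whether the volume is in the `available` state, and the `grep`/`awk`/`head` pipeline then keeps the first available volume ID. With made-up sample output, the filtering behaves like this:

```shell
# Made-up sample of the describe-volumes text output: one volume per line,
# volume ID followed by True (available) or False (in-use)
sample_output="vol-0aaa1111bbb22222c	False
vol-0ddd3333eee44444f	True
vol-0eee5555fff66666a	True"

# Same pipeline as in the script: keep available volumes, take the first ID
ebsvolume=$(echo "$sample_output" | grep True | awk '{print $1}' | head -n 1)

echo "$ebsvolume"   # → vol-0ddd3333eee44444f
```

The `head -n 1` matters when several tagged volumes are available in the same Availability Zone: each instance picks exactly one.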

As an example, if you are running the WSO2 DAS product in HA mode, you can store the data directory in a separate EBS volume and create a symlink from the product's data directory to the data directory on the mounted volume. This lets you keep indexed data separately, mount it on any instance, and run a newly spawned EC2 instance with that data.
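The symlink itself is straightforward. The sketch below demonstrates it in a scratch directory so it can run anywhere; in practice the volume mount point would be /mnt/data, and the product data directory path shown in the comment is hypothetical (check your own installation layout):

```shell
# Demonstration in a scratch directory; in practice ebs_data_dir would live
# under /mnt/data and product_data_dir would be something like
# /opt/wso2das/repository/data (hypothetical path -- check your installation)
base=$(mktemp -d)
ebs_data_dir="$base/mnt/data/das-data"            # data directory on the EBS volume
product_data_dir="$base/wso2das/repository/data"  # where the product expects its data

# Create the data directory on the volume and the product's parent directory
mkdir -p "$ebs_data_dir" "$(dirname "$product_data_dir")"

# Replace the product's data directory with a symlink to the EBS-backed one
rm -rf "$product_data_dir"
ln -s "$ebs_data_dir" "$product_data_dir"

ls -ld "$product_data_dir"
```

Because the product only follows the symlink, a newly spawned instance that attaches and mounts the volume sees the same indexed data as the terminated instance did.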

AWS EBS Tagging

Another use case related to EBS volumes is forgetting to add tags to them, tags needed to identify volumes for purposes such as cost allocation, when they are newly attached to an EC2 instance either manually or through IaC (Infrastructure as Code) tools like Terraform, CloudFormation, etc. The shell script below can be used to automatically add tags based on the EC2 instance the EBS volume is attached to. The tags we want to ensure are present are checked by the script and added to the EBS volume automatically. The script can be run from anywhere, but it is recommended to run it from a management node such as a Puppet master. You can also add a cron job to run the script periodically, so it tags whatever untagged volumes exist at that time.

#getting instance ids (set $ServiceID and $Environment before running)
for i in $(aws ec2 describe-instances --filter Name=tag-value,Values=$ServiceID Name=tag-value,Values=$Environment --query 'Reservations[*].Instances[*].InstanceId' --output text); do
    #getting instance tag values based on key
    iName=$(aws ec2 describe-instances --instance-id $i --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0]]' --output text)
    iServiceid=$(aws ec2 describe-instances --instance-id $i --query 'Reservations[].Instances[].[Tags[?Key==`Service ID`].Value | [0]]' --output text)
    iEnvironment=$(aws ec2 describe-instances --instance-id $i --query 'Reservations[].Instances[].[Tags[?Key==`Environment`].Value | [0]]' --output text)

    #getting volume ids attached to the instance
    for j in $(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$i --query 'Volumes[*].{ID:VolumeId}' --output text); do
        #checking the volume's tag values
        vName=$(aws ec2 describe-volumes --volume-id $j --query 'Volumes[].[Tags[?Key==`Name`].Value | [0]]' --output text)
        vServiceid=$(aws ec2 describe-volumes --volume-id $j --query 'Volumes[].[Tags[?Key==`Service ID`].Value | [0]]' --output text)
        vEnvironment=$(aws ec2 describe-volumes --volume-id $j --query 'Volumes[].[Tags[?Key==`Environment`].Value | [0]]' --output text)

        #if the volume has no tag value, assign the instance tag value to it
        if [ "$iName" != "None" ] && [ "$vName" == "None" ]; then
            aws ec2 create-tags --resources $j --tags Key=Name,Value="$iName"
        fi
        if [ "$iServiceid" != "None" ] && [ "$vServiceid" == "None" ]; then
            aws ec2 create-tags --resources $j --tags Key="Service ID",Value="$iServiceid"
        fi
        if [ "$iEnvironment" != "None" ] && [ "$vEnvironment" == "None" ]; then
            aws ec2 create-tags --resources $j --tags Key=Environment,Value="$iEnvironment"
        fi
    done
done


The script filters the EC2 instances based on the Service ID and Environment tags, then adds tags to the untagged volumes attached to those instances.
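To run the script periodically, a crontab entry such as the following can be added on the management node. The script path, log path, and hourly schedule here are hypothetical; adjust them to your setup:

```
# Run the tagging script at the top of every hour (paths are placeholders)
0 * * * * /usr/local/bin/tag-ebs-volumes.sh >> /var/log/tag-ebs-volumes.log 2>&1
```

Because the script only tags volumes whose tag values are still `None`, re-running it on already-tagged volumes is harmless.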

Both shell scripts use AWS API calls, issued through AWS CLI commands, to do their work. The AWS CLI needs to be installed and configured on the node running the scripts.
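Configuration is typically done with `aws configure`, which writes credential and config files like the following (all values below are placeholders):

```
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = text
```

Alternatively, when running on an EC2 instance, an IAM instance profile can supply credentials without storing keys on disk. Either way, the credentials used by the tagging script need permission for the `ec2:DescribeInstances`, `ec2:DescribeVolumes`, and `ec2:CreateTags` actions, and the mounting script additionally needs `ec2:AttachVolume`.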