Reducing the size of the storage disks in AWS EC2

🛠 Works for all Linux distributions, including CentOS and Ubuntu

Nasir Bashir · FabHotels · Jun 4, 2020


If you have played around a bit with AWS EC2, you might have an idea of the types of block-level storage that can be attached to an instance, either as the root volume or as additional storage. Broadly, an EC2 instance can be launched with two types of volumes: the Instance Store (ephemeral storage) and the Elastic Block Store (EBS — persistent storage). The difference between the two is that an Instance Store is physically attached to the host, which gives it better performance, but it has no persistence: data is lost once the instance is stopped or terminated. EBS, on the other hand, is network-attached storage with somewhat lower performance, but the data persists independently of the instance, so it survives stops and reboots.

Instance Store vs. Elastic Block Store

This article is going to guide you through reducing the size of the EBS volumes attached to your instance. While increasing the size or changing the type of an attached volume is pretty straightforward, decreasing the size involves relatively more work. Let’s first have a look at how to increase the size of a volume. First, choose the volume that you want to modify. Next, click the Actions button and choose Modify Volume.

Choosing volume to be modified
Wizard to modify volume

In the wizard that opens, you can change both the size of the volume and the type of disk that is attached to the instance. If you try to decrease the size here, an error is thrown, because this wizard only allows the size to be increased. You can also change the disk type from SSD to Magnetic HDD and vice versa.
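If you prefer the command line, the same increase can be done with the AWS CLI’s modify-volume call. The volume ID and target size below are hypothetical placeholders; substitute your own.

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100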

Now that we are done with increasing the size and changing the type of the attached disk, let’s have a look at how to decrease the size of a volume, whether it is attached as the root volume (the one which has the boot instructions) or as an additional volume.

Steps to shrink the storage volume

  1. Create the AMI of the instance for backup. After this step is complete select and stop the instance in question.
  2. Select the original volume attached to this instance and detach it.
  3. Create a new empty volume of the desired size. This volume will ultimately replace the current volume. Make sure to create the volume in the same availability zone as the original one.
  4. When the new volume has been created, reattach the two volumes and restart the instance.
  5. Format & Mount the new volume.
  6. Copy data from the original volume to the new volume.
  7. Compose the new volume with the boot instructions.
  8. Detach both the volumes and then reattach the new volume as the root.

Let’s go through these steps in detail one by one

Creating the AMI

In the AWS dashboard, go to the EC2 Instances section. Choose the instance in question and click the Actions button; then choose Image, followed by the Create Image option as shown in the image below. Complete the next steps, and select the No Reboot option if you don’t want the instance to reboot while the AMI is being created. After the image creation is complete, stop this instance.

Create AMI of the instance
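For reference, the same backup can also be taken from the AWS CLI. The instance ID and image name below are hypothetical placeholders; substitute your own.

# Hypothetical instance ID and image name
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "backup-before-shrink" --no-reboot
# Once the AMI is available, stop the instance
aws ec2 stop-instances --instance-ids i-0123456789abcdef0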

Detaching the volume

In the dashboard, go to the Volumes section under Elastic Block Store. Choose the volume that is attached to your instance, click Actions, and then choose Detach Volume as shown below.

Detaching the volume
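The CLI equivalent, with a hypothetical volume ID:

aws ec2 detach-volume --volume-id vol-0123456789abcdef0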

Creating new volume

Go to the Volumes section under Elastic Block Store and click the Create Volume button in the top bar. You should land in the wizard shown below.

Choose the volume type and the new size that you want. One crucial thing to note here is that the new volume should be in the same availability zone as the EC2 instance. For instance, if the EC2 instance is in us-east-1a, then the new volume should also be in us-east-1a. Since we are creating a blank volume, we can leave the Snapshot ID blank. Do add a tag, though, with key Name and the value being whatever you want to call this volume, so it is easy to identify later. After this, hit the Create Volume button and wait for the volume creation to finish.
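If you are scripting this, a rough AWS CLI equivalent is shown below; the size (20 GiB), availability zone, volume type, and tag value are hypothetical examples.

aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp2 --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=new-small-root}]'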

Reattaching the two volumes

In the dashboard, go to the Volumes section under the Elastic Block Store. Choose the new volume that was created, click Actions, and choose Attach Volume.

Volume Attachment

In the wizard, choose your EC2 instance. The Device field is the attachment point for the volume; you can use /dev/sd[f-p] for Linux instances. For this example, we will go with /dev/sdf. After the new volume is attached, perform the same steps to attach the original volume with device name /dev/sda1. Once both attachments are complete, start the instance.
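The same attachments can be made from the CLI; the instance and volume IDs below are hypothetical placeholders.

# Attach the new (smaller) volume at /dev/sdf
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf
# Attach the original volume back at /dev/sda1
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sda1
# Start the instance again
aws ec2 start-instances --instance-ids i-0123456789abcdef0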

Formatting & Mounting the new volume

Log in to the EC2 instance via SSH. To list all the volumes available to the instance, you can use the lsblk command. Our original volume is at /dev/xvda1 and the new volume at /dev/xvdf. Since we created a blank volume, we can go ahead and format it. If you are reusing an existing volume, it's always a good idea to check whether the volume already has any data on it with the following command

sudo file -s /dev/xvdf

If the output of this command is /dev/xvdf: data, the volume is empty and we can go ahead and format it. If the output does not match the one above, the volume is not empty; do not format it in that case. Now we can format the volume with either of the following commands

sudo mkfs -t ext4 /dev/xvdf

or

sudo mke4fs -t ext4 /dev/xvdf

Now that we are done formatting the volume, let’s mount it on the instance. The first step is to create a directory to mount the new volume on.

sudo mkdir /mnt/new-vol

The next step is to mount the new volume into this directory using the mount command as shown below

sudo mount /dev/xvdf /mnt/new-vol

This will mount the new volume on the instance, and you can verify it using the disk free command, df -h. The new volume should be mounted and available at /mnt/new-vol.

Copying data from the old volume to the new one

To copy the data from the original volume to the new volume, we are going to use the rsync command as follows

sudo rsync -axv / /mnt/new-vol/

This step usually takes considerable time ☕️, depending on the amount of data to be copied. Stand by while it finishes. If the volume you are replacing is an additional (non-root) volume, you can skip the next step.
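As a quick sanity check after the copy, you can compare the used space on the two filesystems; the numbers will not match exactly because of filesystem metadata, but they should be close.

df -h / /mnt/new-vol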

Composing the new volume

Before the EC2 instance can boot from the new volume, you need to load the boot instructions onto it. To begin with, install GRUB on the new volume. Depending on the flavor of Linux you are running, you will have either grub-install or grub2-install available for this.

sudo grub-install --root-directory=/mnt/new-vol/ --force /dev/xvdf
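On CentOS, Amazon Linux, and other distributions that ship GRUB 2 under the grub2- prefix, the equivalent would likely be the command below; note that newer GRUB 2 builds prefer --boot-directory=/mnt/new-vol/boot over the deprecated --root-directory option.

sudo grub2-install --root-directory=/mnt/new-vol/ --force /dev/xvdf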

The next step is to unmount the new volume to change some identity characteristics. To unmount the new volume, use the following command

sudo umount /mnt/new-vol

Now let’s replace the UUID (Universally Unique IDentifier) of the new volume with that of the original volume. A UUID is a unique identifier used in Linux to uniquely identify partitions in the system. Mounting hard drives or SSDs by UUID reduces the odds of the wrong drive getting mounted, thereby preventing serious data loss.

To check the UUID of the original volume, use the following command

blkid

This command prints the characteristic attributes of the block devices on the system. The output would look something like this

/dev/xvda1: LABEL="cloudimg-rootfs" UUID="263dc91a-fc69-2314-cdaf-23cabc336a24" TYPE="ext4" PTTYPE="dos"

Copy the UUID from the original volume (/dev/xvda1), and use tune2fs command to replace the UUID of the new volume with that of the original volume.

sudo tune2fs -U UUID_OF_ORIGINAL_VOLUME /dev/xvdf
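If you want to avoid copying the UUID by hand, a small sketch like the following captures it with blkid and passes it straight to tune2fs (assuming the original root is still visible at /dev/xvda1):

# Read the UUID of the original root volume and apply it to the new volume
ORIG_UUID=$(sudo blkid -s UUID -o value /dev/xvda1)
sudo tune2fs -U "$ORIG_UUID" /dev/xvdf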

The final step here is to set the system label of the new volume to what is present in the original volume. You can find out the system label of the original volume via the following command

sudo e2label /dev/xvda1

The output would look something like

cloudimg-rootfs

If there is no system label on the original volume (a label is usually present only on Ubuntu and similar flavors), you can skip this step. Otherwise, set the system label on the new volume using the following command

sudo e2label /dev/xvdf cloudimg-rootfs

This is it for the composition step. We can now log out of the instance via SSH.

Detaching both the volumes and reattaching the new volume at the root

  • Stop the instance in question
  • Detach both the volumes from the instance from the Volumes wizard.
  • Reattach the new volume at the root (/dev/sda1) from the same volumes wizard
  • Start the instance back up. You can now log in via SSH and check that everything is working fine.
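The same sequence can be run through the AWS CLI; the IDs below are hypothetical placeholders for the instance, the original volume, and the new volume.

INSTANCE=i-0123456789abcdef0     # hypothetical instance ID
OLD_VOL=vol-0123456789abcdef0    # hypothetical ID of the original (larger) volume
NEW_VOL=vol-0fedcba9876543210    # hypothetical ID of the new (smaller) volume

aws ec2 stop-instances --instance-ids "$INSTANCE"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE"
aws ec2 detach-volume --volume-id "$OLD_VOL"
aws ec2 detach-volume --volume-id "$NEW_VOL"
aws ec2 attach-volume --volume-id "$NEW_VOL" --instance-id "$INSTANCE" --device /dev/sda1
aws ec2 start-instances --instance-ids "$INSTANCE"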

You now have the instance running with the new, reduced-size volume. Do not forget to delete the backup AMI after letting the instance run for a few days.
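When you do clean up, keep in mind that removing the AMI is a two-step affair: deregister the image and then delete the snapshot backing it. You can also delete the old, larger volume once you are confident nothing else needs it. The IDs below are hypothetical placeholders.

aws ec2 deregister-image --image-id ami-0123456789abcdef0
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
aws ec2 delete-volume --volume-id vol-0123456789abcdef0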
