How to downsize a *root* EBS volume on AWS EC2 (Amazon Linux)

Andrew Trott
Nov 22, 2017


If you’ve ever created an especially large EBS volume for an EC2 instance by mistake, you’ll notice that AWS doesn’t make it particularly easy to reduce the size of the volume. It’s possible! And not that hard. And you’ll preserve all your data (and be able to boot again), while saving some money. But you need to follow these steps closely:

1. You have an existing EC2 instance, with a root volume that is too large. In this example, mine’s 100GB. Make note of the attachment and availability zone information on your current volume:
My volume is currently attached to /dev/sda1, and in AZ us-west-2a.
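If you prefer the command line, the same details are available via the AWS CLI (the volume ID below is a placeholder; substitute your own):

aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 --query 'Volumes[0].{AZ: AvailabilityZone, Attachments: Attachments}'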

2. If you don’t already know how much data is stored on your volume, ssh in to your instance, and check using:

df -h

3. Exit out, and stop your instance, if you haven’t already (from the Instances page).
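(If you’re working from the CLI instead, something like this should do it; the instance ID is a placeholder:)

aws ec2 stop-instances --instance-ids i-0123456789abcdef0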

4. Create a snapshot of your overly large volume (from the Volumes page):

Don’t delete this snapshot at least until you’ve verified everything is working at the end. As long as you keep this around, you can always repeat the steps.
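For the CLI-inclined, the equivalent is roughly (volume ID is a placeholder):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-downsize backup of root volume"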

5. Create a volume from this snapshot, of the same size (100GB in my case), and in the same AZ (from step 1 above). We will create the smaller volume next, trust me:
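Roughly the CLI equivalent, with a placeholder snapshot ID (if you omit --size, the new volume defaults to the snapshot’s size):

aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-west-2a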

6. Create a brand new volume (not from a snapshot) that’s small enough to satisfy your penny-pinching needs, and large enough to hold your data with room to spare (see step 2):

In my case, my data takes up 14GB, so I’m going with 20GB.
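The CLI version would look something like this (gp2 being the common general-purpose volume type at the time of writing):

aws ec2 create-volume --size 20 --availability-zone us-west-2a --volume-type gp2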

7. You should now have three volumes in your list, with just one attached to your instance:

8. Attach the two new volumes you just created, and note the attachment information for these:

Note that mine are attached to /dev/sdf and /dev/sdg. These will appear in Linux as /dev/xvdf and /dev/xvdg respectively.
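If you’re scripting this, the attachments look roughly like the following; the volume and instance IDs are placeholders:

aws ec2 attach-volume --volume-id vol-aaaa0123456789abc --instance-id i-0123456789abcdef0 --device /dev/sdf
aws ec2 attach-volume --volume-id vol-bbbb0123456789abc --instance-id i-0123456789abcdef0 --device /dev/sdg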

9. Start your instance from the Instances page, and ssh in:

ssh -i [ssh_private_key] ec2-user@ipv4-address
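(Starting the instance also works from the CLI; the instance ID is a placeholder:)

aws ec2 start-instances --instance-ids i-0123456789abcdef0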

10. To see which partitions are present and which volumes are currently mounted (it should be only your original root volume right now), run:

lsblk

(in my case this shows that /dev/xvda has one partition at /dev/xvda1 which is mounted on /, /dev/xvdf has one partition at /dev/xvdf1, and /dev/xvdg has no partitions yet)
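If the default lsblk output feels noisy, you can ask for just the columns that matter here:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT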

11. Create a new partition table and partition on the target (smaller) volume (note: here my smaller volume is at /dev/xvdg; yours may be different):

sudo fdisk /dev/xvdg
n [for new partition]
p [for primary partition]
[accept all defaults by hitting enter repeatedly]
w [to write out and quit]

(If fdisk warns about an existing signature on the device, wipe it first with sudo wipefs --all /dev/xvdg, then run fdisk again.)
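If you’d rather not answer interactive prompts, parted can do the same thing non-interactively; this sketch assumes an MBR (msdos) label, matching what fdisk creates by default:

sudo parted --script /dev/xvdg mklabel msdos mkpart primary ext4 1MiB 100%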

12. Run lsblk again to verify that your new partition has been created at /dev/xvdg1.

13. Create a new file system on the target (smaller) volume’s new partition, and label the file system so that Linux can recognize the partition and boot from it:

sudo mkfs.ext4 /dev/xvdg1
sudo e2label /dev/xvdg1 /
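To double-check that the label took, blkid should now report LABEL="/" for the partition:

sudo blkid /dev/xvdg1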

14. Create mountpoints for the source and target partitions, mount them, and copy over all the data from your snapshot volume to your smaller volume:

sudo mkdir /source /target
sudo mount -t ext4 /dev/xvdf1 /source
sudo mount -t ext4 /dev/xvdg1 /target
sudo rsync -HAXxSPa /source/ /target
[NOTE! VERY IMPORTANT! The trailing "/" after /source/ tells rsync to copy the contents of /source rather than the directory itself; don’t leave it off. /target needs no trailing slash.]
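Before heading back to the console, it doesn’t hurt to flush writes and compare used space on the two filesystems (a rough sanity check, not a byte-for-byte verification):

sudo sync
df -h /source /target
sudo umount /source /target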

15. Exit out and return to the AWS console. Stop your instance. Go to the Volumes page, and detach all three volumes. Now, reattach the smaller volume (your target) as the root device, using the device name you noted in step 1 (/dev/sda1 in my case).
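The CLI equivalents, with placeholder IDs (detach each volume, then attach the small one as the root device):

aws ec2 detach-volume --volume-id vol-aaaa0123456789abc   # repeat for each of the three volumes
aws ec2 attach-volume --volume-id vol-bbbb0123456789abc --instance-id i-0123456789abcdef0 --device /dev/sda1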

16. Boot your instance again. It should come up successfully, but now with only the smaller volume attached.
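Once you’re back in over ssh, a quick check should show the root filesystem at its new, smaller size (about 20GB in my case):

df -h /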

Tah-dah!

17. If you’ve confirmed everything’s working just how you like it, then go ahead and delete the two larger volumes from the Volumes page.
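From the CLI, with placeholder IDs, that’s:

aws ec2 delete-volume --volume-id vol-aaaa0123456789abc
aws ec2 delete-volume --volume-id vol-cccc0123456789abc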

Please let me know in the comments if you found this useful, and if I missed anything crucial. Happy to incorporate your additions!

Cheers,

A
