How to reduce your AWS Cost - Part 2

Prabhath Suminda Pathirana
4 min read · Jan 3, 2022


Photo by Towfiqu barbhuiya on Unsplash

Link to Part 1

In my previous article (linked above) I explained the background behind this series of articles and covered several ways in which you can reduce your EC2 costs.

In this article, I will try to cover a few EBS and S3 cost-saving methods.

EBS

EBS is not an expensive service. But when you have many EBS volumes it can add up to a significant cost. Below are several methods which I have used to reduce EBS costs.

1. Do not over-provision space

EBS volumes are charged for the allocated size regardless of usage. As an example, if you create a 40GB EBS volume and use only 8GB, you will still be paying for the full 40GB.

EBS volume sizes can be increased without downtime (on Linux instances). Therefore we should not over-provision disks on the assumption that we will need more space in the long run.

You can use a monitoring service like CloudWatch or Datadog to monitor your disks and trigger alerts when disk usage crosses a particular threshold (say 75%). Then you can add space as needed, rather than allocating a large amount of space at the beginning and paying for unused space.
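
As a rough sketch, here is how such an alert could be created with boto3, assuming the CloudWatch agent is already publishing the disk_used_percent metric under the CWAgent namespace (the exact dimensions depend on your agent configuration, and the instance ID and SNS topic ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert when disk usage on the root filesystem crosses 75%.
# Assumes the CloudWatch agent publishes disk_used_percent under CWAgent;
# the dimensions must match the ones your agent actually reports.
cloudwatch.put_metric_alarm(
    AlarmName="root-disk-usage-high",
    Namespace="CWAgent",
    MetricName="disk_used_percent",
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # placeholder instance
        {"Name": "path", "Value": "/"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:disk-alerts"],  # placeholder topic
)
```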

The same applies to IOPS: if you are provisioning IOPS, provision only what you actually need. For many workloads, keeping the default value (which is usually the minimum) is good enough.

2. Use GP3 instead of GP2

Just like I explained in the context of EC2 in my previous article, EBS volumes also have generations. The latest generation is GP3.

https://aws.amazon.com/about-aws/whats-new/2020/12/introducing-new-amazon-ebs-general-purpose-volumes-gp3/

But in the AWS console, the default type is still GP2. So switching to GP3 when launching instances will save you some money, and you will also get more IOPS compared to a GP2 volume.

Existing GP2 volumes can be changed to GP3 without any downtime. You can easily change the type using the AWS console and save costs while getting more IOPS.
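
Besides the console, the change can also be scripted. A minimal boto3 sketch (the volume ID below is a placeholder) might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Change an existing GP2 volume to GP3 in place; no detach or downtime needed.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="gp3",
)

# Optionally check the progress of the modification.
response = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(response["VolumesModifications"][0]["ModificationState"])
```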

Further reading: https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/

3. Delete unattached EBS volumes

EBS volumes are charged for whether they are being used or not. If you are keeping an unattached volume around because you assume it will be needed in the future, take a snapshot of it and delete the volume instead. Snapshots are much cheaper than an actual volume, and if the need arises you can always create a new volume from the snapshot in a matter of seconds. So there is no point in keeping unattached volumes.

If you want to be proactive about this, you can create a Lambda function that scans for unattached EBS volumes and sends an SNS alert to the admins.
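
A minimal sketch of such a Lambda function with boto3 could look like the following (the SNS topic ARN is a placeholder, and the function's role would need permissions for ec2:DescribeVolumes and sns:Publish):

```python
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:unused-ebs-alerts"  # placeholder


def lambda_handler(event, context):
    # Volumes in the "available" state are not attached to any instance.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]

    if not volumes:
        return {"unattached": 0}

    lines = [f"{v['VolumeId']} ({v['Size']} GB)" for v in volumes]
    sns.publish(
        TopicArn=SNS_TOPIC_ARN,
        Subject="Unattached EBS volumes found",
        Message="\n".join(lines),
    )
    return {"unattached": len(volumes)}
```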

S3

S3 is a very cost-effective service, but you can still reduce costs further using the following methods.

1. Understand the different S3 storage classes

S3 has multiple storage classes, and some of them are much cheaper than the Standard class. So you need to understand the nature of your data and use the most appropriate storage class when saving data to S3.
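
For example, the storage class can be set per object at upload time. A minimal boto3 sketch (the bucket, key, and file names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Store an infrequently accessed object in STANDARD_IA instead of STANDARD.
with open("report.pdf", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",        # placeholder bucket
        Key="reports/2021-12-report.pdf",  # placeholder key
        Body=f,
        StorageClass="STANDARD_IA",        # cheaper per GB than STANDARD for infrequent access
    )
```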

2. Use lifecycle rules

Lifecycle rules are a very important feature of S3. This is not an S3 tutorial, so I am only going to mention some important things that can be done with lifecycle rules; for more details please refer to the AWS documentation. Lifecycle rules can be used as below to reduce your S3 costs.

  • Purging old data — Most data becomes obsolete after some time, depending on your business requirements. Lifecycle rules can be used to delete data based on criteria such as creation date (e.g. delete data that is more than 3 months old).
  • Archiving data — S3 provides several storage classes, such as Glacier, which are suitable for archiving data. The per-GB cost of these classes is much lower than that of the other classes. So, instead of deleting, you can use lifecycle rules to archive data if you need to keep it for regulatory or other reasons. You can also have a combined flow, such as archive after 90 days and delete after 180 days, as shown in the sketch below.
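
As a concrete example, here is a minimal boto3 sketch of that combined flow, archiving objects under a prefix to Glacier after 90 days and deleting them after 180 days (the bucket name and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-delete",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # placeholder prefix
                # Move objects to Glacier after 90 days, delete them after 180 days.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 180},
            }
        ]
    },
)
```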

3. Limit the versioning

If you have versioning enabled on your bucket, S3 keeps every version of each object. If you have frequently changing objects, this can cost you a lot. You can use lifecycle rules to limit the number of noncurrent versions you keep.
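
For example, a lifecycle rule along these lines (again with a placeholder bucket, and purely illustrative numbers) expires noncurrent versions a month after they are superseded while retaining the newest few:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # applies to the whole bucket
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 30,          # expire versions 30 days after they become noncurrent
                    "NewerNoncurrentVersions": 5,  # but always keep the 5 newest noncurrent versions
                },
            }
        ]
    },
)
```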

4. Use the requester pays feature

If your data is accessed by other parties and you do not want to pay the associated S3 data transfer costs out of your own pocket, you can enable the Requester Pays feature on the bucket so that the other parties pay for the data transfer.
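
A minimal sketch of enabling this on a bucket with boto3 (the bucket and key names are placeholders); note that requesters must then explicitly acknowledge the charge in their requests:

```python
import boto3

s3 = boto3.client("s3")

# Enable Requester Pays on the bucket.
s3.put_bucket_request_payment(
    Bucket="my-example-bucket",  # placeholder bucket
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Requesters must acknowledge they will be charged for the request.
obj = s3.get_object(
    Bucket="my-example-bucket",
    Key="shared/dataset.csv",  # placeholder key
    RequestPayer="requester",
)
```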

5. Compress data before storing

This is something most people overlook. S3 charges for the storage you use, so if you can store the same data in less space, it will definitely save you money.

If you have an application that uses S3 as a storage layer, you can code your application to compress the data using gzip or any other compression method before transferring it to S3. When the data is retrieved, you can decompress it before passing it to the other application layers.
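
As a rough sketch of this pattern using boto3 and Python's gzip module (the bucket and key names are placeholders):

```python
import gzip
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # placeholder bucket


def put_compressed(key: str, data: bytes) -> None:
    # Compress in memory before uploading to reduce stored (and transferred) bytes.
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=gzip.compress(data),
        ContentEncoding="gzip",
    )


def get_decompressed(key: str) -> bytes:
    # Download and decompress before handing the data to other application layers.
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return gzip.decompress(obj["Body"].read())


# Usage:
put_compressed("events/2022-01-03.json", b'{"example": "payload"}')
print(get_decompressed("events/2022-01-03.json"))
```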

In addition to the storage costs, this will save your data transfer costs as well.

In the next article, I will cover Lambda and RDS.

Prabhath Suminda Pathirana

Software Architect, AWS Certified Solutions Architect (Professional)