Simple ways to decrease your AWS costs

Amazon Web Services is a wonderful tool, but its costs can skyrocket quickly if you don’t manage them.
Below is some advice for drastically cutting them.

1. Advanced billing: without metrics, you have no control

By default, your costs are opaque: the bill is only split by AWS product type (EC2, S3, RDS, etc.), which is not very helpful. To start getting real metrics out of your bill, you have to set up the advanced billing report, following this documentation.

Once that is done, you will receive a detailed CSV version of your bill in an S3 bucket. Don’t forget to add all the tags you use for management purposes to the report: for example, the project name, its environment (test, dev, preprod, prod), etc. Be careful not to add unused tags that will make the file bigger than needed without adding any relevant information.

This report contains hour-by-hour resource usage with the associated tags. It’s really the first step to exploring your costs, but you will probably need additional software for that: the file contains a lot of information, and reading it manually or with an Excel-like tool will be inefficient.
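To give a rough idea of what programmatic exploration looks like, here is a minimal sketch that sums costs per project tag, assuming the report has already been downloaded from the S3 bucket. The file name and the column names ("user:project", "UnBlendedCost") are assumptions; check them against the columns present in your own report.

```python
import csv
from collections import defaultdict

# Minimal sketch: sum up costs per project tag from a detailed billing CSV.
# The file name and the column names ("user:project", "UnBlendedCost") are
# assumptions; adapt them to the columns present in your own report.
costs_per_project = defaultdict(float)

with open("detailed-billing-report.csv", newline="") as f:
    for row in csv.DictReader(f):
        project = row.get("user:project") or "untagged"
        cost = row.get("UnBlendedCost") or "0"
        costs_per_project[project] += float(cost)

for project, cost in sorted(costs_per_project.items(), key=lambda kv: -kv[1]):
    print(f"{project}: ${cost:,.2f}")
```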

If you have multiple accounts, don’t forget to use consolidated billing
to manage all your bills in one place.

2. Untagged resources

Still with the goal of having more metrics, you have to tag all your resources. This can be done with the AWS console, or better, with your CloudFormation/Terraform templates. In addition to cost management, it will help you improve your overall infrastructure management. I have often seen only the EC2 instances tagged, but that’s a mistake, because most resources can be tagged: instances, RDS instances, Elastic Block Storage, S3 buckets, etc. Make sure that your tagging strategy is consistent across your teams. A good AWS citizen is a citizen that tags their resources.
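A small audit script can help you spot the stragglers. Here is a minimal boto3 sketch that lists running EC2 instances missing a given tag; the tag key "project" and the region are assumptions, and the same approach works for volumes, RDS instances, and so on.

```python
import boto3

# Minimal sketch: report running EC2 instances that are missing a "project" tag.
# The tag key and the region are assumptions; adapt them to your own conventions.
REQUIRED_TAG = "project"

ec2 = boto3.client("ec2", region_name="us-east-1")

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if REQUIRED_TAG not in tags:
            print(f"{instance['InstanceId']} is missing the '{REQUIRED_TAG}' tag")
```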

3. Oversized instances

Instance pricing depends on instance characteristics. The more you get, the more you pay.

For a t2.micro, you will pay $10.25/month
For an m3.medium, $53.44/month
For an m3.large, $106.88/month

These prices are based on Virginia (us-east-1) pricing, for instances running 24/7.

To save money, you have to choose the right instance type: one that is enough to support your system, while avoiding overpowered instances. A simple method for that is to use AWS CloudWatch, AWS’s monitoring system, which lets you access your average usage over the last two weeks. If it stays below a certain threshold, let’s say 10% CPU usage, during that period, your instance is probably oversized and you should try to downsize it.
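For example, here is a minimal boto3 sketch of that check, pulling the average CPU usage of an instance over the last two weeks from CloudWatch. The instance ID, the region and the 10% threshold are assumptions to adapt to your own context.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Minimal sketch: flag an instance as probably oversized if its average CPU
# usage over the last two weeks stays below a threshold. The instance ID,
# region and 10% threshold are assumptions.
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID
THRESHOLD = 10.0  # percent

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=14),
    EndTime=datetime.now(timezone.utc),
    Period=3600,  # one data point per hour
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
if datapoints:
    average_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
    print(f"Average CPU over 14 days: {average_cpu:.1f}%")
    if average_cpu < THRESHOLD:
        print("This instance is probably oversized; consider a smaller type.")
```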

4. Instance reservation

By default, you’re paying for your instances at “on demand” prices, the pay-as-you-go promise of the cloud. But AWS also offers another pricing model, where you agree to a commitment of 1 or 3 years. In return for that commitment, you get a discount of 20 to 60%, depending on its terms.

However, the commitment is linked to an instance type, so you should be careful to have well-sized instances in the corresponding availability zone, and you should check regularly that your reservations match your running instances.
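A minimal boto3 sketch of that check could compare the instance types you run with the active reservations you hold. It only counts instances per type and ignores availability zones, platform and size flexibility, so treat its output as a rough starting point rather than a definitive coverage report.

```python
import boto3
from collections import Counter

# Minimal sketch: compare running instance types against active reservations.
# It only counts instances per type and ignores AZ, platform and size
# flexibility; the region is an assumption.
ec2 = boto3.client("ec2", region_name="us-east-1")

running = Counter()
for reservation in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for instance in reservation["Instances"]:
        running[instance["InstanceType"]] += 1

reserved = Counter()
for ri in ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]:
    reserved[ri["InstanceType"]] += ri["InstanceCount"]

for instance_type in sorted(set(running) | set(reserved)):
    print(f"{instance_type}: running={running[instance_type]}, "
          f"reserved={reserved[instance_type]}")
```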

In the same manner, you can reserve instances for other AWS services like RDS, ElastiCache or Redshift.

5. Instances running 24/7

If you have instances that don’t need to run every day, like your pre-production or any kind of test instance, you should stop them outside of office hours. Paying for only a 50-hour office week is a big discount compared to a full 168-hour week: roughly 70% off, which is even better than a reservation plan.
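As an illustration, here is a minimal boto3 sketch that stops every running instance carrying a given tag, to be scheduled in the evening from a cron job or a Lambda, with a symmetric start script in the morning. The tag key/value and the region are assumptions.

```python
import boto3

# Minimal sketch: stop all running instances tagged "schedule=office-hours".
# Run it in the evening; a symmetric script calling start_instances runs in
# the morning. The tag key/value and the region are assumptions.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped: {', '.join(instance_ids)}")
```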

6. Oversized Elastic Block Storage

EBS, or Elastic Block Storage, volumes are the disks attached to your instances. They are not bundled with any sort of reservation, and you pay for them by the gigabyte. They can quickly inflate your bill if you’re using big persistent disks.

For example, for a 100 GB SSD you will pay $10/month, or $5 if you’re using a magnetic disk (with lower performance). In a more extreme example, a 1 TB SSD will cost you $100/month. Past a certain size, you should seriously start thinking about moving your data to S3: the same 1 TB will cost you $30/month. At the end of the year, you will have saved $840 by switching to S3, in addition to having a more scalable system.
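To get a feel for where that money goes, here is a minimal boto3 sketch that sums the provisioned EBS capacity per volume type and gives a rough monthly estimate. The per-GB prices are hard-coded assumptions taken from the figures above; check the current price list for your region.

```python
import boto3
from collections import Counter

# Minimal sketch: sum provisioned EBS gigabytes per volume type and estimate
# the monthly cost. The per-GB prices (SSD ~$0.10/GB, magnetic ~$0.05/GB) and
# the region are assumptions; check the real price list for your region.
PRICE_PER_GB = {"gp2": 0.10, "standard": 0.05}

ec2 = boto3.client("ec2", region_name="us-east-1")
sizes = Counter()
for volume in ec2.describe_volumes()["Volumes"]:
    sizes[volume["VolumeType"]] += volume["Size"]

for volume_type, total_gb in sizes.items():
    price = PRICE_PER_GB.get(volume_type)
    estimate = f"~${total_gb * price:.2f}/month" if price else "price not listed here"
    print(f"{volume_type}: {total_gb} GB provisioned ({estimate})")
```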

7. Idle instances/ELBs/volumes/RDS/addresses

Pure waste is, of course, unused resources: instances without usage, Elastic Load Balancers without attached instances, RDS instances without usage, unused Elastic IPs, unattached volumes, etc.
For EIPs, volumes and load balancers, it’s easy to track them with the AWS console; for your instances, you have to check CloudWatch or your own monitoring.
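Here is a minimal boto3 sketch that finds two of the easy cases: unattached EBS volumes and Elastic IPs that are not associated with anything. Idle instances and load balancers need a look at their metrics, so they are left out; the region is an assumption.

```python
import boto3

# Minimal sketch: list unattached EBS volumes and unassociated Elastic IPs.
# Idle instances and ELBs need a look at their CloudWatch metrics, so they
# are not covered here. The region is an assumption.
ec2 = boto3.client("ec2", region_name="us-east-1")

unattached = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for volume in unattached:
    print(f"Unattached volume: {volume['VolumeId']} ({volume['Size']} GB)")

for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print(f"Unassociated Elastic IP: {address['PublicIp']}")
```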

8. S3 backup

S3 is generally cheap, that’s true, but it can be cheaper. Create multiple buckets, with multiple folders, split by usage: for example, a bucket for your production assets, one for your preprod assets, one for your backups, etc.
Then you can easily apply different lifecycles adapted to your data. For example, the bucket or folder you use for backups can be moved to cheaper storage over time; we rarely need backups older than one year, and this is now built in with S3 lifecycle rules. A backup lifecycle can be 3 months on S3 Standard, followed by 3 months on S3 Infrequent Access, and finally X months on Glacier before eventually deleting it. It’s really easy to set up, it just works, and it can divide your S3 costs by three.
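Such a lifecycle can be set from the console or from code. Here is a minimal boto3 sketch of the rule described above; the bucket name and the "backup/" prefix are assumptions, and the expiration is arbitrarily set at one year.

```python
import boto3

# Minimal sketch: lifecycle rule for a backup prefix, 90 days on S3 Standard,
# then Infrequent Access, then Glacier, deleted after one year. The bucket
# name, the "backup/" prefix and the one-year expiration are assumptions.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "backup-lifecycle",
                "Filter": {"Prefix": "backup/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```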

9. Too many snapshots

Still on the backup topic: on AWS it’s common to take snapshots of your disks. It’s the simplest backup to do, but be careful not to accumulate them indefinitely. A simple method for cutting this cost is to use a snapshot retention policy, with a tool like simple_ec2_snapshot for example.
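If you prefer to script it yourself, here is a minimal boto3 sketch that deletes snapshots you own that are older than a retention period. The 90-day retention and the region are assumptions; review what it lists before letting it actually delete anything (snapshots still referenced by an AMI will fail to delete).

```python
import boto3
from datetime import datetime, timedelta, timezone

# Minimal sketch: delete snapshots owned by this account that are older than
# a retention period. The 90-day retention and the region are assumptions;
# review the printed list before letting it actually delete anything.
RETENTION_DAYS = 90
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

ec2 = boto3.client("ec2", region_name="us-east-1")
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]

for snapshot in snapshots:
    if snapshot["StartTime"] < cutoff:
        print(f"Deleting {snapshot['SnapshotId']} from {snapshot['StartTime']:%Y-%m-%d}")
        ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
```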

Conclusion

By the way, I’m working on a new project that does all of that, and more, automatically for you. You can check it out here: https://wizardly.eu/. It’s still in beta, but the first release is coming soon.
