Optimizing AWS Costs: Your Handbook to Financial Efficiency (Part 1)

TechStoryLines
5 min read · Sep 8, 2023


“Ways to Expand and Economize with AWS Services”

Why is FinOps significant?

FinOps, short for Financial Operations, is a management practice that encourages shared accountability for an organization's cloud infrastructure and spend. With cost optimization a top priority for most organizations, significant savings can be realized through a range of effective strategies. The breadth of AWS services and pricing options gives you the flexibility to control expenses while still meeting your business's capacity needs.

In this guide, we present straightforward, easy-to-implement cost optimization methods that let you pursue an optimal data strategy while keeping expenses under control.

Five Strategies for optimizing your AWS costs

1. Migrate from gp2 to gp3 EBS volumes

Amazon EBS gp2 volumes offer cost-effective SSD performance for various applications, including virtual desktops, medium-sized databases, and development environments. While they are user-friendly, their performance is tied to the provisioned size, which means larger volumes are needed for high performance. Some applications like MySQL, Cassandra, and Hadoop clusters need high performance but not necessarily high storage capacity, potentially leading to cost-inefficient over-provisioning.

*Cost snippet from AWS documentation*

Amazon introduced EBS gp3, a General Purpose SSD volume type that delivers a consistent baseline of 3,000 IOPS and 125 MiB/s regardless of volume size. With gp3, you can provision IOPS and throughput independently of capacity, at a price up to 20% lower per GB than gp2 volumes. This means you can opt for smaller, more cost-efficient volumes while still maintaining high performance. If you need more, you can scale up to 16,000 IOPS and 1,000 MiB/s for an additional cost; gp3 offers four times the maximum throughput of gp2 and suits a wide range of use cases.
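As a rough illustration of the price difference, the comparison can be sketched in Python. The rates below are illustrative us-east-1 figures at the time of writing (gp2 at $0.10/GB-month; gp3 at $0.08/GB-month plus charges for IOPS above the 3,000 baseline and throughput above 125 MiB/s) and should be checked against the current EBS pricing page before relying on them.

```python
# Illustrative us-east-1 prices -- verify against the EBS pricing page.
GP2_PER_GB = 0.10           # $/GB-month
GP3_PER_GB = 0.08           # $/GB-month
GP3_PER_EXTRA_IOPS = 0.005  # $/provisioned IOPS-month above the 3,000 baseline
GP3_PER_EXTRA_MBPS = 0.04   # $/provisioned MiB/s-month above the 125 baseline

def gp2_monthly_cost(size_gb: int) -> float:
    # gp2 is priced on capacity alone; performance scales with size.
    return size_gb * GP2_PER_GB

def gp3_monthly_cost(size_gb: int, iops: int = 3000, throughput_mbps: int = 125) -> float:
    # gp3 bills capacity plus any performance provisioned above the baseline.
    cost = size_gb * GP3_PER_GB
    cost += max(0, iops - 3000) * GP3_PER_EXTRA_IOPS
    cost += max(0, throughput_mbps - 125) * GP3_PER_EXTRA_MBPS
    return cost

# A 500 GB volume at baseline gp3 performance:
print(f"gp2: ${gp2_monthly_cost(500):.2f}/month")  # gp2: $50.00/month
print(f"gp3: ${gp3_monthly_cost(500):.2f}/month")  # gp3: $40.00/month
```

At baseline performance the saving is exactly the 20% per-GB price gap; provisioning extra IOPS or throughput eats into it, so it is worth checking your volumes' actual usage before and after migration.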

2. Leverage the CloudFront Security Savings Bundle:

The CloudFront Security Savings Bundle is a versatile, customer-controlled pricing plan aimed at reducing your CloudFront costs by up to 30% when you commit to a monthly spending target for a one-year duration. This discount isn’t restricted to data delivered by CloudFront; it extends to all CloudFront usage types, including Lambda@Edge. Moreover, the bundle includes the benefit of free AWS WAF (Web Application Firewall) usage, covering up to 10% of your committed spending amount.

Activating the CloudFront Security Savings Bundle is a straightforward process. Within the CloudFront console, you can utilize the integrated savings estimator and recommendations feature to project your potential savings, either based on your historical usage or by manually entering data. Additionally, you have the option to include multiple Savings Bundles to account for anticipated increases in usage as your needs grow.
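A simplified model of the bundle's economics can be sketched as follows. This is only a rough estimator under the headline terms described above (30% discount on committed spend, free WAF usage up to 10% of the commitment); real billing has more dimensions, and the in-console savings estimator is the authoritative tool.

```python
def bundle_estimate(monthly_cloudfront_spend: float) -> dict:
    """Rough estimator for the CloudFront Security Savings Bundle.

    Simplifying assumptions: the commitment covers the same usage at a
    flat 30% discount, and AWS WAF usage up to 10% of the commitment
    is included at no charge.
    """
    commitment = monthly_cloudfront_spend * 0.70  # pay 70% of on-demand
    free_waf_allowance = commitment * 0.10
    return {
        "monthly_commitment": round(commitment, 2),
        "monthly_savings": round(monthly_cloudfront_spend - commitment, 2),
        "free_waf_allowance": round(free_waf_allowance, 2),
    }

print(bundle_estimate(1000.0))
# {'monthly_commitment': 700.0, 'monthly_savings': 300.0, 'free_waf_allowance': 70.0}
```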

3. Cut ElastiCache costs by more than 60% with data tiering:

ElastiCache for Redis has introduced data tiering, a cost-effective solution for Redis workloads. This innovation combines lower-cost NVMe solid-state drives (SSDs) with in-memory data storage on each cluster node. With ElastiCache for Redis data tiering, you can seamlessly scale your clusters, supporting a total capacity of up to 1 petabyte.

ElastiCache for Redis offers data tiering, allowing you to optimize costs by storing ‘hot data’ in memory and the rest on SSDs. It automatically moves less frequently accessed data from memory to SSDs when memory is full and brings it back to memory when accessed. This feature is available with Redis version 6.2 and above on Graviton-based R6gd nodes, helping you achieve over 60% cost savings compared to R6g nodes with memory-only storage. Reserved Instances are also applicable to data tiering nodes, offering additional cost savings of up to 55%.
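The savings come from the SSD capacity being much cheaper per GB than memory, so the effective cost per GB of total usable capacity drops. The sketch below uses hypothetical hourly prices and capacities chosen only to illustrate the arithmetic; look up real R6g/R6gd node prices and SSD sizes on the ElastiCache pricing page.

```python
def cost_per_gb_hour(hourly_price: float, memory_gb: float, ssd_gb: float = 0.0) -> float:
    # Effective hourly cost per GB of total usable capacity (memory + SSD).
    return hourly_price / (memory_gb + ssd_gb)

# Hypothetical figures for illustration only -- not real AWS prices.
r6g = cost_per_gb_hour(hourly_price=1.00, memory_gb=52.0)                 # memory only
r6gd = cost_per_gb_hour(hourly_price=1.30, memory_gb=52.0, ssd_gb=117.0)  # data tiering

savings = 1 - r6gd / r6g
print(f"cost/GB savings with data tiering: {savings:.0%}")
# cost/GB savings with data tiering: 60%
```

The trade-off is latency: reads that fault from SSD back into memory are slower than pure in-memory reads, so data tiering fits workloads where only a subset of the keyspace is hot at any time.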

4. Consider using IPv6 over IPv4:

AWS has announced that it will begin charging for public IPv4 addresses, a change that has prompted organizations to reevaluate their resource utilization strategies. With this development, planning a smooth transition to IPv6 becomes even more important.

Embracing IPv6 not only helps future-proof your infrastructure but also aligns with AWS's evolving pricing models. By proactively migrating to IPv6, you can mitigate the cost increases associated with IPv4 usage. It's a strategic move that ensures cost efficiency while positioning your infrastructure for long-term scalability and compatibility with AWS services.
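The per-address charge is small but multiplies quickly across a fleet. The sketch below assumes the announced rate of $0.005 per public IPv4 address per hour (effective February 2024) and the 730-hour billing month AWS uses in its own calculators; verify the current rate on the AWS pricing pages.

```python
# Assumed announced rate -- confirm against current AWS pricing.
IPV4_PER_HOUR = 0.005  # $ per public IPv4 address per hour
HOURS_PER_MONTH = 730  # AWS billing convention

def monthly_ipv4_cost(num_public_ips: int) -> float:
    # Monthly charge for keeping this many public IPv4 addresses allocated.
    return num_public_ips * IPV4_PER_HOUR * HOURS_PER_MONTH

print(f"100 public IPv4 addresses: ${monthly_ipv4_cost(100):.2f}/month")
# 100 public IPv4 addresses: $365.00/month
```

Auditing which resources actually need a public IPv4 address (versus a private address behind a NAT gateway, or IPv6) is often the quickest win here.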

5. Aggregate Kinesis records

Amazon Kinesis Data Streams producers play a pivotal role in efficiently ingesting user data into Kinesis data streams. To simplify this process and empower developers to achieve impressive write throughput, Amazon offers the Kinesis Producer Library (KPL). The KPL streamlines the development of producer applications by providing a powerful toolset. One crucial aspect covered within the KPL is “aggregation.”

Aggregation, in the context of Kinesis Data Streams, involves bundling multiple records into a single Kinesis Data Streams record. This clever technique enables customers to maximize the efficiency of their data ingestion process. By aggregating records, users can increase the number of records sent per API call, thereby enhancing producer throughput. The implications of this aggregation are especially significant when considering the limitations imposed by Kinesis Data Streams shards.

Kinesis Data Streams shards can each ingest up to 1,000 records per second or 1 MiB per second of throughput, whichever limit is reached first. For records smaller than 1 KB, the record-count limit becomes the bottleneck well before the throughput limit. Here's where record aggregation comes to the rescue: by consolidating several smaller records into a single, larger Kinesis Data Streams record, customers can drastically improve their per-shard throughput.

For example, if you’re dealing with a scenario like one shard in the us-east-1 region, processing 1,000 records per second, each being 512 bytes in size, you can utilize KPL aggregation to pack those 1,000 records into just 10 Kinesis Data Streams records, each of them being 50 KB in size. This ingenious approach effectively reduces the records per second to a mere 10, significantly optimizing the efficiency of data ingestion.
- Assuming a use case of 25,000 records per second, 25 shards are needed.
- Applying the aggregation technique brings the record rate down to 2,000 per second, resulting in roughly 91% cost savings.
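To see how aggregation changes shard requirements, here is a small sizing sketch. Note that the 1 MiB/s per-shard ingest limit still applies to the aggregated bytes, so the achievable savings depend on record size; the figures below assume hypothetical 100-byte records packed 25-to-an-aggregate purely for illustration.

```python
import math

def shards_needed(records_per_sec: int, record_bytes: int) -> int:
    """Minimum shards for a given ingest rate.

    A shard accepts up to 1,000 records/s or 1 MiB/s of bytes,
    whichever limit is hit first.
    """
    by_count = math.ceil(records_per_sec / 1000)
    by_bytes = math.ceil(records_per_sec * record_bytes / (1024 * 1024))
    return max(by_count, by_bytes)

# Without aggregation: 25,000 records/s of 100-byte records.
print(shards_needed(25_000, 100))  # 25 -- record count is the bottleneck
# With KPL aggregation packing ~12 records into one ~1,250-byte record:
print(shards_needed(2_000, 1_250))  # 3 -- byte throughput is now the bottleneck
```

The byte-throughput floor is why aggregation pays off most for small records: the total bytes ingested are unchanged, but the record count, and with it the shard count, collapses.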

We appreciate your readership and support. For more insightful updates and tips, don’t forget to follow us and stay connected on our journey through the ever-evolving world of cloud computing.
