How to detect Kubernetes overspending by measuring idle costs

Webb Brown
Jan 17, 2019

(This is part 2 of a series about cost monitoring and cost management for Kubernetes. In our first article, we talked about getting visibility into your infrastructure spend with Grafana & Prometheus.)

We’ve worked with dozens of teams to optimize their Kubernetes spend over the past year, and we always start the process of tuning cluster resources by asking one question: what do your idle resources cost? Specifically, how much do your provisioned but unutilized resources (e.g. compute, storage) cost your team? We find that a simple graph in the format below provides the best initial assessment of resource efficiency and the cost of waste.

[Image: a live snapshot of cluster idle costs from the Kubecost tool.]

We see most companies actually underestimate their idle costs and can reduce their cloud spend significantly without impacting performance or reliability. This post is aimed at helping teams get started measuring their Kubernetes resource efficiency.

How to measure

The following steps will give you a quick estimate of your monthly idle costs using Prometheus. See our previous post for a Helm chart that quickly installs Prometheus, kube-state-metrics, and Grafana. Once you have these open-source tools installed, you can calculate cluster idle costs with the following Prometheus queries. Each query is written in the format: Idleness Ratio * Provisioned Capacity * Price.

1. Idle CPU costs — we recommend measuring this data over a week or more so that you capture a full weekday/weekend traffic cycle, though the appropriate window may vary based on your specific workloads and recent compute provisioning changes. Replace YOUR_AVG_CPU_RATE in the query below with your infrastructure’s average CPU cost per month. You can estimate this based on current GCP pricing of $16/mo for on-demand CPUs, or use the Kubecost tool we’ve created, which dynamically queries cloud billing APIs and handles factors like spot vs. on-demand pricing, committed use discounts, etc.
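The query itself appeared as an image in the original post. A sketch of what it might look like in PromQL, assuming node_exporter’s node_cpu_seconds_total and cAdvisor’s machine_cpu_cores metrics are available (metric names vary across exporter versions):

```promql
# Idleness Ratio * Provisioned Capacity * Price:
# average idle fraction across cores over a week, times total cores,
# times your monthly cost per CPU (e.g. 16 for GCP on-demand)
avg(rate(node_cpu_seconds_total{mode="idle"}[7d]))
  * sum(machine_cpu_cores)
  * YOUR_AVG_CPU_RATE
```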

2. Idle memory costs — just as you did for CPU, we recommend measuring memory over a one-week timeframe. An estimate from GCP pricing for on-demand memory is $2/GB per month.
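Again, the original query was an image. A hedged PromQL sketch, assuming node_exporter’s node_memory_MemAvailable_bytes metric (older node_exporter versions use node_memory_MemAvailable instead):

```promql
# average unused memory over a week, converted to GB, times $2/GB/mo
sum(avg_over_time(node_memory_MemAvailable_bytes[7d]))
  / 1024 / 1024 / 1024
  * 2
```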

3. Local disk idle costs — a starting estimate is $0.04/GB for US-based standard storage and $0.08/GB for local SSD.
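A sketch of a matching PromQL query, assuming node_exporter filesystem metrics and that your data disks use ext4 or xfs (adjust the fstype filter and the per-GB price to your environment):

```promql
# unused local disk space in GB, times $0.04/GB/mo for standard storage
sum(node_filesystem_avail_bytes{fstype=~"ext4|xfs"})
  / 1024 / 1024 / 1024
  * 0.04
```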

4. Persistent volume idle costs — again, you can use $0.04/GB for standard storage and $0.17/GB for SSD as an estimate.
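A hedged sketch for persistent volumes, assuming the kubelet exposes volume stats (the kubelet_volume_stats_available_bytes metric requires a kubelet version that reports volume statistics):

```promql
# unused persistent volume space in GB, times $0.04/GB/mo for standard storage
sum(kubelet_volume_stats_available_bytes)
  / 1024 / 1024 / 1024
  * 0.04
```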

Summing these numbers gives you an estimate of the amount of money you are spending monthly on idle cluster resources.
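As a sketch of the arithmetic, here is the Idleness Ratio * Provisioned Capacity * Price sum in Python. The ratios, capacities, and prices below are illustrative placeholders, not measurements — substitute the values from your own queries:

```python
# Monthly idle cost estimate: sum of Idleness Ratio * Provisioned Capacity * Price
# for each resource type. All inputs are illustrative placeholders.
RESOURCES = {
    # name: (idleness_ratio, provisioned_capacity, monthly_price_per_unit)
    "cpu":    (0.40, 32,   16.00),  # 32 cores at $16/core/mo (GCP on-demand)
    "memory": (0.50, 128,   2.00),  # 128 GB at $2/GB/mo
    "disk":   (0.30, 500,   0.04),  # 500 GB standard storage at $0.04/GB/mo
    "pv":     (0.25, 1000,  0.04),  # 1 TB of persistent volumes at $0.04/GB/mo
}

def idle_cost(resources):
    """Return (per-resource idle costs, total monthly idle cost)."""
    costs = {name: ratio * capacity * price
             for name, (ratio, capacity, price) in resources.items()}
    return costs, sum(costs.values())

costs, total = idle_cost(RESOURCES)
for name, cost in costs.items():
    print(f"{name:>6}: ${cost:,.2f}/mo idle")
print(f" total: ${total:,.2f}/mo idle")
```

With these placeholder inputs the estimate comes to $348.80/mo of idle spend; the point is the structure, not the numbers.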

Note that this methodology for estimating idle costs has limitations. The formula gives you an estimate of your idle cluster spend at a snapshot in time, but it doesn’t weight the price of idleness by each individual asset’s relative cost. Nor does it show you how these figures vary over time. Nor does it track idle costs outside of your cluster.

How much idle headroom to maintain?

With an overall understanding of idle spend, we now have a better sense of where to focus our efforts for efficiency gains. Each component of this metric can now be finely tuned for your product and business. Most teams we’ve seen end up targeting utilization in the following ranges:

CPU: 50%–65%

Memory: 40%–60%

Storage: 65%–80%

The right target is highly dependent on the distribution of your resource usage (e.g. P99 vs. median) and the impact of high utilization on your core product and business metrics. While utilization that is too low is wasteful, utilization that is too high can lead to increased latency, reliability issues, and other negative behavior.

Why not just use cluster autoscaling?!

Cluster autoscaling works well in certain situations, but we recommend having cost monitoring in place beforehand. Consider the following case, based on a true story we encountered recently:

  • You have cluster autoscaling enabled
  • You launch a pod with a CPU request that causes the autoscaler to create new nodes
  • You monitor actual usage and see that the CPU request was too high
  • You ship a config change to request less, expecting the autoscaler to turn down the node afterward
  • It doesn’t, because in the meantime Kubernetes has scheduled a non-DaemonSet, non-replicated pod onto that node, which blocks the autoscaler from removing it

This was caught by analyzing cost data in the dashboards from our first blog post, but the problem only became obvious once we put the data in the graph format shown above.

Stay tuned for our next post, where we’ll build on this data to start safely tuning and optimizing clusters to reduce costs.

About us

We’re a team of ex-Google engineers who get crazy excited about helping people monitor and optimize their cloud resources and costs. We’re building technology to help teams effectively manage Kubernetes costs. Reach out about joining our beta or if you want to learn more!
