How to detect Kubernetes overspending by measuring idle costs

Webb Brown · Kubecost · Jan 17, 2019

(This is part 2 of a series about cost monitoring and cost management for Kubernetes. In our first article, we talked about getting visibility into your infrastructure spend with Grafana & Prometheus.)

We’ve worked with dozens of teams to optimize their Kubernetes spend over the past year, and we always start the process of tuning cluster resources by asking one question: what do your idle resources cost? Specifically, how much do your provisioned but unutilized resources (e.g. compute, storage, etc.) cost your team? We feel that a simple graph (in the format below) provides the best initial assessment of resource efficiency and the cost of waste.

Here’s a live snapshot from the Kubecost tool.

We see most companies actually underestimate their idle costs and can reduce their cloud spend significantly without impacting performance or reliability. This post is aimed at helping teams get started measuring their Kubernetes resource efficiency.

How to measure

The following steps will give you a quick estimate of your monthly idle costs by using Prometheus. See our previous post for a helm chart to quickly install Prometheus, kube-state-metrics, and Grafana. Once you have these open source tools installed, you can calculate cluster idle costs with the following Prometheus queries. Each query is written in the following format: Idleness Ratio * Provisioned Capacity * Price.

1. Idle CPU costs — we recommend measuring this data over a week or more so that you capture a full week/weekend traffic cycle, though the appropriate window may vary based on your specific workloads and recent compute provisioning changes. Replace YOUR_AVG_CPU_RATE in the query below with your infrastructure’s average CPU cost per month. You can estimate this from current GCP pricing of roughly $16/month for an on-demand CPU, or use the Kubecost tool we’ve created, which dynamically queries cloud billing APIs and handles factors like spot vs. on-demand pricing, committed use discounts, etc.
sum(rate(node_cpu_seconds_total{mode="idle"}[7d])) / sum(rate(node_cpu_seconds_total[7d])) * sum(kube_node_status_capacity_cpu_cores) * [YOUR_AVG_CPU_RATE]

2. Idle memory costs — just as you did for CPU, we recommend measuring memory over a week-long timeframe. An estimate from GCP pricing for on-demand memory is $2/GB per month.

sum(avg_over_time(node_memory_MemFree_bytes[7d]) + avg_over_time(node_memory_Cached_bytes[7d]) + avg_over_time(node_memory_Buffers_bytes[7d])) / sum(avg_over_time(node_memory_MemTotal_bytes[7d])) * sum(kube_node_status_capacity_memory_bytes) / 1024 / 1024 / 1024 * [YOUR_AVG_MEMORY_RATE]

3. Local disk idle costs — a starting estimate is $0.04/GB per month for US-based standard storage and $0.08/GB per month for local SSD.

(1-sum(container_fs_usage_bytes{device=~"^/dev/[sv]d[a-z][1-9]$",id="/"}) / sum(container_fs_limit_bytes{device=~"^/dev/[sv]d[a-z][1-9]$",id="/"})) * sum(container_fs_limit_bytes{device=~"^/dev/[sv]d[a-z][1-9]$",id="/"}) / 1024 / 1024 / 1024 * [YOUR_AVG_STORAGE_RATE]

4. Persistent volume idle costs — again, you can use $0.04/GB per month for standard storage and $0.17/GB per month for SSD persistent disks as an estimate.

sum(kubelet_volume_stats_available_bytes) / sum(kube_persistentvolumeclaim_resource_requests_storage_bytes) * sum(kube_persistentvolumeclaim_resource_requests_storage_bytes) / 1024 / 1024 / 1024 * [YOUR_AVG_PV_STORAGE_RATE]
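
(A quick aside on the math: because the idleness ratio and the provisioned capacity in this last query share the same sum(kube_persistentvolumeclaim_resource_requests_storage_bytes) term, the expression algebraically reduces to sum(kubelet_volume_stats_available_bytes) / 1024 / 1024 / 1024 * [YOUR_AVG_PV_STORAGE_RATE]. The longer form simply mirrors the Idleness Ratio * Provisioned Capacity * Price pattern used by the other queries.)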

Summing these numbers will give you an estimate for the amount of money you are spending monthly on idle cluster resources:

Idle compute cost + Idle RAM cost + Idle local storage cost + Idle PV cost = Total idle cluster cost
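
As a purely illustrative example (these numbers are hypothetical, not from a real cluster): suppose the queries above report 60% CPU idleness on 40 provisioned cores, 50% memory idleness on 160 GB of RAM, and 70% idleness across 500 GB of persistent volumes, and you use the rough GCP rates mentioned earlier ($16 per CPU, $2/GB for RAM, $0.04/GB for standard storage). Ignoring local disks for brevity, the monthly estimate would be:

0.6 * 40 * $16 = $384 (idle CPU)
0.5 * 160 * $2 = $160 (idle memory)
0.7 * 500 * $0.04 = $14 (idle persistent volumes)
$384 + $160 + $14 = $558 per month in idle cluster resources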

Note that this methodology for estimating idle costs has limitations. This formula gives you an estimate of your idle cluster spend for a snapshot in time, but it doesn’t weight the price of idleness by each individual asset’s relative cost. Nor does it show you how these figures vary over time. Nor does it track idle costs outside of your cluster.

How much idle headroom to maintain?

With an overall understanding of idle spend, we now have a better sense of where to focus our efforts for efficiency gains. Each component of this metric can now be finely tuned for your product and business. Most teams we’ve seen end up targeting utilization in the following ranges:

CPU: 50%–65%

Memory: 40%–60%

Storage: 65%–80%

These targets are highly dependent on the distribution of your resource usage (e.g. P99 vs. median) and on the impact of high utilization on your core product and business metrics. While utilization that is too low is wasteful, utilization that is too high can lead to increased latency, reliability issues, and other negative behavior.
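
To see where your cluster currently sits relative to these ranges, one simple check is to invert the idleness ratio from the CPU query above into a utilization ratio (the memory and storage queries can be inverted the same way):

1 - sum(rate(node_cpu_seconds_total{mode="idle"}[7d])) / sum(rate(node_cpu_seconds_total[7d]))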

Why not just use cluster autoscaling?!

Cluster autoscaling works well in certain situations, but we recommend having cost monitoring in place beforehand. Consider the following case, based on a true story we encountered recently:

  • You have cluster autoscaling on
  • You launch a pod with a CPU request that causes the autoscaler to create new nodes
  • You monitor actual usage, note that the CPU request was too high
  • You ship a config change to request less, expecting the autoscaler to scale the node down afterward
  • It doesn’t, because in the meantime Kubernetes has scheduled a non-daemon, non-replicated pod onto that node, which prevents the autoscaler from removing it

This was caught by analyzing cost data in the dashboards from our first blog post, but it wasn’t until we put this data in the graph format above that the problem became obvious.
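
If you want to spot a node like this with the tools already installed, one simple starting point is a per-node version of the idleness query above, for example:

sum by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1d])) / sum by (instance) (rate(node_cpu_seconds_total[1d]))

A node that sits near 100% idle for days while the autoscaler leaves it running is a strong hint that something non-evictable has been scheduled onto it.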

Stay tuned for our next post, where we’ll build on this data to start safely tuning and optimizing clusters to reduce costs.

About us

We’re a team of ex-Google engineers who get crazy excited about helping people monitor and optimize their cloud resources and costs. We’re building technology to help teams effectively manage Kubernetes costs. Reach out (team@kubecost.com) if you’d like to join our beta or want to learn more!
