GKE Usage Metering


GKE Usage Metering is a feature that profiles GKE clusters, capturing the usage and cost of CPU, memory, storage, and (optionally) network egress. Google provides example queries and Data Studio dashboards.

The goal of this blog post is to show an alternative hour-by-hour analysis, with a detailed explanation of the queries used.


There are a number of documented limitations, but there are also other limitations that should be considered:

  • Limited object-level metrics. Metrics are reported at the object level, but with no object name or kind, only labels. Workaround: if you have meaningful labels, use those.
  • The Usage Metering dataset must be in the same project as the cluster; there is no built-in way to combine data from multiple projects into one. Workaround: create a view that combines the GKE Usage Metering data from multiple projects.
  • Usage Metering doesn't capture all system and idle CPU and memory. Workaround: join with billing data and scale the metrics up to the full cluster proportionally.
  • Billing data doesn't include the cluster name, so adjustments from billing can't be tied to a specific cluster. Workaround: label the clusters, or adjust across all clusters in a project.
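As a sketch of the multi-project workaround, the combining view can be a simple UNION ALL. The project names, central dataset, and table names here are hypothetical (the table name follows the default GKE usage metering export naming):

```sql
-- Hypothetical combining view across two projects' metering datasets.
CREATE OR REPLACE VIEW `central.gke_usage_combined` AS
SELECT 'project-a' AS project_id, *
FROM `project-a.gke_usage.gke_cluster_resource_usage`
UNION ALL
SELECT 'project-b' AS project_id, *
FROM `project-b.gke_usage.gke_cluster_resource_usage`;
```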

Normalising GKE Usage Metering

The GKE Usage Metering metrics each cover approximately an hour, generally close to the start of the hour, but not exactly on it. This makes joining directly with billing data impossible.

Additionally, there are multiple sources of GKE Usage Metering data:

  • There may be one or more projects with data.
  • There are the resource_usage and resource_consumption tables. The former represents the CPU and memory that GKE requests (reserves), while the latter represents the resources actually consumed.


The following query does two things:

  • Exposes the underlying data in a single view. Filtering by the fields record_date, project_id, and type will result in fewer tables and partitions being scanned.
  • Normalises the windows to match those used by billing (precisely on the hour).
CREATE OR REPLACE VIEW `gke_usage.gke_resource_normalised` AS
-- SourceData is a UNION of all source tables, including the
-- different types and projects.
-- Any queries against this view that specify the project_id,
-- type, or record_date (partition for the source data) will be
-- more efficient as they will scan fewer tables and partitions.
WITH SourceData AS (
  SELECT
    _PARTITIONTIME AS record_date,
    '<project-id>' AS project_id,
    'consumed' AS type,
    * EXCEPT (project)
  FROM `gke_usage.gke_cluster_resource_consumption`
  UNION ALL
  SELECT
    _PARTITIONTIME AS record_date,
    '<project-id>' AS project_id,
    'request' AS type,
    * EXCEPT (project)
  FROM `gke_usage.gke_cluster_resource_usage`
)
-- Pull out all of the fields we need from the GKE Usage Metering.
-- The CROSS JOIN with 'win' is the set of precise time windows
-- (on the hour for an hour) that overlap with the metrics. This
-- allows joining with billing later.
-- Additionally, you can use the GKE labels at this stage and
-- normalise for analytics.
SELECT
  record_date,
  project_id,
  type,
  cluster_location,
  cluster_name,
  namespace,
  resource_name,
  sku_id,
  TO_JSON_STRING(labels) AS labels,
  win AS start_time,
  usage.unit AS unit,
  -- This formula calculates the size of the new window
  -- against the original window for the new usage amount
  SUM(usage.amount *
    IF(end_time = start_time,
      1.0,
      TIMESTAMP_DIFF(
        LEAST(TIMESTAMP_ADD(win, INTERVAL 1 HOUR), end_time),
        GREATEST(win, start_time), MICROSECOND) /
      TIMESTAMP_DIFF(end_time, start_time, MICROSECOND)
    )) AS usage
FROM SourceData
-- Array of overlapping periods
CROSS JOIN UNNEST(GENERATE_TIMESTAMP_ARRAY(
  TIMESTAMP_TRUNC(start_time, HOUR),
  TIMESTAMP_TRUNC(end_time, HOUR),
  INTERVAL 1 HOUR)) AS win
WHERE
  usage.amount > 0 AND
  -- Bug -- sometimes start_time is at the epoch
  TIMESTAMP_DIFF(end_time, start_time, DAY) < 1
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11;
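The window arithmetic can be sanity-checked with a tiny standalone example. Here a hypothetical record of 60 units spanning 09:50 to 10:50 is split across the two hour windows it overlaps:

```sql
-- Hypothetical record: 60 units of usage from 09:50 to 10:50.
WITH Metric AS (
  SELECT TIMESTAMP '2021-01-01 09:50:00' AS start_time,
         TIMESTAMP '2021-01-01 10:50:00' AS end_time,
         60.0 AS amount
)
SELECT
  win AS window_start,
  -- Proportion of the original record covered by this hour window.
  amount * TIMESTAMP_DIFF(
      LEAST(TIMESTAMP_ADD(win, INTERVAL 1 HOUR), end_time),
      GREATEST(win, start_time), MICROSECOND) /
    TIMESTAMP_DIFF(end_time, start_time, MICROSECOND) AS window_amount
FROM Metric,
  UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(start_time, HOUR),
    TIMESTAMP_TRUNC(end_time, HOUR),
    INTERVAL 1 HOUR)) AS win
-- The 09:00 window gets 10 units (1/6) and the 10:00 window 50 (5/6).
```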

Now we have a useful view across our GKE Usage Metering tables.

GKE and Billing Aggregation

Now that the time windows are aligned, the next key step is aggregating the GKE usage data with billing.

The next challenge: the GKE usage data doesn't include all of the usage of the GKE GCE nodes. We need to adjust the usage upwards, but only for GCE nodes, not for anything else.

Billing Query

We will extract the usage and rate from billing. This will be used as part of a larger query.

For billing we want only regular costs, where there is a defined project and a positive usage.

We will call this table BillingData.

SELECT
  -- Same dimensions as GKE Usage Metering
  usage_start_time AS start_time,
  project.id AS project_id,
  sku.id AS sku_id,
  usage.unit AS unit,
  -- If this billing entry is for a GCE GKE Node (it has
  -- the label), then we count this as gke_usage and otherwise
  -- it is zero.
  SUM(IF(
    EXISTS (SELECT * FROM x.labels WHERE key='goog-gke-node'),
    usage.amount,
    0.0)) AS gke_usage,
  -- The rate is based on total cost / total usage regardless
  -- of GKE or not. All GCE instances for the same sku_id have
  -- the same rate.
  SUM(cost)/SUM(usage.amount) AS rate
FROM `gke_usage.gcp_billing_export_v1_<billingID>` x
WHERE
  cost_type='regular' AND
  usage.amount > 0.0 AND
  project IS NOT NULL
GROUP BY 1, 2, 3, 4
-- Aggregations must have a rate (positive cost)
-- otherwise we can ignore it.
HAVING rate > 0.0

Calculating Adjustment

Joining the billing data with `gke_usage.gke_resource_normalised` gives us the adjustment factor for uplifting the metrics.

We will call this table AdjustmentData.

SELECT
  n.*,
  -- If there is a GKE usage to adjust, then safely
  -- divide it over the SUM of GKE Usage over the same
  -- keys as the billing. This is the proportion that this
  -- specific usage needs to be adjusted so that the total
  -- will match the actual billed usage (gke_usage).
  IF(b.gke_usage > 0.0,
    COALESCE(SAFE_DIVIDE(b.gke_usage, SUM(n.usage) OVER (
      PARTITION BY n.start_time, n.project_id,
        n.sku_id, n.type, n.unit
      )), 1.0),
    1.0) AS adjustment,
  -- Add the rate. This is used for finding out the cost.
  b.rate AS rate
FROM `gke_usage.gke_resource_normalised` n
JOIN BillingData b ON (
  b.start_time=n.start_time AND
  b.project_id=n.project_id AND
  b.sku_id=n.sku_id AND
  b.unit=n.unit)
WHERE
  -- Skip system-overhead. The adjusted values will be the
  -- recharged amounts, while the non-adjusted would allow
  -- optimising.
  namespace != 'kube:system-overhead'
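As a concrete check of the proportional adjustment (with hypothetical numbers): suppose billing records 10 core-hours for a SKU in an hour, while metering only captured 8 in total across two namespaces. Every metered row is then scaled by 10/8 = 1.25:

```sql
-- Two hypothetical metered rows totalling 8 core-hours, against a
-- billed total of 10 core-hours for the same hour, SKU, and unit.
WITH Normalised AS (
  SELECT 'ns-a' AS namespace, 5.0 AS usage UNION ALL
  SELECT 'ns-b', 3.0
)
SELECT
  namespace,
  usage,
  -- adjustment = billed usage / SUM(metered usage) = 10 / 8 = 1.25
  usage * SAFE_DIVIDE(10.0, SUM(usage) OVER ()) AS adjusted_usage
FROM Normalised
-- ns-a becomes 6.25 and ns-b becomes 3.75, summing to the billed 10.
```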

Applying the Adjustment in a View

For the dashboard analysis, the data is only valid for specific types. This query doubles the number of types to include both the adjusted and unadjusted metrics, as well as the cost.

This will be constructed into a view as below.

CREATE OR REPLACE VIEW `gke_usage.gke_usage_dashboard` AS
WITH BillingData AS (
  <Billing Query Here>
), AdjustmentData AS (
  <Adjustment Query Here>
)
-- The fields need to be ordered the same for UNION ALL,
-- so they also need to have the same EXCEPT() field list.
-- This union will allow the queries against the view to
-- 'see through' with the type field. It will only process one
-- side or the other of the UNION depending on the type.
SELECT
  c.* EXCEPT (usage, type, rate),
  type,
  usage,
  -- Add the cost
  usage * rate AS cost
FROM AdjustmentData c
UNION ALL
SELECT
  c.* EXCEPT (usage, type, rate),
  -- Add adjusted type, usage, and cost.
  CONCAT(type, '_adj') AS type,
  adjustment * usage AS usage,
  adjustment * usage * rate AS cost
FROM AdjustmentData c;

Dashboard Query

This query has been put together for Data Studio. It can be a direct connection or another view. The end_time is added for Data Studio.

BigQuery Connection — Query

SELECT
  record_date,
  project_id,
  type,
  cluster_location,
  cluster_name,
  namespace,
  labels,
  start_time,
  -- end_time is added to help with dashboards.
  -- For calculating the average mem_gb, average cpu,
  -- and average disk_gb you need to divide by
  -- max(end_time) - min(start_time).
  TIMESTAMP_ADD(start_time, INTERVAL 1 HOUR) AS end_time,
  -- Turn the metrics into something that can be directly used
  -- by a dashboard. Memory into GB, Egress into GB, disk into GB,
  -- and _secs where it is over time.
  SUM(IF(resource_name='cpu',usage,0.0)) AS cpu_secs,
  SUM(IF(resource_name='memory',usage/1e9,0.0)) AS mem_gb_secs,
  SUM(IF(resource_name='networkEgress',usage/1e9,0.0)) AS egress_gb,
  SUM(IF(resource_name='storage',usage/1e9,0.0)) AS disk_gb_secs,
  SUM(IF(resource_name='cpu',cost,0.0)) AS cpu_cost,
  SUM(IF(resource_name='memory',cost,0.0)) AS mem_cost,
  SUM(IF(resource_name='networkEgress',cost,0.0)) AS egress_cost,
  SUM(IF(resource_name='storage',cost,0.0)) AS disk_cost,
  -- We need the total cost across all resource types
  SUM(cost) AS cost
FROM `gke_usage.gke_usage_dashboard`
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9;

Formula Fields

In order to calculate the average CPU, average disk GB, and average memory GB, calculated fields are needed in the dashboard, because the dashboard can aggregate over arbitrary periods of time.

The elapsed time is DATE_DIFF(MAX(end_time), MIN(start_time)), converted into seconds. The avg_cpu is then SUM(cpu_secs) / <ElapsedTime>, avg_disk_gb is SUM(disk_gb_secs) / <ElapsedTime>, and avg_mem_gb is SUM(mem_gb_secs) / <ElapsedTime>.

Other Considerations

There are a few other features or approaches that can be taken.

  • Working with billing data and LEFT JOIN'ing the GKE Usage Metering data. This has the benefit that billing data is primary (the totals should equal the billing data without the LEFT JOIN), while still providing a breakdown within the cluster where possible. This is the best approach if a total cost picture on and off GKE is paramount.
  • This setup meters both actual usage and requests, but for cost allocation a max(actual usage, request) may be best. This can be added as an extra stage in the processing.
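Such a stage could be sketched as follows, assuming the normalised view from earlier; it charges each namespace for the greater of its requested and actually consumed usage in each hour:

```sql
-- Hypothetical extra stage: bill each namespace for the greater of
-- requested and actually-consumed usage in each hour window.
SELECT
  start_time,
  project_id,
  cluster_name,
  namespace,
  resource_name,
  unit,
  GREATEST(
    SUM(IF(type = 'request', usage, 0.0)),
    SUM(IF(type = 'consumed', usage, 0.0))) AS billable_usage
FROM `gke_usage.gke_resource_normalised`
GROUP BY 1, 2, 3, 4, 5, 6
```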


Grappling with GKE Usage Metering data and GCP Billing is not straightforward, but hopefully this blog post provides a step-by-step guide for how this data can be exposed in a dashboard and made useful.

The following shows the query used in Data Studio, dividing up the total cost by namespace, as well as memory (in GB) and CPU over time.

Total cost by namespace and memory and CPU over time

Any issues or feedback please let me know!




A collection of technical articles and blogs published or curated by Google Cloud Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.

Mark Scannell

Strategic Cloud Engineer at Google Cloud
