How Understanding Our Kubernetes Cloud Costs Enabled Us to Scale

RTInsights Team
Published in RTInsights
3 min read · May 11, 2023

By: Adrien de Castelnau

At Ogury, we offer a mobile-optimized advertising engine that enables our clients like VISA, Ford, McDonald’s, and others to connect with their target audiences at scale while preserving individuals’ security and privacy.

Our AWS Cloud resources run APIs that harness the contextual, semantic, and unique mobile audience data (cookie-less and ID-less for privacy) we need to give our clients the capabilities to serve relevant ad content.

Our engine has utilized AWS since our launch in 2014, and we now run Kubernetes on EC2 instances after replacing an in-house container orchestration system. Here is how we implemented cost visibility after migrating to Kubernetes, and how that visibility enabled us to scale successfully.

See also: Managing Database Access for Kubernetes Workloads

Migrating to Kubernetes and recognizing the cost visibility challenge

While enabling some of the biggest global brands to reach target audiences at scale led to our own rapid growth, we were also quickly confronted with scalability issues. Those issues demanded a more robust approach to container orchestration, so we began a two-year process of migrating to Kubernetes.

In planning out and shaping these Kubernetes deployments, we found that open source projects offered ideal tools for our needs. For example, we liked open source Prometheus for its monitoring and metrics collection capabilities.

We implemented open source Thanos to add high availability to Prometheus, along with long-term metrics storage and querying.
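To give a concrete sense of what that combination buys you: the Thanos Querier serves the same HTTP query API as Prometheus, so dashboards and scripts can query weeks or months of history the same way they query live data. The sketch below is illustrative only; the service address and the metric being aggregated are assumptions, not a description of our actual setup.

```python
# Minimal sketch: querying long-term metrics through a Thanos Querier, which
# exposes the standard Prometheus HTTP API. The service URL below is an
# illustrative assumption; adjust it to wherever your querier is reachable.
import requests

THANOS_QUERY_URL = "http://thanos-query:9090/api/v1/query_range"

def cpu_usage_by_namespace(start: str, end: str, step: str = "1h"):
    """Return per-namespace CPU usage (in cores) over a time window."""
    params = {
        # cAdvisor metric scraped by Prometheus, aggregated per namespace
        "query": "sum by (namespace) (rate(container_cpu_usage_seconds_total[5m]))",
        "start": start,  # RFC 3339 timestamps or Unix seconds
        "end": end,
        "step": step,
    }
    resp = requests.get(THANOS_QUERY_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    for series in cpu_usage_by_namespace("2023-04-01T00:00:00Z", "2023-04-08T00:00:00Z"):
        print(series["metric"].get("namespace", "unknown"), len(series["values"]), "samples")
```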

When it came to tracking, attributing, and optimizing Kubernetes costs, we also experimented with an in-house implementation of an open source Grafana dashboard.

Our goal here was to capture Prometheus metrics to gain visibility into API call resource usage, costs, and opportunities for improvement. In practice, however, maintaining this dashboard introduced too much complexity, leaving our team searching for a more efficient, user-friendly, and accurate approach.
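For illustration only, the sketch below shows the kind of cost arithmetic such an in-house dashboard ends up reimplementing: take usage aggregated from Prometheus metrics and multiply it by blended prices. The rates, namespaces, and usage figures here are invented for the example; real clusters add spot versus on-demand pricing, idle capacity, network, and shared costs, which quickly multiplies the logic a home-grown dashboard has to maintain.

```python
# Illustrative sketch of naive Kubernetes cost attribution built on top of
# Prometheus metrics. All rates, namespaces, and usage numbers are made up.

# Assumed blended prices per resource-hour (not actual AWS rates).
CPU_CORE_HOUR_USD = 0.032
MEM_GB_HOUR_USD = 0.004

# Example usage over one hour, e.g. aggregated from
# container_cpu_usage_seconds_total and container_memory_working_set_bytes.
usage = {
    "ad-serving": {"cpu_core_hours": 420.0, "mem_gb_hours": 1650.0},
    "analytics": {"cpu_core_hours": 95.0, "mem_gb_hours": 800.0},
}

def estimate_hourly_cost(ns_usage: dict) -> float:
    """Naive estimate: resource usage multiplied by blended hourly rates."""
    return (
        ns_usage["cpu_core_hours"] * CPU_CORE_HOUR_USD
        + ns_usage["mem_gb_hours"] * MEM_GB_HOUR_USD
    )

for namespace, ns_usage in usage.items():
    print(f"{namespace}: ~${estimate_hourly_cost(ns_usage):,.2f} per hour")
```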

Kubernetes costs became a particularly high-profile internal issue following our migration because the container orchestration platform had become the single-largest cost center under our technical team’s control.

We also lacked visibility into the precise sources of those container-based cloud costs, which made controlling them difficult. Recognizing the need to act sooner rather than later as our growth continued, our finance and technical team leaders worked together to make Kubernetes cost visibility a top priority, and to seek out the right strategy to get it under control.

Implementing granular Kubernetes cost visibility and allocations

We began our search by vetting a number of large, broad cloud cost visibility options. However, we realized that this broadness (and the big price tags, with some vendors even asking for a percentage of our total cloud spend) wasn't going to give us what we were looking for.

This vetting process helped us home in on the exact needs of our use case: making clear sense of our Kubernetes costs so we could confidently optimize our environments.

We then found the right Kubernetes-specific tool in open source Kubecost, and used the tool’s enterprise support to rapidly implement and integrate cost visibility into our Kubernetes deployments.
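As one example of the kind of integration this enabled, Kubecost exposes a cost allocation API that can be queried programmatically. The sketch below assumes the Kubecost frontend has been port-forwarded to localhost:9090 (for instance with kubectl port-forward); the window, aggregation parameter, and response fields follow Kubecost's documented allocation API, but treat the specifics as illustrative rather than a description of our deployment.

```python
# Hedged sketch: pulling cost allocations from Kubecost's allocation API,
# aggregated by namespace. Assumes the Kubecost frontend is reachable on
# localhost:9090 (e.g. via `kubectl port-forward svc/kubecost-cost-analyzer 9090`).
import requests

KUBECOST_URL = "http://localhost:9090/model/allocation"

def allocations(window: str = "7d", aggregate: str = "namespace"):
    """Fetch accumulated cost allocations for the given window and dimension."""
    resp = requests.get(
        KUBECOST_URL,
        params={"window": window, "aggregate": aggregate, "accumulate": "true"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]

if __name__ == "__main__":
    # With accumulate=true the API returns a single allocation set keyed by name.
    for allocation_set in allocations():
        for name, alloc in allocation_set.items():
            print(f"{name}: ${alloc['totalCost']:.2f} over the window")
```

The same API can aggregate by other dimensions, such as controller or label, which maps onto the per-workload and per-service views described below.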

Having that Kubernetes cost visibility quickly transformed our technical team’s ability to understand where our budget was going. The team can now drill down into spending data to view the granular cloud costs associated with each Kubernetes workload, each category, and even each service. Our cloud usage data is also integrated with our specific AWS…

Continued on CloudDataInsights.com
