How to Manage a Kubernetes Environment on GCP on a Budget

Jacob Lundberg
Sep 28 · 5 min read
Photo by Josue Isai Ramos Figueroa on Unsplash

Have you ever felt that the costs of all available managed services or third party setups exceed what you were prepared to spend? Or that you were given a budget that leaves little room for playing around?

At Skira, both of those scenarios applied to us. To solve these issues we've done 4 things: spent time analyzing the billing section, made deliberate use of namespaces, set resource limits, and used Helm deployments, all of which have substantially reduced our GCP expenses. I want to share these 4 tips on how to be smarter with your infrastructure choices and avoid unpleasant bills at the end of the month.

This article specifically targets GKE (Google Kubernetes Engine) clusters and deployments. The article assumes you have set up a project in GCP (Google Cloud Platform) and have CLI access to your cluster via kubectl. If not, read this article about clustering with kubectl and this article about setting up Google Cloud projects and come back!

1. Understanding Billing

Cloud providers are happy to charge for every small detail. If you have had a couple of services running for a month, the cost table will be properly populated, showing a breakdown of how every service affects your monthly bill. Navigate to

Navigation menu (hamburger) > Billing > Cost table

Select the month in question, and all the details will be available. In the leftmost column you can click on your project name, and all services will show. The services named Kubernetes Engine and Compute Engine are the interesting ones: to have a cluster running, we pay both a cluster management fee (under Kubernetes Engine) and for the cluster node VMs (under Compute Engine).

Overview of GCP cost table

Note that Google updated their pricing on June 6, 2020. They've added a $0.10 per hour cluster management fee, but allow one free zonal cluster per billing account.
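As a quick back-of-the-envelope calculation, that fee alone adds up to roughly $0.10 × 24 × 30 ≈ $72 per month per cluster before any node costs, so the free zonal cluster is a real saving.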

This sounds nice, but if that particular zone goes down, your cluster and the services on it go down too. It is, however, useful for prototyping and testing.

2. Namespaces

Given that we now have a grasp on what is affecting our billing account, it is time to choose what type of cluster is most beneficial for us. Ideally, we want to separate our resources so that they do not interfere with one another.

Let's say we want a production server and a testing server. With that in mind, we could create two clusters to separate them. That means twice the cost of running clusters, and unless you are running a 1m+ users/day service (give or take), there will probably be a lot of RAM and CPU left unused. What if the two services could be on the same cluster, to save money, and still be separated so they do not interfere? Namespaces in Kubernetes are the solution.

By creating a namespace we are effectively shielding parts of the cluster from one another, mimicking multiple clusters. By default, all commands issued with kubectl target the namespace named default. An action such as

$ kubectl get pods

will only show the pods running in the default namespace. We can utilize this by adding the --namespace flag (-n for short) to every command. Start by creating a new namespace

$ kubectl create namespace test-space

and then create any type of Kubernetes resource. For example, create a secret for this namespace with

$ kubectl create secret generic test-space-key --from-literal=key=value --namespace test-space

and fetch it with

$ kubectl get secret --namespace test-space

Remember here that omitting the --namespace flag will not show our newly created secret, since the namespaces differ! That means resources that use each other must always share a namespace; for example, if a deployment uses the secret we just created, they have to be in the same namespace (see the sketch at the end of this section). A workaround for all the extra typing is to set the namespace in your kubectl context:

$ kubectl config set-context --current --namespace test-space

Omitting the namespace flag will now target the test-space namespace instead of the namespace named default.
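To make the shared-namespace point concrete, here is a minimal sketch of wiring the secret into a deployment in that namespace (the deployment name my-app and the nginx image are placeholders, not from the original setup):

$ kubectl create deployment my-app --image=nginx --namespace test-space
$ kubectl set env deployment/my-app --from=secret/test-space-key --namespace test-space

The second command injects every key in the secret as an environment variable; had either command targeted a different namespace, kubectl would simply not find the other resource.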

3. Resource Limits

Another tool for managing workloads in a cluster is the resources field. You can tell Kubernetes how many resources to allocate for a pod or a workload, and how much it may use at most.
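For example, a pod spec carrying such a resources section could look like the following sketch (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # illustrative name
  namespace: test-space
spec:
  containers:
    - name: app
      image: nginx           # illustrative image
      resources:
        requests:
          cpu: 500m          # half a CPU
          memory: 512Mi
        limits:
          cpu: "1"           # at most one full CPU
          memory: 1Gi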

Here, we first request half a CPU (500m, the m representing a thousandth of a CPU) and 512 MiB of RAM. Then we limit our pod so it cannot use more than 1 CPU and 1 GiB of RAM. This is an effective method for planning capacity on your nodes.
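To see how these requests and limits stack up against what your nodes can offer, inspect the allocations:

$ kubectl describe nodes

The Allocated resources section near the bottom of each node's output shows how much CPU and memory is already spoken for.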

Read more about resources here.

4. Helm Deployments

The final tip is to utilize the Helm repo for more complex applications. It might be tempting to spin up an Airflow environment, but that is a substantial deployment: Airflow consists of several components (a web server, a scheduler, and workers), potentially a Redis message queue, and a database. That is quite a lot of resources to create yourself, not counting the job of building Docker containers and hosting them.

In GCP, this service is called Cloud Composer. However, according to their pricing you are not only charged for the service, but also for the cluster it starts for you. In their own example, that would mean ~$75 per month for the service plus ~$150 per month for a cluster with the smallest nodes. Note that this cluster is started next to your other cluster(s), and you cannot interact with it.

Instead, if you have resources available, host the service yourself! Helm is a package manager for Kubernetes that offers a wide range of packages, Airflow included. The packages, called Charts, are community-made and anyone can contribute.

Helm itself is a CLI tool that is available through most package managers (Homebrew, for example):

$ brew install helm

Then, Helm attaches to your kubectl context, and manages all resources for you. Installing a package is done with

$ helm install <your-name> <repo>/<package> --namespace <your-namespace>

That will install the specified package in the specified namespace. You can interact with all the resources that Helm deploys and tweak them if necessary. Often the resources field is exposed in a configuration file (the chart's values) for easier tuning. Read more about it here.
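As an illustration, installing Airflow from the Bitnami chart repository (one possible source; the release name my-airflow is a placeholder) could look like:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm install my-airflow bitnami/airflow --namespace test-space

Overriding values such as the resources field is then a matter of passing --set flags or a custom values file with -f.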

Future ideas include hosting a Jupyter Lab environment, a shared NFS server, and some NginX proxies in the same cluster.

Putting It All Together

Namespaces are a great way to cut costs by shielding teams, servers, and projects from each other while still running on the same cluster. By combining namespaces with workloads that are given just the resources they need, the cluster can be packed as tightly as possible. And Helm lets you deploy complex applications with ease.

Thank you for reading. And hey, if this all went so smoothly, why not host your own Kubernetes engine as well?
