Google Cloud DevOps Series: Agility with Cost Optimization

Google Cloud DevOps Series: Part-6

Tushar Gupta
Google Cloud - Community
6 min read · Dec 21, 2021


Welcome to part 6 of the Google Cloud DevOps series. You can read the other parts of the series starting from here.

Cost optimization in a GKE environment

At Google Cloud, we believe cost optimization should be one of the key pillars of every organisation’s IT and business strategy, and we ensure that cost reduction does not come at the expense of the user experience or introduce risk to the customer’s business.

For customers adopting innovative cloud technologies such as Google Kubernetes Engine for application development and hosting, implementing cost-optimization best practices is important, and also challenging: the effort must not affect their applications’ performance, stability or ability to serve the business.

GKE provides several advanced cost-optimization features and capabilities built in. This is great for customers who need to balance their applications’ performance needs with optimized costs.

Best Practices for Cost Optimization in GKE environments

Cost optimization is the combination of cost control and cost visibility, and the Google Cloud platform offers numerous services that cater to both requirements and help achieve the desired results.

Based on a correct analysis of the applications’ hardware requirements, the right machine types can be chosen for the cluster nodes, and application grouping and namespaces can be designed accordingly.

Please note that, while selecting node types for an application, further cost reduction can be achieved by using preemptible nodes; however, these are suitable only for fault-tolerant jobs that are less sensitive to the ephemeral, non-guaranteed nature of preemptible VMs.

Note: GKE uses E2 machine types by default. For detailed information on the different machine types available, please refer to this link.
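For illustration, a cost-oriented node pool combining an explicit machine type with preemptible VMs could be created as below; the cluster name, zone and node-pool name are placeholders rather than values from this series:

```
# Hypothetical example: a node pool with an explicit machine type and
# preemptible VMs for fault-tolerant workloads. All names are placeholders.
gcloud container node-pools create batch-pool \
  --cluster=CLUSTER_NAME \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --preemptible \
  --num-nodes=3
```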

First, enable Vertical Pod Autoscaling from the cluster settings.
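This can be done from the Google Cloud Console when creating or editing the cluster; an equivalent gcloud command is roughly as follows (the cluster name and zone are placeholders):

```
# Enable Vertical Pod Autoscaling on an existing cluster.
# CLUSTER_NAME and the zone are placeholders.
gcloud container clusters update CLUSTER_NAME \
  --enable-vertical-pod-autoscaling \
  --zone=us-central1-a
```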

Then, we need to create a VerticalPodAutoscaler object for the required deployment. Through this object, when the Pods are created, the Vertical Pod Autoscaler analyzes the CPU and memory needs of their containers and records its recommendations in the object’s status field. Based on this analysis, VPA automatically adjusts the CPU and memory requested by the containers in the Pods.
Please perform the following steps to see VPA in action:

1. Save the following Deployment manifest as a file named vpa-demo-deployment.yaml:
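Here is a minimal sketch of such a manifest; the container image, labels and resource requests are illustrative assumptions, chosen simply to give VPA something to analyze:

```yaml
# vpa-demo-deployment.yaml: minimal sketch; the image, labels and
# resource requests below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vpa-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vpa-demo
  template:
    metadata:
      labels:
        app: vpa-demo
    spec:
      containers:
      - name: vpa-demo-container
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
```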

2. Create the deployment as:
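For example, using the file saved in the previous step:

```
kubectl create -f vpa-demo-deployment.yaml
```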

You should see output like the following:
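```
deployment.apps/vpa-demo-deployment created
```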

3. Check the running pods:
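For example (the app label here comes from the sketch manifest above):

```
kubectl get pods -l app=vpa-demo
```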

4. Now, create a VPA object to analyze CPU and Memory requirements for Pods running as part of vpa-demo-deployment. Save the following VerticalPodAutoscaler as a file named vpa-object.yaml:
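A minimal sketch of such an object is below; the object name and updateMode are assumptions, and the targetRef points at the Deployment created above:

```yaml
# vpa-object.yaml: sketch; the object name and updateMode are assumptions.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-demo
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: vpa-demo-deployment
  updatePolicy:
    updateMode: "Auto"
```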

Then create the VPA object as:
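For example:

```
kubectl create -f vpa-object.yaml
```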

You should receive output similar to the following:
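Assuming the object name from the sketch above:

```
verticalpodautoscaler.autoscaling.k8s.io/vpa-demo created
```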

5. Check the resource requirements of one of the Pods deployed as part of vpa-demo-deployment:
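For example (POD_NAME is a placeholder for one of the pod names listed in step 3):

```
kubectl get pod POD_NAME --output yaml
```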

In the output, you can see that the Vertical Pod Autoscaler has increased the memory and CPU requests. You can also see an annotation that documents the update:
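A trimmed, illustrative excerpt of that output might look like the following; the exact values depend on your workload, and the vpaUpdates annotation is what documents the update:

```yaml
# Trimmed, illustrative excerpt; values will differ for your workload.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    vpaUpdates: 'Pod resources updated by vpa-demo: container 0: cpu request, memory request'
spec:
  containers:
  - name: vpa-demo-container
    resources:
      requests:
        cpu: 590m
        memory: 2097152k
```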

6. Check the VPA object status to see the updated recommendations as:
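For example:

```
kubectl get vpa vpa-demo --output yaml
```

The recommendation section of the output would look something like the excerpt below; the target values are the ones discussed next, while the lower and upper bounds shown here are illustrative:

```yaml
# Trimmed excerpt; the lowerBound/upperBound values are illustrative.
  recommendation:
    containerRecommendations:
    - containerName: vpa-demo-container
      lowerBound:
        cpu: 25m
        memory: 262144k
      target:
        cpu: 590m
        memory: 2097152k
      upperBound:
        cpu: 7931m
        memory: 8291500k
```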

The target recommendation says that the container will run optimally if it requests 590 milliCPU and 2097152 kilobytes of memory, which is different from what we requested in the deployment (vpa-demo-deployment).

Note: The Vertical Pod Autoscaler uses the lowerBound and upperBound recommendations to decide whether to delete a Pod and replace it with a new one. If a Pod’s requests are less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with one that meets the target recommendation.

In a nutshell, cost optimization for GKE can be summarized in the five broad categories below:

Along with the above-mentioned cost-optimization levers, Google Cloud also provides some additional offerings:

Note: The free tier applies only to the cluster management fee of $0.10 per cluster per hour (charged in 1-second increments). The GKE free tier provides $74.40 in monthly credits per billing account, applied to zonal and Autopilot clusters. For more details on pricing, please refer to the ‘Cluster management fee and free tier’ section at the link here.

In this blog series, we discussed how the conversation between Guhan and Ram led to a clear roadmap for Samajik to adopt DevOps processes with Google Kubernetes Engine in an easy way while meeting all their strategic goals.

In addition to what we have discussed in this series, you can also follow the link here to learn more about various tips and best practices for using Kubernetes and Google Kubernetes Engine (GKE).

Contributors: Pushkar Kothavade, Shijimol A K, Anchit Nishant, Dhandus

Please follow the Google Cloud Community for more such insightful blogs.

Tushar Gupta is an Infrastructure Modernization Specialist at Google with experience across multi-cloud platforms and enterprise datacenter technologies.