Google Cloud DevOps Series: Google Cloud compute options for Kubernetes

Google Cloud DevOps Series: Part-2

Shijimol A K
Google Cloud - Community
6 min read · Oct 21, 2021


Welcome to part 2 of the Google Cloud DevOps series. You can read part 1 of the series here.

Why GKE

Understanding GKE Autopilot

Autopilot is the newer deployment option for GKE, in which Google manages the entire underlying infrastructure for your cluster, including the control plane, nodes, and node pools. It is recommended for customers who are looking for a truly hands-off managed Kubernetes solution. The service monitors the health and capacity of nodes and makes the necessary adjustments automatically. It is designed for production workloads and optimized for efficient resource utilization while keeping costs in check.
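To give a feel for how hands-off Autopilot is, a cluster can be created with a single command, with no node pools or machine types to specify. This is a sketch; the cluster name and region below are illustrative placeholders:

```shell
# Create an Autopilot cluster; Google provisions and manages the nodes.
# "autopilot-demo" and us-central1 are placeholder values.
gcloud container clusters create-auto autopilot-demo \
  --region=us-central1 \
  --project=${PROJECT_ID}

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials autopilot-demo \
  --region=us-central1
```

Note that Autopilot clusters are regional, so `--region` is used rather than `--zone`.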

While the Standard mode of deployment provides more flexibility over cluster configuration, Autopilot is best suited for customers who are comfortable with the preconfigured settings offered by the service. A few features are available only in Standard mode: preemptible VM support, Cloud TPU, Istio, Windows containers, Binary Authorization, Container Threat Detection, and node selection/affinity, to name a few. If your workloads need any of these features, Standard mode might be more suitable. It is recommended to go through the full set of features available with Autopilot and the constraints explained here before deciding whether this deployment mode is suited for your workloads.

Getting started with GKE

Let’s get started with GKE by creating a cluster and deploying a sample application in it. The steps given below can be executed from Google Cloud Shell. The demo application is available in the following GitHub repo: https://github.com/GoogleCloudPlatform/microservices-demo.

Prerequisites: Ensure that the GKE and Cloud operations API are enabled in your GCP project.

Start by setting the project ID:

PROJECT_ID="<your-project-id>"

Enable the GKE API:

gcloud services enable container.googleapis.com --project ${PROJECT_ID}

Enable the Cloud operations APIs:

gcloud services enable monitoring.googleapis.com \
cloudtrace.googleapis.com \
clouddebugger.googleapis.com \
cloudprofiler.googleapis.com \
--project ${PROJECT_ID}
1. Set the active project using the command below:

gcloud config set project <project-id>

2. Set the compute zone for the cluster. For this demo, we will use the zone us-central1-a.

gcloud config set compute/zone us-central1-a

3. Let's now create a GKE Standard cluster named 'hello-cluster', with default settings and a single-node node pool with autoscaling enabled (one to three nodes).

gcloud container clusters create hello-cluster --num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=3

4. Clone the microservices demo application and change into its directory:

git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo

5. Deploy the sample application to the GKE cluster:

kubectl apply -f ./release/kubernetes-manifests.yaml

6. Once the deployment has completed successfully, you should be able to see the pods associated with the application using the following command:

kubectl get pods
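Rather than repeatedly polling `kubectl get pods`, one option (a sketch, assuming the manifests were applied to the default namespace) is to block until all pods report Ready:

```shell
# Wait up to 5 minutes for every pod in the current namespace to become Ready.
kubectl wait --for=condition=Ready pods --all --timeout=300s
```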

7. Access the application using its external IP address on port 80:

kubectl get service frontend-external | awk '{print $4}'
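The awk approach above simply prints the fourth column of the service listing. A more robust option is to ask kubectl for the load-balancer address directly via a jsonpath expression (the field path below assumes the service is exposed with an IP rather than a hostname):

```shell
# Print only the external IP of the frontend-external LoadBalancer service.
kubectl get service frontend-external \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```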

Alternatively, you can browse to the Kubernetes Engine service in the GCP console -> Services & Ingress and look for the endpoint of the service named "frontend-external".

The application will open once you click on the link.

In this walkthrough, we deployed the cluster from Cloud Shell; however, the whole process, including cluster creation and application deployment, can be automated using DevOps tools. We will explore this in detail later in the series.

Serverless containers

In addition to GKE, you can also deploy your containers in a serverless manner using Cloud Run. Based on the open-source Knative standard, Cloud Run abstracts away the complexities of infrastructure management and lets you focus on developing applications in the language of your choice. Cloud Run supports popular languages such as Go, Python, Java, Ruby, and Node.js. Because it is based on Knative, it also ensures portability of applications to other compatible environments.
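To give a feel for the workflow, a minimal fully managed Cloud Run deployment might look like the following sketch. The service name is a placeholder, the image is Google's public hello-world sample, and `--allow-unauthenticated` makes the service publicly reachable:

```shell
# Deploy a container image to fully managed Cloud Run.
# "hello-run" is an illustrative service name.
gcloud run deploy hello-run \
  --image=gcr.io/cloudrun/hello \
  --region=us-central1 \
  --platform=managed \
  --allow-unauthenticated
```

On success, gcloud prints a service URL that serves traffic immediately, with no cluster or node management involved.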

Cloud Run can be used as a fully managed serverless solution or can run on an existing GKE deployment through Cloud Run for Anthos. The latter acts as an Anthos integration that can deploy pods to on-premises as well as multi-cloud Kubernetes clusters using serverless constructs. This service makes serverless adoption faster for organizations with existing investments in GKE.

Coming up…

In this blog, we covered the various compute options for Kubernetes on Google Cloud. Hearing these details, Guhan started to think about how Samajik's developers could get started with CI/CD workflows for containers on Google Cloud. Stay tuned for his (continuous) conversation with Ram….

Contributors: Dhandus, Anchit Nishant, Pushkar Kothavade, Tushar Gupta

Update: You can read Part-3 here.
