Deploying Service or Ingress on GKE

Getting Started with GKE: Endpoints with Service and Ingress

After provisioning a Kubernetes cluster using GKE (Google Kubernetes Engine) and deploying a web application, such as hello-kubernetes, we want to access it through an Endpoint.

There are two common paths on Kubernetes that you can use for Endpoints:

  • a Service resource with the LoadBalancer service type
  • an Ingress resource backed by a Service of the NodePort type

Previous Articles

In previous articles, we covered building or provisioning the GKE cluster.

Provisioning using Cloud SDK

Provisioning using Terraform

Prerequisites

You will need the following tool requirements:

  • Google Cloud SDK that is authorized to your Google account with a registered Google Cloud project. We’ll use the fictional project acme-quality-team for this article.
  • Kubectl (pronounced koob-cuttle), the Kubernetes client CLI tool used to interact with the cluster and install Kubernetes manifests.

Building the GKE Cluster

In previous articles (see above), there are tutorials on how to provision a cluster with either Google Cloud SDK or Terraform. You can use those for this exercise.

If you want to get something up quickly for just this exercise, you can run the command below:

gcloud container clusters create \
--num-nodes 1 \
--region us-central1 \
"a-simple-cluster"

This process will take about three minutes and creates a three-node cluster, with one node in each availability zone of the us-central1 region.

You can see the cluster and nodes you created with the following commands:

gcloud container clusters list --filter "a-simple-cluster"
gcloud compute instances list --filter "a-simple-cluster"

Endpoint with Service Resource

A Service resource acts as a proxy that routes traffic to designated pods deployed across worker nodes within a Kubernetes cluster. To create an Endpoint, we’ll use the service type LoadBalancer, which provisions a Layer 4 network load balancer in Google Cloud.

Service Resource with LoadBalancer Service Type

Deploy Deployment Controller

We first need to deploy the application using a Deployment controller, which will schedule three pods across our cluster. Create a file named hello_gke_extlb_deploy.yaml with the following content:
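The manifest body did not survive extraction; a minimal sketch might look like this, assuming the commonly used paulbouwer/hello-kubernetes image, which listens on port 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-gke-extlb
spec:
  replicas: 3                     # three pods spread across the cluster
  selector:
    matchLabels:
      app: hello-gke-extlb
  template:
    metadata:
      labels:
        app: hello-gke-extlb      # must match the Service selector below
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.10
          ports:
            - containerPort: 8080 # port the web app listens on
```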

Deploy this with the following command:

kubectl apply --filename hello_gke_extlb_deploy.yaml

Deploy Service Load Balancer Type

Now we deploy a service that will connect traffic to one of the pods.

Create a file named hello_gke_extlb_svc.yaml with the following content:
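The original manifest is missing here; a sketch of a LoadBalancer-type Service, assuming the app label and container port from the deployment above, might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-gke-extlb
spec:
  type: LoadBalancer        # provisions a Google Cloud network load balancer
  selector:
    app: hello-gke-extlb    # routes to pods carrying this label
  ports:
    - port: 80              # port exposed on the external IP
      targetPort: 8080      # port the pods listen on
```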

Deploy this with the following command:

kubectl apply --filename hello_gke_extlb_svc.yaml

Test the Connection

You can check the status of the service with the following:

kubectl get services --field-selector metadata.name=hello-gke-extlb

Initially, you may see pending under the EXTERNAL-IP column. This means Google Cloud is provisioning the network load balancer. After a few moments, you should see something like this:

Output of kubectl get services

We can also peek at the corresponding network load balancer outside the cluster:

gcloud compute forwarding-rules list \
--filter description~hello-gke-extlb \
--format \
"table[box](name,IPAddress,target.segment(-2):label=TARGET_TYPE)"

Which should show something like this:

Copy the IP address into a web browser. Using the above output, you would type http://104.198.141.152 into the web browser and see something like this:

web browser display through NLB

Cleanup

You can remove these resources with the following:

cat hello_gke_extlb_*.yaml | kubectl delete --filename -

Endpoint with Ingress Resource

An ingress is essentially a reverse proxy with a common declarative language (the Ingress resource) for configuring rules that route web traffic back to services. The implementation depends on the ingress controller you wish to install, such as ingress-nginx, haproxy-ingress, traefik, or ambassador, to name a few.

GKE comes bundled with ingress-gce, or GLBC (Google Load Balancer Controller), which is described as:

GCE L7 load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API.

Ingress Resource and Service Resource of NodePort Type

Deploy Deployment Controller

Deploy a web application using a Deployment controller. Create a file named hello_gke_ing_deploy.yaml with the following content:
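The manifest content is missing from the extracted text; a sketch, again assuming the paulbouwer/hello-kubernetes image on port 8080, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-gke-ing
spec:
  replicas: 3                     # three pods behind the ingress
  selector:
    matchLabels:
      app: hello-gke-ing
  template:
    metadata:
      labels:
        app: hello-gke-ing        # must match the NodePort Service selector
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.10
          ports:
            - containerPort: 8080
```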

Deploy this with the following command:

kubectl apply --filename hello_gke_ing_deploy.yaml

Deploy Service NodePort Type

The default GKE ingress controller (gce) will only work with a Service of type NodePort or LoadBalancer. As we only want one Endpoint through the ingress, we’ll choose NodePort.

Create a file named hello_gke_ing_svc.yaml with the following content:
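The original manifest is absent here; a NodePort-type Service sketch, assuming the labels and ports used above (the node port itself is auto-assigned by Kubernetes), might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-gke-ing
spec:
  type: NodePort          # exposes a port on every node; required by the gce ingress
  selector:
    app: hello-gke-ing
  ports:
    - port: 80            # port the ingress backend targets
      targetPort: 8080    # port the pods listen on
```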

Deploy this with the following command:

kubectl apply --filename hello_gke_ing_svc.yaml

Deploy Ingress

An ingress can route web traffic based on the hostname and URL path. In our simple implementation, we’ll route everything (/*) to our single designated service. The service will then further route traffic to one of three available pods.

Create a file named hello_gke_ing_ing.yaml with the following content:
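The ingress manifest body is also missing; a sketch that routes everything (/*) to the hello-gke-ing service, using the networking.k8s.io/v1 API, might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-gke-ing
spec:
  rules:
    - http:
        paths:
          - path: /*                         # match all paths (gce ingress syntax)
            pathType: ImplementationSpecific
            backend:
              service:
                name: hello-gke-ing          # the NodePort service created earlier
                port:
                  number: 80
```

With no ingress class specified, GKE uses the bundled gce controller, which provisions an external HTTP(S) load balancer.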

Deploy this with the following command:

kubectl apply --filename hello_gke_ing_ing.yaml

Test the Connection

You can run this to see the ingress in action:

kubectl get ingress

You should see something like this:

We can also peek at the corresponding HTTP proxy that Google Cloud provisions:

gcloud compute forwarding-rules list \
--filter description~hello-gke-ing \
--format \
"table[box](name,IPAddress,target.segment(-2):label=TARGET_TYPE)"

This output should look similar to this:

Copy the IP address and type it into a browser, for example: http://34.98.86.241. Initially, you may see a 404 error, which could mean that Google Cloud is still setting things up.

Google 404 error from GLBC

After a few minutes, the URL should eventually work:

web browser display through GLBC

Cleanup

You can remove these resources with the following:

cat hello_gke_ing_*.yaml | kubectl delete --filename -

Resources

Here are references to some of the documentation related to the material covered in this article.

Blog Source Code

I put the source code used in this blog here:

Kubernetes

These give a general overview of Kubernetes concepts.

GKE (Google Kubernetes Engine)

This is implementation-specific documentation.

Google Cloud

Conclusion

The goal of this article was to demonstrate how to add an Endpoint for your web application on Kubernetes and also show how GKE integrates with Google Cloud to provision these Endpoints.

I hope this was useful in your Kubernetes with GKE adventures. In future articles I would like to expand this further by showing how to integrate management of DNS names and TLS certificates from Kubernetes.


A collection of technical articles and blogs published or curated by Google Cloud Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.

Joaquín Menchaca (智裕)

Linux NinjaPants Automation Engineering Mutant — exploring DevOps, o11y, k8s, progressive deployment (ci/cd), cloud native infra, infra as code
