GKE Ingress Options

Sujay P Pawar
Niveus Solutions

Introduction

Traditional approaches to Google Kubernetes Engine (GKE) ingress involve multiple public internet entry points. This can lead to complex management and configuration inconsistencies. However, a single GKE ingress point offers a simpler, more scalable solution. This blog post will guide you through the process of setting up a single ingress point for your GKE cluster, reducing management overhead and improving consistency.

The Need for GKE Ingress

GKE Ingress simplifies configuration by providing a single point of entry per cluster or namespace, managed entirely within GKE. This removes the need for your network team to maintain separate network or load-balancer configurations.

However, traditional load balancers remain a viable option and can achieve similar routing functionality. The choice between them often depends on your organization's existing roles: SRE/Kubernetes experts may favor the native GKE Ingress for its simplicity within the cluster, while established network teams may prefer the familiarity and control offered by load balancers.

GKE Ingress Options

Google Kubernetes Engine (GKE) offers several options for managing ingress traffic to your applications running in the Kubernetes cluster. Here are the main options for configuring ingress in GKE:

1. GKE Ingress Controller

GKE provides a built-in ingress controller based on Google Cloud Load Balancer. This controller allows you to create ingress resources in your Kubernetes cluster that are automatically translated into Google Cloud Load Balancer configurations.

Setup: Deploy an ingress resource in your cluster, and GKE will handle the creation and management of the load balancer.

Example: Using the built-in GKE Ingress controller

1. Create a simple Ingress resource and save it as example-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

2. Apply the ingress resource:

kubectl apply -f example-ingress.yaml
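The Ingress above routes traffic to a Service named example-service. A minimal backing Service might look like the following sketch; the selector label and target port are illustrative assumptions, not taken from the original example:

```yaml
# Hypothetical Service backing the Ingress example above.
# The GKE ingress controller typically requires the Service to be of
# type NodePort (or to use container-native load balancing with NEGs)
# so the Google Cloud load balancer can reach the pods.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example-app    # assumed pod label
  ports:
  - port: 80
    targetPort: 8080    # assumed container port
```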

2. NGINX Ingress Controller

The NGINX Ingress Controller is a popular, open-source ingress controller for Kubernetes. It is highly configurable and supports a wide range of features.

Setup: You can deploy the NGINX Ingress Controller in your GKE cluster using either Helm charts or YAML manifests. Here’s a breakdown of both methods:

Method 1: Deployment with Helm

Helm offers a package manager for Kubernetes, simplifying the deployment and management of applications like the NGINX Ingress Controller.

Steps:

  1. Add the NGINX stable repository to Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm repo update

  2. Install the NGINX Ingress Controller:

helm install nginx-ingress ingress-nginx/ingress-nginx

  3. Optional configurations:

  • You can pass additional parameters during installation using the --set flag to customize various aspects of the NGINX Ingress Controller, for example:
  • The type of load balancer (e.g., controller.service.type=LoadBalancer)
  • Resource requests and limits for the controller pod (e.g., controller.resources.requests.memory=100Mi)
  • Enabling features such as SSL termination
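Putting these flags together, a customized install could look like the sketch below. The values shown are illustrative; consult the ingress-nginx chart's values file for the authoritative keys and defaults:

```shell
# Sketch: install ingress-nginx with an external load balancer and an
# explicit memory request for the controller pod (illustrative values).
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.service.type=LoadBalancer \
  --set controller.resources.requests.memory=100Mi
```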

Method 2: Deployment with a YAML Manifest

This method involves deploying the NGINX Ingress Controller resources directly using a YAML manifest file.

Steps:

  1. Download the NGINX Ingress Controller manifest file (published in the kubernetes/ingress-nginx GitHub repository under deploy/static/provider/):

  2. Customize the manifest (optional):

  • The downloaded manifest might contain default configurations. You can edit the file to adjust settings like:
  • The service type (e.g., by modifying the type field in the service resource definition)
  • Image used for the controller pod (by modifying the image field in the deployment resource definition)

  3. Apply the manifest file to your GKE cluster:

kubectl apply -f path/to/nginx-ingress.yaml
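Once the controller pods are running, Ingress resources can target the NGINX controller via its ingress class. A sketch is shown below; the host and backend Service name are assumptions carried over from the earlier example:

```yaml
# Hypothetical Ingress served by the NGINX controller installed above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-nginx-ingress
spec:
  ingressClassName: nginx    # selects the NGINX Ingress Controller
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service    # assumed backend Service
            port:
              number: 80
```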

Conclusion

Choosing the right ingress controller for GKE depends on your specific requirements, including integration with cloud services, feature needs, and management preferences. Each option has its pros and cons, so carefully evaluate them based on your use case.
