Configuring Internal Ingress in GKE

Nikhil YN
5 min read · Sep 13, 2023


Introduction: In the world of container orchestration and microservices, Google Kubernetes Engine (GKE) stands out as a leading platform for managing and scaling containerized applications. An integral part of deploying applications in a GKE cluster is configuring ingress, which lets you control access to your services. While external ingress is common, there are scenarios where you need internal ingress, a feature that exposes services to clients inside your VPC network without exposing them to the public internet.

In this article, we will explore the concept of internal ingress in GKE and the reasons why you might need it. We will dive into the technical details of how to set up and configure internal ingress for your applications, ensuring that they can communicate securely and efficiently while maintaining the integrity of your internal network.

Whether you are a DevOps engineer, a Kubernetes administrator, or a developer working with GKE, understanding how to configure and manage internal ingress is a valuable skill. By the end of this article, you’ll have a comprehensive understanding of internal ingress in GKE and the confidence to implement it in your own containerized applications, enabling secure and efficient communication within your Kubernetes cluster.

BRIEF STEPS:

1. Configure internal ingress using the YAML manifest below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: gce-internal
    kubernetes.io/ingress.regional-static-ip-name: internal-static-ip-name-created-in-same-region-as-gke-cluster
  name: your-internal-ingress-name
  namespace: your-app-deployed-namespace
spec:
  rules:
  - host: your-app-one-internal-hostname # eg: nginx.internal.com
    http:
      paths:
      - backend:
          service:
            name: app-one-backend-svc
            port:
              number: 80 # app-one-service-port; must be an integer
        path: /
        pathType: Prefix
  - host: your-app-two-internal-hostname # eg: nginx2.internal.com
    http:
      paths:
      - backend:
          service:
            name: your-app-two-service-name
            port:
              number: 80 # app-two-service-port; must be an integer
        path: /
        pathType: Prefix
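
The annotation above references a regional internal static IP that must already exist in the same region as the cluster. As a minimal sketch (the address name, subnet, and manifest filename are placeholders, and omitting --purpose reserves a standard internal address), you could reserve one and then apply the manifest like this:

# Reserve a regional internal static IP in the same region and VPC subnet as the cluster
gcloud compute addresses create internal-static-ip-name-created-in-same-region-as-gke-cluster \
  --region=your-gcp-region \
  --subnet=your-cluster-subnet

# Apply the Ingress manifest and watch until an internal address is assigned
kubectl apply -f internal-ingress.yaml
kubectl get ingress your-internal-ingress-name -n your-app-deployed-namespace --watch

Keep in mind that internal ingress uses container-native load balancing, so the backend Services must be exposed through network endpoint groups (NEGs); on VPC-native clusters GKE typically applies the cloud.google.com/neg annotation to Services for you.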

2. Create a private DNS zone and add an A record for your internal domain.

Go to Cloud DNS under Network Services and click Create Zone.
Set Zone type to Private. Give any zone name of your choice and leave Options at the default. In DNS name, enter your domain's suffix. For example, if nginx.internal.com is your internal domain, enter internal.com.
Select your VPC network under Networks. This ensures that your domain is resolvable only within the VPC network specified here.
After the zone is created, click on it, then click "Add Standard" to create a new record set.
Enter your DNS name and the internal IP address of the ingress (the static IP reserved in step 1). The resource record type must be A. You can give a TTL of your choice.
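
If you prefer the CLI, the same zone and record can be created with gcloud. This is a sketch in which the zone name, domain, and IP are placeholders:

# Create a private zone visible only to the given VPC network
gcloud dns managed-zones create internal-zone \
  --description="Private zone for internal ingress" \
  --dns-name=internal.com. \
  --visibility=private \
  --networks=your-vpc-network

# Point the internal hostname at the ingress's internal IP
gcloud dns record-sets create nginx.internal.com. \
  --zone=internal-zone \
  --type=A \
  --ttl=300 \
  --rrdatas=INTERNAL_INGRESS_IP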

3. Create a proxy-only subnet, since internal ingress in GKE creates a regional internal load balancer, and a regional internal load balancer requires a proxy-only subnet. Only one active proxy-only subnet can exist per region in a VPC network; it is a VPC-level resource.

gcloud compute networks subnets create proxy-only-subnet \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE \
--region=your-gcp-region \
--network=your-vpc-network \
--range=10.129.0.0/23
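
To confirm the subnet was created with the right purpose, you can describe it (the subnet and region names are the placeholders used above):

# Verify the purpose, role, and range of the new subnet
gcloud compute networks subnets describe proxy-only-subnet \
  --region=your-gcp-region \
  --format="value(purpose, role, ipCidrRange)"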

4. Once the proxy-only subnet is created, create a firewall rule with the proxy-only subnet range as the source and all instances in the network as the destination. This allows connections and health-check probes from the proxies to reach your pods via the ingress; for internal regional ingress/load balancers in GCP, ingress connections arrive through the proxy network. The ports that must be opened are the service ports of the ingress backends, that is, "app-one-service-port" and "app-two-service-port" from step 1. The source IP range is whatever range you assigned to the proxy-only subnet in step 3, here 10.129.0.0/23.

gcloud compute firewall-rules create allow-proxy-connection \
--allow=tcp:CONTAINER_PORT \
--source-ranges=10.129.0.0/23 \
--network=your-vpc-network

CONTAINER_PORT stands for the service ports of the deployments referenced in the internal ingress manifest; in our case, "app-one-service-port" and "app-two-service-port" from step 1. Multiple ports can be specified as a comma-separated list, for example --allow=tcp:80,tcp:8080.

Note: The Ingress controller does not create a firewall rule to allow connections from the load balancer proxies in the proxy-only subnet; you must create this firewall rule manually. However, the Ingress controller does create firewall rules to allow ingress for Google Cloud health checks.

5. Testing: Access the internal ingress URLs from a VM in the same VPC network, or from inside a pod, using curl.
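
For example, from a VM in the same VPC (the hostname matches the A record from step 2; if DNS is not yet in place, you can hit the ingress IP directly and pass a Host header instead):

# With the private DNS record in place, the hostname resolves inside the VPC
curl http://nginx.internal.com/

# Without DNS, target the ingress's internal IP and set the Host header
curl -H "Host: nginx.internal.com" http://INTERNAL_INGRESS_IP/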

Conclusion:

Configuring internal ingress in Google Kubernetes Engine (GKE) is a fundamental aspect of building secure and efficient microservices architectures within your Kubernetes clusters. In this article, we’ve explored the concept of internal ingress, its benefits, considerations, and the steps involved in its configuration.

Key takeaways from this article include:

1. **Understanding Use Cases**: Internal ingress is a powerful tool for creating secure communication channels between services within a GKE cluster. It’s particularly valuable when you need to maintain network isolation, ensure service-to-service security, and restrict external access.

2. **Benefits and Considerations**: While internal ingress offers benefits such as improved security and network isolation, it’s important to consider when to use it. External ingress might be more suitable for services that require public access, while internal ingress is ideal for internal communication.

3. **Configuration Process**: We’ve walked through the process of configuring internal ingress in GKE, which includes defining ingress resources, setting up backend services, and configuring routing rules. It’s important to follow best practices for ingress resource configuration.

4. **Security and Access Control**: Security is a top priority when configuring internal ingress. Implement access controls, such as Network Policies, to ensure that only authorized services can communicate. Additionally, consider encryption and identity and access management (IAM) to enhance security.

5. **Monitoring and Troubleshooting**: Effective monitoring and troubleshooting are essential for maintaining the reliability of internal ingress. Utilize GKE monitoring tools, logging, and metrics to gain visibility into ingress traffic and diagnose issues promptly.

Internal ingress in GKE empowers you to design and operate secure and efficient microservices architectures within your Kubernetes clusters. As you continue to work with containerized applications and orchestration in GKE, internal ingress becomes a valuable tool in your toolkit, enabling you to achieve network isolation, maintain security, and facilitate seamless communication among your services.

By following best practices, staying informed about the evolving GKE features, and regularly reviewing your ingress configurations, you can harness the full potential of internal ingress and build robust, secure, and scalable microservices environments in GKE.

For more such articles, please follow me at https://medium.com/@nikhil.nagarajappa

References:

  1. Configuring internal ingress — https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress
  2. Theory on internal ingress — https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb
