3 Ways to Expose Applications Running in Kubernetes Cluster to Public Access

Sean Lin
6 min read · Aug 20, 2023


When we create a Service to run an application in Kubernetes, it provides a stable IP address and DNS name for accessing the application. By default, however, a Service cannot be reached from the public internet; it is only accessible from inside the cluster. So how can we make a Kubernetes Service reachable from the internet?

In this article, I will introduce three approaches for exposing a service to the internet, outside of your cluster.

NodePort Service

In Kubernetes, a NodePort service is a way to expose a set of pods to the outside world. It allows external traffic to reach services running inside the Kubernetes cluster.

When you create a NodePort service, Kubernetes allocates a static port (by default from the range 30000–32767) on every node in the cluster. This port is known as the NodePort. The NodePort is then mapped to the port that the service exposes within the cluster, so any traffic that reaches the NodePort on any node is automatically forwarded to the corresponding port on the service.

Now, let’s create a Kubernetes deployment and NodePort service to expose a nginx web server application.

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort
  ports:
  - nodePort: 32000
    port: 80
    targetPort: 80
  selector:
    app: app-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-server
  labels:
    app: app-server
spec:
  selector:
    matchLabels:
      app: app-server
  template:
    metadata:
      labels:
        app: app-server
    spec:
      containers:
      - name: web-server
        image: nginx
        ports:
        - containerPort: 80

After deploying the application and service, just find the public IP of a node in the Kubernetes cluster, and then access the application using that IP and the NodePort you specified.

curl http://<Node_IP>:32000
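
If you are not sure which node address or port mapping to use, the standard kubectl commands below will show them. This is a minimal sketch; the file name nodeport-demo.yaml for the manifests above is an assumption.

# Apply the Service and Deployment manifests above (assumed to be saved as nodeport-demo.yaml)
kubectl apply -f nodeport-demo.yaml

# List node addresses; use the EXTERNAL-IP column (or INTERNAL-IP inside a private network)
kubectl get nodes -o wide

# Confirm the port mapping, e.g. 80:32000/TCP in the PORT(S) column
kubectl get svc app-service

Also make sure the nodes' firewall or cloud security group allows inbound traffic on the NodePort (32000 in this example), or the request will time out.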

Load Balancer

A LoadBalancer service provides external access to an application by automatically provisioning a load balancer from the underlying platform. LoadBalancer services are particularly useful when you need to distribute traffic across multiple pods and want to expose your application to the internet or to other networks. They are typically used on cloud platforms that support load balancer provisioning, such as AWS, Google Cloud Platform, and Azure.

Let’s create a LoadBalancer service to expose the Nginx deployment externally.

apiVersion: v1
kind: Service
metadata:
  name: lb-service
  labels:
    app: lb-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: nginx

After creating the LoadBalancer service, it might take some time for the external IP address to be assigned, depending on your Kubernetes environment and cloud provider. In this demo, I use AWS ELB to set up the Load Balancer. You can check the status using:

kubectl get svc lb-service

Wait until the “EXTERNAL-IP” field is populated, then you can access the Nginx service with the external IP from a web browser or using a tool like curl:

curl http://<EXTERNAL_IP>
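
On AWS, the EXTERNAL-IP column usually shows the ELB's DNS name rather than a numeric IP. A small sketch for watching the provisioning and inspecting the result:

# Watch the service until the load balancer address appears under EXTERNAL-IP
kubectl get svc lb-service --watch

# Show events and details for the provisioned load balancer
kubectl describe svc lb-service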

Ingress

An Ingress is an API object that manages external access to services within the cluster. It provides a way to route and manage HTTP and HTTPS traffic from outside the cluster to services running inside the cluster. Ingress acts as a layer 7 (application layer) load balancer and allows you to define rules for how incoming traffic should be directed to different services based on the request’s host, path, and other criteria.

Ingress is particularly useful when you have multiple services running in your cluster and you want to expose them through a single external IP address while maintaining granular control over how traffic is routed.

  • Ingress Controller

Kubernetes itself does not implement Ingress functionality. Instead, you need an Ingress controller to handle and fulfill Ingress resources. An Ingress controller is a separate component or application that watches for Ingress resources and configures the underlying load balancer or proxy accordingly. Common Ingress controllers include Nginx Ingress Controller, Traefik, and HAProxy Ingress.

Again, let’s do a simple demo to learn how to use Ingress in the cluster. In this demo, we’ll use the Nginx Ingress Controller. Follow the official installation guide for the Nginx Ingress Controller.

# Install the Nginx Ingress Controller using the official manifest for cloud environments
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml

Make sure the Ingress controller's pods are running by checking the pods in the ingress-nginx namespace:

kubectl get pods -n ingress-nginx
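
The controller is fronted by its own Service (of type LoadBalancer when the cloud manifest above is used), and all Ingress traffic enters the cluster through it. Note its external address; we will use it for testing later:

# The ingress-nginx controller service listens on ports 80 and 443 for all Ingress traffic
kubectl get svc -n ingress-nginx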

Then let’s deploy two simple applications for this demo: an “app” and an “api”. Create the “app” deployment and “api” deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app-container
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api-container
        image: nginx:alpine
        ports:
        - containerPort: 80

Also create services for both deployments:

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Apply the deployment and service configurations to the Kubernetes cluster.
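
For example, assuming the manifests above were saved as demo-deployments.yaml and demo-services.yaml (the file names are just placeholders):

kubectl apply -f demo-deployments.yaml
kubectl apply -f demo-services.yaml

# Verify that the deployments and services were created
kubectl get deployments,svc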

Next, we are ready to create the Ingress resource. Here is how we can define an Ingress that routes traffic to the "app" and "api" services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80

In this example, requests to demo.example.com/app will be routed to the "app" service, and requests to demo.example.com/api will be routed to the "api" service.
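
Apply the Ingress and verify that the controller has picked it up and assigned it an address (assuming the manifest is saved as demo-ingress.yaml):

kubectl apply -f demo-ingress.yaml

# The ADDRESS column should eventually show the Ingress controller's external address
kubectl get ingress demo-ingress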

To make the hostname resolvable for local testing, add an entry to your local machine's hosts file (use the Ingress controller's external IP instead of 127.0.0.1 if your cluster is not running locally):

127.0.0.1 demo.example.com

Now we are able to access the services using the defined paths (or use curl with an explicit Host header, as shown after this list):

  • Open your web browser and navigate to http://demo.example.com/app to access the "app" service.
  • Open your web browser and navigate to http://demo.example.com/api to access the "api" service.
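
If you prefer not to edit the hosts file, you can also test from the command line by setting the Host header explicitly, where <INGRESS_EXTERNAL_IP> stands for the address of the ingress-nginx controller service noted earlier:

# Send requests through the Ingress controller while presenting the expected hostname
curl -H "Host: demo.example.com" http://<INGRESS_EXTERNAL_IP>/app
curl -H "Host: demo.example.com" http://<INGRESS_EXTERNAL_IP>/api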

In real-world systems, you can also implement Ingress with an AWS Application Load Balancer (ALB): an Ingress controller such as the AWS Load Balancer Controller provisions the ALB and routes external traffic to the services inside the cluster according to the Ingress rules.

Summary:

  • NodePort exposes services to the outside world when you prefer not to use cloud-specific load balancers or Ingress controllers. It is simple to set up, works in most Kubernetes environments, and does not rely on a cloud provider. However, it is not recommended for production use because of security concerns and because node IPs can change.
  • LoadBalancer is suitable for exposing a service to the internet when it needs to accept external traffic from various sources. It supports load balancing and health checks and is suitable for production environments, but its availability depends on cloud provider support.
  • Ingress is suitable for exposing multiple services through a single external IP address, with advanced routing and host/path-based traffic management. Use Ingress when you need advanced routing, SSL termination, virtual hosting, and more control over external access.

Each method has its own use cases and trade-offs, so the choice depends on your specific requirements, infrastructure, and level of control you need over external access to your services.

Thank you for reading. Happy coding :) 💪

Reference:

https://kubernetes.io/docs/concepts/services-networking/
