Kubernetes Services simply visually explained

Kim Wuestkamp
Oct 8, 2019 · 6 min read


TL;DR

There are four main service types, with ClusterIP being the holy grail:

I would like you to imagine that if you create a NodePort service it also creates a ClusterIP one. And if you create a LoadBalancer it creates a NodePort which then creates a ClusterIP. If you do this, k8s services will be easy. We will walk through this in this article.

Services and Pods

Services point to pods. Services do not point to deployments or replicasets. Services point to pods directly using labels. This gives great flexibility because it doesn’t matter through which various (maybe even customized) ways pods have been created.
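To make the label matching concrete, here is a minimal pod manifest sketch whose label a service selector could match. The name, label, and image are just illustrative, chosen to line up with the service yaml shown later in this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-python
  labels:
    run: pod-python   # a service with selector "run: pod-python" targets this pod
spec:
  containers:
  - name: python
    image: python:3.9   # illustrative image
    ports:
    - containerPort: 443
```

It makes no difference whether this pod was created directly, by a deployment, or by some custom controller — the service only looks at the labels.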

We’ll start with a simple example and extend it step by step with different service types to see how these are built on top of each other.

No Services

We start without any services.

We have two nodes, one pod. Nodes have external (4.4.4.1, 4.4.4.2) and internal (1.1.1.1, 1.1.1.2) IP addresses. The pod pod-python has only an internal one.

Now we add a second pod pod-nginx which got scheduled on node-1. This wouldn’t have to be the case and doesn’t matter for connectivity. In Kubernetes, all pods can reach all pods on their internal IP addresses, no matter on which nodes they are running.

This means pod-nginx can ping and connect to pod-python using its internal IP 1.1.1.3.

Now let’s consider the pod-python dies and a new one is created. (We don’t handle how pods might be managed and controlled in this article.) Suddenly pod-nginx cannot reach 1.1.1.3 any longer, and suddenly the world bursts into horrific flames… but to prevent this we create our first service!

ClusterIP

Same scenario, but we configured a ClusterIP service. A service is not scheduled on a specific node like pods. For this article it is enough to assume a service is just available in memory inside the whole cluster.

Pod-nginx can always safely connect to 1.1.10.1 or the dns name service-python and gets redirected to a living python pod. Beautiful. No flames. Sunshine.

We extend the example, spin up 3 instances of python, and now display the internal IP addresses and ports of all pods and services.

All pods inside the cluster can reach the python pods on their port 443 via http://1.1.10.1:3000 or http://service-python:3000. The ClusterIP service-python distributes the requests based on a random or round-robin approach. That’s what a ClusterIP service does, it makes pods available inside the cluster via a name and an IP.

The service-python in the above image could for example have this yaml:

apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
  selector:
    run: pod-python
  type: ClusterIP

Running kubectl get svc:
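The output would look something like the following (the AGE column and the exact cluster IP depend on your cluster; this listing is illustrative):

```
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service-python   ClusterIP   1.1.10.1     <none>        3000/TCP   2m
```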

NodePort

Now we would like to make the ClusterIP service available from the outside and for this we convert it into a NodePort one. In our example we convert the service-python with just two simple yaml changes:

apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
    nodePort: 30080
  selector:
    run: pod-python
  type: NodePort
external request over node-2

This means our internal service-python will now also be reachable from every node's internal and external IP address on port 30080.

external request over node-1

A pod inside the cluster could also connect to an internal node IP on port 30080.

Running kubectl get svc shows the same cluster IP, just a different type and an additional node port:
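Illustratively, the listing now shows the node port appended to the service port (again, AGE and the exact IP will differ in your cluster):

```
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service-python   NodePort   1.1.10.1     <none>        3000:30080/TCP   4m
```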

Internally, the NodePort service still acts like the ClusterIP service did before. It helps to imagine that a NodePort service creates a ClusterIP service, even though no extra ClusterIP object exists.

LoadBalancer

We use a LoadBalancer service if we would like a single IP which distributes requests (using some method like round robin) to the external IPs of all our nodes. So it is built on top of a NodePort service:

Imagine that a LoadBalancer service creates a NodePort service which creates a ClusterIP service. The changed yaml for LoadBalancer as opposed to the NodePort before is simply:

apiVersion: v1
kind: Service
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
    nodePort: 30080
  selector:
    run: pod-python
  type: LoadBalancer

All a LoadBalancer service does is create a NodePort service. In addition, it sends a request to the provider hosting the Kubernetes cluster asking for a load balancer to be set up pointing to all external node IPs and the specific nodePort. If the provider doesn’t support the request, nothing happens and the LoadBalancer is equal to a NodePort service.

Running kubectl get svc shows just the addition of the EXTERNAL-IP and a different type:
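The listing would now look roughly like this — the EXTERNAL-IP value here is made up, since it is whatever address the cloud provider assigns to the load balancer:

```
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service-python   LoadBalancer   1.1.10.1     34.89.101.2   3000:30080/TCP   6m
```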

The LoadBalancer service still opens port 30080 on the nodes' internal and external IPs as before. And it still acts like a ClusterIP service.

ExternalName

Finally, the ExternalName service, which could be considered a bit separate and not on the same stack as the three we handled before. In short: it creates an internal service with an endpoint pointing to a DNS name.

Taking our earlier example, we now assume that the pod-nginx is already in our shiny new Kubernetes cluster. But the python api is still outside:

Here pod-nginx has to connect to http://remote.server.url.com, which works, for sure. But soon we would like to integrate that python api into the cluster, and until then, we can create an ExternalName service:

This could be done using this yaml:

kind: Service
apiVersion: v1
metadata:
  name: service-python
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
  type: ExternalName
  externalName: remote.server.url.com

Now pod-nginx can simply connect to http://service-python:3000, just like with a ClusterIP service. When we finally decide to migrate the python api into our beautiful stunning Kubernetes cluster, we only have to change the service to a ClusterIP one with the correct labels set:

Python api still reachable at http://service-python
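The replacement is simply the ClusterIP yaml from the beginning of this article again — a sketch, assuming the migrated python pods carry the label run: pod-python:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-python   # same name, so clients keep using http://service-python:3000
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 443
  selector:
    run: pod-python      # now selects the in-cluster python pods
  type: ClusterIP
```

Because the service name stays the same, pod-nginx needs no change at all.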

The big advantage when using ExternalName services is that you can already create your complete Kubernetes infrastructure and also already apply rules and restrictions based on services and IPs, even though some services might still be outside.

Recap

Today is not the day for much of a recap, I fear, fellow reader.
