Quick Note on Creating Services and ‘Linking’ Pods in Kubernetes

When discussing how to move a local development container into a production service, the thing developers new to Kubernetes (or coming from an environment that doesn't use it) are most often confused by is how to do the things they do all the time in development with Docker, like linking containers and exposing ports. Kubernetes has all of this functionality, of course, but through other methods and slightly different terminology.

Take the example of a service with two containers:

docker run -d --name backend backend-image
docker run -d --name web-ui --link backend -p 80:80 web-image

There are many cool tools for converting Docker Compose files into usable Kubernetes YAML configs, like Kompose (and other DSLs that make the barrier to entry much lower, like ksonnet); however, converting this example to native Kubernetes YAML by hand is pretty straightforward.

Moving your containers from a local development environment to a production-ready distributed environment requires a little more configuration for deployment. In its simplest form, you’ll be managing groups of containers (pods, sort of like services in Docker Compose) and services (which can do things like expose pods to different parts of the cluster; in our example, this will rely on Kube DNS).

Take the example of this docker-compose.yml :

version: '2'
services:
  backend:
    image: backend-image
  web:
    image: web-image
    links:
      - backend
    depends_on:
      - backend
    ports:
      - "80:80"
      - "443:443"
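As an aside, Kompose can generate Kubernetes manifests directly from a file like this one; a typical invocation (assuming the file above is saved as docker-compose.yml in the current directory) looks something like:

```shell
# Generate Kubernetes manifests from the Compose file;
# Kompose writes one YAML file per generated resource.
kompose convert -f docker-compose.yml
```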

You’ll create two pods (the basic unit wrapping a container):

---
apiVersion: v1
kind: Pod
metadata:
  name: web-ui
  labels:
    app: web-ui
spec:
  containers:
  - name: web-ui
    image: web-image
    imagePullPolicy: Always
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
  - name: backend
    image: backend-image
    imagePullPolicy: Always
    ports:
    - containerPort: 3000

to define each service (to use Docker Compose’s parlance), which in this context is a pod. More advanced ways to manage pods include things like deployments and replication controllers.

Running kubectl create -f your_pods.yaml will create the pods per the spec above.
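Assuming the manifest above is saved as your_pods.yaml, a quick sanity check might look like:

```shell
# Create both pods from the manifest
kubectl create -f your_pods.yaml

# List the pods along with their labels to confirm both came up
kubectl get pods --show-labels
```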

In production, these pods are managed independently in this setup, so much as you might scale a service in a Compose file, you can do the same with the Deployment and ReplicationController formats as well. At this point, you’ve created the pods; to expose the web-ui pod, you can use something like a LoadBalancer service or an Ingress to allow for an external load balancer. For example, if you are running K8s on a provider that has support for its load balancers in Kubernetes (like AWS’ Elastic Load Balancer), you can do something like this:
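For instance, a minimal Deployment wrapping the web-ui pod spec above might look like the sketch below; the replica count here is arbitrary, and the API group (apps/v1 on current clusters) is something you’d match to your cluster version:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-ui
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-ui
  template:
    metadata:
      labels:
        app: web-ui
    spec:
      containers:
      - name: web-ui
        image: web-image
        ports:
        - containerPort: 80
```

Scaling then becomes a matter of changing replicas (or running kubectl scale) rather than creating pods by hand.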

---
kind: Service
apiVersion: v1
metadata:
  name: web-ui
  namespace: default
spec:
  selector:
    app: web-ui
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

where the LoadBalancer targets the pods carrying the app: web-ui label, on the ports you specified.

After you create the above service:

kubectl create -f your_service.yaml

you can get your ELB address using the following to try it in your browser:

jmarhee: ~/repos $ kubectl get services -o wide
NAME     CLUSTER-IP       EXTERNAL-IP                           PORT(S)        AGE  SELECTOR
web-ui   10.107.205.187   address.us-west-1.elb.amazonaws.com   80:31655/TCP   1d   app=web-ui
...

However, you’ll notice that anything relying on your backend container (a database, for example) will fail to connect, because it cannot resolve backend as a hostname the way it could when you used --link in the Docker CLI or the links key in Compose.

Within a single pod, you can run both of these containers, and their ports are exposed to each other via localhost. Between pods, however (which keep their resources, scaling groups, and access managed independently of one another), a service name can be used to emulate that familiar linking behavior, without being specific to the Docker runtime.

You can make a service resolvable by name, exposing that hostname through Kube DNS, using a similar method to the one above:

---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: backend

then create this service as well.
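Once the service is created, one way to verify that the name resolves from another pod (this assumes the web-ui image ships with curl, and that the backend actually serves HTTP on port 3000) is:

```shell
# From inside the web-ui pod, hit the backend service by its DNS name
kubectl exec web-ui -- curl -s http://backend:3000/
```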

What you did here was create a service, backend, that can be reached at that name from other pods, without creating a new load balancer or an Ingress for it (so it is exposed only inside the cluster), even though it uses much the same configuration as your load balancer definition. Your output from kubectl get services will show the new backend service as well, albeit without an external IP address.