
Deploying to a local Kubernetes cluster

Geert Gerritsen
5 min read · Jun 23, 2020


Intro

A while ago, I wanted to get some more experience with Kubernetes. Even though managed cloud environments can be great and very helpful, I just wanted to get down to business and run the thing locally.

I’ve spent some time reading and researching the different possibilities and in the end decided to run k3d.

Why?

K3d is k3s in Docker. k3s is k8s with some unnecessary stuff stripped out (the name is a joke: it’s “5 less” than k8s, i.e. less complex and smaller). k3s runs on Linux, but not on macOS. So to run it on a MacBook, running it in Docker is the way to go. Luckily, the good people at Rancher have created k3d for exactly that (the “d” in k3d stands for Docker).

Goal

The goal here is to get an app consisting of a few (micro)services deployed to your local Kubernetes cluster.

Getting a k3d/k3s cluster running locally

The first step is to get the cluster running.

Assuming you have Docker and kubectl installed, getting a local k3s cluster running in Docker is straightforward (following the Readme and the docs):
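Before running the install script, it can save a few minutes of head-scratching to confirm those prerequisites are actually on your PATH. A minimal check (just a convenience sketch, not part of the k3d setup itself):

```shell
# Report for each prerequisite whether it is installed and on the PATH.
out=$(for cmd in docker kubectl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: ok"
  else
    echo "$cmd: MISSING"
  fi
done)
printf '%s\n' "$out"
```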

  1. Install: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=v1.7.0 bash
  2. Create cluster: k3d create --api-port 6550 --publish 8090:80 --workers 3
  3. Check whether it’s running properly: export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"; kubectl cluster-info
  4. To see all running things, run: kubectl get all -o wide

Looks great, right? With just a few commands, you have a local Kubernetes cluster up and running!

Deploying a single service

So what’s next? Let’s deploy a single service.

First, you need to make sure you have a Docker container that contains a service.

You can use a container from this demo project:

https://hub.docker.com/r/ggerritsen1/k8s-tryout-2020_api

Then, write a small yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-tryout-2020-api-deployment-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8s-tryout-2020-frontend
  template:
    metadata:
      labels:
        app: k8s-tryout-2020-frontend
    spec:
      containers:
      - name: api-container
        image: ggerritsen1/k8s-tryout-2020_api:latest
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  selector:
    app: k8s-tryout-2020-frontend

Store this in deployment.yaml and then trigger the deployment with:

kubectl apply -f deployment.yaml

Run kubectl get pods -o wide to see whether the deployment succeeded.
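If you want to script that check, you can simply count the pods reporting Running. The sample output below is made up for illustration; on a live cluster you would pipe the real output of kubectl get pods --no-headers instead:

```shell
# Hypothetical sample of `kubectl get pods --no-headers` output, for illustration only.
sample='k8s-tryout-2020-api-deployment-frontend-7d9f-abc12   1/1   Running   0   1m
k8s-tryout-2020-api-deployment-frontend-7d9f-def34   1/1   Running   0   1m'

# Count pods in the Running state; with replicas: 2 we expect 2.
running=$(printf '%s\n' "$sample" | grep -c 'Running')
echo "running pods: $running"
```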

You can tail the logs with kubectl logs -f -lapp=k8s-tryout-2020-frontend --all-containers=true --max-log-requests=10.

Deploying a service with ingress (API)

Ok, so you have a service running now, which is all fine and nice, but it doesn’t do anything.

Let’s make it respond to an actual request.

To do that, you have to make an Ingress. Add the following lines to deployment.yaml:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-tryout-2020-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: api.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: http

K3s comes with traefik as ingress controller/load balancer out of the box. Here we let traefik forward the traffic arriving on port 80 to our api service.

The traffic arrives at traefik via port 8090 locally (due to adding --publish 8090:80 when creating the k3s cluster).
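Note that traefik matches the Ingress rule on the HTTP Host header, not on the IP you connect to. The snippet below just prints the raw request traefik needs to see; on a live cluster, curl -H 'Host: api.localhost' http://127.0.0.1:8090/hello would achieve the same without editing /etc/hosts (a sketch, assuming the cluster created above):

```shell
# The raw HTTP request traefik matches against the Ingress rule:
# routing is decided by the Host header, not by the connection address.
req=$(printf 'GET /hello HTTP/1.1\r\nHost: api.localhost\r\n\r\n')
printf '%s\n' "$req"
```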

To put this to the test, run kubectl apply -f deployment.yaml.

Then, update /etc/hosts to contain:

127.0.0.1       api.localhost
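An idempotent way to add that line is sketched below. It is demonstrated on a temporary copy so it is safe to run anywhere; to apply it for real, target /etc/hosts instead (which requires sudo):

```shell
# Demonstrated on a temp copy; point HOSTS_FILE at /etc/hosts (with sudo) to apply for real.
HOSTS_FILE=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$HOSTS_FILE"

# Append the entry only if it is not already present.
grep -q 'api\.localhost' "$HOSTS_FILE" || printf '127.0.0.1\tapi.localhost\n' >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```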

If you visit http://api.localhost:8090/hello now, you should get a response from the actual service, even though it’s an error message.

Connecting the frontend to the backend

Now that we have an API service accepting traffic from the outside, we should actually serve a proper response.

For that, our API needs the support of two other services.

To add those other services, add the following to deployment.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-tryout-2020-api-deployment-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-tryout-2020-backend
  template:
    metadata:
      labels:
        app: k8s-tryout-2020-backend
    spec:
      containers:
      - name: customersvc-container
        image: ggerritsen1/k8s-tryout-2020_customersvc:latest
      - name: greetsvc-container
        image: ggerritsen1/k8s-tryout-2020_greetsvc:latest
---
apiVersion: v1
kind: Service
metadata:
  name: customersvc
spec:
  ports:
  - name: http
    targetPort: 8081
    port: 8081
  selector:
    app: k8s-tryout-2020-backend
---
apiVersion: v1
kind: Service
metadata:
  name: greetsvc
spec:
  ports:
  - name: http
    targetPort: 8082
    port: 8082
  selector:
    app: k8s-tryout-2020-backend

This will deploy two services (customer-service and greet-service) to the ‘backend’ layer. It also adds two Services, which take care of routing traffic within the cluster.

With a Service, any component deployed to the cluster can reach another component through the name specified in the service, using DNS.
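Under the hood this is a DNS convention: within the same namespace, the short name customersvc resolves because Kubernetes expands it to a fully-qualified service name. Assuming everything here is deployed to the default namespace, that expansion looks like this:

```shell
# Kubernetes in-cluster DNS: <service>.<namespace>.svc.cluster.local
svc=customersvc
ns=default
fqdn="$svc.$ns.svc.cluster.local"
echo "$fqdn"
```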

There’s one more addition to do: the API service should actually know where greetsvc and customersvc run.

Therefore, add the following lines under the api-container entry in the k8s-tryout-2020-api-deployment-frontend block in deployment.yaml:

        env:
        - name: CUSTOMERSVC_HOSTPORT
          value: "customersvc:8081"
        - name: GREETSVC_HOSTPORT
          value: "greetsvc:8082"

So that the full deployment.yaml file looks as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-tryout-2020-api-deployment-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8s-tryout-2020-frontend
  template:
    metadata:
      labels:
        app: k8s-tryout-2020-frontend
    spec:
      containers:
      - name: api-container
        image: ggerritsen1/k8s-tryout-2020_api:latest
        env:
        - name: CUSTOMERSVC_HOSTPORT
          value: "customersvc:8081"
        - name: GREETSVC_HOSTPORT
          value: "greetsvc:8082"
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  selector:
    app: k8s-tryout-2020-frontend
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-tryout-2020-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: api.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-tryout-2020-api-deployment-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-tryout-2020-backend
  template:
    metadata:
      labels:
        app: k8s-tryout-2020-backend
    spec:
      containers:
      - name: customersvc-container
        image: ggerritsen1/k8s-tryout-2020_customersvc:latest
      - name: greetsvc-container
        image: ggerritsen1/k8s-tryout-2020_greetsvc:latest
---
apiVersion: v1
kind: Service
metadata:
  name: customersvc
spec:
  ports:
  - name: http
    targetPort: 8081
    port: 8081
  selector:
    app: k8s-tryout-2020-backend
---
apiVersion: v1
kind: Service
metadata:
  name: greetsvc
spec:
  ports:
  - name: http
    targetPort: 8082
    port: 8082
  selector:
    app: k8s-tryout-2020-backend

Now, run kubectl apply -f deployment.yaml again and check the logs for errors:

  • kubectl logs -f -lapp=k8s-tryout-2020-frontend --all-containers=true --max-log-requests=10
  • kubectl logs -f -lapp=k8s-tryout-2020-backend --all-containers=true

Make sure that /etc/hosts still contains api.localhost and then visit http://api.localhost:8090/hello to see that it works!
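Inside the api container, those CUSTOMERSVC_HOSTPORT and GREETSVC_HOSTPORT values are plain host:port strings. A hypothetical sketch of how a service could turn one into a request URL (the demo services' actual code is not shown in this post, and the /customers path is made up for illustration):

```shell
# CUSTOMERSVC_HOSTPORT is injected by the Deployment; hardcoded here for illustration.
CUSTOMERSVC_HOSTPORT="customersvc:8081"

# Split "host:port" into its parts using shell parameter expansion.
host=${CUSTOMERSVC_HOSTPORT%:*}
port=${CUSTOMERSVC_HOSTPORT#*:}

url="http://$host:$port/customers"   # the /customers path is hypothetical
echo "$url"
```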

In case something doesn’t work, check out this post with some tips to troubleshoot your k3s setup.

Conclusion

Overall, k3s/k3d is a very nice and lightweight way to run a Kubernetes cluster locally and deploy some applications to it. Troubleshooting can be a bit hard due to multiple layers of indirection, so I would not advise using this for your normal development flow.

However, when you want to see Kubernetes in action and play around with it, this is an easy way to do it, without having to deploy to the cloud directly.

When I get some time, I’d like to add some improvements/features, such as:

  • Upgrade to latest version of k3d
  • Make smaller containers (from scratch)
  • Try out k8slens
  • Add a database
  • Integrate with an external endpoint/system

If you’d like to improve upon this, feel free to open a PR!

In case of any feedback, please comment below, on HN or find me on Twitter.


Geert Gerritsen

Freelance backend developer. Follow me on Twitter where I tweet about software, productivity and golang, among other things.