Running a Small Local K8s Cluster for Development

Oliver Azevedo Barnes
Nov 6 · 3 min read

Steps to run liquid-voting-service on a local single-node Kubernetes cluster with Prometheus monitoring.

You should have Docker Desktop running with its Kubernetes cluster enabled, and you’ll need Helm installed for some of the steps.

First, clone the project repo and cd into it:

git clone git@github.com:oliverbarnes/liquid-voting-service.git
cd liquid-voting-service

Install the ingress-nginx controller:

helm install stable/nginx-ingress \
--set controller.metrics.enabled=true,controller.metrics.serviceMonitor.enabled=true,controller.stats.enabled=true

Create configuration secrets for the Postgres database:

kubectl create secret generic liquid-voting-postgres \
--from-literal=postgres-username=postgres \
--from-literal=postgres-password=postgres \
--from-literal=postgres-dbname=liquid_voting_dev \
--from-literal=postgres-host=localhost \
--from-literal=postgres-pool-size=10
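To double-check the secret, you can read one of its keys back and decode it (key names as in the `--from-literal` flags above):

```shell
# Secret values are stored base64-encoded; decode one to spot-check it.
# Should print: liquid_voting_dev
kubectl get secret liquid-voting-postgres \
  -o jsonpath='{.data.postgres-dbname}' | base64 --decode; echo
```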

Then apply the app’s manifest files:

kubectl apply -f k8s/ingress.yaml
kubectl apply -f k8s/database-persistent-volume-claim.yaml
kubectl apply -f k8s/database-service.yaml
kubectl apply -f k8s/database-deployment.yaml
kubectl apply -f k8s/liquid-voting-service.yaml
kubectl apply -f k8s/liquid-voting-deployment.yaml

If all goes well, you should see the following output:

ingress.extensions/liquid-voting-ingress created
persistentvolumeclaim/postgres-pvc created
service/db created
deployment.extensions/db created
service/liquid-voting-service created
deployment.apps/liquid-voting-deployment created

These set you up with the main API service, deployed as 3 pods, exposed through an ingress and backed by a Postgres database service using a persistent volume claim. Inspecting them with kubectl, you should see something similar to the following output:

➜  liquid-voting-service git:(master) kubectl get ingress
NAME                    HOSTS   ADDRESS   PORTS   AGE
liquid-voting-ingress   *                 80      81s
➜  liquid-voting-service git:(master) kubectl get services
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
db                      ClusterIP   10.101.108.253   <none>        5432/TCP   88s
kubernetes              ClusterIP   10.96.0.1        <none>        443/TCP    4h5m
liquid-voting-service   ClusterIP   10.110.85.61     <none>        4000/TCP   87s
➜  liquid-voting-service git:(master) kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pvc   Bound    pvc-b2e084a8-5b03-45dc-a998-b73d626ac6f0   1Gi        RWO            hostpath       95s
➜  liquid-voting-service git:(master) kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
db-75fc47475c-x2rtr                         1/1     Running   0          119s
liquid-voting-deployment-86c87747f6-hxt4j   1/1     Running   0          119s
liquid-voting-deployment-86c87747f6-s9zbs   1/1     Running   0          119s
liquid-voting-deployment-86c87747f6-zk4hf   1/1     Running   0          119s
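Before running the migrations, it can help to wait for the rollouts to finish; a small sketch, using the Deployment names shown in the output above:

```shell
# Block until both Deployments report all replicas available.
kubectl rollout status deployment/db --timeout=120s
kubectl rollout status deployment/liquid-voting-deployment --timeout=120s
```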

Lastly, run the Elixir app’s migrations from within the app deployment (grab one of the liquid-voting-deployment pod names from kubectl get pods and use it in place of liquid-voting-deployment-pod):

kubectl get pods
kubectl exec -ti liquid-voting-deployment-pod \
--container liquid-voting \
-- /opt/app/_build/prod/rel/liquid_voting/bin/liquid_voting \
eval "LiquidVoting.Release.migrate"

All going well, you should be able to see the app’s GraphQL IDE by pointing your browser to localhost:4000/graphiql.
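You can also smoke-test the same endpoint from the command line; assuming the app answers at the same address as in the browser, a 200 means the pods are serving:

```shell
# Print just the HTTP status code of the GraphiQL page; expect 200.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:4000/graphiql
```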

Setting up monitoring (Prometheus and Grafana):

You might have noticed the ingress controller install command back in the beginning also sets a few options:

controller.metrics.enabled=true,
controller.metrics.serviceMonitor.enabled=true,
controller.stats.enabled=true

These prep the controller to have metrics scraped by Prometheus.

Let’s then install prometheus-operator, which will get you going with both Prometheus and Grafana and set up a k8s operator to watch the above:

helm install stable/prometheus-operator

The metrics won’t be scraped right away. In order to get Prometheus to see them, there are a couple of hoops to jump through.

You’ll need to make the release label on the ingress’s servicemonitor (a k8s custom resource introduced by the operator) match the one Prometheus expects. It’s a little hacky and manual, but as far as I know, and after extensive googling, there’s no way around it (see this issue on their repo):

Take a look at your Prometheus instance’s matchLabels.release config:

kubectl get prometheus prometheus-instance -o yaml

Then open your ingress servicemonitor and edit its metadata.labels.release field to match it:

kubectl get servicemonitors
KUBE_EDITOR=your-favorite-editor kubectl edit servicemonitor ingress-controller
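If you’d rather not edit the resource by hand, here’s a non-interactive sketch of the same fix. It assumes the Prometheus object is named prometheus-instance and the servicemonitor is named ingress-controller, as in the commands above; adjust to whatever kubectl get shows in your install:

```shell
# Read the release label Prometheus selects ServiceMonitors by...
RELEASE=$(kubectl get prometheus prometheus-instance \
  -o jsonpath='{.spec.serviceMonitorSelector.matchLabels.release}')
# ...and stamp it onto the ingress controller's ServiceMonitor.
kubectl label servicemonitor ingress-controller release="$RELEASE" --overwrite
```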

That’s it: after this, Prometheus should be scraping the metrics. You can check by looking at localhost:9090/graph.
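Prometheus itself isn’t exposed outside the cluster by default, so port-forward it first. A sketch, assuming the headless prometheus-operated Service the operator creates:

```shell
# Forward the Prometheus UI/API to localhost:9090.
kubectl port-forward svc/prometheus-operated 9090 &
sleep 2
# List the health of every scrape target; look for the nginx-ingress one.
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'
```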

Let’s now expose the Grafana dashboard:

export POD_NAME=$(kubectl get pods -l "app=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 3000

Open localhost:3000 and you’ll see a login page.

The password secret was generated during the install. To get it, run:

kubectl get secret my-release-grafana \
-o jsonpath="{.data.admin-password}" \
| base64 --decode ; echo

The login user is admin. Upon logging in, you should see Grafana’s main screen and a few pre-installed dashboards.
