Let’s build out the infra for a company for fun . . . : Part 2a

Jack Strohm
8 min read · Jan 10, 2023


Kubernetes, Dashboard, Istio, & Echo Server


The previous article in this series can be found here.

For the first step in this project I wanted to spin up Kubernetes with a dashboard, the Istio service mesh, and a simple echo server. It’s a bit more than a bare-minimum “hello world,” but it’s a good starting point for where I want to take this project.

Over the next few articles I’ll cover how I run this both locally and in a managed environment, along with some shell scripts I put together to help me out with this project.


Local Development

There are a few choices for local k8s development. I played with three: Minikube, KinD, and K3D. In the end I went with K3D. The biggest constraint I ran into was my development environment, an early 2016 MacBook with a 1.2 GHz dual-core m5 and 8 GB of RAM, definitely not a powerhouse.

I think for folks with a beefier machine, Minikube is probably a great starting point. The tooling and support seem really nice. I used it a few years ago, but wanted to try some other options.

I started with KinD, which seemed a bit faster to work with, but it took too many resources. I settled on K3D (which is really K3s by Rancher wrapped in Docker containers) because it’s much less resource intensive. By the end of this project I might still have to switch to my Windows gaming machine, as it’s much beefier.

Prerequisites

First you should have Docker installed. Follow the directions on their website if you don’t have it already.

You will also need K3D and istioctl. On my Mac I use Homebrew to install them, but you can check out their respective websites for other options if that doesn’t work for you.
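
For reference, the Homebrew install is a one-liner for each (assuming the formula names are still k3d and istioctl; check their websites if these have moved):

% brew install k3d
% brew install istioctl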

Create a local cluster

Creating a cluster with K3D is pretty easy, but we will pass in some arguments because we need more than the default setup provides.

  • We want 1 server hosting everything; this is the control plane and our single node.
  • We want 0 agents, because my resources are limited and I’m not testing multiple nodes on my local machine.
  • We expose some ports on the load balancer that we will need in the future.
  • We disable Traefik, the ingress controller that ships with k3s, because we will be using Istio instead.

Your command should look something like this:

% k3d cluster create --servers 1 --agents 0 --port 9080:80@loadbalancer --port 9443:443@loadbalancer --api-port 6443 --k3s-arg "--disable=traefik@server:0"
INFO[0000] portmapping '9080:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] portmapping '9443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-k3s-default' (222f2da41110e0bd0801e331198a45b2e4e2a0c99c996aa7029cd311f306da34)
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0001] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0004] Starting new tools node...
INFO[0004] Starting Node 'k3d-k3s-default-tools'
INFO[0006] Starting cluster 'k3s-default'
INFO[0006] Starting servers...
INFO[0007] Starting Node 'k3d-k3s-default-server-0'
INFO[0029] All agents already running.
INFO[0029] Starting helpers...
INFO[0030] Starting Node 'k3d-k3s-default-serverlb'
INFO[0039] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0041] Cluster 'k3s-default' created successfully!
INFO[0042] You can now use it like this:
kubectl cluster-info

As it suggests, you can run the cluster-info command to make sure things look good; the output should look like this:

% kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:6443
CoreDNS is running at https://0.0.0.0:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
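
Since we asked for one server and zero agents, it’s also worth a quick sanity check that the cluster really is a single node acting as both control plane and worker; kubectl get nodes should list exactly one node (named something like k3d-k3s-default-server-0):

% kubectl get nodes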

Dashboard

Next we want to install the k8s Dashboard in order to easily see how the cluster is behaving.

% kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

In order to access it, we need to create a service account and generate a token for it that we can use to log in.

% cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
serviceaccount/admin-user created

Create the cluster role binding for that service account:

% cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Now you can generate a token for that user, which is used to log in:

% kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6InRGRm5ENjhDUVJDQXF1bjZNZzdsUXpOYWhSNEVDbTdEcnNwcy03VG15RDQifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNjczMzI0NjUxLCJpYXQiOjE2NzMzMjEwNTEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYmZkY2MzY2EtM2FiMy00MWMyLWExNTYtN2UwYmIzZjVkMjQ3In19LCJuYmYiOjE2NzMzMjEwNTEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.YwoRvHte8YRze1qeX3ABAAHybuy0Vm0ED2sWjBtSiFcedacZ4U75hrmFIp-4X15ehlCN_kukcCT8bUh-5F0QOcC96IjTK41MzDnY-2vJNuo3U3HdzWjAvR_gNp8ctx_933O7f8yzeZcYM37_mGQY-aKTQBKjvoFK25Sqc26y4vgNdKwEcQ5NI24-q2kL_ndGw4x_X939YI4lJkanE9Y2ZjEeQKwdQSHMzFwB-N2hDrO-7KXg93s65Y9jKeJ5xUa3odyInjtaSWY1UIk1oROyYsmj7SwPCczRELTok4bAy11CIjUl0FBu7cDiGYXlJpLvxXX2L6jQELbLdsE4KoXgRw

You then need to spin up the proxy:

% kubectl proxy
Starting to serve on 127.0.0.1:8001

and now visit http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ to access the dashboard and log in using the token generated above. Here you can see the status of the various k8s resources in flight.

Istio

Now we can install Istio using the istioctl we installed earlier:

% istioctl install --set profile=default -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.16. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/99uiMML96AmsXY5d6

We also need to enable Istio injection on the default namespace for our future services. This ensures Istio’s sidecar proxy is injected into pods deployed there.

% kubectl label namespace default istio-injection=enabled
namespace/default labeled
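
If you want to verify the label took, kubectl can print it as an extra column; the ISTIO-INJECTION column should read enabled. Keep in mind the sidecar is only injected into pods created after the label is applied, so anything already running in the namespace would need to be restarted.

% kubectl get namespace default -L istio-injection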

Echo Server

Now we install an echo server just to make sure it is all wired together. First the deployment:

% cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver-v1
  labels:
    app: echoserver
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
      version: v1
  template:
    metadata:
      labels:
        app: echoserver
        version: v1
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
EOF
deployment.apps/echoserver-v1 created

Next we install the service:

% cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  labels:
    app: echoserver
    service: echoserver
spec:
  selector:
    app: echoserver
  ports:
  - port: 80
    targetPort: 8080
    name: http
EOF
service/echoserver created

and finally we set up an Ingress (served by Istio’s ingress gateway) to expose it:

% cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echoserver
            port:
              number: 80
EOF
ingress.networking.k8s.io/gateway created
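
Before we start poking at it with curl, istioctl has a built-in analyzer that will flag common misconfigurations in the namespace; it’s a quick way to catch typos in the resources we just applied. A clean run reports that no validation issues were found.

% istioctl analyze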

Now we can check that everything spins up and is running with my favorite way to get a quick view of the entire cluster, the kubectl get all -A command:

% kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-7b7dc8d6f5-dkkn2 1/1 Running 0 29m
kube-system pod/coredns-b96499967-kv2lj 1/1 Running 0 29m
kubernetes-dashboard pod/dashboard-metrics-scraper-8c47d4b5d-nnl47 1/1 Running 0 21m
kubernetes-dashboard pod/kubernetes-dashboard-67bd8fc546-cg6q4 1/1 Running 0 21m
kube-system pod/metrics-server-668d979685-8qx68 1/1 Running 0 29m
default pod/echoserver-v1-fcd7dc747-n8s6p 1/1 Running 0 7m32s
istio-system pod/istiod-7f8c8bb8c8-l99f8 1/1 Running 0 3m21s
kube-system pod/svclb-istio-ingressgateway-ff6acbf2-kdslx 3/3 Running 0 2m47s
istio-system pod/istio-ingressgateway-546585745f-dngbm 1/1 Running 0 2m48s
default pod/echoserver-v1-fcd7dc747-97554 1/1 Terminating 0 7m32s
default pod/echoserver-v1-fcd7dc747-dmtgk 1/1 Terminating 0 7m32s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 30m
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 30m
kube-system service/metrics-server ClusterIP 10.43.203.14 <none> 443/TCP 30m
kubernetes-dashboard service/kubernetes-dashboard ClusterIP 10.43.112.106 <none> 443/TCP 21m
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.43.223.112 <none> 8000/TCP 21m
default service/echoserver ClusterIP 10.43.111.131 <none> 80/TCP 6m8s
istio-system service/istiod ClusterIP 10.43.69.161 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 3m20s
istio-system service/istio-ingressgateway LoadBalancer 10.43.84.242 172.23.0.3 15021:30414/TCP,80:31658/TCP,443:32687/TCP 2m47s

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/svclb-istio-ingressgateway-ff6acbf2 1 1 1 1 1 <none> 2m47s

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/local-path-provisioner 1/1 1 1 30m
kube-system deployment.apps/coredns 1/1 1 1 30m
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 21m
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 21m
kube-system deployment.apps/metrics-server 1/1 1 1 30m
istio-system deployment.apps/istiod 1/1 1 1 3m21s
istio-system deployment.apps/istio-ingressgateway 1/1 1 1 2m48s
default deployment.apps/echoserver-v1 1/1 1 1 7m32s

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/local-path-provisioner-7b7dc8d6f5 1 1 1 29m
kube-system replicaset.apps/coredns-b96499967 1 1 1 29m
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-8c47d4b5d 1 1 1 21m
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-67bd8fc546 1 1 1 21m
kube-system replicaset.apps/metrics-server-668d979685 1 1 1 29m
istio-system replicaset.apps/istiod-7f8c8bb8c8 1 1 1 3m21s
istio-system replicaset.apps/istio-ingressgateway-546585745f 1 1 1 2m48s
default replicaset.apps/echoserver-v1-fcd7dc747 1 1 1 7m32s

NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
istio-system horizontalpodautoscaler.autoscaling/istiod Deployment/istiod 4%/80% 1 5 1 3m20s
istio-system horizontalpodautoscaler.autoscaling/istio-ingressgateway Deployment/istio-ingressgateway 26%/80% 1 5 1 2m48s

It looks like it’s good to go, so let’s test it out. We will issue a curl to the Docker container on the exposed port that is mapped to our Istio load balancer, simply curl http://127.0.0.1:9080:

% curl http://127.0.0.1:9080
CLIENT VALUES:
client_address=('10.42.0.12', 43286) (10.42.0.12)
command=GET
path=/
real path=/
query=
request_version=HTTP/1.1

SERVER VALUES:
server_version=BaseHTTP/0.6
sys_version=Python/3.5.0
protocol_version=HTTP/1.0

HEADERS RECEIVED:
accept=*/*
host=127.0.0.1:9080
user-agent=curl/7.79.1
x-b3-sampled=0
x-b3-spanid=b498f9225af80f1a
x-b3-traceid=9c8fd0423f1bcf1ab498f9225af80f1a
x-envoy-attempt-count=1
x-envoy-decorator-operation=echoserver.default.svc.cluster.local:80/*
x-envoy-internal=true
x-envoy-peer-metadata=ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKHAoMSU5TVEFOQ0VfSVBTEgwaCjEwLjQyLjAuMTIKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjE2LjEKnAMKBkxBQkVMUxKRAyqOAwodCgNhcHASFhoUaXN0aW8taW5ncmVzc2dhdGV3YXkKEwoFY2hhcnQSChoIZ2F0ZXdheXMKFAoIaGVyaXRhZ2USCBoGVGlsbGVyCjYKKWluc3RhbGwub3BlcmF0b3IuaXN0aW8uaW8vb3duaW5nLXJlc291cmNlEgkaB3Vua25vd24KGQoFaXN0aW8SEBoOaW5ncmVzc2dhdGV3YXkKGQoMaXN0aW8uaW8vcmV2EgkaB2RlZmF1bHQKMAobb3BlcmF0b3IuaXN0aW8uaW8vY29tcG9uZW50EhEaD0luZ3Jlc3NHYXRld2F5cwoSCgdyZWxlYXNlEgcaBWlzdGlvCjkKH3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLW5hbWUSFhoUaXN0aW8taW5ncmVzc2dhdGV3YXkKLwojc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtcmV2aXNpb24SCBoGbGF0ZXN0CiIKF3NpZGVjYXIuaXN0aW8uaW8vaW5qZWN0EgcaBWZhbHNlChoKB01FU0hfSUQSDxoNY2x1c3Rlci5sb2NhbAovCgROQU1FEicaJWlzdGlvLWluZ3Jlc3NnYXRld2F5LTU0NjU4NTc0NWYtZG5nYm0KGwoJTkFNRVNQQUNFEg4aDGlzdGlvLXN5c3RlbQpdCgVPV05FUhJUGlJrdWJlcm5ldGVzOi8vYXBpcy9hcHBzL3YxL25hbWVzcGFjZXMvaXN0aW8tc3lzdGVtL2RlcGxveW1lbnRzL2lzdGlvLWluZ3Jlc3NnYXRld2F5ChcKEVBMQVRGT1JNX01FVEFEQVRBEgIqAAonCg1XT1JLTE9BRF9OQU1FEhYaFGlzdGlvLWluZ3Jlc3NnYXRld2F5
x-envoy-peer-metadata-id=router~10.42.0.12~istio-ingressgateway-546585745f-dngbm.istio-system~istio-system.svc.cluster.local
x-forwarded-for=10.42.0.1
x-forwarded-proto=http
x-request-id=5b92e668-4a59-40bf-bf0d-a302c3aa37c8

And with that, we are successful!
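
If you ever want to wipe the slate clean and run through this again, tearing the cluster down is just as easy as creating it; this removes the default k3s-default cluster along with its load balancer container:

% k3d cluster delete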

In my next article, I’ll go over the tooling I built to make this a bit easier and reproducible.


Jack Strohm

I’m a software engineer who’s been programming for almost 40 years. Professionally I’ve used C/C++, Java, and Go the most.