Let’s build out the infra for a company for fun . . . : Part 2b

Jack Strohm
5 min read · Jan 10, 2023


Tooling

Photo by Hunter Haley on Unsplash

In my previous article I talked about setting up local development and getting an echo service running in a minimal local environment.

The commands I presented were actually extracted from some simple scripts I built to aid me in testing. Each script can install or uninstall its component individually, letting me test variations quickly. They were patterned after the Unix System V init scripts: in my variation, each script can be passed an up or down command to install or uninstall the component. By default, with no arguments, they assume you want to bring the component up.
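
To give a sense of the pattern, here is a minimal sketch of the shape these scripts take. The placeholder bodies stand in for the real k3d, kubectl, and istioctl commands, and the actual scripts in the repository differ in their details:

#!/bin/sh
# Sketch of the up/down pattern shared by the tooling scripts.
# The real scripts wrap k3d, kubectl, and istioctl invocations
# instead of the placeholder comments below.

up() {
  echo "----- UP -----"
  # install the component here, e.g. kubectl apply -f component.yaml
}

down() {
  echo "----- DOWN -----"
  # uninstall the component here, e.g. kubectl delete -f component.yaml
}

case "${1:-up}" in      # no argument defaults to "up"
  up)   up ;;
  down) down ;;
  *)    echo "usage: $0 [up|down]" >&2; exit 1 ;;
esac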

The Git repository with this code can be found at https://github.com/hoyle1974/synthetic_infra.

These scripts will make it easier to spin up both local and managed cluster environments. In the future, I suspect these will be turned into Helm charts or maybe a Kubernetes Operator, but I didn’t want to be burdened with learning yet another tool for this project, so that I could make some realistic progress early on.

Clone the V1 branch like this:

% git clone https://github.com/hoyle1974/synthetic_infra.git
Cloning into 'synthetic_infra'...
remote: Enumerating objects: 18, done.
remote: Counting objects: 100% (18/18), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 18 (delta 4), reused 18 (delta 4), pack-reused 0
Receiving objects: 100% (18/18), done.
Resolving deltas: 100% (4/4), done.
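
Note that the command above checks out the repository’s default branch; if the V1 code lives on a separate branch or tag, switch to it after cloning (a sketch, assuming the ref is literally named V1):

% cd synthetic_infra
% git checkout V1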

Once you have that in place, you can reproduce the work of my previous article with just a few commands executed in the cloned directory. You will run the k3d.sh, dashboard.sh, istio.sh, and echo.sh scripts, in that order, like this:

k3d.sh up — Create the cluster

% ./k3d.sh up
----- UP -----
INFO[0000] portmapping '9080:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] portmapping '9443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-k3s-default' (222f2da41110e0bd0801e331198a45b2e4e2a0c99c996aa7029cd311f306da34)
INFO[0035] Created image volume k3d-k3s-default-images
INFO[0035] Starting new tools node...
INFO[0036] Starting Node 'k3d-k3s-default-tools'
INFO[0036] Creating node 'k3d-k3s-default-server-0'
INFO[0037] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0037] Using the k3d-tools node to gather environment information
INFO[0040] Starting new tools node...
INFO[0040] Starting Node 'k3d-k3s-default-tools'
INFO[0042] Starting cluster 'k3s-default'
INFO[0042] Starting servers...
INFO[0043] Starting Node 'k3d-k3s-default-server-0'
INFO[0058] All agents already running.
INFO[0058] Starting helpers...
INFO[0059] Starting Node 'k3d-k3s-default-serverlb'
INFO[0067] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0070] Cluster 'k3s-default' created successfully!
INFO[0070] You can now use it like this:
kubectl cluster-info

dashboard.sh up — Install the dashboard

% ./dashboard.sh up
----- UP -----
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

To create a token for logging into the dashboard, run this:
kubectl -n kubernetes-dashboard create token admin-user

and then try:
kubectl proxy

then go to: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

istio.sh up — Install Istio service mesh

% ./istio.sh up
----- UP -----
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.16. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/99uiMML96AmsXY5d6
namespace/default labeled
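
That final namespace/default labeled line is presumably the script marking the default namespace for automatic Istio sidecar injection, which is why the echo pods later show two containers each. Done by hand, the equivalent is typically:

% kubectl label namespace default istio-injection=enabled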

echo.sh up — Install the echo service

% ./echo.sh up
----- UP -----
deployment.apps/echoserver-v1 created
ingress.networking.k8s.io/gateway created
service/echoserver created
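
If you just want everything up in one shot, the four steps can also be chained in a single loop (a sketch that simply relies on each script accepting up as described earlier):

% for s in k3d dashboard istio echo; do ./$s.sh up; done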

And finally, we can check everything is up and ready for us to play with:

% kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-7b7dc8d6f5-l62bm 1/1 Running 0 8m52s
kubernetes-dashboard pod/dashboard-metrics-scraper-8c47d4b5d-kk2sf 1/1 Running 0 7m48s
kubernetes-dashboard pod/kubernetes-dashboard-67bd8fc546-b8wzz 1/1 Running 0 7m49s
istio-system pod/istiod-7f8c8bb8c8-n26vb 1/1 Running 0 7m
kube-system pod/svclb-istio-ingressgateway-b140fd65-748ps 3/3 Running 0 6m12s
istio-system pod/istio-ingressgateway-546585745f-lchbh 1/1 Running 0 6m13s
default pod/echoserver-v1-fcd7dc747-h76r7 0/2 PodInitializing 0 5m7s
default pod/echoserver-v1-fcd7dc747-lq79p 0/2 PodInitializing 0 5m7s
default pod/echoserver-v1-fcd7dc747-vvdrf 0/2 PodInitializing 0 5m7s
istio-system pod/istio-ingressgateway-546585745f-bdj49 1/1 Running 0 2m10s
kube-system pod/coredns-b96499967-xv272 1/1 Running 0 8m52s
kube-system pod/metrics-server-668d979685-8mttf 1/1 Running 0 8m52s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 9m10s
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 9m5s
kube-system service/metrics-server ClusterIP 10.43.22.238 <none> 443/TCP 9m4s
kubernetes-dashboard service/kubernetes-dashboard ClusterIP 10.43.149.251 <none> 443/TCP 7m50s
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.43.248.87 <none> 8000/TCP 7m50s
istio-system service/istiod ClusterIP 10.43.60.123 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 7m
istio-system service/istio-ingressgateway LoadBalancer 10.43.248.20 172.23.0.3 15021:31029/TCP,80:30354/TCP,443:30494/TCP 6m13s
default service/echoserver ClusterIP 10.43.75.90 <none> 80/TCP 5m8s

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/svclb-istio-ingressgateway-b140fd65 1 1 1 1 1 <none> 6m13s

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/local-path-provisioner 1/1 1 1 9m5s
kube-system deployment.apps/coredns 1/1 1 1 9m5s
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 7m49s
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 7m50s
istio-system deployment.apps/istiod 1/1 1 1 7m1s
default deployment.apps/echoserver-v1 0/3 3 0 5m8s
istio-system deployment.apps/istio-ingressgateway 2/2 2 2 6m14s
kube-system deployment.apps/metrics-server 1/1 1 1 9m4s

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/local-path-provisioner-7b7dc8d6f5 1 1 1 8m53s
kube-system replicaset.apps/coredns-b96499967 1 1 1 8m53s
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-8c47d4b5d 1 1 1 7m49s
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-67bd8fc546 1 1 1 7m50s
istio-system replicaset.apps/istiod-7f8c8bb8c8 1 1 1 7m1s
default replicaset.apps/echoserver-v1-fcd7dc747 3 3 0 5m8s
istio-system replicaset.apps/istio-ingressgateway-546585745f 2 2 2 6m14s
kube-system replicaset.apps/metrics-server-668d979685 1 1 1 8m53s

NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
istio-system horizontalpodautoscaler.autoscaling/istiod Deployment/istiod 26%/80% 1 5 1 7m2s
istio-system horizontalpodautoscaler.autoscaling/istio-ingressgateway Deployment/istio-ingressgateway 49%/80% 1 5 2 6m15s
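
Note that the echoserver-v1 pods above are still in PodInitializing while their Istio sidecars come up. If you want to block until the deployment is actually serving before testing, a standard kubectl wait does the job (the deployment name is taken from the output above):

% kubectl wait --for=condition=available --timeout=120s deployment/echoserver-v1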

and once the echoserver pods have finished initializing and everything looks good, we can hit the echo service like this:

% curl http://127.0.0.1:9080
CLIENT VALUES:
client_address=('127.0.0.6', 60451) (127.0.0.6)
command=GET
path=/
real path=/
query=
request_version=HTTP/1.1

SERVER VALUES:
server_version=BaseHTTP/0.6
sys_version=Python/3.5.0
protocol_version=HTTP/1.0

HEADERS RECEIVED:
accept=*/*
host=127.0.0.1:9080
user-agent=curl/7.79.1
x-b3-parentspanid=44ee52529532d401
x-b3-sampled=0
x-b3-spanid=7b532f46823e8e83
x-b3-traceid=8d8349677ac352f644ee52529532d401
x-envoy-attempt-count=1
x-envoy-internal=true
x-forwarded-client-cert=By=spiffe://cluster.local/ns/default/sa/default;Hash=6d5e342cab65abda2a62ea00bd754c872893c79fd5fc78bd74068e1cb18f8b56;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
x-forwarded-for=10.42.0.1
x-forwarded-proto=http
x-request-id=4ffcc3a9-a72a-44da-83db-08d6c3b87452

We can use the scripts to take down individual components by passing in a down argument, generally in the reverse of the order they were brought up (see the sketch below). When we are done, we can tear the whole thing down by running the k3d.sh script with a down argument.
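
For example, the application-level pieces can be removed one at a time (a sketch, assuming each script handles down exactly as described above):

% ./echo.sh down       # remove the echo service
% ./istio.sh down      # uninstall the Istio service mesh
% ./dashboard.sh down  # remove the Kubernetes dashboard

Deleting the k3d cluster itself takes everything else with it: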

% ./k3d.sh down
----- DOWN -----
INFO[0000] Stopping cluster 'k3s-default'
INFO[0012] Stopped cluster 'k3s-default'
INFO[0000] Deleting cluster 'k3s-default'
INFO[0000] Deleting 2 attached volumes...
WARN[0000] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually
INFO[0000] Removing cluster details from default kubeconfig...
INFO[0000] Removing standalone kubeconfig file (if there is one)...
INFO[0000] Successfully deleted cluster k3s-default!

Next up we will get this all up and running in a managed Kubernetes environment . . .


Jack Strohm

I’m a software engineer who’s been programming for almost 40 years. Professionally, I’ve used C/C++, Java, and Go the most.