Let’s build out the infra for a company for fun . . . : Part 2c

Jack Strohm
6 min read · Jan 10, 2023


Managed Kubernetes

Photo by Eren Namlı on Unsplash

Now that we have built some simple tooling to make our lives easier (see my previous article), we can move on to the meat and potatoes of our project: spinning this all up in the real world, in a managed cloud environment.

The first step is to decide which vendor to use. There are quite a few choices even beyond the big three (Amazon, Google, and Azure). I ended up going with Digital Ocean. I like them for a few reasons:

  • Low cost, especially for smaller projects
  • Simple choices that don’t get confusing. We don’t need anything fancier than a load balancer and a cluster.
  • Up-front pricing, with a figure at the top of the console showing your current estimated daily spend.

If you want to read a review of these four, I found this site useful. If you want to support me, this Digital Ocean link is a referral that will help fund my explorations.

So let’s get started.

Clone the Repo

% git clone https://github.com/hoyle1974/synthetic_infra.git
Cloning into 'synthetic_infra'...
remote: Enumerating objects: 21, done.
remote: Counting objects: 100% (21/21), done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 21 (delta 6), reused 20 (delta 5), pack-reused 0
Receiving objects: 100% (21/21), done.
Resolving deltas: 100% (6/6), done.

Create cluster in Digital Ocean

If you don’t already have a Digital Ocean account, create one using this referral link. Choose Create/Kubernetes, then pick a region close to you or use the one they select. For my cluster I picked:

  • Fixed Size
  • Basic nodes
  • Changed nodes from 3 to 2

Then I clicked “Create Cluster”. Follow their directions as the cluster initializes to point your kubectx/kubectl tools at your new cluster. I went with their “doctl” tool, but you can also configure things manually; a rough sketch of the doctl flow is below.
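If you go the doctl route, it looks roughly like this (a sketch: “synthetic-infra” is a placeholder for whatever you named your cluster, and doctl will prompt you for a Digital Ocean API token on first use):

% doctl auth init
% doctl kubernetes cluster kubeconfig save synthetic-infra

The second command downloads the cluster’s kubeconfig and makes it your current kubectl context, so the commands that follow just work.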

It will take a few minutes to finish provisioning, but soon you can run the kubectl get nodes command to see the status of your cluster.

% kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
pool-5eevfiyi8-m4lzi   Ready    <none>   32s   v1.25.4
pool-5eevfiyi8-m4lzv   Ready    <none>   35s   v1.25.4
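
You can also check provisioning status from the command line (again assuming doctl is authenticated as above):

% doctl kubernetes cluster list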

You can also use the Digital Ocean web page to see your cluster’s resources. The Kubernetes dashboard is installed automatically, so we can skip that step; you can reach it from the Digital Ocean page detailing your cluster.

Now our cluster is ready to proceed.

A note on cost: I was being charged $48 a month for my 2-node cluster before I turned on the load balancer. What is nice is that billing is prorated, so if you destroy the cluster after running it for less than a day, your bill will probably be only a couple of dollars ($48 / 30 days is about $1.60 a day).

Install Istio service mesh

Your first step was cloning the git repo with my helper scripts. Now you just need to run the istio.sh script from that repo to install Istio into the cluster.

% ./istio.sh up
----- UP -----
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.16. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/99uiMML96AmsXY5d6
namespace/default labeled

This enables service/istio-ingressgateway, which is of type LoadBalancer. That makes Digital Ocean spin up a load balancer and point it at your cluster; note that this is an extra charge that is applied to your bill automatically.
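For what it’s worth, a script like istio.sh doesn’t need to do much. The output above is consistent with an install along these lines (a sketch, assuming istioctl is on your PATH; the “default” profile is my assumption, not something the script is guaranteed to use):

# Install Istio, then label the default namespace so new pods
# get the Envoy sidecar injected automatically.
% istioctl install --set profile=default -y
% kubectl label namespace default istio-injection=enabled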

Install Echo service

Now that the prerequisites are installed, let’s install the echo service.

% ./echo.sh up
----- UP -----
deployment.apps/echoserver-v1 created
ingress.networking.k8s.io/gateway created
service/echoserver created
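
Before moving on, you can sanity-check the three resources named in the script’s output:

% kubectl get deployment echoserver-v1
% kubectl get service echoserver
% kubectl get ingress gateway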

Test out the end result

First you need to wait for the load balancer to be up and running; it should look like this:

% kubectl get service -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.245.151.218   143.244.197.109   15021:31793/TCP,80:31612/TCP,443:32050/TCP   4m5s
istiod                 ClusterIP      10.245.180.187   <none>            15010/TCP,15012/TCP,443/TCP,15014/TCP        4m15s
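
While Digital Ocean is still provisioning, the EXTERNAL-IP column reads <pending>. kubectl’s standard --watch flag will print a new line the moment an address is assigned:

% kubectl get service istio-ingressgateway -n istio-system --watch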

What you are looking for is the EXTERNAL-IP. Once it appears, check that nothing is in a pending or errored state and you are ready to hit the echo server.

% kubectl get all -A
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
default        pod/echoserver-v1-77d4665d6-dpgxp          2/2     Running   0          3m34s
default        pod/echoserver-v1-77d4665d6-nl4h4          2/2     Running   0          3m34s
default        pod/echoserver-v1-77d4665d6-vvt72          2/2     Running   0          3m34s
istio-system   pod/istio-ingressgateway-6785fcd48-xn7ql   1/1     Running   0          5m24s
istio-system   pod/istiod-65448977c9-5dbj2                1/1     Running   0          5m34s
kube-system    pod/cilium-8gmp9                           1/1     Running   0          9m49s
kube-system    pod/cilium-operator-67bdd94449-52tjb       1/1     Running   0          12m
kube-system    pod/cilium-v2v4x                           1/1     Running   0          9m52s
kube-system    pod/coredns-7697897646-7dz22               1/1     Running   0          9m
kube-system    pod/coredns-7697897646-bcdsk               1/1     Running   0          9m
kube-system    pod/cpc-bridge-proxy-8rhkk                 1/1     Running   0          9m11s
kube-system    pod/cpc-bridge-proxy-xbjvj                 1/1     Running   0          9m11s
kube-system    pod/csi-do-node-f6b7v                      2/2     Running   0          8m49s
kube-system    pod/csi-do-node-kph69                      2/2     Running   0          8m49s
kube-system    pod/do-node-agent-jz28v                    1/1     Running   0          8m38s
kube-system    pod/do-node-agent-rvjrn                    1/1     Running   0          8m38s
kube-system    pod/konnectivity-agent-jjl5z               1/1     Running   0          9m22s
kube-system    pod/konnectivity-agent-qx4p8               1/1     Running   0          9m22s
kube-system    pod/kube-proxy-spj6r                       1/1     Running   0          9m52s
kube-system    pod/kube-proxy-x8l9s                       1/1     Running   0          9m49s

NAMESPACE      NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                      AGE
default        service/echoserver             ClusterIP      10.245.151.226   <none>            80/TCP                                       3m35s
default        service/kubernetes             ClusterIP      10.245.0.1       <none>            443/TCP                                      12m
istio-system   service/istio-ingressgateway   LoadBalancer   10.245.151.218   143.244.197.109   15021:31793/TCP,80:31612/TCP,443:32050/TCP   5m24s
istio-system   service/istiod                 ClusterIP      10.245.180.187   <none>            15010/TCP,15012/TCP,443/TCP,15014/TCP        5m34s
kube-system    service/kube-dns               ClusterIP      10.245.0.10      <none>            53/UDP,53/TCP,9153/TCP                       9m1s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/cilium                2         2         2       2            2           <none>                   12m
kube-system   daemonset.apps/cpc-bridge-proxy      2         2         2       2            2           <none>                   9m12s
kube-system   daemonset.apps/csi-do-node           2         2         2       2            2           <none>                   8m50s
kube-system   daemonset.apps/do-node-agent         2         2         2       2            2           kubernetes.io/os=linux   8m39s
kube-system   daemonset.apps/konnectivity-agent    2         2         2       2            2           <none>                   9m23s
kube-system   daemonset.apps/kube-proxy            2         2         2       2            2           <none>                   12m

NAMESPACE      NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
default        deployment.apps/echoserver-v1          3/3     3            3           3m36s
istio-system   deployment.apps/istio-ingressgateway   1/1     1            1           5m26s
istio-system   deployment.apps/istiod                 1/1     1            1           5m36s
kube-system    deployment.apps/cilium-operator        1/1     1            1           12m
kube-system    deployment.apps/coredns                2/2     2            2           9m2s

NAMESPACE      NAME                                             DESIRED   CURRENT   READY   AGE
default        replicaset.apps/echoserver-v1-77d4665d6          3         3         3       3m36s
istio-system   replicaset.apps/istio-ingressgateway-6785fcd48   1         1         1       5m26s
istio-system   replicaset.apps/istiod-65448977c9                1         1         1       5m36s
kube-system    replicaset.apps/cilium-operator-67bdd94449       1         1         1       12m
kube-system    replicaset.apps/coredns-7697897646               2         2         2       9m2s

NAMESPACE      NAME                                                        REFERENCE                         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
istio-system   horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   <unknown>/80%   1         5         1          5m25s
istio-system   horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 <unknown>/80%   1         5         1          5m36s
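
If you would rather script this check than eyeball it, a standard kubectl jsonpath query (nothing specific to my scripts) pulls the load balancer’s address directly:

% kubectl get service istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
143.244.197.109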

A simple curl http://[ip]/, where [ip] is the IP address of the load balancer, should give you results:

% curl http://143.244.197.109
CLIENT VALUES:
client_address=('127.0.0.6', 56259) (127.0.0.6)
command=GET
path=/
real path=/
query=
request_version=HTTP/1.1

SERVER VALUES:
server_version=BaseHTTP/0.6
sys_version=Python/3.5.0
protocol_version=HTTP/1.0

HEADERS RECEIVED:
accept=*/*
host=143.244.197.109
user-agent=curl/7.79.1
x-b3-parentspanid=27fd24ac8e8029cd
x-b3-sampled=0
x-b3-spanid=37ddb60deb9a0157
x-b3-traceid=dca9e139be6cbb7e27fd24ac8e8029cd
x-envoy-attempt-count=1
x-envoy-external-address=161.35.82.156
x-forwarded-client-cert=By=spiffe://cluster.local/ns/default/sa/default;Hash=1c839de74f5938f3b5161ca537899bd8bcf70f30bae1e09d4b9a0218084fd8b7;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
x-forwarded-for=161.35.82.156
x-forwarded-proto=http
x-request-id=3b57b35e-4b41-4b5c-b3e0-4036a0bafcb6

And that is it! Now for the final stage. PLEASE DO NOT FORGET!

Tear it all down

This is the most important step: delete your cluster when you are finished with it so you don’t keep incurring charges. Under the Actions menu for your cluster, choose Destroy and follow the directions. This should also tear down the load balancer that was created.
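
If you prefer the command line, doctl can do the destroy as well (again, “synthetic-infra” stands in for your cluster’s name, and doctl will ask for confirmation). Either way, double-check in the web console that the load balancer went away with it:

% doctl kubernetes cluster delete synthetic-infra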

Next steps

Now that I’ve got the basic steps of building out a cluster working locally and remotely, it’s time to figure out what comes next. I’m guessing it will be installing monitoring tools like Grafana and Prometheus. Until next time.


Jack Strohm

I’m a software engineer who’s been programming for almost 40 years. Professionally I’ve used C/C++, Java, and Go the most.