Let’s build out the infra for a company for fun . . . : Part 5

Jack Strohm
9 min read · Feb 5, 2023

PingPong

In our last chapter we set up a local registry for use with k3d. Now we need some metrics to scrape.

We have used a prebuilt echo service image in previous tutorials. This time we will build our own, slightly more exciting echo service that I call pingpong. Pingpong consists of two services running the same binary with different arguments. One, called Ping, randomly sends messages to the other, called Pong; Pong likewise randomly sends messages back to Ping. Each service collects metrics and publishes them.
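
The details live in the repo, but conceptually each side's send loop is just a timer plus an HTTP call to its peer. As a rough, hypothetical sketch (the function name, flags, and peer URL here are illustrative, not taken from the actual code):

// Hypothetical sketch of a pingpong send loop; the real code in the repo differs.
// Assumes the standard log, math/rand, net/http, strings, and time packages,
// plus the pingpongProcessed counter defined later in this post.
func sendLoop(peerURL, phrase string) {
    for {
        // Wait a random amount of time, then send our phrase to the peer.
        time.Sleep(time.Duration(rand.Intn(5000)) * time.Millisecond)
        resp, err := http.Post(peerURL, "text/plain", strings.NewReader(phrase))
        if err != nil {
            log.Printf("send failed: %v", err)
            continue
        }
        resp.Body.Close()
        pingpongProcessed.Inc() // count every processed event
    }
}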

We will build out the executable, create tiny docker images, and then make k8s assets needed for our service to function.

Collecting metrics in Go is pretty simple. First you will need some imports:

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

and then you can create a variable to track the metrics like this:

var (
    pingpongProcessed = promauto.NewCounter(prometheus.CounterOpts{
        Name: "pingpong_ops_total",
        Help: "The total number of processed events",
    })
)

You can increment the metric simply by calling the Inc() method on the counter:

pingpongProcessed.Inc()

And finally, you need to expose the metrics with an HTTP listener on the /metrics endpoint:

http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":80", nil)
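
Put together, a minimal, self-contained version of the metrics side looks roughly like this. This is just a sketch of the pattern above, not the actual pingpong source; the real service also runs the ping/pong messaging loop, and the images built later listen on port 8080 rather than 80:

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counter tracking how many pingpong events we have processed.
var pingpongProcessed = promauto.NewCounter(prometheus.CounterOpts{
    Name: "pingpong_ops_total",
    Help: "The total number of processed events",
})

func main() {
    // Count one event so the metric shows up right away.
    pingpongProcessed.Inc()

    // Expose every registered metric on /metrics.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":80", nil))
}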

You can see the code for pingpong here: https://github.com/hoyle1974/synthetic_infra/tree/v2/pingpong

You will also find Dockerfiles for building some very tiny Docker images for those services, each under 4 megabytes.
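
The Dockerfiles follow the usual multi-stage pattern for tiny Go images. The sketch below is condensed from the build output shown later in this post (see the repository for the exact files): build and UPX-compress the binary in a golang:alpine stage, then copy only the binary, CA certificates, and user database into a scratch image:

# Condensed sketch of Dockerfile.ping; the real file in the repo splits these steps up.
FROM golang:1.20rc3-alpine AS builder
RUN apk update && apk add --no-cache git ca-certificates upx && update-ca-certificates
RUN adduser --disabled-password --gecos "" --home "/nonexistent" \
    --shell "/sbin/nologin" --no-create-home --uid 10001 appuser
WORKDIR $GOPATH/src/mypackage/myapp/
COPY . .
RUN cd pingpong && go mod download && go mod verify
RUN cd pingpong && GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/pingpong
RUN upx /go/bin/pingpong

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
COPY --from=builder /go/bin/pingpong /go/bin/pingpong
USER appuser:appuser
EXPOSE 8080
ENTRYPOINT ["/go/bin/pingpong", "--phrase", "ping", "--port", "8080"]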

Next we need to take those images, push them to the local registry, and create the services. First, build the images:

docker build --tag ping . -f ./Dockerfile.ping
docker build --tag pong . -f ./Dockerfile.pong

We tag and push them to the local registry:

docker tag ping:latest k3d-myregistry.localhost:12345/ping:v1
docker tag pong:latest k3d-myregistry.localhost:12345/pong:v1
docker push k3d-myregistry.localhost:12345/ping:v1
docker push k3d-myregistry.localhost:12345/pong:v1

We then create the needed k8s assets:

kubectl create -f ping.yaml
kubectl create -f pong.yaml

These YAML files do the following. First, they create the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ping
  labels:
    app: ping
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ping
      version: v1
  template:
    metadata:
      labels:
        app: ping
        version: v1
    spec:
      containers:
      - name: ping
        image: k3d-myregistry.localhost:12345/ping:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000

Then they create the Istio ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ping
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ping
            port:
              number: 8080

Define the service:

apiVersion: v1
kind: Service
metadata:
  name: ping
  labels:
    app: ping
    service: ping
spec:
  selector:
    app: ping
  ports:
  - port: 8080
    targetPort: 8080
    name: http

And finally, they create the ServiceMonitor that tells Prometheus to scrape the metrics:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ping
  labels:
    name: ping
spec:
  selector:
    matchLabels:
      app: ping
  endpoints:
  - port: http
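
Once everything is applied, a quick sanity check that the resources exist and the pod is running can look something like this (illustrative kubectl invocations; adjust the namespace if you deploy elsewhere):

# Confirm the deployment, service, and ServiceMonitor were created
kubectl get deployment,service,servicemonitor ping
# Confirm the ping pod is up
kubectl get pods -l app=ping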

This is all wrapped up in my pingpong.sh script.
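
In essence, the script just chains together the commands above. A simplified sketch (the real script in the repo may differ, for example in how it handles the up/down cases):

#!/bin/bash
# Simplified sketch of pingpong.sh: build, push, and deploy both services.
set -e

docker build --tag ping . -f ./Dockerfile.ping
docker build --tag pong . -f ./Dockerfile.pong

docker tag ping:latest k3d-myregistry.localhost:12345/ping:v1
docker tag pong:latest k3d-myregistry.localhost:12345/pong:v1
docker push k3d-myregistry.localhost:12345/ping:v1
docker push k3d-myregistry.localhost:12345/pong:v1

kubectl create -f ping.yaml
kubectl create -f pong.yaml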

Tie it all together

We can then use the scripts found here to build out our latest k8s cluster locally like so:

$ ./k3d.sh
----- UP -----
INFO[0000] Creating node 'k3d-myregistry.localhost'
INFO[0001] Pulling image 'docker.io/library/registry:2'
INFO[0002] Successfully created registry 'k3d-myregistry.localhost'
INFO[0002] Starting Node 'k3d-myregistry.localhost'
INFO[0002] Successfully created registry 'k3d-myregistry.localhost'
# You can now use the registry like this (example):
# 1. create a new cluster that uses this registry
k3d cluster create --registry-use k3d-myregistry.localhost:12345
# 2. tag an existing local image to be pushed to the registry
docker tag nginx:latest k3d-myregistry.localhost:12345/mynginx:v0.1
# 3. push that image to the registry
docker push k3d-myregistry.localhost:12345/mynginx:v0.1
# 4. run a pod that uses this image
kubectl run mynginx --image k3d-myregistry.localhost:12345/mynginx:v0.1
INFO[0000] portmapping '9080:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] portmapping '9443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-k3s-default' (0536304a33391f07b8eb02d154497c48d89f8edbef445b6afcc13ef281ed449b)
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1'
INFO[0002] Starting Node 'k3d-k3s-default-tools'
INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6'
INFO[0009] Using the k3d-tools node to gather environment information
INFO[0009] HostIP: using network gateway 192.168.192.1 address
INFO[0009] Starting cluster 'k3s-default'
INFO[0009] Starting servers...
INFO[0009] Starting Node 'k3d-k3s-default-server-0'
INFO[0015] All agents already running.
INFO[0015] Starting helpers...
INFO[0015] Starting Node 'k3d-k3s-default-serverlb'
INFO[0021] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0025] Cluster 'k3s-default' created successfully!
INFO[0025] You can now use it like this:
kubectl cluster-info

We install Prometheus:

$ ./prometheus.sh
----- UP -----
kube-prometheus-main already downloaded
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
namespace/monitoring created
No resources found
alertmanager.monitoring.coreos.com/main created
networkpolicy.networking.k8s.io/alertmanager-main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
networkpolicy.networking.k8s.io/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes-darwin created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
networkpolicy.networking.k8s.io/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
networkpolicy.networking.k8s.io/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
networkpolicy.networking.k8s.io/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
networkpolicy.networking.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
Error from server (AlreadyExists): error when creating "manifests/prometheusAdapter-apiService.yaml": apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io" already exists
Error from server (AlreadyExists): error when creating "manifests/prometheusAdapter-clusterRoleAggregatedMetricsReader.yaml": clusterroles.rbac.authorization.k8s.io "system:aggregated-metrics-reader" already exists
Prometheus expose: kubectl port-forward --address=192.168.181.99 svc/prometheus-operated 9090:9090
Grafana expose: kubectl port-forward --address=192.168.181.99 svc/grafana 3000 -n monitoring
Grafana user/pass is admin/admin

We install Istio:

$ ./istio.sh
----- UP -----
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Thank you for installing Istio 1.16. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/99uiMML96AmsXY5d6
namespace/default labeled
Making this installation the default for injection and validation.

And the dashboard:

$ ./dashboard.sh
----- UP -----
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
gateway.networking.istio.io/dashboard-gateway created
virtualservice.networking.istio.io/kubernetes-dashboard created
To create a token for logging into the dashboard, run this:
kubectl -n kubernetes-dashboard create token admin-user
and then try:
kubectl proxy
then go to: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
If you have this on a remote machine try creating a DNS entry to that machine for: dashboard.example.com
and then try:
https://dashboard.example.com:9443/#/login

And finally, pingpong:

$ ./pingpong.sh
----- UP -----
Sending build context to Docker daemon 24.22MB
Step 1/19 : FROM golang:1.20rc3-alpine@sha256:d78cd58c598fa1f0c92046f61fde32d739781e036e3dc7ccf8fdb50129243dd8 as builder
docker.io/library/golang:1.20rc3-alpine@sha256:d78cd58c598fa1f0c92046f61fde32d739781e036e3dc7ccf8fdb50129243dd8: Pulling from library/golang
Digest: sha256:d78cd58c598fa1f0c92046f61fde32d739781e036e3dc7ccf8fdb50129243dd8
Status: Downloaded newer image for golang:1.20rc3-alpine@sha256:d78cd58c598fa1f0c92046f61fde32d739781e036e3dc7ccf8fdb50129243dd8
---> 3af2636ea21b
Step 2/19 : RUN apk update && apk add --no-cache git ca-certificates && update-ca-certificates && apk add --no-cache upx
---> Using cache
---> ae58762d17ba
Step 3/19 : ENV USER=appuser
---> Using cache
---> 168dc8c60014
Step 4/19 : ENV UID=10001
---> Using cache
---> ffa5751f4a8a
Step 5/19 : RUN adduser --disabled-password --gecos "" --home "/nonexistent" --shell "/sbin/nologin" --no-create-home --uid "${UID}" "${USER}"
---> Using cache
---> 0fbcdeb07a68
Step 6/19 : WORKDIR $GOPATH/src/mypackage/myapp/
---> Using cache
---> 6484fcef94b1
Step 7/19 : COPY . .
---> bf9b1dee842c
Step 8/19 : RUN cd pingpong && go mod download
---> Running in 096fed265e86
Removing intermediate container 096fed265e86
---> 071312e40395
Step 9/19 : RUN cd pingpong && go mod verify
---> Running in 782a37eef83b
all modules verified
Removing intermediate container 782a37eef83b
---> afc2a59a70c5
Step 10/19 : RUN cd pingpong && GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/pingpong
---> Running in 1ea7c53bd088
Removing intermediate container 1ea7c53bd088
---> ee43ea6f09ec
Step 11/19 : RUN upx /go/bin/pingpong
---> Running in c9bae4c6a522
Ultimate Packer for eXecutables
Copyright (C) 1996 - 2022
UPX 4.0.1 Markus Oberhumer, Laszlo Molnar & John Reiser Nov 16th 2022
File size Ratio Format Name
-------------------- ------ ----------- -----------
8912896 -> 3404100 38.19% linux/amd64 pingpong
Packed 1 file.
Removing intermediate container c9bae4c6a522
---> 9e874a0e1ff4
Step 12/19 : FROM scratch
--->
Step 13/19 : COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
---> Using cache
---> 3151d9d88e49
Step 14/19 : COPY --from=builder /etc/passwd /etc/passwd
---> Using cache
---> f0cbad1f687f
Step 15/19 : COPY --from=builder /etc/group /etc/group
---> Using cache
---> 219f9e21a2df
Step 16/19 : COPY --from=builder /go/bin/pingpong /go/bin/pingpong
---> 8257c22122ee
Step 17/19 : USER appuser:appuser
---> Running in eceb099023f5
Removing intermediate container eceb099023f5
---> 42e4f7513231
Step 18/19 : EXPOSE 8080
---> Running in 6cae56d5a1ea
Removing intermediate container 6cae56d5a1ea
---> 3c22ea825df8
Step 19/19 : ENTRYPOINT ["/go/bin/pingpong", "--phrase", "ping", "--port", "8080"]
---> Running in 1e5ca43ef64f
Removing intermediate container 1e5ca43ef64f
---> 6ce63833339a
Successfully built 6ce63833339a
Successfully tagged ping:latest
Sending build context to Docker daemon 24.22MB
Step 1/19 : FROM golang:1.20rc3-alpine@sha256:d78cd58c598fa1f0c92046f61fde32d739781e036e3dc7ccf8fdb50129243dd8 as builder
---> 3af2636ea21b
Step 2/19 : RUN apk update && apk add --no-cache git ca-certificates && update-ca-certificates && apk add --no-cache upx
---> Using cache
---> ae58762d17ba
Step 3/19 : ENV USER=appuser
---> Using cache
---> 168dc8c60014
Step 4/19 : ENV UID=10001
---> Using cache
---> ffa5751f4a8a
Step 5/19 : RUN adduser --disabled-password --gecos "" --home "/nonexistent" --shell "/sbin/nologin" --no-create-home --uid "${UID}" "${USER}"
---> Using cache
---> 0fbcdeb07a68
Step 6/19 : WORKDIR $GOPATH/src/mypackage/myapp/
---> Using cache
---> 6484fcef94b1
Step 7/19 : COPY . .
---> Using cache
---> bf9b1dee842c
Step 8/19 : RUN cd pingpong && go mod download
---> Using cache
---> 071312e40395
Step 9/19 : RUN cd pingpong && go mod verify
---> Using cache
---> afc2a59a70c5
Step 10/19 : RUN cd pingpong && GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/pingpong
---> Using cache
---> ee43ea6f09ec
Step 11/19 : RUN upx /go/bin/pingpong
---> Using cache
---> 9e874a0e1ff4
Step 12/19 : FROM scratch
--->
Step 13/19 : COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
---> Using cache
---> 3151d9d88e49
Step 14/19 : COPY --from=builder /etc/passwd /etc/passwd
---> Using cache
---> f0cbad1f687f
Step 15/19 : COPY --from=builder /etc/group /etc/group
---> Using cache
---> 219f9e21a2df
Step 16/19 : COPY --from=builder /go/bin/pingpong /go/bin/pingpong
---> Using cache
---> 8257c22122ee
Step 17/19 : USER appuser:appuser
---> Using cache
---> 42e4f7513231
Step 18/19 : EXPOSE 8080
---> Using cache
---> 3c22ea825df8
Step 19/19 : ENTRYPOINT ["/go/bin/pingpong", "--phrase", "pong", "--port", "8080"]
---> Running in 3d2fbe0ab81e
Removing intermediate container 3d2fbe0ab81e
---> c76ff0545093
Successfully built c76ff0545093
Successfully tagged pong:latest
The push refers to repository [k3d-myregistry.localhost:12345/ping]
d46afc74a9da: Pushed
3242d7cd38f4: Pushed
1e75309fb00a: Pushed
c24d32add15d: Pushed
v1: digest: sha256:0456c3f8619ef42fef40f2211415b9e629184a03602082fd38d53b5e0364fd04 size: 1152
The push refers to repository [k3d-myregistry.localhost:12345/pong]
d46afc74a9da: Mounted from ping
3242d7cd38f4: Mounted from ping
1e75309fb00a: Mounted from ping
c24d32add15d: Mounted from ping
v1: digest: sha256:6d3f28071f539381450a12908bf2c54277cb2b8a4337aa4801698f7681aab81f size: 1152
deployment.apps/ping created
ingress.networking.k8s.io/gateway-ping created
service/ping created
servicemonitor.monitoring.coreos.com/ping created
deployment.apps/pong created
ingress.networking.k8s.io/gateway-pong created
service/pong created
servicemonitor.monitoring.coreos.com/pong created

And now I can expose Grafana:

~/synthetic_infra$ ip=`hostname -I | awk '{print $1}'`
~/synthetic_infra$ kubectl port-forward --address=$ip svc/grafana 3000 -n monitoring
Forwarding from XXX.XXX.XXX.XX:3000 -> 3000

Then, in a local web browser, I can go to that machine on port 3000. Log in with admin/admin, set a new password, go to Explore, and graph the rate of the pingpong_ops_total metric. You should see a graph of the message rate between Ping and Pong.
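
If you want a starting point for the Explore query, the per-second rate of the counter over a short window works; the window length below is just an example:

rate(pingpong_ops_total[5m])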

There we have it! The next lesson will cover a self-hosted Git repository!

Jack Strohm

I’m a software engineer who’s been programming for almost 40 years. Professionally I’ve used C/C++, Java, and Go the most.