Important features in a development-grade Kubernetes cluster

Nithish Raja.G
Published in cverse-ai
4 min read · Jan 12, 2020

Set up docker registry

When creating a new deployment in Kubernetes using a custom image, the image needs to be present on every worker node. One way to achieve this is to push the image to Docker Hub. However, if internet access is not available, running a private docker registry is the best option.

Running the docker registry on the master node makes the most sense, since all config and YAML files for deployments would be placed there. The steps for getting a docker registry up and running are given below.

Create a directory called docker-registry and add the following to a docker-compose file inside it. Then create another directory inside docker-registry and name it registry; it will hold the registry's data. Creating the registry directory is not strictly necessary; if you choose not to create it, use a Docker-managed volume instead.

version: '2.1'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
      - 5001:5001
    volumes:
      - /home/ubuntu/docker-registry/registry:/var/lib/registry
      # Use custom config for docker registry
      - /home/ubuntu/docker-registry/config.yaml:/etc/docker/registry/config.yml

Create a config.yaml file inside the docker-registry directory and add the following to it.

version: 0.1
log:
  level: debug
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  debug:
    addr: :5001
    prometheus:
      enabled: true
      path: /metrics

Now, when the docker registry is started, it will also serve a debug endpoint on port 5001 that exposes metrics, and Prometheus can be configured to scrape them. To start or stop the docker registry, use the following commands.

# Start docker registry
sudo docker-compose up -d
# Stop docker registry
sudo docker-compose down
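With the registry running, deployments can reference images pushed to it. A sketch of the workflow, assuming the registry runs on the master node at 10.0.0.1:5000 (the address and image name are placeholders, not from the article):

```shell
# Tag a locally built image with the registry's address
# (10.0.0.1:5000 and myapp are placeholders)
sudo docker tag myapp:latest 10.0.0.1:5000/myapp:latest

# Push the image to the local registry
sudo docker push 10.0.0.1:5000/myapp:latest

# On a worker node, pull the image back from the registry
sudo docker pull 10.0.0.1:5000/myapp:latest
```

Since this registry serves plain HTTP, each node's Docker daemon typically needs the registry address listed under insecure-registries in /etc/docker/daemon.json before pushes and pulls will succeed.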

Dynamic volume provisioner

Creating persistent volumes for applications in Kubernetes becomes tedious as the number of worker nodes increases. The current version of Kubernetes (v1.16 at the time of writing) does not support dynamic provisioning of hostPath volumes. To overcome this, we deploy a dynamic volume provisioner.

Create a file called dynamic-pv-provisioner.yaml and copy the following into it.

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints", "persistentvolumes", "pods"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.11
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/mnt/disk"]
        }
      ]
    }

Make sure there is no other default storage class. Then run the following command to deploy the dynamic volume provisioner, which will automatically provision volumes for claims against the default storage class.

kubectl apply -f dynamic-pv-provisioner.yaml
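To check that provisioning works, a claim against the default storage class can be created; a minimal sketch (the claim name and size here are arbitrary, not from the article):

```yaml
# test-pvc.yaml -- a hypothetical claim to exercise the provisioner
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Because the storage class uses WaitForFirstConsumer, the claim stays Pending until a pod mounts it; once a pod is scheduled, a backing directory appears under /mnt/disk on that node.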

Refer to the local-path-provisioner documentation to add your own configurations.

Prometheus

Prometheus can be set up to monitor applications running in Kubernetes, the worker nodes themselves, and more. In this case, we use Prometheus to monitor the worker nodes and the docker registry.

Create a file called node-monitor.yaml and add the following to it.

apiVersion: v1
kind: Namespace
metadata:
  name: prometheus
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: prometheus
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9100"
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.15.2
          ports:
            - containerPort: 9100
              protocol: TCP
          resources:
            requests:
              cpu: 0.15
          securityContext:
            privileged: true
          args:
            - --path.procfs
            - /host/proc
            - --path.sysfs
            - /host/sys
            - --collector.filesystem.ignored-mount-points
            - '"^/(sys|proc|dev|host|etc)($|/)"'
          volumeMounts:
            - name: dev
              mountPath: /host/dev
            - name: proc
              mountPath: /host/proc
            - name: sys
              mountPath: /host/sys
            - name: rootfs
              mountPath: /rootfs
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
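Once the DaemonSet is applied, one node-exporter pod should be running per node. A quick way to check (the node IP below is a placeholder):

```shell
# Expect one node-exporter pod per worker node, all Running
kubectl get pods -n prometheus -o wide

# node-exporter uses the host network, so its metrics are reachable
# directly on each node at port 9100 (replace <node-ip>)
curl http://<node-ip>:9100/metrics | head
```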

Next create a config file named config.yaml and add the following to it.

global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  external_labels:
    monitor: 'codelab-monitor'
scrape_configs:
  # Prometheus metrics
  - job_name: 'prometheus_metrics'
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus.prometheus.svc.cluster.local:9090']
  # Docker registry metrics
  - job_name: 'docker_registry'
    scrape_interval: 5s
    static_configs:
      - targets: ['54.209.50.67:5001']
  # Kubernetes node metrics
  - job_name: 'kubeworker1'
    scrape_interval: 15s
    static_configs:
      - targets: ['35.153.57.109:9100']

Make sure to change the target addresses to match your own servers. Finally, create a prometheus.yaml file and add the following to it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: prometheus
  labels:
    app: prometheus
spec:
  replicas: 3
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          volumeMounts:
            - name: prometheus-config
              mountPath: /etc/prometheus/prometheus.yml
              subPath: config.yaml
          ports:
            - containerPort: 9090
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
---
kind: Service
apiVersion: v1
metadata:
  name: prometheus
  namespace: prometheus
spec:
  type: LoadBalancer
  selector:
    app: prometheus
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9090

To deploy Prometheus on Kubernetes, run the following commands.

kubectl apply -f node-monitor.yaml
kubectl create configmap -n prometheus --from-file=./config.yaml prometheus-config
kubectl apply -f prometheus.yaml

The config file can be extended to make Prometheus scrape metrics from other applications as well.
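Once Prometheus is up, its HTTP API can also be queried programmatically. A sketch in Python: the server address is a placeholder, and the canned response below follows the API's documented JSON shape for an instant query, purely for illustration.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen  # used in the commented-out live query


def build_query_url(base_url, promql):
    """Build an instant-query URL for the Prometheus HTTP API."""
    return base_url + "/api/v1/query?" + urlencode({"query": promql})


def extract_values(response_text):
    """Pull (labels, value) pairs out of an instant-query response."""
    body = json.loads(response_text)
    if body.get("status") != "success":
        raise RuntimeError("query failed: %s" % body.get("error"))
    return [(r["metric"], r["value"][1]) for r in body["data"]["result"]]


# Against a live server this would be (address is a placeholder):
#   text = urlopen(build_query_url("http://<prometheus-host>:9090", "up")).read()

# Canned response in the API's documented shape, for illustration:
sample = json.dumps({
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"job": "kubeworker1"}, "value": [1578800000, "1"]},
        ],
    },
})
print(extract_values(sample))  # [({'job': 'kubeworker1'}, '1')]
```

The `up` query used in the commented example is a built-in metric that reports 1 for every target Prometheus can scrape, so it is a convenient first check that the registry and node-exporter targets are healthy.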
