How to Manage Your Full Nodes — Part 2: Managing Containers with Kubernetes

Oliver Wee · Published in MW Partners
12 min read · Aug 30, 2018

This article is Part 2 of a 3-part series on how to manage your full nodes.

Today, we will be talking about Part 2: Managing Containers with Kubernetes.

Kubernetes Setup Diagram

Learning Objectives

  1. Create a single-master Kubernetes cluster.
  2. Gain a basic understanding of Kubernetes Services and Deployments.
  3. Deploy our Bitcoin and Ethereum containers into the Kubernetes cluster.

What is Kubernetes?

Kubernetes is an open-source container and cluster management system originally developed by Google. Kubernetes ships with powerful features such as self-healing, service discovery, horizontal scaling and more.

Learn more about Kubernetes on the official website: https://kubernetes.io

A quick beginner’s guide to Kubernetes can be found here: https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

Prerequisites

Docker images for Bitcoin and Ethereum

In this tutorial, we will be using the two images built in the previous post; however, you can also substitute them with other Docker images.

Installed Kubelet, Kubectl, Kubeadm

Recommended hardware requirements: 4 CPU cores, 6.5 GB of RAM

Kubernetes itself uses 2 CPU cores and roughly 2 GB of RAM. We will provision up to 2 GB and 2.5 GB of RAM for the Bitcoin and Ethereum containers respectively.
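Before bootstrapping, it is worth confirming that the three binaries and the Docker images from Part 1 are in place. A quick sanity check (your version output will differ):

$ kubeadm version -o short
$ kubectl version --client --short
$ kubelet --version
$ docker images | grep -E 'test-btc-img|test-eth-img'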

Creating Single Master Cluster

  1. Bootstrap the cluster with kubeadm:
$ sudo kubeadm init

Kubeadm will now help us to bootstrap a brand new cluster.

I0810 08:40:20.627773    8504 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0810 08:40:20.639260 8504 kernel_validator.go:81] Validating kernel version
I0810 08:40:20.639326 8504 kernel_validator.go:96] Validating kernel config

[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip-10-0-0-103 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.103]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [ip-10-0-0-103 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip-10-0-0-103 localhost] and IPs [10.0.0.103 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 40.557841 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node ip-10-0-0-103 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node ip-10-0-0-103 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-10-0-0-103" as an annotation
[bootstraptoken] using token: uxdg68.gt830ifp7w9ra2bp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.0.0.103:6443 --token uxdg68.gt830ifp7w9ra2bp --discovery-token-ca-cert-hash sha256:78d4f0387768a6c747f54b0b5241731397a9617882e9ccc5a4e5a6847a85a043

Save the last line in your favorite text editor: you will need this command later if you plan to join other machines to this cluster.
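Note that the bootstrap token expires after 24 hours by default. If you misplace the command or the token expires, you can print a fresh join command on the master at any time:

$ sudo kubeadm token create --print-join-command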

Next, we need to run the following commands to use Kubernetes as a regular user.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
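With the kubeconfig in place, a quick check confirms that kubectl can reach the API server (the master node will report NotReady until a pod network is installed in step 3):

$ kubectl cluster-info
$ kubectl get nodes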

2. Check the status of our cluster:

$ kubectl get pods -n kube-system

NAME                                    READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-g2pfp                0/1     Pending   0          11m
coredns-78fcdf6894-tnwwc                0/1     Pending   0          11m
etcd-ip-10-0-0-103                      1/1     Running   0          10m
kube-apiserver-ip-10-0-0-103            1/1     Running   0          10m
kube-controller-manager-ip-10-0-0-103   1/1     Running   0          10m
kube-proxy-s2k7h                        1/1     Running   0          11m
kube-scheduler-ip-10-0-0-103            1/1     Running   0          10m

3. Add a pod network by installing the Weave Net CNI plugin.

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Wait for the weave-net and coredns pods to be in the Running state before continuing.

$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-g2pfp                1/1     Running   0          13m
coredns-78fcdf6894-tnwwc                1/1     Running   0          13m
etcd-ip-10-0-0-103                      1/1     Running   0          12m
kube-apiserver-ip-10-0-0-103            1/1     Running   0          12m
kube-controller-manager-ip-10-0-0-103   1/1     Running   0          12m
kube-proxy-s2k7h                        1/1     Running   0          13m
kube-scheduler-ip-10-0-0-103            1/1     Running   0          12m
weave-net-bfpkm                         2/2     Running   0          26s

A full guide to all the cluster-creation options and configuration can be found in the official kubeadm documentation on kubernetes.io.

4. For the purposes of this tutorial, we will remove the taint from the Kubernetes master so that we can schedule pods on the master machine.

$ kubectl taint nodes --all node-role.kubernetes.io/master-
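You can verify that the taint is gone; the node in this walkthrough is named ip-10-0-0-103, so substitute your own node name:

$ kubectl describe node ip-10-0-0-103 | grep Taints
Taints:             <none>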

Managing Containers with Kubernetes

Kubernetes manages containers through the use of Kubernetes objects. These are persistent entities that possess a state and a specification, or "spec" for short. You provide the specification, which describes the desired state you want for your setup. Command-line tools such as kubectl take your specifications as input and apply the changes required in Kubernetes to achieve your desired state.
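As a sketch of this spec-versus-status model (using the btc-live Deployment we create later in this post), you can compare the state you asked for against the state the cluster has observed:

$ kubectl get deployment btc-live -o jsonpath='{.spec.replicas}{"\n"}'
1
$ kubectl get deployment btc-live -o jsonpath='{.status.availableReplicas}{"\n"}'
1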

Single Master, Single Node, Sample Blockchain Client Setup

Image Credits:

Bitcoin Logo: https://pngimg.com/imgs/logos/bitcoin/ See License (https://pngimg.com/license)

Ethereum Logo: https://www.ethereum.org/assets

Kubernetes Pod

The pod is the smallest deployable unit in Kubernetes. It represents a single running process / application and wraps the container with a unique network IP, storage resources and other policies / run-time configuration the container accepts. We will adopt the simplest one-container-per-pod model, and use Deployments to control higher-level functions such as the number of replicas needed, restart and image pull policies.

Kubernetes Services

Next, we will introduce Kubernetes Services. Services allow us to target a group of pods / deployments by using selectors and / or targeting a list of ports. Services provide us the following nifty features:

  1. Service discovery: Pods running in the same cluster can access services via their DNS name: my-svc:<some_port>. When pods are destroyed, their unique IPs change; however, they can still be accessed at the same DNS name provided by the service.
  2. Port forwarding: Services can map different ports from the cluster to the target ports on the group of pods selected by the service.
  3. Load balancing: Services will split the incoming traffic among pods that have the same selected labels and / or same target ports.
  4. Canary releases: Launch new pods with upgraded software and evaluate their performance by exposing the same target ports and labels for selection as older releases.

btc-live-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: btc-live-svc
spec:
  selector:
    app: btc-live
  ports:
  - protocol: TCP
    port: 8333
    targetPort: 8333
    name: btc-live-net
  - protocol: TCP
    port: 30001
    targetPort: 8332
    name: btc-live-jsonrpc

First, we define the service with apiVersion: v1, kind: Service and metadata: name: btc-live-svc. Next, in the spec, we select any Kubernetes Deployments / ReplicaSets / Pods that carry the label app: btc-live. Then, under ports, we list the ports that the service exposes and the targetPorts to find and forward to on the objects matched by the selector. We expose 8333:8333 (the live Bitcoin port to listen and forward) and 30001:8332 (the Bitcoin JSON-RPC server).

We can deploy the above Bitcoin Live Service with:

kubectl create -f btc-live-svc.yaml

And receive the following output:

service/btc-live-svc created

Verify that the service has been created with:

kubectl get services

Kubernetes will list the active services:

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
btc-live-svc   ClusterIP   10.96.245.220   <none>        8333/TCP,30001/TCP   1d
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP              1d
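To see selection and service discovery in action, you can list the endpoints the service has matched, and resolve its DNS name from a throwaway pod inside the cluster. A quick sketch using the public busybox image; the lookup should resolve to the ClusterIP shown above:

$ kubectl get endpoints btc-live-svc
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup btc-live-svc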

Similarly, below is the service file for Ethereum.

apiVersion: v1
kind: Service
metadata:
  name: eth-live-svc
spec:
  selector:
    app: eth-live
  ports:
  - protocol: TCP
    port: 30303
    targetPort: 30303
    name: eth-live-net
  - protocol: TCP
    port: 30005
    targetPort: 8545
    name: eth-live-jsonrpc
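Save it as eth-live-svc.yaml (the file name here follows the Bitcoin example and is an assumption) and deploy it the same way as the Bitcoin service:

$ kubectl create -f eth-live-svc.yaml
service/eth-live-svc created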

Kubernetes Deployments

Next, we look at another Kubernetes object: Deployments. In the previous tutorial, we ran our Bitcoin container after building the Docker image with the following command:

docker run --name testing-btc-live -v /data/btc-live:/app/data -p 18332:8332 -p 8333:8333 -td test-btc-img

We can specify the above configuration in the Kubernetes deployment file below.

btc-live-deploy.yaml


apiVersion: apps/v1
kind: Deployment
metadata:
  name: btc-live
spec:
  selector:
    matchLabels:
      app: btc-live
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: btc-live
    spec:
      containers:
      - name: btc-live
        image: test-btc-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8333
          hostPort: 8333
          protocol: TCP
          name: btc-live
        - containerPort: 8332
          name: btc-json-rpc
        volumeMounts:
        - name: btc-live-data
          mountPath: /app/data
        resources:
          requests:
            memory: "1024M"
            cpu: "750m"
          limits:
            memory: "2048M"
            cpu: "1000m"
      volumes:
      - name: btc-live-data
        hostPath:
          path: /data/btc-live
          type: Directory

In the first section, we describe our deployment to Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: btc-live

Next, we specify the criteria for selecting pods in the selector (app: btc-live), the number of replicas we need (replicas: 1) and the deployment strategy for the pods (type: Recreate), which does not allow new and old versions of the pod to run together.

spec:
  selector:
    matchLabels:
      app: btc-live
  replicas: 1
  strategy:
    type: Recreate

The template section describes the pod template that is used to create our full-node pods.

  template:
    metadata:
      labels:
        app: btc-live
    spec:
      containers:
      - name: btc-live
        image: test-btc-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8333
          name: btc-live
        - containerPort: 8332
          name: btc-json-rpc
        volumeMounts:
        - name: btc-live-data
          mountPath: /app/data
        resources:
          requests:
            memory: "1024M"
            cpu: "750m"
          limits:
            memory: "2048M"
            cpu: "1000m"
      volumes:
      - name: btc-live-data
        hostPath:
          path: /data/btc-live
          type: Directory

The metadata section describes what metadata will be attached to the pod. This allows the pods created to be selected and managed by the deployment and discovered by the service.

    metadata:
      labels:
        app: btc-live

The spec section describes our pod.

      - name: btc-live
        image: test-btc-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8333
          name: btc-live
        - containerPort: 8332
          name: btc-json-rpc
        volumeMounts:
        - name: btc-live-data
          mountPath: /app/data
        resources:
          requests:
            memory: "1024M"
            cpu: "750m"
          limits:
            memory: "2048M"
            cpu: "1000m"

We define the Docker image that we will use for the pod (test-btc-img) and set imagePullPolicy: Never so that Kubernetes does not try to pull the image, since we built it locally ourselves. Next, we specify the ports to expose on the container: 8333 (the Bitcoin port) and 8332 (the JSON-RPC server). Then, we mount our data volume at /app/data and constrain the pod with the resources section. The pod is limited to a maximum of 2048 MB of RAM (memory: "2048M") and 1 CPU core (cpu: "1000m"), and requests 1024 MB of RAM (memory: "1024M") and 0.75 of a CPU core (cpu: "750m") when it is scheduled.

Lastly, we need to define the volume that is being mounted with:

      volumes:
      - name: btc-live-data
        hostPath:
          path: /data/btc-live
          type: Directory
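Because the hostPath volume is declared with type: Directory, the directory must already exist on the node or the pod will fail to start. Create the data directories for both nodes before deploying:

$ sudo mkdir -p /data/btc-live /data/eth-live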

Running the Bitcoin deployment

$ kubectl create -f btc-live-deploy.yaml
deployment.apps/btc-live created

Check that the deployment is running correctly:

$ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
btc-live   1         1         1            1           51s

Check that the pods are deployed:

$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
btc-live-6b999dcfb5-jdb7t   1/1     Running   0          1m
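You can also tail the container's output to watch the node start up (substitute your own pod name):

$ kubectl logs -f btc-live-6b999dcfb5-jdb7t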

We can get more details about the pod with the kubectl describe command:

$ kubectl describe pod btc-live-6b999dcfb5-jdb7t

Name:               btc-live-6b999dcfb5-jdb7t
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               ip-10-0-0-103/10.0.0.103
Start Time:         Sun, 12 Aug 2018 08:33:35 +0000
Labels:             app=btc-live
                    pod-template-hash=2655587961
Annotations:        <none>
Status:             Running
IP:                 10.32.0.4
Controlled By:      ReplicaSet/btc-live-6b999dcfb5
Containers:
  btc-live:
    Container ID:   docker://1017245fad91385967b864230f21bf129636a2b23ffe8c4d0d9adca2fcbbdf0a
    Image:          test-btc-img
    Image ID:       docker://sha256:3132efdefd1ce9bcd6d9e0a1a3b01ec17ffac7b5d8b23e0b958c63c50b16e16d
    Ports:          8333/TCP, 8332/TCP
    Host Ports:     8333/TCP, 0/TCP
    State:          Running
      Started:      Sun, 12 Aug 2018 08:33:36 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2048M
    Requests:
      cpu:     750m
      memory:  1024M
    Environment:  <none>
    Mounts:
      /app/data from btc-live-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-th8pj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  btc-live-data:
    Type:          HostPath (bare host directory volume)
    Path:          /data/btc-live
    HostPathType:  Directory
  default-token-th8pj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-th8pj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                    Message
  ----    ------     ---   ----                    -------
  Normal  Scheduled  1m    default-scheduler       Successfully assigned default/btc-live-6b999dcfb5-jdb7t to ip-10-0-0-103
  Normal  Pulled     1m    kubelet, ip-10-0-0-103  Container image "test-btc-img" already present on machine
  Normal  Created    1m    kubelet, ip-10-0-0-103  Created container
  Normal  Started    1m    kubelet, ip-10-0-0-103  Started container

So far, everything looks good. Let us try to access the JSON-RPC server we set up and query the Bitcoin full node's status. Bitcoin Core writes its RPC credentials to a .cookie file inside the data directory:

$ sudo cat /data/btc-live/.cookie

__cookie__:3213d9a206e39d69ec2e1556f744c9ebd8fbee6f5885fe5e1f8a24e6e9eb3de4

Next, we need to base64-encode the cookie file to obtain our Basic authorization token.

# Grab the authorization token from the cookie file
$ sudo cat /data/btc-live/.cookie | base64 -w 0

Make a curl request to our Bitcoin full node service:

curl http://10.96.245.220:30001 \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic X19jb29raWVfXzozMjEzZDlhMjA2ZTM5ZDY5ZWMyZTE1NTZmNzQ0YzllYmQ4ZmJlZTZmNTg4NWZlNWUxZjhhMjRlNmU5ZWIzZGU0" \
  --data '{"jsonrpc": "2.0", "method": "getblockchaininfo", "id": 1}'

Substitute 10.96.245.220 with the ClusterIP that was created when you deployed the Service in the earlier section.
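You can also fetch the ClusterIP directly instead of reading it off the table:

$ kubectl get service btc-live-svc -o jsonpath='{.spec.clusterIP}'
10.96.245.220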

You should get the following response:

{"result":{"chain":"main","blocks":0,"headers":122000,"bestblockhash":"000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f","difficulty":1,"mediantime":1231006505,"verificationprogress":2.813522581816168e-09,"initialblockdownload":true,"chainwork":"0000000000000000000000000000000000000000000000000000000100010001","size_on_disk":293,"pruned":false,"softforks":[{"id":"bip34","version":2,"reject":{"status":false}},{"id":"bip66","version":3,"reject":{"status":false}},{"id":"bip65","version":4,"reject":{"status":false}}],"bip9_softforks":{"csv":{"status":"defined","startTime":1462060800,"timeout":1493596800,"since":0},"segwit":{"status":"defined","startTime":1479168000,"timeout":1510704000,"since":0}},"warnings":""},"error":null,"id":1}

Congratulations, you have deployed your Bitcoin full node on Kubernetes!

The steps for deploying the Ethereum full node are the same as those for the Bitcoin full node.

We can get the synchronization status of the Ethereum full node with the following command:

$ curl -X POST http://10.101.73.101:30005 -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}'

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "currentBlock": "0x5bef8e",
    "highestBlock": "0x5d95ef",
    "knownStates": "0xb9893c9",
    "pulledStates": "0xb9893c9",
    "startingBlock": "0x5bef8c"
  }
}

eth-live-deploy.yaml


apiVersion: apps/v1
kind: Deployment
metadata:
  name: eth-live
spec:
  selector:
    matchLabels:
      app: eth-live
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: eth-live
    spec:
      containers:
      - name: eth-live
        image: test-eth-img
        imagePullPolicy: Never
        ports:
        - containerPort: 30303
          name: eth-live
        - containerPort: 8545
          name: eth-json-rpc
        volumeMounts:
        - name: eth-live-data
          mountPath: /app/data
        resources:
          requests:
            memory: "2048M"
            cpu: "750m"
          limits:
            memory: "2560M"
            cpu: "1000m"
      volumes:
      - name: eth-live-data
        hostPath:
          path: /data/eth-live
          type: Directory
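Deploy the Ethereum node with the same workflow and confirm that its pod is running:

$ kubectl create -f eth-live-deploy.yaml
deployment.apps/eth-live created

$ kubectl get pods -l app=eth-live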

Additional Resources:

We have created a public GitHub repository with sample files for Docker and Kubernetes that you can use to learn more, as well as a few convenience scripts.

Try out the interactive Minikube tutorials on the official Kubernetes website to learn the basics in your web browser.

Summary

We have successfully deployed our Dockerized Bitcoin and Ethereum nodes in a single-master Kubernetes cluster. We can now access the full nodes from within the cluster via their service names, or from the same network using the node's external IP as the host. This concludes Part 2 of our 3-part series on running containerized full nodes.

Want to learn more?

Follow us on our Twitter and Medium for more updates. If you are a developer who is keen to contribute towards building a developer community alongside MW Partners, kindly reach out to hello@mwpartners.io.
