Deploying Hyperledger Fabric v1.2 on Kubernetes Cluster

Debut Infotech
Sep 14, 2018 · 6 min read

As a tenacious blockchain developer, you’re likely familiar with Hyperledger Fabric and Kubernetes, two of the most prominent open-source ecosystems. If you’re not, perhaps because you’re just starting your blockchain journey, here is a brief overview of both.

Hyperledger Fabric is an open-source enterprise-grade Distributed Ledger Technology (DLT) platform designed primarily for permissioned blockchains. It provides developers a framework for building blockchain applications.

Kubernetes, on the other hand, is a portable, extensible open-source platform for automating deployment, scaling, and management of containerized applications. It supports multitenancy, which makes it possible for developers to develop and test Blockchain applications efficiently.

Used collectively, Hyperledger and Kubernetes offer a powerful, secure platform for processing blockchain transactions.

Through this blog post, we will give you a detailed walkthrough on how you can deploy your Hyperledger Fabric v1.2 instance on Kubernetes cluster. So, without further ado, let’s get started!

To begin with, we will create a cluster with three nodes (two workers and one master). Make sure swap is turned off, as Kubernetes does not run with swap enabled. You can use cloud instances or physical machines for the worker nodes; we will be using VMs in VirtualBox.
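Disabling swap is a hard requirement: the kubelet refuses to start (or kubeadm preflight checks fail) when swap is on. A typical way to turn it off, both immediately and across reboots, is:

```shell
# Turn swap off for the current boot
sudo swapoff -a
# Comment out any swap entries in /etc/fstab so swap stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```

Run this on every node (master and workers) before installing Kubernetes.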

To make the process simple and easy for you to understand, we will use just one organization, one channel, one peer, one orderer, and one CouchDB.
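This one-organization topology is typically described to Fabric’s cryptogen tool via a crypto-config.yaml. A minimal sketch is shown below; the org and domain names are illustrative assumptions, not necessarily what the accompanying repository uses.

```yaml
# crypto-config.yaml (illustrative sketch for a one-org, one-peer network)
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 1   # one peer
    Users:
      Count: 1   # one non-admin user
```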

Installing Pre-requisites on all the nodes:

  • Install Docker by using the following command:
sudo apt-get update && sudo apt-get install -qy docker.io
  • Add the Kubernetes apt repository by using the commands mentioned below:
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  • Now we need to update our package lists, which can be done by using the command mentioned below:
sudo apt-get update
  • Now we need to install kubelet, kubeadm & kubernetes-cni. The kubelet is responsible for running containers on your hosts, kubeadm is a convenience utility that configures the various components that make up a working cluster, and kubernetes-cni provides the networking components. Use the following command to install all three:
sudo apt-get install -y kubelet kubeadm kubernetes-cni

Docker Swarm ships with an overlay networking driver by default, but with kubeadm this decision is left to us. We will be using Flannel by CoreOS as the overlay network for Kubernetes.

Initialize the cluster on master node by using the following command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Note: --pod-network-cidr is required by the flannel driver; it specifies the address space for containers.

The init command will take some time to run, as it has to pull several Docker images.

After the cluster initializes, you will see output like the following:

The output continues:

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

kubeadm join 192.168.0.127:6443 --token oyu3i2.md9znv0p31f5b7ju --discovery-token-ca-cert-hash sha256:2960349cff48d1041ff087735b3dbe995642dad557098bca64717f06959890e7

Copy and paste the following three lines as a regular user so that the cluster’s admin credentials are placed in $HOME/.kube:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At the end of the logs, you will find the following command:

kubeadm join 192.168.0.127:6443 --token oyu3i2.md9znv0p31f5b7ju --discovery-token-ca-cert-hash sha256:2960349cff48d1041ff087735b3dbe995642dad557098bca64717f06959890e7

Running this command on a machine joins it to the cluster as a worker node.
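One practical note: the bootstrap token in the join command expires (by default after 24 hours). If you add a node later, you can generate a fresh join command on the master:

```shell
# Print a complete, ready-to-run kubeadm join command with a new token
sudo kubeadm token create --print-join-command
```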

Below is a detailed overview of how you can do that, with the help of an example.

Joining mini01 as a worker node:

Joining mini02 as a worker node:

After joining the nodes, you can check the nodes on the master node by using the following command:

$ kubectl get nodes

It will show something like this:

As we can see, our nodes aren’t ready yet; they show a “NotReady” status.

Now we need to apply the flannel plugin to add the network layer.

We can do so by using the following command on the Kubernetes master node:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

After applying this manifest, here’s what we will get as an output:

After applying flannel, if you check your nodes again, they will be in the Ready state.

Our Kubernetes cluster is now in place. Before deploying Fabric, we will deploy the Kubernetes dashboard.
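To confirm the overlay network came up, you can check the flannel pods alongside the node status (this assumes the manifest’s default `app=flannel` label):

```shell
# Flannel runs as a DaemonSet in kube-system; one pod per node should be Running
kubectl get pods -n kube-system -l app=flannel
# Nodes should now report Ready
kubectl get nodes
```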

To get that done, use the following command:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

After applying this code, you will get the following output:

To access the dashboard, run the Kubernetes proxy using the command kubectl proxy

It will display something like this:

Starting to serve on 127.0.0.1:8001

You can now access your Kubernetes dashboard on the following URL:

It will show something like this:

Now our cluster is ready!

We will now deploy CouchDB by using the following command:

kubectl apply -f deployments/couchdb.yaml
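For reference, deployments/couchdb.yaml looks roughly like the sketch below. The labels and the image tag are assumptions; the repository’s actual file may differ (for Fabric v1.2, the hyperledger/fabric-couchdb image is commonly used).

```yaml
# Sketch of a single-replica CouchDB deployment for the peer's state database
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchdb0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: couchdb0
  template:
    metadata:
      labels:
        app: couchdb0
    spec:
      containers:
        - name: couchdb0
          image: hyperledger/fabric-couchdb:latest   # pin a specific tag in practice
          ports:
            - containerPort: 5984   # CouchDB's default HTTP port
```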

Then, we will deploy CouchDB service using the following command:

kubectl apply -f services/couchdb0.yaml

services/couchdb0.yaml uses a nodePort. So, CouchDB will be available at 192.168.8.104:30005 as our nodePort is 30005 and mini01 node IP is 192.168.8.104.
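A NodePort service along those lines would look like the following sketch (the selector label is an assumption; it must match the labels in the CouchDB deployment):

```yaml
# Sketch of services/couchdb0.yaml: expose CouchDB on port 30005 of every node
apiVersion: v1
kind: Service
metadata:
  name: couchdb0
spec:
  type: NodePort
  selector:
    app: couchdb0
  ports:
    - port: 5984         # service port inside the cluster
      targetPort: 5984   # container port
      nodePort: 30005    # port opened on each node's IP
```

Once the pod is running, `curl http://192.168.8.104:30005/` should return CouchDB’s welcome JSON.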

You can also check your pod status on the dashboard:

When we try to access CouchDB on <node-ip>:nodePort, here’s what we will get as a result:

We will now deploy peer, orderer & CLI in the same way we deployed CouchDB.

After doing that, we can check the status of pods from the dashboard.

Now we need to get to the CLI container for further setup.

For that, use the following command:

akshay@akshaysood:~/WS/fabric-kub$ kubectl exec -it cli-7df5b467c5-kxk8p bash
root@cli-7df5b467c5-kxk8p:/opt/gopath/src/github.com/hyperledger/fabric/peer#

Now we will follow the default Hyperledger Fabric steps to set up our network.

Use the following commands:
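From inside the CLI container, the standard Fabric v1.x channel setup looks like the commands below. The channel name, artifact paths, and the orderer address are assumptions based on Fabric’s default samples; adjust them to match your own artifacts and service names.

```shell
# Create the channel against the orderer (channel name and tx path assumed)
peer channel create -o orderer:7050 -c mychannel -f ./channel-artifacts/channel.tx
# Join our peer to the newly created channel using the genesis block
peer channel join -b mychannel.block
# Optionally install and instantiate a sample chaincode on the channel
peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/chaincode_example02/go/
peer chaincode instantiate -o orderer:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}'
```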

You can see the CouchDB data:

The source code, deployments, services, and PVCs can be found in the public repository at the URL below:

The above-mentioned steps will help you deploy your Fabric v1.2 instance on a Kubernetes cluster with minimal configuration and effort. Feel free to leave your thoughts and questions in the comments. For more information regarding the deployment, get in touch with us.
