Deploying Hyperledger Fabric on Kubernetes (Raft consensus)

Omid Asadpour
7 min read · Sep 17, 2019


This is a simple guide to help you implement a complete blockchain solution using Hyperledger Fabric v1.4.2 on Kubernetes.

This solution also uses CouchDB as the peers' state database, Raft consensus for the orderers, and an NFS (Network File System) server to share data between the components.

INTRODUCTION

We're going to build a complete Hyperledger Fabric v1.4.2 environment using the configtxgen and cryptogen tools and 2 organizations. To achieve scalability, high availability, and crash fault tolerance (CFT) on the ordering service, we're going to use Raft. Each organization will have 2 peers, and each peer will have its own CouchDB instance.

ARCHITECTURE

Infrastructure view:

For this environment we're going to be using a 3-node Kubernetes cluster (1 master + 2 workers) and an NFS server.

All the machines are going to be on the same network. For the Kubernetes cluster and the NFS server we'll have the following machines:

1) master

2) worker1

3) worker2

4) NFS

The image below represents the environment infrastructure:

Note: the orderers and the organizations' peers all have access to the NFS shared storage, since they need the artifacts we're going to store there.

IMPLEMENTATION

Step 1: Checking the environment

First, let's make sure we have a Kubernetes environment up and running:

kubectl get nodes

Step 2: Setting up shared storage

Now, assuming the NFS server is up and running with the correct permissions, we're going to create our PersistentVolume. First, let's create the file fabric-pv.yaml like the example below:

Note: NFS Server is running on NFS and the shared filesystem is /nfs/fabric. We’re using fabricfiles as the name for this PersistentVolume.
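A minimal sketch of what fabric-pv.yaml could look like, using the values from the note above; the capacity, label, and reclaim policy are assumptions you can adjust:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fabricfiles
  labels:
    app: fabricfiles            # label the PVC will select on
spec:
  capacity:
    storage: 10Gi               # assumed size, adjust to your needs
  accessModes:
    - ReadWriteMany             # all Fabric components share this volume
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: NFS                 # hostname of the NFS server
    path: /nfs/fabric           # exported shared filesystem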

Now let’s apply the above configuration:

kubectl apply -f ./fabric-pv.yaml

After that, we'll need to create a PersistentVolumeClaim. To do that, we'll create the file fabric-pvc.yaml as below:

Note: We’re using our previously created fabricfiles as the selector here.
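A minimal sketch of fabric-pvc.yaml under those assumptions; the claim name fabric-pvc is hypothetical, but it is the name the Deployment sketches later in this guide refer to:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fabric-pvc              # hypothetical name, referenced by the Deployments below
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # bind by selector, not by a storage class
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: fabricfiles          # matches the label on the fabricfiles PV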

Now let’s apply the above configuration:

kubectl apply -f ./fabric-pvc.yaml

Step 3: Loading the config files into the storage

1 — Crypto-config

Now let's use the crypto-config.yaml file at the link below for our network configuration, and use the commands described there to generate the crypto artifacts:

https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/crypto-config.yaml
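For reference, with the cryptogen binary on your PATH, the crypto material for that file is generated roughly like this:

cryptogen generate --config=./crypto-config.yaml

This produces the crypto-config directory containing the MSP and TLS material for both organizations and the orderers.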

2 — Configtx

Now we're going to use the file configtx.yaml for our network configuration:

https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/configtx.yaml

Use the commands described at that link to create channel.tx, genesis.block, Org1MSPanchors.tx, and Org2MSPanchors.tx, as sketched below.
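A sketch of those commands, assuming the SampleMultiNodeEtcdRaft and TwoOrgsChannel profiles and the byfn-sys-channel system channel ID from that configtx.yaml:

export FABRIC_CFG_PATH=$PWD   # directory containing configtx.yaml
configtxgen -profile SampleMultiNodeEtcdRaft -channelID byfn-sys-channel -outputBlock ./genesis.block
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel.tx -channelID mychannel
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP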

3 — Copy

Now let's copy channel.tx, genesis.block, Org1MSPanchors.tx, Org2MSPanchors.tx, and the crypto-config directory into /nfs/fabric (the shared filesystem).
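For example, on the NFS server (assuming the artifacts were generated in the current directory):

cp channel.tx genesis.block Org1MSPanchors.tx Org2MSPanchors.tx /nfs/fabric/
cp -r crypto-config /nfs/fabric/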

Step 4: Setting up the Fabric orderers

Create the file orderer-deploy with the following Deployment description:
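A condensed sketch of what orderer-deploy could contain for orderer.example.com; the resource names, labels, and the PVC name fabric-pvc are assumptions, while the /fabric mount path and crypto paths match the commands used later in this guide:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orderer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orderer
  template:
    metadata:
      labels:
        app: orderer
    spec:
      volumes:
        - name: fabricfiles
          persistentVolumeClaim:
            claimName: fabric-pvc                    # hypothetical claim name from Step 2
      containers:
        - name: orderer
          image: hyperledger/fabric-orderer:1.4.2
          command: ["orderer"]
          env:
            - name: ORDERER_GENERAL_LISTENADDRESS
              value: "0.0.0.0"
            - name: ORDERER_GENERAL_GENESISMETHOD
              value: "file"
            - name: ORDERER_GENERAL_GENESISFILE
              value: "/fabric/genesis.block"         # genesis.block copied to the NFS share
            - name: ORDERER_GENERAL_LOCALMSPID
              value: "OrdererMSP"
            - name: ORDERER_GENERAL_LOCALMSPDIR
              value: "/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp"
            - name: ORDERER_GENERAL_TLS_ENABLED
              value: "true"
            - name: ORDERER_GENERAL_TLS_PRIVATEKEY
              value: "/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key"
            - name: ORDERER_GENERAL_TLS_CERTIFICATE
              value: "/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt"
            - name: ORDERER_GENERAL_TLS_ROOTCAS
              value: "[/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt]"
            # Raft cluster TLS settings
            - name: ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE
              value: "/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt"
            - name: ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY
              value: "/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key"
            - name: ORDERER_GENERAL_CLUSTER_ROOTCAS
              value: "[/fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt]"
          ports:
            - containerPort: 7050
          volumeMounts:
            - name: fabricfiles
              mountPath: /fabric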

Let’s apply the configuration:

kubectl apply -f orderer-deploy

Create the file orderer-svc with the following Service description:
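A matching Service sketch; the service name orderer and port 7050 line up with how the orderer is addressed later (-o orderer:7050):

apiVersion: v1
kind: Service
metadata:
  name: orderer
spec:
  selector:
    app: orderer
  ports:
    - name: grpc
      port: 7050
      targetPort: 7050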

Now, apply the configuration:

kubectl apply -f orderer-svc

After that, reuse the same Deployment and Service descriptions to create orderer2, orderer3, orderer4, and orderer5, adjusting the names, labels, and the MSP/TLS paths (orderer2.example.com, orderer3.example.com, and so on):

kubectl apply -f orderer2-svc

kubectl apply -f orderer2-deploy

kubectl apply -f orderer3-svc

kubectl apply -f orderer3-deploy

kubectl apply -f orderer4-svc

kubectl apply -f orderer4-deploy

kubectl apply -f orderer5-svc

kubectl apply -f orderer5-deploy

Step 5: Setting up Fabric Orgs

Create the file org1peer0-deploy with the following Deployment:
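A condensed sketch of what org1peer0-deploy could contain, with the peer and its own CouchDB instance in the same pod; the resource names, the PVC name, and the CouchDB image tag are assumptions, while the env values follow the peer settings used later in this guide:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: org1peer0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: org1peer0
  template:
    metadata:
      labels:
        app: org1peer0
    spec:
      volumes:
        - name: fabricfiles
          persistentVolumeClaim:
            claimName: fabric-pvc                    # hypothetical claim name from Step 2
        - name: dockersock                           # so the peer can launch chaincode containers
          hostPath:                                  # on the node's Docker daemon (one common approach)
            path: /var/run/docker.sock
      containers:
        - name: couchdb                              # this peer's private state database
          image: hyperledger/fabric-couchdb:0.4.15   # tag is an assumption
          ports:
            - containerPort: 5984
        - name: peer
          image: hyperledger/fabric-peer:1.4.2
          command: ["peer", "node", "start"]
          env:
            - name: CORE_PEER_ID
              value: "peer0.org1.example.com"
            - name: CORE_PEER_ADDRESS
              value: "peer0.org1.example.com:7051"
            - name: CORE_PEER_LISTENADDRESS
              value: "0.0.0.0:7051"
            - name: CORE_PEER_LOCALMSPID
              value: "Org1MSP"
            - name: CORE_PEER_MSPCONFIGPATH
              value: "/fabric/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp"
            - name: CORE_PEER_TLS_ENABLED
              value: "true"
            - name: CORE_PEER_TLS_CERT_FILE
              value: "/fabric/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt"
            - name: CORE_PEER_TLS_KEY_FILE
              value: "/fabric/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key"
            - name: CORE_PEER_TLS_ROOTCERT_FILE
              value: "/fabric/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: "CouchDB"
            - name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
              value: "localhost:5984"                # CouchDB runs in the same pod
          ports:
            - containerPort: 7051
          volumeMounts:
            - name: fabricfiles
              mountPath: /fabric
            - name: dockersock
              mountPath: /var/run/docker.sock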

kubectl apply -f org1peer0-deploy

Create the file org1peer0-svc.yaml with the following Service:
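A matching Service sketch for the peer; the service name follows the file name, and port 7051 matches the CORE_PEER_ADDRESS used later:

apiVersion: v1
kind: Service
metadata:
  name: org1peer0
spec:
  selector:
    app: org1peer0
  ports:
    - name: grpc
      port: 7051
      targetPort: 7051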

kubectl apply -f org1peer0-svc.yaml

After that, reuse the same Deployment and Service descriptions to create org1peer1, org2peer0, and org2peer1, adjusting the names, labels, MSP IDs, and crypto paths accordingly:

kubectl apply -f org1peer1-deploy
kubectl apply -f org1peer1-svc.yaml

kubectl apply -f org2peer0-deploy
kubectl apply -f org2peer0-svc.yaml

kubectl apply -f org2peer1-deploy
kubectl apply -f org2peer1-svc.yaml

Step 6: Adding host names

Note: the pods will start, but they cannot communicate with each other yet, since the Fabric domain names are unknown inside the cluster.

Once all pods and services are running, we have to get the IPs of the services and add them to /etc/hosts of every pod so they can resolve each other.

First of all, let's make sure all of our pods are running:
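For example, to list the pods in the current namespace:

kubectl get pods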

Now let's get the IPs of the services:
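For example, to list the services and their ClusterIPs:

kubectl get svc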

Note: the IPs are the internal ClusterIPs of the related services. The important point here is that, unlike pod IPs, service ClusterIPs are stable; they won't change unless the service is deleted and re-created.

Then create a text file like the one below and append the host names to /etc/hosts of all the pods:
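A hypothetical hosts file, with placeholders in place of the ClusterIPs returned by kubectl get svc; the domain names must match the ones defined in crypto-config.yaml:

<orderer ClusterIP>      orderer.example.com   orderer
<orderer2 ClusterIP>     orderer2.example.com  orderer2
<orderer3 ClusterIP>     orderer3.example.com  orderer3
<orderer4 ClusterIP>     orderer4.example.com  orderer4
<orderer5 ClusterIP>     orderer5.example.com  orderer5
<org1peer0 ClusterIP>    peer0.org1.example.com
<org1peer1 ClusterIP>    peer1.org1.example.com
<org2peer0 ClusterIP>    peer0.org2.example.com
<org2peer1 ClusterIP>    peer1.org2.example.com

One hypothetical way to append it to a pod's /etc/hosts (replace <pod-name> with each pod's actual name):

kubectl exec -i <pod-name> -- sh -c 'cat >> /etc/hosts' < hosts.txt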

Step 7: Create Channel

Note: to act as each organization's peer, you have to set the appropriate environment variables, as shown below.

First, let's exec into the peer0.org1.example.com pod:

kubectl exec -it org1peer0-77bff4675c-rm529 -- /bin/bash

Then set the environment variables:

export CORE_PEER_MSPCONFIGPATH=/fabric/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

export CORE_PEER_LOCALMSPID="Org1MSP"

export CORE_PEER_ADDRESS=peer0.org1.example.com:7051

export CORE_PEER_TLS_ROOTCERT_FILE=/fabric/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt

export CHANNEL_NAME=mychannel

Then let's create mychannel:

peer channel create -o orderer:7050 -c $CHANNEL_NAME -f /fabric/channel.tx --tls --cafile /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Let's join all 4 peers to the channel using the mychannel.block file stored in the /nfs/fabric directory (shared filesystem):

peer channel join -b mychannel.block

Note: remember, you have to set the appropriate environment variables for each peer for the join to work correctly.
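For example, to act as Org2's peer0 the variables might look like this; the peer address and port are assumptions, so use whatever address your org2peer0 service actually exposes:

export CORE_PEER_MSPCONFIGPATH=/fabric/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
export CORE_PEER_TLS_ROOTCERT_FILE=/fabric/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt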

Step 8: Update the anchor peers

Let's update the anchor peers (peer0.org1 and peer0.org2):

peer channel update -o orderer:7050 -c $CHANNEL_NAME -f /fabric/Org1MSPanchors.tx --tls --cafile /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

peer channel update -o orderer:7050 -c $CHANNEL_NAME -f /fabric/Org2MSPanchors.tx --tls --cafile /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Note: run the Org1MSPanchors.tx update with the Org1 admin environment variables set, and the Org2MSPanchors.tx update with the Org2 ones.

Step 9: Install chaincode

After joining all peers to mychannel, you have to install the chaincode on every peer you want to use.

First, let's copy the chaincode directory to /nfs/fabric/, and then, for instance, install chaincode_example02 (Node.js) on all 4 peers:

peer chaincode install -n mycc -v 1.0 -l node -p /fabric/config/chaincode/chaincode_example02/node/

Note: remember, you have to set the appropriate environment variables for each peer before installing.

Step 10: Instantiate chaincode

Let's instantiate the chaincode on mychannel (it will take a few minutes, depending on your infrastructure):

peer chaincode instantiate -o orderer:7050 --tls --cafile /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -l node -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"

Note: remember, you have to set the appropriate environment variables for the peer you instantiate from.

Step 11: Query and Invoke

Congratulations, you have deployed Hyperledger Fabric v1.4.2 on Kubernetes.

Now you can query or invoke with the two commands below:

peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'

peer chaincode invoke -o orderer:7050 --tls true --cafile /fabric/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /fabric/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /fabric/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}'

Step 12: Monitoring your pods

You can monitor your pods with the Kubernetes Dashboard.
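If you prefer the command line over the Dashboard, you can also keep an eye on the deployment with, for example:

kubectl get pods -o wide --watch
kubectl logs -f <pod-name>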

I hope this article has been helpful to you.

Reference Links

1) Hyperledger Fabric

2) hyperledger-fabric-kubernetes

3) PIVT

