Fabric on Google Cloud Platform

Daz Wilkin
Google Cloud - Community
7 min read · Jun 4, 2018

With thanks to the IBM engineers who wrote a Helm Chart to deploy Fabric as part of the IBM Blockchain Platform. It was relatively straightforward to port this to Kubernetes Engine, although at present this is just a working Fabric network; I still need to get the tooling (Composer) configured.

Kubernetes Engine

Create yourself a Kubernetes Engine cluster. I'm trying to pioneer and so am using a Regional Cluster, and I've enabled the new Stackdriver Kubernetes Monitoring functionality.

Update: So, I’m not so pioneering, Regional Clusters are now GA :-) Yay!

This all appears to work just fine, something like:

PROJECT=[[YOUR-PROJECT]]
REGION=[[YOUR-REGION]]
CLUSTER=[[YOUR-CLUSTER]]
BILLING=[[YOUR-BILLING]]
LATEST=1.10.2-gke.3
WORKDIR=[[YOUR-WORKDIR]]
mkdir -p /tmp/${WORKDIR} && cd /tmp/${WORKDIR}
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable container.googleapis.com \
--project=$PROJECT
gcloud beta container clusters create $CLUSTER \
--username="" \
--cluster-version=${LATEST} \
--machine-type=custom-1-4096 \
--image-type=COS \
--num-nodes=1 \
--enable-autorepair \
--enable-autoscaling \
--enable-autoupgrade \
--enable-stackdriver-kubernetes \
--min-nodes=1 \
--max-nodes=2 \
--region=${REGION} \
--project=${PROJECT} \
--preemptible \
--scopes="https://www.googleapis.com/auth/cloud-platform"
gcloud beta container clusters get-credentials $CLUSTER \
--project=${PROJECT} \
--region=${REGION}
kubectl create clusterrolebinding $(whoami)-cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value account)
kubectl create clusterrolebinding kube-dashboard-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:kubernetes-dashboard

You should be able to:

kubectl cluster-info
kubectl get nodes

The Helm Chart requires a Kubernetes PersistentVolume that's ReadWriteMany. This is not currently as easy as it ought to be on Google Cloud Platform … cough… Google… cough…

So, we’re going to use NFS to provide the read-many capability. Ironically, the NFS solution we’re going to use is itself backed by Google Persistent Disk. Let’s create the NFS server because the Helm Chart depends upon it.

NFS

With thanks to mappedinn, I used their repo kubernetes-nfs-volume-on-gke to get this set up. This uses Google's volume-nfs image and works great.

Create the underlying Persistent Disk:

ZONE=${REGION}-c
gcloud compute disks create nfs-disk \
--project=${PROJECT} \
--zone=${ZONE} \
--type=pd-standard \
--size=10GB

Then apply the following Deployment and Service to your Kubernetes cluster. The Deployment creates the NFS service using Google's hosted volume-nfs image and binds the service to the Persistent Disk:
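As a rough sketch of what such a Deployment and Service can look like (the image tag, ports, labels and the /exports mount follow Google's standard volume-nfs example and are my assumptions rather than the original gists; the Service name nfs and the disk name nfs-disk come from this post, and the line number in the note below refers to the original script, not this sketch):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - name: nfs-disk
          mountPath: /exports
      volumes:
      - name: nfs-disk
        # The Persistent Disk created with gcloud above
        gcePersistentDisk:
          pdName: nfs-disk
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: nfs
spec:
  selector:
    role: nfs-server
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111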

NB If you'd prefer to use SSD Persistent Disk instead of Standard (HDD) Persistent Disk, replace "default" with "ssd" on line #7 of the Deployment script and, before applying the Deployment, apply a StorageClass like the one sketched below to your cluster to register SSD as a storage class:

Kubernetes Engine: Storage (‘ssd’ and ‘standard’)
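A minimal sketch of such a StorageClass, assuming the standard GCE Persistent Disk provisioner (only the name "ssd" comes from the note above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd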

You should:

kubectl apply --filename=nfs-deployment.yaml
kubectl apply --filename=nfs-service.yaml

This will yield an NFS service that's accessible through the Kubernetes Service's DNS name: nfs.default.svc.cluster.local.

OK. You should not create a PersistentVolume or PersistentVolumeClaim for NFS because these will be created using the Helm Chart. Here’s a sneak-peek of the PersistentVolumeClaims *after* the Helm Chart has been deployed. You’ll do that in the next step:

Kubernetes Engine: Storage

Helm

Download and unzip the latest Helm binary (releases), add it to your $PATH, and install Helm's Tiller into the Kubernetes cluster.
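For example, on Linux (the specific version here is illustrative; check the releases page for the current Helm v2 archive):

curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -xzf helm-v2.9.1-linux-amd64.tar.gz

Assuming we're in ${WORKDIR} and that you unzipped Helm into ${WORKDIR}/linux-amd64: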

PATH=$PATH:$PWD/linux-amd64
helm version

NB Helm has binaries for OSX and Windows; I'll leave it to you to work out the specific instructions for non-Linux.

If — as is likely — you’re using an RBAC-based Kubernetes (Engine) cluster, I recommend the following steps to install Helm’s Tiller to the cluster:

kubectl create serviceaccount tiller \
--namespace=kube-system
kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller

This should return:

$HELM_HOME has been configured at /usr/local/google/home/dazwilkin/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Then, clone my GitHub repo (with all credit to the folks at IBM for doing 98% of this work for us):

git clone https://github.com/DazWilkin/ibm-blockchain-network.git

But do not change into the ibm-blockchain-network directory created by the clone; remain in the parent (${WORKDIR}).

Optional: It's good practice to run the Helm linter over the Chart before deploying:

helm lint ibm-blockchain-network

Optional: It's good practice to 'dry run' the deployment before applying it to your cluster. This is a useful feature of Helm and provides you with a way to see the Kubernetes specs that would be applied to your cluster:

helm install --dry-run --debug ibm-blockchain-network

When you’re confident in the results:

helm install ibm-blockchain-network

Helm will apply the Chart to your Kubernetes cluster and provide you with an enumeration of its work:
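If you'd like to check from the command line as well (optional; the Cloud Console views below show the same resources):

kubectl get deployments,services,pods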

You may check the Cloud Console for Kubernetes Engine Workloads:

https://console.cloud.google.com/kubernetes/workload

Kubernetes Engine: Workloads

And Services:

Kubernetes Engine: Services

There are a bunch of logs created by the various containers that are deployed. Here are the logs of the bootstrap container created by the ibm-blockchain-network-utils Pod:
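You can also pull these logs from the command line. A sketch, assuming the utils Pod carries a name=ibm-blockchain-network-utils label like the debug Pod shown later (adjust the selector if the Chart labels it differently):

kubectl logs \
$(\
kubectl get pods \
--selector=name=ibm-blockchain-network-utils \
--output=jsonpath="{.items[0].metadata.name}"\
) \
--container=bootstrap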

Cloud Logging

For configtxgen:

Cloud Logging

For cryptogen:

Cloud Logging

For ca:

Cloud Logging

For orderer:

Cloud Logging

And org1peer1 which is similar to org2peer1:

Cloud Logging

Helm List|Delete

You may list and delete deployed Charts with:
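For reference, the commands are simply (the release name solitary-possum is the example from the note below; yours will differ):

helm list
helm delete solitary-possum

Add --purge to helm delete if you also want to remove the release from Helm's record of it entirely.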

NB: There's not currently an easy way to grab the dynamically generated release names (such as solitary-possum) in this example. So, to delete a Chart, you'll need to run the list first in order to identify its release name.

Delete NFS Service

Don’t forget to delete the NFS Deployment when you’re done with it too:

kubectl delete --filename=nfs-deployment.yaml
kubectl delete --filename=nfs-service.yaml
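If you created the nfs-disk Persistent Disk and no longer need it, you can delete that too (my addition; make sure nothing else is using the disk):

gcloud compute disks delete nfs-disk \
--zone=${ZONE} \
--project=${PROJECT}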

Delete Kubernetes

You can whack your cluster:

gcloud beta container clusters delete ${CLUSTER} \
--region=${REGION} \
--project=${PROJECT}

Stackdriver Kubernetes Monitoring

Since we covered logging, I should also mention that I deployed the cluster with support enabled for the new Stackdriver Kubernetes Monitoring.

Container-Optimized OS (COS)

Apart from needing to create the NFS service, the only other change needed for the Chart to deploy to Kubernetes Engine was a tweak to the Composer configuration. The ibm-blockchain-network-utils Workload creates three containers, including bootstrap. bootstrap has a volume called composer-credentials that was mounted onto the Node's (host's) root (/) directory.

Container-Optimized OS is built to be secure and includes only a minimal amount of tooling. You can see here that the root (/) directory is mounted as read-only "to maintain integrity". For this reason, the Chart was revised to use /tmp/composer instead of /composer. You can see this change in blockchain-utils.yaml lines 20–22:

- name: composer-credentials
  hostPath:
    path: /tmp/composer

and also in blockchain-debug-nfs.yaml. Wait what?

Debugging

I provided some examples recently of ways you may debug Kubernetes Deployments. In this case, I wanted to ensure that the NFS service was working correctly. If it were working correctly, containers would be able to access its volume mounts.

To confirm this, I added a template to the Helm Chart. This template is called blockchain-debug-nfs.yaml. There are two advantages to including this in the Chart. The first is that it gets to use Helm's variable replacement. The second is that it keeps everything together and ensures the debugging Deployment is created|deleted with the Chart.

blockchain-debug-nfs.yaml:

The NFS volume is referenced by the PersistentVolumeClaim defined in lines 34–36. You'll note that there's a second (and non-NFS) volume called composer-credentials that I added while debugging why this was failing when deployed to Kubernetes Engine (because, as explained previously, COS does not permit writes under "/").
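Purely to illustrate the shape of those two volumes and their mounts in the debug container (the claim name shared-pvc is an assumption, not necessarily the Chart's actual name; the mount paths match the exec example below and the hostPath matches the change shown earlier):

volumeMounts:
- name: shared
  mountPath: /shared
- name: composer-credentials
  mountPath: /home/composer
volumes:
- name: shared
  persistentVolumeClaim:
    claimName: shared-pvc
- name: composer-credentials
  hostPath:
    path: /tmp/composer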

What’s neat is that, once deployed, we can grab the resulting Pod’s (!) name and exec into its (Alpine) shell and enumerate the contents of the directories:

kubectl exec \
--stdin \
--tty \
$(\
kubectl get pods \
--selector=name=ibm-blockchain-network-debug-nfs \
--output=jsonpath="{.items[0].metadata.name}"\
) \
--container=debug \
-- /bin/ash -c "ls -la /shared && ls -la /home/composer"

This returns:

Conclusion

Helm is a good tool and it’s easier to use than I’d expected.

I’m working on a Helm Chart for Trillian too.

IBM’s Helm Chart is designed to deploy Fabric to IBM’s Kubernetes service but, because Kubernetes is Kubernetes is Kubernetes, as you can see, it’s trivial to convert this Chart to work on Kubernetes Engine too.

I'm hoping to learn more from the IBM folks about how to integrate Hyperledger Composer into this deployment and will update this post then.
