Helm with YugabyteDB: GKE (Google Kubernetes Engine)

Harsh Verma
12 min read · Apr 2, 2019


This blog provides a quick overview of Helm, which helps you manage Kubernetes applications, along with a getting-started guide covering detailed installation steps and a practical hands-on walkthrough using YugabyteDB.

Before going into the installation details, skim the links below for an overview of YugabyteDB, Google Kubernetes Engine, and Helm.

YugabyteDB : https://www.yugabyte.com/ , https://github.com/YugaByte/yugabyte-db

GKE : https://cloud.google.com/kubernetes-engine/

Helm : https://helm.sh/ , https://github.com/helm/helm

What is Helm?: In brief, Helm is a package manager for Kubernetes. Helm helps you manage Kubernetes applications: Helm Charts let you define, install, and upgrade even the most complex Kubernetes application.

Some common terminology when using Helm:

Tiller is a server that runs inside your Kubernetes cluster whenever you install Helm (version 2, as used in this post). Tiller manages the installations of your Helm Charts. Since Tiller installs containers into your Kubernetes cluster on your behalf, securing this process should be a high priority for you.

Once Helm is installed in your Kubernetes cluster and Tiller is running, you can add substantial functionality with a single command.
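For example, with Helm 2's default "stable" chart repository (which helm init configures automatically), a complete MySQL deployment is one line; the release name my-mysql is just an illustrative choice:

# Install the stable MySQL chart as a release named "my-mysql"
helm install stable/mysql --name my-mysql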

What is YugabyteDB?: In their own words: "YugaByte DB is an open source, cloud native, distributed SQL database. Powered by a high-performance, globally-distributed document store that is built ground-up with inspiration from Google Spanner, YugaByte DB aims to make applications agile like never before."

What is GKE?: As per the official site: "Google Kubernetes Engine provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The environment GKE provides consists of multiple machines (specifically, Google Compute Engine instances) grouped together to form a cluster."

Helm Manager
  • Before starting the Helm installation, create a service account in Google Cloud Platform and grant it the "Editor" role for the project.

Let's dig into the installation and details.

Easy Drive to Helm

To install and initialize Helm on GKE (Google Kubernetes Engine), follow the steps below:

  1. Ensure that you have enabled the Google Kubernetes Engine API (the ENABLE GOOGLE KUBERNETES ENGINE API page in the GCP console).
  2. Open that page and tap the ENABLE button to activate the API.
  3. Set up the cluster in GCP with the required zones and configuration.
  4. Open the GKE cluster list in the GCP console: https://console.cloud.google.com/kubernetes/list

4.1) This opens the GKE cluster list window in the GCP console.

4.2) Click the Create Cluster button to open the cluster configuration window.

4.3) Choose Standard cluster, keep the default configuration, allow HTTP traffic to the cluster, and click the Create button.

  • This will take approximately 2-5 minutes to initialize the Kubernetes cluster, after which the running cluster state appears in GCP automatically.
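If you prefer the command line, a roughly equivalent cluster can be created with gcloud; the cluster name, zone, node count, and machine type below mirror the defaults seen in this walkthrough and are placeholders you can change:

# Create a 3-node GKE cluster similar to the console defaults
# (name, zone, and machine type are illustrative choices)
gcloud container clusters create standard-cluster-1 \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-1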

4.4) To open a shell (terminal) for the cluster, click the Connect button; you will be directed to the cluster's Cloud Shell SSH terminal.

4.5) Check that the cluster is running using the command:

gcloud container clusters list

This produces output similar to:

harshverma59@cloudshell:~ (ringed-robot-229102)$ gcloud container clusters list
NAME                LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
standard-cluster-1  us-central1-a  1.11.7-gke.4    35.193.74.122  n1-standard-1  1.11.7-gke.4  3          RUNNING

Check the Kubernetes cluster info using the command:

kubectl cluster-info

Output:

Kubernetes master is running at https://35.225.152.43
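If kubectl is not yet pointing at the new cluster (for example, when working from a machine other than the Cloud Shell opened via Connect), fetch the credentials manually first; the cluster name and zone are the ones created above:

# Configure kubectl credentials for the cluster created earlier
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a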

Helm Installation Guide:

Now install Helm version 2.13.0 on the same cluster machine:

1. wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.0-linux-amd64.tar.gz

This downloads the Helm tarball onto the machine.

2. Check the download using the command: ls

harshverma59@cloudshell:~ (ringed-robot-229102)$ ls

Output:

helm-v2.13.0-linux-amd64.tar.gz README-cloudshell.txt

3. Now un-tar the archive and move the helm binary into your PATH:

tar -zxvf helm-v2.13.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

4) Helm is now set up; run this command to verify:

helm help

5) Now, let's create a service account for Tiller in the created cluster:

kubectl create serviceaccount --namespace kube-system tiller

Output:

harshverma59@cloudshell:~ (ringed-robot-229102)$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created

Now, let's create a cluster role binding associated with the tiller service account, and patch the Tiller deployment to use it:

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

Output:

harshverma59@cloudshell:~ (ringed-robot-229102)$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

Once the Tiller deployment exists (it is created by helm init in the next step), you can point it at this service account:

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

6) Initialize both the Helm client and the Tiller server with the created service account:

helm init --service-account tiller --upgrade

7) You can check the version of both client and server using the command:

helm version

Output:

harshverma59@cloudshell:~ (ringed-robot-229102)$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
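To double-check that Tiller itself is healthy, you can list its pod in the kube-system namespace; the label selector below matches the labels Helm 2 applies to the Tiller deployment:

# Tiller runs as a deployment labeled app=helm,name=tiller in Helm 2
kubectl get pods --namespace kube-system -l app=helm,name=tiller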

YugaByte DB Setup with Helm:

  1. Now that the Helm client and server are initialized, clone the YugaByte DB repository:

To create the YugaByte cluster, you first clone yugabyte-db and then create a YugaByte service account in your Kubernetes cluster.

Use this command to clone:

git clone https://github.com/YugaByte/yugabyte-db.git

Output:

harshverma59@cloudshell:~ (ringed-robot-229102)$ git clone https://github.com/YugaByte/yugabyte-db.git
Cloning into 'yugabyte-db'...
remote: Enumerating objects: 745, done.
remote: Counting objects: 100% (745/745), done.
remote: Compressing objects: 100% (418/418), done.
remote: Total 71070 (delta 285), reused 505 (delta 268), pack-reused 70325
Receiving objects: 100% (71070/71070), 56.24 MiB | 18.35 MiB/s, done.
Resolving deltas: 100% (50222/50222), done.
Checking out files: 100% (8316/8316), done.

2. Go to the YugaByte DB helm directory:

cd ./yugabyte-db/cloud/kubernetes/helm/

3. Create the YugaByte RBAC objects (a service account and a cluster role binding) from the provided yaml file:

kubectl create -f yugabyte-rbac.yaml

Sample Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ kubectl create -f yugabyte-rbac.yaml
serviceaccount/yugabyte-helm created
clusterrolebinding.rbac.authorization.k8s.io/yugabyte-helm created
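Judging from that output, yugabyte-rbac.yaml creates a service account plus a cluster-admin role binding. A minimal sketch of equivalent objects (the exact file in the repo may differ in details such as namespace) would look like this:

# Sketch of the RBAC objects yugabyte-rbac.yaml appears to create
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: yugabyte-helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: yugabyte-helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: yugabyte-helm
  namespace: kube-system
EOF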

4. The next step is to initialize Helm:

Initialize helm with the service account, and use the --upgrade flag to ensure that you can upgrade any previous initializations you may have made.

Command:

helm init --service-account yugabyte-helm --upgrade --wait

Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ helm init --service-account yugabyte-helm --upgrade --wait
$HELM_HOME has been configured at /home/harshverma59/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!

After the successful Helm setup, let's install YugaByte DB:

Install YugaByte DB in the Kubernetes cluster using one of the commands below (the second variant lowers the resource requests for small clusters):

1) helm install yugabyte --namespace yb-demo --name yb-demo --wait

2) helm install yugabyte --set resource.master.requests.cpu=0.1,resource.master.requests.memory=0.2Gi,resource.tserver.requests.cpu=0.1,resource.tserver.requests.memory=0.2Gi --namespace yb-demo --name yb-demo

Output:

NAME: yb-demo
LAST DEPLOYED: Tue Mar 19 15:38:22 2019
NAMESPACE: yb-demo
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME         READY  STATUS   RESTARTS  AGE
yb-master-0  0/1    Pending  0         0s
==> v1/Service
NAME          TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
yb-master-ui  LoadBalancer  10.11.240.249  <pending>    7000:31547/TCP  0s
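The pods start out Pending while nodes and persistent volumes are being scheduled; you can watch them come up with:

# Watch the yb-demo pods until the masters and tservers report Running
kubectl get pods --namespace yb-demo -w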

Installing YugaByte DB with YSQL:

  1. YugabyteDB provides YSQL, its PostgreSQL-compatible SQL API. Enable it at install time:

helm install yugabyte --wait --namespace yb-demo --name yb-demo --set "enablePostgres=true"

2. If you are running in a resource-constrained environment or a local environment such as minikube, change the default resource requirements using the command below. See the configuration section later in this post for a detailed description of these resource requirements.

helm install yugabyte --set resource.master.requests.cpu=0.1,resource.master.requests.memory=0.2Gi,resource.tserver.requests.cpu=0.1,resource.tserver.requests.memory=0.2Gi --namespace yb-demo --name yb-demo --set "enablePostgres=true"

3. Initialize the YSQL API (after ensuring that the cluster is running; see the cluster status checks later in this post):

kubectl exec -it -n yb-demo yb-tserver-0 bash -- -c "YB_ENABLED_IN_POSTGRES=1 FLAGS_pggate_master_addresses=yb-master-0.yb-masters.yb-demo.svc.cluster.local:7100,yb-master-1.yb-masters.yb-demo.svc.cluster.local:7100,yb-master-2.yb-masters.yb-demo.svc.cluster.local:7100 /home/yugabyte/postgres/bin/initdb -D /tmp/yb_pg_initdb_tmp_data_dir -U postgres"

Sample Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ kubectl exec -it -n yb-demo yb-tserver-0 bash -- -c "YB_ENABLED_IN_POSTGRES=1 FLAGS_pggate_master_addresses=yb-master-0.yb-masters.yb-demo.svc.cluster.local:7100,yb-master-1.yb-masters.yb-demo.svc.cluster.local:7100,yb-master-2.yb-masters.yb-demo.svc.cluster.local:7100 /home/yugabyte/postgres/bin/initdb -D /tmp/yb_pg_initdb_tmp_data_dir -U postgres"
The files belonging to this database system will be owned by user "root".
This user must also own the server process.
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
ok
performing post-bootstrap initialization ... I0319 21:26:40.250283 72 mem_tracker.cc:240] MemTracker: hard memory limit is 3.070368 GB
W0319 21:27:20.557303 78 outbound_call.cc:350] RPC callback for RPC call yb.tserver.TabletServerService.Read -> { remote: 10.8.0.9:7100 idx: 6 protocol: 0x00007fb35d1ee7f8 -> tcp } , state=FINISHED_SUCCESS. blocked reactor thread for 56757us
ok
syncing data to disk ... ok

4) Connect using the psql client as shown below:

kubectl exec -n yb-demo -it yb-tserver-0 /home/yugabyte/postgres/bin/psql -- -U postgres -d postgres -h yb-tserver-0.yb-tservers.yb-demo -p 5433

Sample Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ kubectl exec -n yb-demo -it yb-tserver-0 /home/yugabyte/postgres/bin/psql -- -U postgres -d postgres -h yb-tserver-0.yb-tservers.yb-demo -p 5433
psql (11.2)
Type "help" for help.

postgres=#

5) Now that we are inside the Postgres shell, let's load a sizable data file and create the corresponding schema.

First, download the data file. Since psql's \copy reads from the client's local filesystem, fetch the CSV wherever you will run the \copy command:

wget https://raw.githubusercontent.com/curran/data/gh-pages/vegaExamples/airports.csv

Create a new database:

CREATE DATABASE airportdb;

Connect to the airport DB:

\c airportdb

Output:

You are now connected to database "airportdb" as user "postgres".

Create the corresponding table schema:

CREATE TABLE airport(iata VARCHAR, name VARCHAR, city VARCHAR, state VARCHAR, country VARCHAR, latitude VARCHAR, longitude VARCHAR);

Insert the data from the local CSV:

psql -d airportdb --user=postgres -c "\copy airport FROM 'airports.csv' delimiter ',' csv header"

View the data:

select * from airport limit 5;

airportdb=# select * from airport limit 5;
 iata |         name         |       city       | state | country |  latitude   |  longitude
------+----------------------+------------------+-------+---------+-------------+--------------
 00M  | Thigpen              | Bay Springs      | MS    | USA     | 31.95376472 | -89.23450472
 00R  | Livingston Municipal | Livingston       | TX    | USA     | 30.68586111 | -95.01792778
 00V  | Meadow Lake          | Colorado Springs | CO    | USA     | 38.94574889 | -104.5698933
 01G  | Perry-Warsaw         | Perry            | NY    | USA     | 42.74134667 | -78.05208056
 01J  | Hilliard Airpark     | Hilliard         | FL    | USA     | 30.6880125  | -81.90594389
 01M  | Tishomingo County    | Belmont          | MS    | USA     | 34.49166667 | -88.20111111
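As a quick sanity check that the bulk load landed completely, you can also count the rows from the shell, using the same connection the \copy step used:

# Row count should match the number of data lines in airports.csv
psql -d airportdb --user=postgres -c "select count(*) from airport;"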

Now exit the DB shell, and check cluster status and monitoring using Helm:

Helm monitoring Manager
helm status yb-demo

Output:

LAST DEPLOYED: Fri Oct 5 09:04:46 2018
NAMESPACE: yb-demo
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME          TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                              AGE
yb-tservers   ClusterIP     None            <none>       7100/TCP,9000/TCP,6379/TCP,9042/TCP  7s
yb-masters    ClusterIP     None            <none>       7100/TCP,7000/TCP                    7s
yb-master-ui  LoadBalancer  10.106.132.116  <pending>    7000:30613/TCP                       7s
==> v1/StatefulSet
NAME        DESIRED  CURRENT  AGE
yb-master   3        3        7s
yb-tserver  3        3        7s
==> v1/Pod(related)
NAME          READY  STATUS   RESTARTS  AGE
yb-master-0   0/1    Pending  0         7s
yb-master-1   0/1    Pending  0         7s
yb-master-2   0/1    Pending  0         7s
yb-tserver-0  0/1    Pending  0         7s
yb-tserver-1  0/1    Pending  0         7s
yb-tserver-2  0/1    Pending  0         7s

Check the service pods:

kubectl get pods --namespace yb-demo

Sample Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ kubectl get pods --namespace yb-demo
NAME          READY  STATUS   RESTARTS  AGE
yb-master-0   1/1    Running  0         1h
yb-master-1   1/1    Running  0         1h
yb-master-2   1/1    Running  0         1h
yb-tserver-0  1/1    Running  0         1h
yb-tserver-1  1/1    Running  0         1h
yb-tserver-2  1/1    Running  0         1h

Check the created service states:

kubectl get services --namespace yb-demo

Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ kubectl get services --namespace yb-demo
NAME          TYPE          CLUSTER-IP     EXTERNAL-IP     PORT(S)                                       AGE
yb-master-ui  LoadBalancer  10.11.240.249  35.188.191.223  7000:31547/TCP                                1h
yb-masters    ClusterIP     None           <none>          7100/TCP,7000/TCP                             1h
yb-tservers   ClusterIP     None           <none>          7100/TCP,9000/TCP,6379/TCP,9042/TCP,5433/TCP  1h

You can even check the history of the yb-demo Helm release:

helm history yb-demo

Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ helm history yb-demo
REVISION  UPDATED                   STATUS    CHART            DESCRIPTION
1         Tue Mar 19 15:38:22 2019  DEPLOYED  yugabyte-latest  Install complete

Configure Cluster using helm:

Configuration Manager Helm

CPU, Memory & Replica Count

The default values for the Helm chart live in the helm/yugabyte/values.yaml file; the most important ones are the CPU, memory, and replica settings named above. By default, the chart assumes a 3-node Kubernetes cluster with 4 CPU cores and 15 GB RAM per node.
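You can dump the chart's full set of defaults straight from the chart directory with a standard Helm 2 subcommand, which is handy before deciding what to override:

# Print the default values.yaml of the local yugabyte chart
helm inspect values ./yugabyte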

helm upgrade --set resource.tserver.requests.cpu=8,resource.tserver.requests.memory=15Gi yb-demo ./yugabyte

Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ helm upgrade --set resource.tserver.requests.cpu=8,resource.tserver.requests.memory=15Gi yb-demo ./yugabyte

Connect to one of the tablet servers:

kubectl exec --namespace yb-demo -it yb-tserver-0 bash

Run the CQL shell from inside a tablet server:

kubectl exec --namespace yb-demo -it yb-tserver-0 bin/cqlsh yb-tserver-0

Clean up the YugaByte pods:

helm delete yb-demo --purge

NOTE: You need to manually delete the persistent volume claims:

kubectl delete pvc --namespace yb-demo -l app=yb-master
kubectl delete pvc --namespace yb-demo -l app=yb-tserver

Upgrade Cluster using helm:

helm upgrade yb-demo yugabyte --set Image.tag=1.1.0.3-b6 --wait
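Helm keeps a revision history for the release (see helm history above), so if an upgrade misbehaves you can roll back; revision 1 here is just an example target:

# Roll the yb-demo release back to revision 1
helm rollback yb-demo 1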

Delete Cluster using helm:

helm del --purge yb-demo

Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ helm del --purge yb-demo
release "yb-demo" deleted

Then delete the persistent volume claims:

kubectl delete pvc --namespace yb-demo --all

Output:

harshverma59@cloudshell:~/yugabyte-db/cloud/kubernetes/helm (ringed-robot-229102)$ kubectl delete pvc --namespace yb-demo --all
persistentvolumeclaim "datadir0-yb-master-0" deleted
persistentvolumeclaim "datadir0-yb-master-1" deleted
persistentvolumeclaim "datadir0-yb-tserver-0" deleted
persistentvolumeclaim "datadir0-yb-tserver-1" deleted
persistentvolumeclaim "datadir0-yb-tserver-2" deleted

More reading on setting up YugabyteDB as a 3-node cluster in Docker:

An Easy Approach to YugabyteDB in Docker Mode

First, install Docker:

https://docs.docker.com/install/linux/docker-ce/debian/
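After installing, it is worth verifying that the Docker daemon is reachable before moving on:

# Prints client and server versions if the daemon is up
docker version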

yb-docker-ctl:

What is yb-docker-ctl?

yb-docker-ctl is a simple command line interface for administering local Docker clusters. It manages the yb-master and yb-tserver containers to perform the necessary administration.

$ mkdir ~/yugabyte && cd ~/yugabyte
$ wget https://downloads.yugabyte.com/yb-docker-ctl && chmod +x yb-docker-ctl
$ ./yb-docker-ctl -h

Create a 3-node local cluster with replication factor 3.

YugabyteDB with Docker

Each of these initial nodes runs a yb-tserver process and a yb-master process. Note that the number of yb-masters must equal the replication factor for the cluster to be considered healthy, while the number of yb-tservers equals the number of nodes.

$ ./yb-docker-ctl create --rf 3

Check the status of the nodes:

$ ./yb-docker-ctl status
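Since these are ordinary Docker containers, you can also inspect them with Docker itself; the grep pattern assumes the yb- prefix that yb-docker-ctl uses when naming its containers:

# List the YugaByte containers and their states
docker ps --format "{{.Names}}\t{{.Status}}" | grep yb-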

Add one more node:

$ ./yb-docker-ctl add_node

Remove a node:

$ ./yb-docker-ctl remove_node 3

Destroy cluster:

$ ./yb-docker-ctl destroy

Monitoring:

$ ./yb-docker-ctl status

See the links at the top of this post to learn more about Helm use cases, YugabyteDB, and GKE.

