Kubernetes Webinar ~ Deploying StatefulSets

Rough Notes

In this video, we create a scaled WordPress installation that has a MySQL cluster in the DB layer.


The MySQL database used is the one provided by Percona, which supports etcd-based node discovery. Each node of the cluster is identified by a number, or ordinal. Each MySQL pod goes to a specific node and another etcd pod runs on the same node, so this is node affinity.

Q) Though, I am not sure why we did not put the two containers inside the same Pod?

We also use an NFS file system to persist the WordPress static files. A Horizontal Pod Autoscaler (HPA) is used to autoscale the WordPress frontend.
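From memory, the HPA for the frontend would look something like this (the deployment name, API version, and thresholds are my assumptions, not taken from the video):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:
    apiVersion: apps/v1          # points the HPA at the frontend deployment
    kind: Deployment
    name: wordpress              # assumed deployment name
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70   # scale out when average CPU passes 70%

Once applied, the HPA adjusts the replica count of the deployment based on observed CPU usage.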

Q) Do you remember any specific details of the YAML files?

Very little. I will have to revisit it when I am creating a similar installation.

I remember that we initially started by creating a PersistentVolume (PV) and a PersistentVolumeClaim (PVC). These had similar naming conventions. I am sure that we later mount them into the MySQL pods, but I don’t exactly remember how. (hmm… not very photographic memory).
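If I had to reconstruct it, the claim gets referenced from the MySQL pod spec through volumes and volumeMounts, roughly like this (image tag, names, and paths are my guesses, not the webinar’s exact YAML):

# Fragment of a MySQL pod spec, not a full manifest
spec:
  containers:
  - name: mysql
    image: percona/percona-xtradb-cluster:5.6   # assumed image tag
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql        # MySQL data directory
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-pvc-0           # the claim created earlier (ordinal 0)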

Q) Do you create a separate PV and PVC for each of the MySQL nodes’ storage?

Yes. Each has a separate name (actually an ordinal number). So we create a separate YAML file for each in the example. Most of the other code is repeated.
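Each of those per-ordinal files would contain a PV/PVC pair along these lines (the NFS server, export path, and sizes are placeholders I made up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-0                  # ordinal in the name, one file per MySQL node
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.10               # placeholder NFS server address
    path: /exports/mysql-0          # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # binds to a PV with matching size and access mode

Only the PV is tied to a concrete NFS export; the claim just asks for matching capacity and access mode.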

Q) What is PVC vs PV?

Creating PVs is something that admins do, and PVCs are something that end users create. An admin can create a bunch of PersistentVolumes up front, and later these can be claimed by different users (via PVCs) to allocate space for their disks.

We create a 3-node cluster for MySQL. Kubernetes runs on the master. We can SSH to the Kubernetes master and look at /etc/fstab.

This is the file that lists the volume mounts. I am not sure what we will see here.

We can go to the dashboard to see all three nodes. We can verify that the PVs and PVCs are created. We can verify which volumes are mounted into which pods.

— Min 22

NFS is mounted across the cluster on all nodes (node-1, node-2, node-3)

The master has volume mounts for each of the PVs we created. Why is that? These will be used by the MySQL nodes, so they have to be mounted on each node that runs MySQL.

Next, we create the etcd discovery service. It is essential for Percona to discover the cluster nodes.

We create 3 pods. etcd-0 bootstraps the etcd cluster.

In a development environment, we can expose etcd to the world with a NodePort.
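A minimal sketch of such a NodePort service, assuming the etcd pods carry an app: etcd label (the label, port numbers, and service name are my assumptions):

apiVersion: v1
kind: Service
metadata:
  name: etcd-client
spec:
  type: NodePort                    # exposes etcd outside the cluster, development only
  selector:
    app: etcd                       # assumed pod label
  ports:
  - port: 2379                      # etcd client port
    targetPort: 2379
    nodePort: 30079                 # placeholder port in the NodePort range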

nodeSelector:
  name: node-1

The first pod will be on node-1.
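A rough sketch of how etcd-0 could be pinned to node-1 with that selector; the image, labels, and ports are assumptions, and the real manifest also passes etcd bootstrap flags that I am omitting:

apiVersion: v1
kind: Pod
metadata:
  name: etcd-0
  labels:
    app: etcd
spec:
  nodeSelector:
    name: node-1                    # matches a "name" label on the node, as in the snippet above
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.0.17   # assumed image and version
    ports:
    - containerPort: 2379           # client traffic
    - containerPort: 2380           # peer traffic

For this to work, node-1 has to carry the label name=node-1; nodeSelector matches node labels, not node names.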

Total 3 pods and 5 services.

One is external, for debugging purposes.

kubectl get svc

→ etcd, etcd-0…2, etcd-client

How are pods assigned to nodes? Something to do with nodeSelector…

— Min 32

We have created 3 PVs + 3 PVCs, we have created the Percona MySQL nodes and checked that everything was mounted, and we have created 5 etcd nodes (3 with node affinity to MySQL, 1 master, and 1 client, for debugging with NodePort).

Does this single master become a single point of failure?

— Min 33

It looks like the headless service has no cluster IP and only an external IP. It is a service that exposes MySQL to other clusters. WordPress is installed in a different cluster. The internal cluster IPs are said to be virtual IPs.
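For reference, a headless service is just a Service with clusterIP set to None, something like this (the name and selector are my guesses):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None                   # headless: DNS resolves to the pod IPs directly
  selector:
    app: mysql                      # assumed pod label
  ports:
  - port: 3306                      # MySQL port

Because clusterIP is None, DNS lookups for the service return the pod IPs instead of a single virtual IP.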

When we look at the pods, etcd is not in the same pod as MySQL… strange… Pods can go to the same or different nodes. Why not put etcd and MySQL in the same pod?

— Min 45

Finally, we scale the WordPress frontend.
