Multi-tenant WordPress applications on a single Google Kubernetes Engine cluster using Google Cloud Platform services.

Anil Saravade
Oct 14 · 8 min read
Credits: digitalplanet /www.dpcinc.com

Recently, I was working on a project to migrate the entire WordPress application infrastructure for one of our clients to Google Cloud Platform. One of their requirements was to containerize the applications and deploy them in a Google Kubernetes Engine cluster, with multiple WordPress applications served through a single Load Balancer. This article will walk through the process of deploying WordPress, an open-source content management system, on Google Kubernetes Engine. Here’s the architecture of their application.

Fig 1.Overall architectural diagram

Why microservices?

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms. Microservices have some distinct advantages over a monolithic architecture, such as better organization, looser coupling, and independently tunable performance.

Why Kubernetes?

Kubernetes is software that allows us to deploy, manage, and scale applications. Applications are packaged in containers, and Kubernetes groups them into logical units. It lets an application span thousands of servers while appearing as a single unit, and it brings benefits such as reduced resource costs, portability, modularity, and scalability.

Why Cloud Filestore?

Cloud Filestore is a managed file storage service for applications that require a file system interface and a shared filesystem for data. Filestore gives users a simple, native experience for standing up managed Network Attached Storage (NAS) with their Google Compute Engine and Kubernetes Engine instances. The ability to fine-tune Filestore’s performance and capacity independently leads to predictably fast performance for your file-based workloads. For this project, Filestore is used as a common NFS mount point, allowing multiple GKE deployments to share a single NFS file system via the nfs-client provisioner.

Why Cloud SQL?

Cloud SQL is a fully managed database service that makes it easy to set up, maintain, manage, and administer your relational databases on Google Cloud Platform. For this project, Cloud SQL is used to create a common MySQL database management system for the WordPress applications.

Why Cloud Memorystore?

Cloud Memorystore for Redis provides a fully managed in-memory data store service built on scalable, secure and highly available infrastructure managed by Google. For this project, Cloud Memorystore is used on GKE for WordPress application caching and session handling.

Why Google Cloud Storage?

Google Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. You can use Cloud Storage for a range of scenarios including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download. For this project, Cloud Storage buckets are used to store and serve the WordPress media uploads.

Things to set up for the WordPress application.

  • Create a Google Kubernetes Engine cluster.
  1. Select the preferred region for your deployment.
  2. Set the access scope to allow full access to the required Cloud APIs.
  3. Enable VPC-native networking, in the same VPC where Cloud SQL, Cloud Memorystore, and Cloud Filestore are present.
  • Create a Google Cloud SQL instance.
  1. Select the preferred region. (Recommended to keep in the same region as GKE.)
  2. Set Connectivity to Private IP and, in the associated network section, select the VPC in which your GKE cluster is deployed to create VPC peering so that Cloud SQL can communicate with GKE.
  3. Create a user for accessing the database. For this project, let’s create a database user named user-1 with password pass-1.
  • Create a Google Cloud Filestore instance.
  1. Select the preferred zone. (Recommended to keep in the same region as GKE.)
  2. In the authorized network section, select the VPC in which your GKE cluster is deployed to create VPC peering so that Filestore can communicate with GKE.
  • Create a Google Cloud Storage bucket.
  1. Create the bucket in a preferred zone. (Recommended to keep in the same region as GKE.)
  2. Set the access control model to bucket-level permissions.
  3. In the IAM section, create a new service account and attach the Storage Admin role to it.
  4. Make the bucket public.

a) In the list of buckets, click on the name of the bucket that you want to make public.

b) Select the Permissions tab near the top of the page.

c) Click the Add members button.

d) In the New members field, enter “allUsers”.

e) In the Roles drop down, select the Storage sub-menu, and click the Storage Object Viewer option.

f) Click Save.

WordPress deployment on Google Kubernetes Engine.

Note: All the .yaml files are present at the following repository.

https://github.com/swapsstyle/my-wordpress.git
Fig 2. NFS filesystem on Google Kubernetes Engine using Cloud Filestore.
  • We can use Cloud Filestore as a common NFS filesystem using two approaches.

A) Approach I: Create nfs-client storage class using NFS client provisioner.

1] Connect to the GKE cluster and install Helm using the link provided below.

https://docs.google.com/document/d/19QhsqxHbYneu_yFu24HalstyiwA8g1RgTECvpJSZI4o/edit?usp=sharing

2] Inside the GKE cluster, create the NFS client provisioner, which provides a storage class for using Filestore as an NFS file system, using the command below.

helm install stable/nfs-client-provisioner --name nfs-cp --set nfs.server=<fileshare_ip> --set nfs.path=<fileshare_path>

To check the storage class use the following command:

kubectl get storageclass

3] Create a Persistent Volume Claim.

We can create a Persistent Volume Claim using the nfs-client storage class which was created before.
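Such a claim might look like the following sketch; the claim name and storage size here are illustrative, and the actual manifests are in the repository linked above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc        # illustrative name
spec:
  storageClassName: nfs-client   # storage class created by the nfs-client provisioner
  accessModes:
    - ReadWriteMany              # shared read-write across multiple pods
  resources:
    requests:
      storage: 10Gi
```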

To check the Persistent Volume Claim use the following command:

kubectl get pvc <pvc-name>

OR

B) Approach II: Create a Persistent Volume and Persistent Volume Claim.

1] Create a Persistent Volume using the Cloud Filestore IP address and Filestore path using the nfs block.
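A minimal sketch of such a Persistent Volume, assuming an illustrative name and label (substitute your Filestore instance's IP and file share path):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv         # illustrative name
  labels:
    app: wordpress           # label used to match the claim in the next step
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:                       # mount Cloud Filestore directly as an NFS volume
    server: <fileshare_ip>
    path: <fileshare_path>
```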

To check the Persistent Volume use the following command:

kubectl get pv <pv-name>

2] Create a Persistent Volume Claim by using the Persistent Volume which was created before and match the label.
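A matching claim could be sketched as follows (names and sizes are illustrative; the empty storageClassName prevents dynamic provisioning so the claim binds to the pre-created volume):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  storageClassName: ""       # disable dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: wordpress         # must match the label on the Persistent Volume
```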

To check the Persistent Volume Claim use the following command:

kubectl get pvc <pvc-name>
  • Create a secret.

Secrets are used to pass the Cloud SQL database credentials to the WordPress deployment.

Note: the credentials need to be base64-encoded before being passed in the yaml file. The commands below convert plain text to base64 and back (the -n flag prevents the trailing newline from being encoded):

echo -n <plain-text> | base64
echo <base64-text> | base64 --decode
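For reference, the resulting secret manifest might look like this sketch; the secret name and key names are illustrative, and the encoded values correspond to the user-1/pass-1 credentials created earlier:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-db-secret   # illustrative name
type: Opaque
data:
  db-host: <base64 of the Cloud SQL private IP>
  db-user: dXNlci0x           # base64 of "user-1"
  db-password: cGFzcy0x       # base64 of "pass-1"
```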

To check the secret use the following command:

kubectl get secrets <secret-name>
  • Create a Deployment.

We can define the Persistent Volume Claim using the volumes block in spec, and mount it in the container using the volumeMounts block in spec.containers.

To connect with Cloud SQL, secrets are used to pass the DB Hostname, DB Username, and DB Password securely in the env block.
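A minimal sketch of such a deployment, using the official wordpress image; the secret and claim names are illustrative, and the actual manifests are in the repository linked above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:5.2-apache
          env:                               # DB credentials pulled from the secret
            - name: WORDPRESS_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: wordpress-db-secret
                  key: db-host
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: wordpress-db-secret
                  key: db-user
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: wordpress-db-secret
                  key: db-password
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html       # WordPress document root on shared NFS
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress-pvc
```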

To check the deployment use the following command:

kubectl get deployments
  • Create a service.

A Service of type NodePort is created to expose the WordPress deployment and make it accessible.
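A sketch of such a service, with an illustrative name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc   # illustrative name
spec:
  type: NodePort        # exposes the deployment on each node for the ingress to reach
  selector:
    app: wordpress      # matches the pod labels of the WordPress deployment
  ports:
    - port: 80
      targetPort: 80
```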

To check the service use the following command:

kubectl get svc
  • We can create an Ingress using two approaches.

A) Approach I: Ingress controller using Google Kubernetes Engine.

Fig 3. Kubernetes Ingress Controller.

1] Create a TLS certificate for the HTTPS site.

Note: Now head over to your DNS provider and point <preferred-domain-name> at the Ingress Load Balancer IP address. Allow enough time for DNS propagation before proceeding to the next step. Provisioning the managed SSL certificate takes 15–20 minutes.
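The certificate is declared with GKE's ManagedCertificate resource; a sketch with an illustrative name (the apiVersion reflects the CRD current at the time of writing):

```yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: wordpress-cert        # illustrative name
spec:
  domains:
    - <preferred-domain-name>  # domain pointed at the load balancer IP
```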

To check the managed certificate use the following command:

kubectl get ManagedCertificate <certificate-name>

2] Create a Kubernetes Ingress using the HTTPS Load Balancer.
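A sketch of such an Ingress, assuming illustrative resource names; the annotation attaches the managed certificate to the GKE HTTPS Load Balancer:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    networking.gke.io/managed-certificates: wordpress-cert  # the ManagedCertificate name
spec:
  rules:
    - host: <preferred-domain-name>
      http:
        paths:
          - backend:
              serviceName: wordpress-svc   # the NodePort service created earlier
              servicePort: 80
```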

OR

B) Approach II: Nginx ingress controller using cert-manager and nginx-ingress on GKE to obtain SSL certificates from Let's Encrypt.

Fig 4. Nginx Ingress Controller.

1] We’re going to deploy the cert-manager package.

Note we’re going to disable ingress-shim as we are going to use Nginx class instead.

git clone https://github.com/jetstack/cert-manager
cd cert-manager
git checkout v0.2.3
helm install --name cert-manager contrib/charts/cert-manager --set ingressShim.extraArgs='{--default-issuer-name=letsencrypt-prod,--default-issuer-kind=ClusterIssuer}' --set ingressShim.enabled=false --namespace kube-system
kubectl -n kube-system get pods

2] Once the cert-manager pod is running, we can go ahead and deploy the nginx-ingress Helm package, which provides the controller along with a Service of type LoadBalancer.

helm install --name ingress-my-test-app stable/nginx-ingress --set rbac.create='true'

To increase the replicas count of nginx-controller use the following command:

helm upgrade ingress-my-test-app nginx-ingress-1.6.4 --set controller.replicaCount=3

To preserve the client source IP inside Google Kubernetes Engine, set externalTrafficPolicy to Local on the nginx-controller Service by updating controller.service.externalTrafficPolicy to Local in the Helm chart's values.yaml, then run the following command to apply the change.

helm upgrade -f new-values.yaml {release name} {package name or path}
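For clarity, the relevant fragment of the overridden values file (new-values.yaml here is an illustrative filename) would look like:

```yaml
# new-values.yaml — override for the nginx-ingress chart
controller:
  service:
    externalTrafficPolicy: Local   # keep the original client source IP
```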

3] Now head over to your DNS provider and point <preferred-dns-name> at the IP address returned above. Allow enough time for DNS propagation before proceeding to the next step. We can now create our Issuer (Let's Encrypt production) and Certificate resources.
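An Issuer for the Let's Encrypt production ACME endpoint might be sketched as follows; field names follow the cert-manager v0.2.x API used above and may differ in later versions, and the resource name and email are illustrative:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory  # Let's Encrypt production
    email: <your-email>               # used for expiry notices
    privateKeySecretRef:
      name: letsencrypt-prod          # secret storing the ACME account key
    http01: {}                        # enable the HTTP-01 challenge
```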

To check the Issuer use the following command:

kubectl get issuer

4] Create a Certificate.
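A sketch of the Certificate resource for the cert-manager version above (names are illustrative; field layout may differ in later cert-manager releases):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wordpress-tls
spec:
  secretName: wordpress-tls       # secret where the issued cert/key pair is stored
  issuerRef:
    name: letsencrypt-prod        # the Issuer created in the previous step
  dnsNames:
    - <preferred-dns-name>
  acme:
    config:
      - http01:
          ingressClass: nginx     # solve the HTTP-01 challenge via the nginx ingress
        domains:
          - <preferred-dns-name>
```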

To check the certificate use the following command:

kubectl get certificate <certificate-name>

5] The next step is to deploy our Ingress resource.
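A sketch of the Ingress resource for the Nginx controller, with illustrative names; the tls block references the secret populated by the Certificate resource:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the nginx-ingress controller
spec:
  tls:
    - hosts:
        - <preferred-dns-name>
      secretName: wordpress-tls          # TLS secret issued by cert-manager
  rules:
    - host: <preferred-dns-name>
      http:
        paths:
          - backend:
              serviceName: wordpress-svc # the NodePort service created earlier
              servicePort: 80
```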

To check the Ingress resource use the following command:

kubectl get ingress <ingress-name>

Set up the WordPress application.

  • Changes in the WordPress application.

In the wp-config.php file, change the table prefix to reuse an existing database, or change the database name to create a new database for the WordPress deployment.

  • Complete the WordPress installation.

On this page, fill in the fields for:

-Site Name
-Username
-Password (needs to be entered twice)
-Email address (login information will be sent to this email address)
-Select whether or not to have the search engines index the site

Click Install Now.

  • Install the WP-Stateless plugin.

1] Search, install, and activate the WP-Stateless plugin via your WordPress dashboard.

2] Installation and setup are now complete. Visit Media -> Stateless Settings for more options.

3] In the Bucket section paste the name of your Google Cloud Storage bucket.

4] In the Service Account JSON section, paste the JSON key of the service account which we created earlier while setting up the GCS bucket.

  • Install the Redis Object Cache plugin.

1] Search, install, and activate the Redis Object Cache plugin via your WordPress dashboard.

2] Install and activate the plugin.

3] Enable the object cache under Settings -> Redis, or in Multisite setups under Network Admin -> Settings -> Redis.

Note: If your server doesn’t support the WordPress Filesystem API, you have to manually copy the object-cache.php file from the /plugins/redis-cache/includes/ directory to the /wp-content/ directory.

4] Adjust connection parameters.

Add the following snippet code to wp-config.php.

/* Google Memorystore connection parameters. */
define('WP_REDIS_HOST', '<Memorystore-IP>');
define('WP_REDIS_PORT', '6379');

Thanks to Sekhar Mandapati and Rohit Ayare for encouraging me.

  • More related links:

CronJob to Backup MySQL on GKE:

https://medium.com/searce/cronjob-to-backup-mysql-on-gke-23bb706d9bbf?source=friends_link&sk=ac552c7fdad38078e3f1ecdc9f058d91

Questions?

If you have any questions, I’ll be happy to read them in the comments. Follow me on medium or LinkedIn.

Searce Engineering

We identify better ways of doing things!

