Multi-tier architecture with Kubernetes on Google Cloud, using Terraform

In this project, I am writing Infrastructure as Code (IaC) with Terraform to launch a WordPress site on GKE for quick scaling, deployment, and management of containerized applications. The database runs on GCP's Cloud SQL service to take full advantage of it being fully managed.

Daksh Jain
Oct 8, 2020 · 7 min read
Google Cloud Platform

Deploying your website to the public domain requires a lot of planning.
Even one second of downtime means a significant loss.
A client who cannot connect means lost business.
An insecure database means a data breach, and again a loss of business.

Everything ends up being related to business and money.

It is very important to plan ahead when setting up your business in the public domain.
Here is my plan:

  • Create 2 projects in different regions.
  • Follow the rule “No Root Account”: create a service account in each project with appropriate roles and permissions for proper management.
  • Enable the required APIs, then create the Cloud SQL instance in one VPC in one project.
  • Create the Google Kubernetes Engine cluster in another VPC in the other project.
  • In the cluster, create a WordPress deployment. This deployment is looked after by the fully managed GKE service.
  • Finally, put the IP of the deployment's service (Load Balancer) into the Cloud SQL authorized networks so that no other IP can reach the SQL instance. This keeps the database tight and secure.

Let’s start to create the setup -

Step 1 — Create 2 projects

New Project

Step 2 — Create service accounts in each project

Click on Navigation pane > IAM & Admin > Service Accounts
Create a new Service Account.
Click on create a key and download the key file; it will be used later for logging in with the service account.

Do the same process for the other project as well:

  • Create a Service account
  • Create key and download the key file

Step 3 — Specify the roles for the service accounts

Click on Navigation pane > IAM & Admin > IAM
Select your Service Account, and edit the permissions.
Then select the permissions as shown below.

Similarly, select the role for the other service account as shown below.

Step 4 — Enable APIs as per use case

Production Project — GKE

In this project, enable the following APIs (typically the Kubernetes Engine API and the Compute Engine API):

Developer Project — SQL

In this project, enable the following APIs (typically the Cloud SQL Admin API):


Now for good management of the code, I will share with you my folder tree.

In the parent folder, I have created one main file.
This file calls the 2 modules that I have created.

Inside are 2 folders: one for the SQL code and the other for the code to set up GKE.

Inside each folder there is one key.json file. This is the key downloaded for the corresponding service account.

It will now be used while writing the code.

When we call a module, we need to tell Terraform which account to use to log in to GCP and create the required resources.
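Under the layout described above, the folder tree looks roughly like this (the folder and file names are illustrative):

```
.
├── main.tf            # root module: calls the two child modules
├── gke/
│   ├── main.tf        # GKE cluster, node pool, WordPress deployment
│   └── key.json       # service-account key for the production project
└── sql/
    ├── main.tf        # Cloud SQL instance, database, user
    └── key.json       # service-account key for the developer project
```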

Let’s start building the code -

Step 0 — Calling the modules

Here I have used modules. The 2 folders that I created for management purposes need to be called, so I use them as modules.
The syntax can be seen in the code above: the path to each folder is specified in the source argument.

The rest of the variables in each module come from the other module, because these values are needed to wire up the code with the correct dependencies.
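A minimal sketch of the root main.tf, assuming the two folders are named sql and gke and that the SQL module exports the connection details as outputs (all variable and output names here are illustrative):

```hcl
module "sql" {
  source = "./sql"
}

module "gke" {
  source = "./gke"

  # connection details exported by the SQL module, consumed by WordPress
  db_host     = module.sql.db_ip
  db_name     = module.sql.db_name
  db_user     = module.sql.db_user
  db_password = module.sql.db_password
}
```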

Step 1 — Setting the providers:

Here I am setting the provider “google” and specifying the project, region and the location of the key.json file.
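A sketch of the provider block for one of the modules; the project ID and region are placeholders, and key.json is the service-account key downloaded earlier:

```hcl
provider "google" {
  project     = "wordpress-prod-project"   # illustrative project ID
  region      = "asia-southeast1"          # illustrative region
  credentials = file("key.json")           # service-account key for this project
}
```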

Step 2 — Create the VPC for each project

Here I will be creating the VPC & subnets. Since the VPCs are in different projects and regions, an important additional step is VPC Peering.
  • google_compute_network is used to create the VPC.
    A name has to be specified, and auto_create_subnetworks is set to false because I am creating the subnets on my own.
  • google_compute_subnetwork is used to create the subnet.
    Name, region & VPC ID are specified, along with an appropriate CIDR range.
    private_ip_google_access is set to true so that resources with only private IPs can still reach Google APIs and services.
  • google_compute_network_peering is used for VPC Peering.
    In the network variable specify the VPC ID of this VPC.
    In peer_network specify the VPC ID of the other VPC.
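A sketch of these three resources for one of the projects; the names, region, CIDR range, and the peer project path are all illustrative:

```hcl
# VPC with manually managed subnets
resource "google_compute_network" "wp_vpc" {
  name                    = "wp-vpc"
  auto_create_subnetworks = false
}

# Subnet with Private Google Access enabled
resource "google_compute_subnetwork" "wp_subnet" {
  name                     = "wp-subnet"
  region                   = "asia-southeast1"
  network                  = google_compute_network.wp_vpc.id
  ip_cidr_range            = "10.0.1.0/24"
  private_ip_google_access = true
}

# Peer this VPC with the VPC in the other project
resource "google_compute_network_peering" "wp_peering" {
  name         = "wp-to-sql"
  network      = google_compute_network.wp_vpc.id
  peer_network = "projects/sql-dev-project/global/networks/sql-vpc"  # illustrative
}
```

Note that a peering connection has to be created from both VPCs before it becomes active, so the other project's module needs the mirror-image google_compute_network_peering resource.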

Step 3 — Code for GKE

Using google_container_cluster resource I have created the Google Kubernetes Cluster.

  • Name, location, & initial_node_count have to be specified.
  • remove_default_node_pool is set to true because I am creating my own node pool.
  • In the network variable, I have specified the VPC Name.
  • In the subnetwork variable, I have specified the name of the subnet to use.

Using the resource google_container_node_pool I am creating the node pool. The main thing to specify here is the name of the cluster (that we have just created).
Next, I have specified the node_config in which the type of machine and disk is specified.
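A sketch of the cluster and node pool; the zone, machine type, disk size, and node count are illustrative, and the VPC resources are assumed to be named as in the previous step:

```hcl
resource "google_container_cluster" "wp_gke" {
  name               = "wp-gke"
  location           = "asia-southeast1-a"   # illustrative zone
  initial_node_count = 1

  # drop the default pool; a custom pool is attached below
  remove_default_node_pool = true

  network    = google_compute_network.wp_vpc.name
  subnetwork = google_compute_subnetwork.wp_subnet.name
}

resource "google_container_node_pool" "wp_nodes" {
  name       = "wp-node-pool"
  location   = google_container_cluster.wp_gke.location
  cluster    = google_container_cluster.wp_gke.name
  node_count = 2

  node_config {
    machine_type = "e2-medium"   # illustrative machine type
    disk_size_gb = 20
  }
}
```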

This final null resource is used to fetch the credentials of the newly created Kubernetes cluster.

gcloud container clusters get-credentials ${google_container_cluster.wp_gke.name} --zone ${google_container_cluster.wp_gke.location} --project ${data.google_project.prod_project.project_id}

Using this command you can fetch the credentials. The values to pass are the zone, the project & the name of the cluster.
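A sketch of that null resource, running the command through a local-exec provisioner once the node pool exists (resource names match the command above; the empty data block picks up the project from the provider):

```hcl
data "google_project" "prod_project" {
}

resource "null_resource" "kube_credentials" {
  depends_on = [google_container_node_pool.wp_nodes]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.wp_gke.name} --zone ${google_container_cluster.wp_gke.location} --project ${data.google_project.prod_project.project_id}"
  }
}
```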

Step 4 — Code for SQL in GCP

Using the google_sql_database_instance resource, I configure the instance that will run as the SQL database in GCP.

The name, region & database_version are specified.
We can also choose the type (tier) of instance we want to create.
Then, in ip_configuration, we can set require_ssl to true and provide certificates; the setup will then be much more secure.

🔴NOTE — Right now I am setting the authorized networks to 0.0.0.0/0, which means everyone will be able to connect to the SQL database, including clients that you did not intend to allow.

🔴I am still researching how to change this to a more secure setup.

Using the google_sql_database resource I am creating the SQL Database using the instance created above.

Then finally using the google_sql_user resource I have set the username & password for the database.

The password can also be stored in a separate secure file for better security.
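A sketch of these three resources; the instance name, region, tier, database name, username, and password are all illustrative:

```hcl
resource "google_sql_database_instance" "wp_sql" {
  name             = "wp-sql-instance"
  region           = "asia-south1"      # illustrative region
  database_version = "MYSQL_5_7"

  settings {
    tier = "db-f1-micro"   # illustrative instance type

    ip_configuration {
      authorized_networks {
        name  = "open-to-all"   # see the note above: wide open, not production-safe
        value = "0.0.0.0/0"
      }
    }
  }
}

resource "google_sql_database" "wp_db" {
  name     = "wordpress"
  instance = google_sql_database_instance.wp_sql.name
}

resource "google_sql_user" "wp_user" {
  name     = "wpadmin"       # illustrative username
  instance = google_sql_database_instance.wp_sql.name
  password = "change-me"     # better: read from a secure variable or file
}
```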

These are the output variables that will be used by the other module, where I have to pass the username, password, public IP & database name to the WordPress site. Setting these values as outputs is how the other module can consume them.
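Assuming the SQL resources are named as in the sketch above, the SQL module's outputs might look like:

```hcl
output "db_ip" {
  value = google_sql_database_instance.wp_sql.public_ip_address
}

output "db_name" {
  value = google_sql_database.wp_db.name
}

output "db_user" {
  value = google_sql_user.wp_user.name
}

output "db_password" {
  value     = google_sql_user.wp_user.password
  sensitive = true
}
```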

Step 5 — Create the WordPress site in the GKE cluster

First I have specified the variables so that these can be used to create the WordPress deployment.

Next, the kubernetes provider is set which will be used to deploy the WordPress site in the GKE Cluster.

I have used a static IP from GCP so that the WordPress site gets a fixed IP; later, this IP can be provided to the SQL database's authorized networks for a secure connection.

This is the WordPress deployment in which the image is specified. This image is first pulled from the Internet and then deployed in the GKE Cluster.

Then using the variables I have specified the IP, Username, Password & Database name to be used in the WordPress configuration.

The container_port is specified as 80.

Then finally, using the kubernetes_service resource, a Load Balancer is placed in front of the WordPress site so that it is accessible to the outside world via the public IP.
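A condensed sketch of this step; the provider authentication, static IP name and region, and the db_* variables are assumptions matching the earlier steps:

```hcl
# Kubernetes provider pointed at the GKE cluster, here via the kubeconfig
# written by the get-credentials command run earlier (illustrative auth)
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# Static external IP so WordPress keeps a fixed address
resource "google_compute_address" "wp_ip" {
  name   = "wp-static-ip"
  region = "asia-southeast1"
}

resource "kubernetes_deployment" "wordpress" {
  metadata {
    name   = "wordpress"
    labels = { app = "wordpress" }
  }

  spec {
    replicas = 1
    selector {
      match_labels = { app = "wordpress" }
    }
    template {
      metadata {
        labels = { app = "wordpress" }
      }
      spec {
        container {
          name  = "wordpress"
          image = "wordpress:latest"   # pulled from Docker Hub

          # database connection details passed in from the SQL module
          env {
            name  = "WORDPRESS_DB_HOST"
            value = var.db_host
          }
          env {
            name  = "WORDPRESS_DB_NAME"
            value = var.db_name
          }
          env {
            name  = "WORDPRESS_DB_USER"
            value = var.db_user
          }
          env {
            name  = "WORDPRESS_DB_PASSWORD"
            value = var.db_password
          }

          port {
            container_port = 80
          }
        }
      }
    }
  }
}

# Load Balancer service exposing WordPress on the static IP
resource "kubernetes_service" "wordpress" {
  metadata {
    name = "wordpress"
  }
  spec {
    selector         = { app = "wordpress" }
    type             = "LoadBalancer"
    load_balancer_ip = google_compute_address.wp_ip.address
    port {
      port        = 80
      target_port = 80
    }
  }
}
```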

All the code is present on my GitHub. You can even contribute…

Output -

We can run the complete setup using the command:

terraform apply

Now connect to the site -

WordPress site launched from the Load Balancer External IP

Proof that the WordPress site is connected to the same SQL Instance that I have created.

That’s all folks!!

LinuxWorld Informatics Pvt. Ltd.

Making India, Future Ready!

LinuxWorld Informatics Pvt. Ltd.(‘LW’) is a fast-growing ISO 9001:2008 Certified Organisation, Fully governed by young & energetic technocrats, dedicated to Linux, Security & Open Source Technology…

Written by Daksh Jain
Automation Tech Enthusiast || Terraform Researcher || DevOps || MLOps ||
