Hybrid WebApp Deployment Architecture Using Terraform

kuldeep rajpurohit
8 min read · Sep 10, 2020


A hybrid setup: a web app deployed on Kubernetes inside a Local Area Network and its database deployed on AWS, all provisioned with Terraform.

What is Kubernetes?

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

What is AWS RDS?

Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications.

What is Terraform?

Terraform is a multi-cloud Infrastructure as Code (IaC) tool by HashiCorp, written in Go and configured with the HashiCorp Configuration Language (HCL). It is an open-source command-line tool that can provision infrastructure on many different platforms and services, such as AWS, GCP, Azure, IBM Cloud, OpenStack, VMware, and more.

Prerequisites:

  • Minikube or a multi-node Kubernetes cluster inside the LAN
  • kubectl installed
  • AWS CLI v2 installed
  • Terraform installed
  • An AWS user with programmatic access
  • Basic knowledge of Kubernetes, cloud, and Terraform

In this article, I will show you how to deploy a web app in a hybrid environment: the frontend web app runs inside the LAN (Local Area Network) on top of Kubernetes, and its database runs on the AWS cloud using RDS (Relational Database Service).

The most common use case for such an architecture is a web app that should only be accessible on the company's LAN (Local Area Network), i.e., a globally deployed web application that needs to be reachable from the company's internal network but not from the Internet.

The obvious question is: if we only need to access the web app from the local area network, why not set up the database server on the LAN as well? You certainly can, but tech companies today have offices spread across multiple regions and countries. They need to access the app from all offices, like a global deployment, and they need the data to stay consistent and durable across all regions.

If we set up a database server on the LAN in each office and synchronize all of them so the data stays consistent, the synchronization would take too much time and be very hard to manage. You don't want developers in the USA office to roll out an update that never reaches the India office, leaving the developers in India unaware of the change. It is also very hard to manage and scale a database across multiple regions. Instead, we can use AWS RDS, a fully managed database service from AWS that is easy to scale across regions.

Even if an employee wants to use the web app from a remote location, it can still be reached after setting up a VPN server.

For this article, we will use WordPress as the frontend web app and MySQL as the database, but the same approach can be used to deploy any web application.

The most interesting part is that we will create this whole infrastructure using Terraform: with just a single command, the entire infrastructure is created and ready to use.

Here I am using Minikube for Kubernetes on my machine, but in production your Kubernetes cluster will live in the company's data center. The same Terraform code works there too; you just have to provide the Kubernetes master node's address and credentials in the Terraform Kubernetes provider.

Steps:

  • Create an rds module that provisions the RDS DB instance
  • Create a wpkube module that deploys the web app on Kubernetes
  • Use both modules in main.tf

Let’s get Started…

rds module

First, we will create a Terraform module named rds that will be used to create an AWS RDS DB instance with a custom configuration.

Since this is a module, it is good practice to make it as dynamic as possible using variables; that way, we can reuse the same module with different configurations.

As you can see in the snippet, from line 2 to line 18 I have declared multiple variables that will be used in the RDS resource.
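As a rough sketch, assuming variable names such as engine, instance_class, db_name, username, and password (the real snippet may differ), the declarations could look like this:

```hcl
# modules/rds/variables.tf — illustrative sketch
variable "allocated_storage" {
  default = 20
}

variable "engine" {
  default = "mysql"
}

variable "engine_version" {
  default = "5.7"
}

variable "instance_class" {
  default = "db.t2.micro"
}

variable "db_name" {
  default = "wordpress"
}

variable "username" {
  default = "admin"
}

variable "password" {}
```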

In lines 20 to 24 we reference the default VPC. This resource is neither created nor destroyed; it is only there to reference the default VPC's ID.
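One way to express this is the aws_default_vpc resource, which adopts the existing default VPC rather than creating a new one (a sketch, not necessarily the exact snippet):

```hcl
# References ("adopts") the existing default VPC; Terraform will not create or destroy it
resource "aws_default_vpc" "default" {
  tags = {
    Name = "default-vpc"
  }
}
```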

Then we create a security group for RDS. Here I am allowing all protocols and ports for simplicity, but you should not do this in a production environment; only allow verified IPs. We use the VPC ID reference on line 29.
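A sketch of such a security group, wired to the default VPC reference above (the all-open rules mirror the article's simplification and should be tightened in production):

```hcl
resource "aws_security_group" "rds_sg" {
  name   = "rds-allow-all"
  vpc_id = aws_default_vpc.default.id

  # WARNING: allows every protocol and port from anywhere — for demo simplicity only
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```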

Then we create an RDS DB instance using the aws_db_instance resource. In this resource I use all the variables we created earlier, so every value can be passed in from the root module and the same module can be used multiple times with different configurations.
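A hedged sketch of the DB instance resource, using the variables and security group sketched above (the resource name and argument values are assumptions; adjust them to your setup):

```hcl
resource "aws_db_instance" "wpdb" {
  allocated_storage      = var.allocated_storage
  engine                 = var.engine
  engine_version         = var.engine_version
  instance_class         = var.instance_class
  name                   = var.db_name   # "db_name" in newer AWS provider versions
  username               = var.username
  password               = var.password
  publicly_accessible    = true          # so the LAN cluster can reach it over the Internet
  skip_final_snapshot    = true
  vpc_security_group_ids = [aws_security_group.rds_sg.id]
}
```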

Understanding the configuration of the rds module is pretty easy, but the interesting part is lines 70 to 72, where we output the RDS endpoint. A logical question arises: why declare an output in a submodule?

The answer is that this value is not printed when the module is called from the root module; it is there for the wpkube module, which we will create later in this article. wpkube needs the database endpoint, and the root module can only read a submodule's values through its outputs, so we output the endpoint here to make it accessible from the root module.
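The output itself is tiny; the key point is that the root module can later read it as module.rds.rd_host (whether the snippet uses the address or endpoint attribute is an assumption here):

```hcl
# Exposed so the root module can pass the DB hostname to the wpkube module
output "rd_host" {
  value = aws_db_instance.wpdb.address
}
```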

wpkube module

Now we will create another module that creates the Kubernetes resources and deploys the web app. For this article I use WordPress as the frontend application, but in the same way you can build a container image for your own application and use it to deploy your frontend.

Just as we created variables in the rds module to make it dynamic, here too we declare multiple variables so the same module can be reused with different configurations.

You can see in the snippet, from line 21 to 36, that we create a PVC (PersistentVolumeClaim) for the pods so we do not lose data if a pod fails.
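A sketch of such a claim using the Terraform Kubernetes provider (the claim name and requested size are assumptions):

```hcl
resource "kubernetes_persistent_volume_claim" "wp_pvc" {
  metadata {
    name = "wordpress-pvc"
  }

  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
  }
}
```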

Now we create the Deployment. Just as we would write a YAML file for a Deployment in Kubernetes, we can define the Deployment in Terraform. We use all the variables created earlier so the configuration can be changed from the root module.

The important thing to notice in the deployment is that, in the container spec, we pass the database settings as environment variables straight from these Terraform variables, so we don't need to configure the database for WordPress manually.
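Roughly, the deployment could look like this: the official WordPress image reads its database settings from the WORDPRESS_DB_* environment variables, while the variable names used here (image, replicas, db_host, db_user, db_password) are illustrative assumptions:

```hcl
resource "kubernetes_deployment" "wordpress" {
  metadata {
    name = "wordpress"
  }

  spec {
    replicas = var.replicas

    selector {
      match_labels = {
        app = "wordpress"
      }
    }

    template {
      metadata {
        labels = {
          app = "wordpress"
        }
      }

      spec {
        container {
          name  = "wordpress"
          image = var.image   # e.g. "wordpress:latest"

          # Database settings injected from Terraform variables —
          # no manual WordPress configuration needed
          env {
            name  = "WORDPRESS_DB_HOST"
            value = var.db_host
          }
          env {
            name  = "WORDPRESS_DB_USER"
            value = var.db_user
          }
          env {
            name  = "WORDPRESS_DB_PASSWORD"
            value = var.db_password
          }

          volume_mount {
            name       = "wp-data"
            mount_path = "/var/www/html"
          }
        }

        volume {
          name = "wp-data"
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim.wp_pvc.metadata[0].name
          }
        }
      }
    }
  }
}
```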

After the deployment, we create a Kubernetes Service of type NodePort. This load-balances traffic among the pods and makes the web app accessible.
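A sketch of the service, with the selector matching the deployment labels above:

```hcl
resource "kubernetes_service" "wordpress" {
  metadata {
    name = "wordpress"
  }

  spec {
    type = "NodePort"

    selector = {
      app = "wordpress"
    }

    port {
      port        = 80
      target_port = 80
      # node_port could be pinned here; left unset, Kubernetes assigns one
    }
  }
}
```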

And lastly, just as we printed an output in the rds module for reference in the root module, here we output the node port so the web app is easy to reach; otherwise we would have to look up the service's port manually with a kubectl command.
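Something along these lines, assuming the service sketched above (the output name node_port is my choice):

```hcl
output "node_port" {
  value = kubernetes_service.wordpress.spec[0].port[0].node_port
}
```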

root variable.tf

When working on a large Terraform codebase, it is good practice to keep all variables in a separate file. It is much easier to manage, and you will thank yourself in the future.

You can see in the snippet that we have declared all the variables with some values. The default data type for a variable is string; for any other data type we have to declare it explicitly.

I always give root variables the same names as in the submodule, just prefixed with r_ (r for root). This makes the code easier to read: with the r_ prefix I immediately know which variable belongs to the root module and which comes from a submodule.
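For illustration, a few root variables with the r_ prefix (names, defaults, and the number type are assumptions mirroring the submodule sketches above):

```hcl
# root variable.tf
variable "r_db_name" {
  default = "wordpress"
}

variable "r_username" {
  default = "admin"
}

variable "r_password" {}

variable "r_image" {
  default = "wordpress:latest"
}

variable "r_replicas" {
  type    = number   # string is the default type, so non-string types are declared explicitly
  default = 3
}
```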

main.tf

This is the root main.tf file, where we declare all our providers.

Here we use two providers, AWS and Kubernetes: AWS for provisioning RDS, and Kubernetes for the WordPress resources. Before this, you need to configure AWS CLI v2 with a user profile and configure kubectl.

You can see in the snippet that we also use an alias for both providers. Aliases are a powerful way to manage things in Terraform. For example, suppose you have a module that launches an EC2 instance but you need to launch EC2 in two different regions: you can create two AWS providers with different regions and tell each module which provider to use via its alias.

Since I am using Minikube as a single-node Kubernetes cluster, I just pass config_context as minikube. The Kubernetes provider has many more options, such as host, password, and config file path, for a multi-node Kubernetes cluster.
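A sketch of the two provider blocks with aliases (the alias names, region, and profile are assumptions):

```hcl
provider "aws" {
  alias   = "aws_rds"
  region  = "ap-south-1"
  profile = "myprofile"   # AWS CLI v2 profile with programmatic access
}

provider "kubernetes" {
  alias          = "kube_wp"
  config_context = "minikube"   # for a remote cluster, set host/credentials or a kubeconfig path instead
}
```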

Then we use both of our submodules, rds and wpkube, passing the alias so each module knows which provider to use, and passing all the variable values from the root variable.tf file.

Do you remember that we declared a few outputs in the submodules and I said they were for reference, because the root module can access a submodule's outputs? You can see that on line 35, where we use module.rds.rd_host to pass the database endpoint to the wpkube module.
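Putting it together, the module calls could look roughly like this (variable names follow the sketches above; the providers blocks map each module to its aliased provider):

```hcl
module "rds" {
  source = "./rds"
  providers = {
    aws = aws.aws_rds
  }

  db_name  = var.r_db_name
  username = var.r_username
  password = var.r_password
}

module "wpkube" {
  source = "./wpkube"
  providers = {
    kubernetes = kubernetes.kube_wp
  }

  image       = var.r_image
  replicas    = var.r_replicas
  db_host     = module.rds.rd_host   # the endpoint output exposed by the rds module
  db_user     = var.r_username
  db_password = var.r_password
}
```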

In the same way, I print the database endpoint and the node port as root outputs for our reference.
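For example (output names here are illustrative):

```hcl
output "db_endpoint" {
  value = module.rds.rd_host
}

output "wordpress_node_port" {
  value = module.wpkube.node_port
}
```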

Now we just need to fire one command, which will automatically provision this hybrid architecture, configure it, and deploy our web app.

Before this, make sure Minikube (or your multi-node Kubernetes cluster) is up and running, and that you have run terraform init once in the root directory.

terraform apply

Access the web app

Now you can simply open a browser and go to the Minikube IP, which by default is 192.168.99.100, with the node port as the port number.

That’s it for this Article. I hope you learned something new.

If you liked this article, please drop a clap so it can reach more people. You can follow me on Twitter at @callbyrefrence, find me on LinkedIn, or take a look at my work on GitHub.
