Understanding HTTP(S) Load Balancer in GCP

Larry Nguyen
5 min read · Jan 15, 2022


Load Balancers are a very common topic for anyone starting to work with a cloud platform such as AWS or GCP. However, each provider implements them differently. Even if you can set up a Load Balancer in AWS, you might run into difficulties, as I did, when trying to set up the same thing in GCP. Let’s take a look at how it is done in GCP.

In our example, we will set up a simple Load Balancer for our two web servers, as shown below.

Assume the Instance Group ‘my-group’ and the two VMs have already been set up. These two VMs only have internal IP addresses, so our Load Balancer must have an external IP address to route requests to the servers.
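If you want to reproduce this starting point from the CLI, one possible sketch is below. The zone, machine settings and the use of an unmanaged instance group are my assumptions; only the VM names ‘first’ and ‘second’ and the group name ‘my-group’ come from the example.

```shell
# Two internal-only VMs (no external IP), zone is an assumption
gcloud compute instances create first second \
    --zone=us-central1-a \
    --no-address \
    --tags=http-server

# An unmanaged instance group holding the two VMs
gcloud compute instance-groups unmanaged create my-group \
    --zone=us-central1-a

gcloud compute instance-groups unmanaged add-instances my-group \
    --zone=us-central1-a \
    --instances=first,second

# Name the serving port so the backend service knows where to send traffic
gcloud compute instance-groups unmanaged set-named-ports my-group \
    --zone=us-central1-a \
    --named-ports=http:80
```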

First, we will try to achieve this using the GCP Console. In the Load Balancing view, create a new HTTP(S) Load Balancer.

Choose Classic HTTP(S) Load Balancer

Give it a name, say ‘my-http-lb’, and click on Create a Backend Service.

Give it a name, say ‘my-http-backend’. In the New backend section, choose the Instance group ‘my-group’ from the dropdown. Keep everything else as default for now.

Under Health check, click on Create a Health Check to create a new Health check ‘my-http-health-check’ with all the default values.

Click on Create to create the backend.

In the Host and path rules and Frontend configuration sections, keep the defaults.

Click on Create to create the Load Balancer. Once it is created, we will see a green tick next to the backend; this tick comes from the health check. We will still have to wait a few minutes before testing our Load Balancer.

Click on Frontends to get the external IP and use this IP to test our web servers. We can refresh a few times to see the Load Balancer switching between the ‘first’ and ‘second’ servers.
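The refresh test can also be done from a terminal. The IP below is the example address used later in this article; substitute the one shown in your own Frontends tab.

```shell
# Replace with the external IP from the Frontends tab
LB_IP=34.149.173.34

# Fire a handful of requests; the responses should alternate between
# the 'first' and 'second' web servers as the Load Balancer distributes them
for i in $(seq 1 6); do
  curl -s "http://${LB_IP}/"
done
```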

We can see that using the GCP Console, setting up the Load Balancer is quick and straightforward. However, if we want to do the same using the CLI or Terraform, we will notice that there are quite a few extra steps. All of these steps are done behind the scenes when we use the Console as above.

If we go to the advanced menu, we will be able to see how the Load Balancer is set up by looking at the forwarding rules and target proxies.

Below is the route when a request arrives at the external IP.

  1. The HTTP request arrives at the external IP (e.g. http://34.149.173.34).
  2. The Forwarding Rule forwards the request to the HTTP Proxy. This Forwarding Rule is created based on our Frontend configuration, and a new external IP address is created along the way.
  3. The HTTP Proxy is created automatically with the Forwarding Rule above. The role of the proxy is to validate the request and then pass it to the URL Map.
  4. The URL Map is normally used to map different URLs to different backends. In our example, the map is one-to-one: all requests are mapped to our single backend. The URL Map is also created automatically by the system.

We can see that quite a few resources are created automatically along the way. The good thing is that when we want to delete the Load Balancer, we have the option to delete all of these resources as well.
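From the CLI, the teardown has to be done explicitly, in reverse order of creation. This sketch assumes the resources were created with the names used in this article (the proxy, address and rule names are my assumptions, since the Console generates these automatically).

```shell
# Delete in reverse dependency order; each resource must be unreferenced
# before it can be removed
gcloud compute forwarding-rules delete my-http-rule --global --quiet
gcloud compute addresses delete my-http-ip --global --quiet
gcloud compute target-http-proxies delete my-http-proxy --quiet
gcloud compute url-maps delete my-http-lb --quiet
gcloud compute backend-services delete my-http-backend --global --quiet
gcloud compute health-checks delete my-http-health-check --quiet
```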

In summary, if we want to create the Load Balancer manually using the CLI, we will have to do the following:

  1. Create the ‘health check’ (if it does not exist yet)
  2. Create the ‘backend service’ pointing to the instance group, using the health check above
  3. Create the ‘URL map’ pointing to the backend service
  4. Create the ‘HTTP proxy’ pointing to the URL map
  5. Create an ‘external IP address’
  6. Finally, create the ‘forwarding rule’ using the external IP address and pointing to the HTTP proxy
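The six steps above can be sketched as one gcloud sequence. The resource names for the health check, backend service and URL map mirror the example; the zone, proxy, address and rule names are my assumptions.

```shell
# 1. Health check
gcloud compute health-checks create http my-http-health-check --port=80

# 2. Backend service attached to the instance group
gcloud compute backend-services create my-http-backend \
    --protocol=HTTP \
    --health-checks=my-http-health-check \
    --global
gcloud compute backend-services add-backend my-http-backend \
    --instance-group=my-group \
    --instance-group-zone=us-central1-a \
    --global

# 3. URL map -- this is the resource the Console shows as the Load Balancer
gcloud compute url-maps create my-http-lb \
    --default-service=my-http-backend

# 4. Target HTTP proxy
gcloud compute target-http-proxies create my-http-proxy \
    --url-map=my-http-lb

# 5. External IP address
gcloud compute addresses create my-http-ip \
    --ip-version=IPV4 \
    --global

# 6. Forwarding rule tying the IP address to the proxy
gcloud compute forwarding-rules create my-http-rule \
    --address=my-http-ip \
    --global \
    --target-http-proxy=my-http-proxy \
    --ports=80
```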

As we can see, there is no resource actually called a Load Balancer. The item that we see in the Console is in fact the URL Map. So in step (3) above, we should give our URL map a name that sounds like a Load Balancer, such as ‘my-http-lb’, not ‘my-url-map’.

In conclusion, this understanding is very important if we want to provision a Load Balancer using the CLI or any Infrastructure-as-Code tool such as Terraform.
