Let’s take a deeper look at GCP Load Balancer and learn how to use it

Ray Lee | 李宗叡
Published in Learn or Die · 6 min read · Aug 4, 2019

My Blog

Chinese version

Introduction

Why Load Balancer?

  1. It splits the traffic of multiple services among multiple machines
  2. When one node dies, you still have another one
  3. It supports auto-scaling when a pre-set benchmark is reached (not covered in this article)
  4. With proper CI / CD and health-checks, it can achieve rolling upgrades
  • In this article, we are going to build an unmanaged Load Balancer, the easiest kind
  • Every component is explained in detail

The concept in a graph (image source: Google):

  1. Users on IPv4 and IPv6 make requests to our service
  2. The IPv4 and IPv6 forwarding-rules lead the requests to the HTTP(S) proxy
  3. When requests reach the HTTP(S) proxy, they are led to a backend-service according to the url-map we set. For example, requests with the domain 'test1' would be led to backend-service 1, and those with 'test2' to backend-service 2
  4. A backend-service consists of instance groups. For example, we could specify that backend-service A leads requests to port 8000 of instance group A, and backend-service B leads requests to port 6000 of instance group B
  5. An instance group, as its name suggests, consists of instances. If we set up the instance group and backend-service properly, requests will reach the backend-service and be led to the designated port of an instance via the instance group and its balancing condition
  6. Every backend-service can have a health-check, which periodically probes the specified port and expects responses. If there is no response, or the response is slower than the benchmark we've set, the instance is diagnosed as unhealthy. Requests are not led to an unhealthy instance
  7. If SSL is needed, an SSL certificate can be created and added to the HTTPS proxy
  8. Let’s get our hands dirty now!

Google Cloud SDK Installation

In this article, we are going to use the Google Cloud SDK in every section, so before we start, let's install it first. The installation method varies by operating system; we could refer to the official documentation.
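For example, on Linux or macOS the SDK can be installed with the interactive installer and then initialised (a minimal sketch; the exact steps for your OS are in the documentation):

```
# Download and run the interactive installer
curl https://sdk.cloud.google.com | bash

# Restart the shell, then log in and set a default project
exec -l $SHELL
gcloud init
```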

Create an instance

  • Build two instances, as in the sketch after this list
  1. Build two machines, named test-01 and test-02
  2. The boot-disk capacity is 30 GB
  3. Pull the image from the ubuntu-os-cloud project
  4. Use the ubuntu-1804-lts image family as the version of the image
  5. The disk type is pd-standard; you can list all disk types by running gcloud compute disk-types list
  6. The machine type is f1-micro; you can list all machine types by running gcloud compute machine-types list
  7. A tag serves as an identifier of the instance, which we are going to use later when creating firewall-rules
  8. zone specifies the zone of the instance. Be aware that some resources are limited to a zone or region
  9. Reference: official documentation
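Putting the items above together, a minimal sketch of the command could look like the following. The tag lb-backend and the zone asia-east1-b are my assumptions, not values fixed by the article; adjust them to your project:

```
# Two f1-micro machines with 30 GB pd-standard boot disks running
# Ubuntu 18.04 LTS; the tag and zone are assumed values
gcloud compute instances create test-01 test-02 \
    --boot-disk-size=30GB \
    --boot-disk-type=pd-standard \
    --image-project=ubuntu-os-cloud \
    --image-family=ubuntu-1804-lts \
    --machine-type=f1-micro \
    --tags=lb-backend \
    --zone=asia-east1-b
```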

Instance Environment Installation

The following covers the instance environment. You can simply skip it because it doesn't have much to do with our subject.

Create firewall-rules

We need to create firewall-rules for the ports we use so that requests can be led to the instances.

We could refer to the official documentation
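A sketch, assuming the services listen on ports 8000 and 6000 as in the example above, and that the instances carry the hypothetical tag lb-backend; the source ranges are Google's documented load-balancer and health-check ranges:

```
# Allow load-balancer and health-check traffic to the service ports
gcloud compute firewall-rules create allow-lb-backend \
    --allow=tcp:8000,tcp:6000 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=lb-backend
```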

Static External IP

We are going to create a static IP for later use. We could refer to the official documentation
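A sketch; the name lb-ip is an assumption. The HTTP(S) Load Balancer is a global resource, so the address is created with --global:

```
# Reserve a global static IPv4 address for the forwarding-rules
gcloud compute addresses create lb-ip \
    --ip-version=IPV4 \
    --global
```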

Instance group

An instance group can consist of multiple instances, and it is the building block of a backend-service. We could refer to the official documentation

  • With instance groups, we can create different backend-services later on the Load Balancer
  • Set named ports, which can be used by different backend-services
  • Add existing instances into the instance group (see the sketch after this list)
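A sketch of those three steps; the group name instance-group-a and the zone are assumptions of mine, while the named ports port1:8000 and port2:6000 follow the ports used earlier:

```
# Create an unmanaged instance group
gcloud compute instance-groups unmanaged create instance-group-a \
    --zone=asia-east1-b

# Set named ports that backend-services can refer to by name
gcloud compute instance-groups set-named-ports instance-group-a \
    --named-ports=port1:8000,port2:6000 \
    --zone=asia-east1-b

# Add the existing instances into the group
gcloud compute instance-groups unmanaged add-instances instance-group-a \
    --instances=test-01,test-02 \
    --zone=asia-east1-b
```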

Health check

health-check probes the specified port at a specified frequency. If there is no response from the specified port, the health-check diagnoses it as unhealthy, and the backend-service will not send requests to an unhealthy destination.

We could refer to the official documentation
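A sketch of an HTTP health-check against port 8000; the name and the timing values are assumptions:

```
# Probe port 8000 every 10s; mark unhealthy after 3 failed probes
gcloud compute health-checks create http health-check-port1 \
    --port=8000 \
    --check-interval=10s \
    --timeout=5s \
    --unhealthy-threshold=3 \
    --healthy-threshold=2
```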

Backend service

The --port-name here is what we set up above. In this example, different backend-services send requests to different ports. The instance group has not been mentioned yet? Don't worry: in the next step, we will add the instance group into the backend-service

Also, the health-check is added into the backend-service because the backend-service decides which instance a request should be sent to. We could refer to the official documentation

  • Build backend-service
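A sketch, assuming the named port port1 and the health-check health-check-port1 from the previous steps:

```
# A global backend-service that targets the named port "port1"
gcloud compute backend-services create backend-service-port1 \
    --protocol=HTTP \
    --port-name=port1 \
    --health-checks=health-check-port1 \
    --global
```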

Next, we are going to add the instance group into the backend-service we just built. Since we designated a named port when creating the backend-service, the backend-service will lead requests to the designated port of the instance group

Besides setting which port requests are sent to, we are going to set a benchmark for instance utilisation. UTILIZATION means the percentage of usage: when it reaches 80%, the backend-service stops sending requests to this instance

capacity-scaler multiplies that benchmark (here 1 * 0.8). If you have multiple backend-services using one instance-group and you want to reserve capacity for some other backend-service, you can give this one a lower capacity-scaler. Once this backend-service already uses capacity-scaler * max-utilization of the group, its requests are no longer sent to this instance-group, which saves the group's capacity for the other backend-services

We could refer to the example below, and also the official documentation

  • Add instance group into backend-service
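A sketch, reusing the assumed names from above:

```
# Attach the instance group; stop sending requests at 80% utilisation
gcloud compute backend-services add-backend backend-service-port1 \
    --instance-group=instance-group-a \
    --instance-group-zone=asia-east1-b \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8 \
    --capacity-scaler=1.0 \
    --global
```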

URL map

We’ve covered backend-service above; the url-map is what leads requests to a backend-service

Firstly, let’s create a url-map and specify a default backend-service. This means that if no specific rule matches the destination, requests are sent to this default backend-service

We could refer to the example below, also the official documentation

  • Create a url-map
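A sketch; the name lb-url-map is an assumption:

```
# Requests that match no rule fall through to backend-service-port1
gcloud compute url-maps create lb-url-map \
    --default-service=backend-service-port1
```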

After creating a url-map and specifying a default backend-service, we can now specify more rules for which backend-service a request should be led to

We use a path-matcher to specify the rules, as in the example below:

path-matcher: creates a path-matcher and specifies the rule

new-hosts: requests for the host sunday.com.tw will have this rule applied

That is, requests for sunday.com.tw would be led to backend-service-port1

We could refer to the example below, also the official documentation

  • Add path-matcher
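A sketch; the matcher name is an assumption:

```
# Requests for the host sunday.com.tw go to backend-service-port1
gcloud compute url-maps add-path-matcher lb-url-map \
    --path-matcher-name=sunday-matcher \
    --default-service=backend-service-port1 \
    --new-hosts=sunday.com.tw
```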

Here you can find a new component called path-rules

When the requested host is monday.com.tw and the path is the default /, the request would be sent to backend-service-port2

When the requested path is /happy, as in monday.com.tw/happy, the request would be sent to backend-service-port1

When the requested path is /unhappy, as in monday.com.tw/unhappy, the request would be sent to backend-service-port2

When the requested path is /sad, as in monday.com.tw/sad, the request would be sent to backend-service-port3

In the same manner as the example above, requests to tuesday.com.tw would be sent to backend-service-port3
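A sketch combining the monday.com.tw rules above; the matcher name is an assumption, and the tuesday.com.tw matcher would be analogous:

```
# monday.com.tw: / -> port2 (default), /happy -> port1,
# /unhappy -> port2, /sad -> port3
gcloud compute url-maps add-path-matcher lb-url-map \
    --path-matcher-name=monday-matcher \
    --default-service=backend-service-port2 \
    --new-hosts=monday.com.tw \
    --path-rules="/happy=backend-service-port1,/unhappy=backend-service-port2,/sad=backend-service-port3"
```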

Create an SSL certificate

In order to let our service support HTTPS, we need to create ssl-certificates. They can be either self-managed or google-managed

self-managed means an SSL certificate you manage on your own. In the following example, we will use a google-managed one

We could refer to the official documentation
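A minimal sketch of a google-managed certificate covering the hosts from the examples above (depending on your SDK version, this command may live under the beta track):

```
# A google-managed certificate; provisioning completes only after
# the domains point at the load balancer's IP
gcloud compute ssl-certificates create lb-cert \
    --domains=sunday.com.tw,monday.com.tw,tuesday.com.tw
```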

HTTP proxy

All HTTP requests arrive here and are sent to a backend-service via the url-map

We could refer to the official documentation

  • Create an HTTP proxy
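A sketch, reusing the assumed url-map name:

```
# HTTP requests enter here and are routed by lb-url-map
gcloud compute target-http-proxies create lb-http-proxy \
    --url-map=lb-url-map
```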

HTTPS proxy

All HTTPS requests arrive here and are sent to a backend-service via the url-map

Also, we are going to add the ssl-certificates we just created so that the target-https-proxies can support HTTPS

We could refer to the official documentation

  • Create HTTPS proxy
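A sketch, reusing the assumed url-map and certificate names:

```
# HTTPS requests enter here; TLS terminates with lb-cert
gcloud compute target-https-proxies create lb-https-proxy \
    --url-map=lb-url-map \
    --ssl-certificates=lb-cert
```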

Check Static External IP

List the addresses we’ve created

We could refer to the official documentation
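For example:

```
# Show reserved addresses, including the static IP created earlier
gcloud compute addresses list
```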

Forwarding rules

When the requested address and port match a forwarding-rule, the request is led to the designated target-http-proxy

Replace the [LB_IP_ADDRESS] below with the static external IP we just created

We could refer to the official documentation

  • Create HTTP forwarding-rules
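A sketch; the rule and proxy names are assumptions, and [LB_IP_ADDRESS] is the placeholder from above:

```
# Send traffic for [LB_IP_ADDRESS]:80 to the HTTP proxy
gcloud compute forwarding-rules create http-forwarding-rule \
    --address=[LB_IP_ADDRESS] \
    --target-http-proxy=lb-http-proxy \
    --ports=80 \
    --global
```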

When the requested address and port match a forwarding-rule, the request is led to the designated target-https-proxy

Replace the [LB_IP_ADDRESS] below with the static external IP we just created

We could refer to the official documentation

  • Create HTTPS forwarding-rules
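A sketch, analogous to the HTTP rule above:

```
# Send traffic for [LB_IP_ADDRESS]:443 to the HTTPS proxy
gcloud compute forwarding-rules create https-forwarding-rule \
    --address=[LB_IP_ADDRESS] \
    --target-https-proxy=lb-https-proxy \
    --ports=443 \
    --global
```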

Conclusion

The above is the flow of GCP's Load Balancer: request => forwarding-rules => target-http(s)-proxy => url-map => backend-service => instance-group => instance

Following the examples above, you should be able to run services on your instances and receive, process, and respond to requests properly.

I spent quite a lot of time writing this article, hoping it will help whoever needs it. If you've read this far, I would like to thank you.

Finally, if you find this article helpful, your clap is the best reward for me.

Also, if you find anything incorrect, feel free to let me know.

