Application Capacity Optimizations with Global Load Balancing
Get Cooking in Cloud
Authors: Stephanie Wong, Priyanka Vergadia
Introduction
“Get Cooking in Cloud” is a blog and video series to help enterprises and developers build business solutions on Google Cloud. In this series we identify specific topics that developers want to architect on Google Cloud, and then create a miniseries around each one.
In this miniseries, we will go over Google Cloud load balancing:
- Choosing the right load balancer
- Application Capacity Optimizations with Global Load Balancing (this article)
- Capacity Management with load balancing
- Load Balancing to GKE network endpoint groups
In this article we will show you how to set up an HTTP load balancer with a Compute Engine managed instance group as its backend.
Check out the video
Review
In the last blog we introduced Beyond Treat and their growing e-commerce site for vegan dog treats. They continue to see high traffic volumes, and their website backend now receives requests from all over the world.
We covered the types of load balancers on Google Cloud and when it’s appropriate to use each one. Some factors included global vs. regional traffic, and the type of traffic being served. For Beyond Treat, they’re primarily delivering HTTP(S) traffic to their end users all over the world. Thus, they’ve landed on using the Global HTTP(S) Load Balancer for their traffic needs. The video above explains how global load balancing employs the Waterfall by Region Algorithm to seamlessly overflow traffic to the next closest region with available backends. Let’s walk through an example to set up an HTTP load balancer targeting a backend instance group.
What you’ll learn and use
- Launch a demo web application on a regional managed instance group
- Configure a global load balancer that directs HTTP traffic across multiple zones
The sequence of events in the diagram is:
- A client sends a content request to the external IPv4 address defined in the forwarding rule.
- The forwarding rule directs the request to the target HTTP proxy.
- The target proxy uses the rule in the URL map to determine that the single backend service receives all requests.
- The load balancer determines that the backend service has only one instance group and directs the request to a VM in that group.
- The VM serves the content requested by the user.
You’ll be using:
- The default VPC network
- A Compute Engine managed instance group
- A default URL map
- A reserved external IP address
This walkthrough is based on the solution for application capacity optimizations with global load balancing.
Creating a managed instance group
To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. We’ll be creating a managed instance group with Linux VMs that have Apache running and then set up load balancing. The managed instance group provides VMs running the backend servers of an external HTTP load balancer.
1. In a Google Cloud project, create an instance template by entering the following in the Google Cloud Shell:
gcloud compute instance-templates create lb-backend-template \
    --region=us-east1 \
    --network=default \
    --subnet=default \
    --tags=allow-health-check \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
You can verify the template was created on the Templates page on the Google Cloud Console:
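If you prefer the command line, you can also confirm the template exists from Cloud Shell (an optional check; the exact output fields may vary by gcloud version):
gcloud compute instance-templates describe lb-backend-template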
2. Create the managed instance group based on the template.
gcloud compute instance-groups managed create lb-backend-example \
    --template=lb-backend-template \
    --size=2 \
    --zone=us-east1-b
You can verify the managed instance group was created on the Managed Instance Groups page on the Google Cloud Console:
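You can also ask the group to list the instances it manages; after a minute or two both VMs should show a RUNNING status (an optional check from Cloud Shell):
gcloud compute instance-groups managed list-instances lb-backend-example \
    --zone=us-east1-b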
Configuring a firewall rule
Next, create the fw-allow-health-check firewall rule. This is an ingress rule that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the VMs.
- In the Cloud Shell, enter:
gcloud compute firewall-rules create fw-allow-health-check \
    --network=default \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp
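If you want to double-check the rule, describing it should show the two health-check source ranges and the allow-health-check target tag (optional):
gcloud compute firewall-rules describe fw-allow-health-check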
Reserving an external IP address
- Now that your instances are up and running, set up a global static external IP address that your customers use to reach your load balancer.
gcloud compute addresses create lb-ipv4-1 \
    --ip-version=IPV4 \
    --global
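Make a note of the IPv4 address that was reserved; you’ll use it to test the load balancer later. One way to look it up:
gcloud compute addresses describe lb-ipv4-1 \
    --format="get(address)" \
    --global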
Setting up the load balancer
1. Create a health check.
gcloud compute health-checks create http http-basic-check \
    --port 80
2. Create a backend service.
gcloud compute backend-services create web-backend-service \
    --protocol HTTP \
    --health-checks http-basic-check \
    --global
3. Add your instance group as the backend to the backend service.
gcloud compute backend-services add-backend web-backend-service \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8 \
    --capacity-scaler=1 \
    --instance-group=lb-backend-example \
    --instance-group-zone=us-east1-b \
    --global
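The --max-utilization=0.8 and --capacity-scaler=1 flags define this backend’s effective capacity: once the instance group reports roughly 80% utilization, the load balancer starts overflowing new requests to the next closest healthy backend, which is how the Waterfall by Region behavior described earlier kicks in once you have backends in more than one region. As a sketch (not needed for this walkthrough), you could later advertise only half of that capacity, for example during maintenance, by updating the backend; the 0.5 value is purely illustrative:
gcloud compute backend-services update-backend web-backend-service \
    --instance-group=lb-backend-example \
    --instance-group-zone=us-east1-b \
    --global \
    --capacity-scaler=0.5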
4. Create a URL map to route the incoming requests to the default backend service.
gcloud compute url-maps create web-map-http \
    --default-service web-backend-service
5. Create a target HTTP proxy to route requests to your URL map.
gcloud compute target-http-proxies create http-lb-proxy \
    --url-map web-map-http
6. Create a global forwarding rule to route incoming requests to the proxy.
gcloud compute forwarding-rules create http-content-rule \
    --address=lb-ipv4-1 \
    --global \
    --target-http-proxy=http-lb-proxy \
    --ports=80
You can verify the load balancer was created correctly on the Load Balancing page on the Google Cloud console:
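You can also poll backend health from Cloud Shell; once the health check passes, both instances should report HEALTHY (it can take a few minutes after setup):
gcloud compute backend-services get-health web-backend-service \
    --global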
Sending traffic to your instances
Now that the load balancing service is running, you can send traffic to the forwarding rule and watch the traffic be dispersed to different instances.
- Go to the Load balancing page in the Google Cloud Console.
- Click the load balancer you just created.
- In the Backend section, confirm that VMs are healthy. The Healthy column should be populated, indicating that both VMs are healthy (2/2). If you see otherwise, first try reloading the page. It can take a few moments for the Cloud Console to indicate that the VMs are healthy. If they still aren’t listed as healthy, review the firewall configuration and the network tag assigned to your backend VMs.
- You can test your load balancer by going to http://IP_ADDRESS in a web browser, where IP_ADDRESS is the load balancer’s IP address (or from the command line, as sketched after this list).
- Your browser should render a page showing the name of the instance that served the page, along with its zone (for example, Page served from: lb-backend-example-xxxx). If you refresh a few times, you should see the serving instance alternate between the two backends.
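If you’d rather test from Cloud Shell than a browser, here is a small sketch that looks up the reserved address and sends a handful of requests; give the load balancer a few minutes to start serving before you try it:
IP_ADDRESS=$(gcloud compute addresses describe lb-ipv4-1 \
    --format="get(address)" --global)
for i in $(seq 1 6); do
  curl -s "http://${IP_ADDRESS}"
done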
Congrats! You just set up a global HTTP load balancer serving traffic to a Compute Engine managed instance group backend.
For more about this recipe, check out this solution and this tutorial. For a more complex example with cross-regional load balancing that uses the Waterfall by Region algorithm, stay tuned for the next article in this series.
Next steps and references:
- Follow this blog series on Google Cloud Platform Medium.
- Reference: Application Capacity Optimizations with Global Load Balancing
- Follow the Get Cooking in Cloud video series and subscribe to the Google Cloud Platform YouTube channel
- Want more stories? Follow me on Medium, and on Twitter.
- Enjoy the ride with us through this miniseries and learn about more Google Cloud solutions like this :)