Google API Gateway, Load Balancer and Content Delivery Network

Ganesh, Mohan
Oct 21 · 6 min read

We have been exploring the Google API Gateway product and found it ideal for organizations and developers to create, publish, maintain, monitor, and secure APIs. However, when API Gateway URLs are generated, GCP currently provides no steps to map them to a fully qualified custom domain. As the API Gateway product is still in beta, we presume those steps will come from Google. The generated API Gateway URL is publicly accessible, but it lacks a few commonly needed features such as DDoS protection and a WAF. In the meantime, you can put a load balancer in front of it to gain WAF, geofencing, and DDoS protection, so I am sharing one of the approaches that worked for us and seemed worth a shot.

Last week I wrote an article about how to configure GCP API Gateway; we will reuse the steps from that article while configuring the load balancer.

The GCP products used to complete this exercise are an internet NEG, a load balancer, and API Gateway endpoints, though you could apply the same configuration to any internet-facing HTTP endpoint. Once a request arrives at the load balancer frontend, it is proxied to the API Gateway service without exposing the API Gateway URL directly.

Some of the advantages of putting the Google Load Balancer and CDN in front of your endpoint:

  • Use Google Edge infrastructure for terminating your user connections.
  • Protect your endpoints from DDoS attacks.
  • Support Geofencing
  • Web Application Firewall (WAF) support against SQLi, XSS, etc.
  • Direct the connections to your custom origin.
  • Use Cloud CDN for your custom origin.
  • Deliver traffic to your public endpoint across Google’s private backbone, which improves reliability and can decrease latency between client and server.
  • Managed Certificates
  • Optimize for latency across the globe.
  • IP and region restriction (Cloud Armor) — see the sketch after this list.
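As a rough illustration of the Cloud Armor point, a geofencing policy could be attached to the backend service we create later in this walkthrough. This is only a sketch: apigateway-armor-policy is a made-up name, the US-only rule is an arbitrary example, and you should verify that Cloud Armor supports your backend type before relying on it.

# Hypothetical Cloud Armor policy that only allows traffic from the US
gcloud compute security-policies create apigateway-armor-policy \
  --description="Allow US traffic only"

# Allow requests whose origin region code is US
gcloud compute security-policies rules create 1000 \
  --security-policy=apigateway-armor-policy \
  --expression="origin.region_code == 'US'" \
  --action=allow

# Change the default rule (priority 2147483647) to deny everything else
gcloud compute security-policies rules update 2147483647 \
  --security-policy=apigateway-armor-policy \
  --action=deny-403

# Attach the policy to the backend service created later in this article
gcloud compute backend-services update apigateway-lb-backend \
  --security-policy=apigateway-armor-policy \
  --global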

The following steps are involved in achieving this task.

  • Creating an internet NEG (Network Endpoint Groups) and attaching a network endpoint defining your INTERNET_IP_PORT or INTERNET_FQDN_PORT endpoint.
  • Associating this Internet NEG with the external HTTP(S) load balancer’s backend service.
  • Adding the frontend forwarding rule for this external HTTP(S) load balancer, along with any URL map routing rules. If you have only one NEG-backed backend service, extra routing rules are optional, as all requests default to that backend.

Let’s get started.

When you create an internet network endpoint group, you specify the network endpoint type: a fully qualified domain name and port (INTERNET_FQDN_PORT) or an internet-accessible IP address and port (INTERNET_IP_PORT).

In the GCP console, open the navigation menu and, under the COMPUTE section, go to 'Compute Engine' and click on 'Network endpoint groups'. Click 'Create network endpoint group', which brings up a screen like the one below.

[Screenshot: the 'Create network endpoint group' form]

You could also do the same from Cloud Shell.

Initialize the defaults, such as the project ID and region, so that you can reuse them subsequently.

export PROJECT_ID=<your-project-id>
export REGION=<your-preferred-region>
gcloud config set project $PROJECT_ID
gcloud config set compute/region $REGION
gcloud services enable compute.googleapis.com

Create an internet NEG and set --network-endpoint-type to internet-fqdn-port.

gcloud compute network-endpoint-groups create apigateway-fqdn-neg \
  --network-endpoint-type="internet-fqdn-port" \
  --global

Attach the backend endpoint (your on-prem or API Gateway FQDN) to the internet NEG you just created.

gcloud compute network-endpoint-groups update apigateway-fqdn-neg \
  --add-endpoint="fqdn=open-api-gateway-v2-79vjly9e.uc.gateway.dev,port=443" \
  --global
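As an optional sanity check, listing the NEG's endpoints should show the FQDN you just attached:

gcloud compute network-endpoint-groups list-network-endpoints apigateway-fqdn-neg \
  --global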

Now we need to attach the NEG to the HTTP(S) load balancer as a backend. In the console, navigate to 'Network Services' under the NETWORKING section and click 'Load Balancing'; the equivalent gcloud commands are below.

# Create a backend service with Cloud CDN enabled
gcloud compute backend-services create apigateway-lb-backend \
  --global \
  --enable-cdn \
  --protocol=HTTP2

# Override the Host header so requests match the API Gateway hostname
gcloud compute backend-services update apigateway-lb-backend \
  --custom-request-header "Host: open-api-gateway-v2-79vjly9e.uc.gateway.dev" \
  --global

# Attach the internet NEG to the backend service
gcloud compute backend-services add-backend apigateway-lb-backend \
  --network-endpoint-group "apigateway-fqdn-neg" \
  --global-network-endpoint-group \
  --global

# Create a URL map with the backend service as the default
gcloud compute url-maps create apigateway-url-map \
  --default-service apigateway-lb-backend \
  --global

The next step creates a Google-managed certificate for the requested domain. It starts in the PROVISIONING state and moves to ACTIVE only after the DNS entry for the domain resolves to the load balancer. You can find more details here.

gcloud beta compute ssl-certificates create apigateway-ssl \
--domains DOMAIN
gcloud compute target-https-proxies create apigateway-target-https-proxy \
--url-map=apigateway-url-map \
--ssl-certificates=apigateway-ssl \
--global
gcloud compute forwarding-rules create apigateway-forwarding-rule \
--ip-protocol=TCP \
--ports=443 \
--global \
--target-https-proxy=apigateway-target-https-proxy

At this point we have created the load balancer in GCP. To activate it, we need to point our DNS record at the assigned IP address.

gcloud compute forwarding-rules list

NAME                        REGION  IP_ADDRESS    IP_PROTOCOL  TARGET
apigateway-forwarding-rule          35.190.1.198  TCP          apigateway-target-https-proxy
[Screenshots: the forwarding rule with its assigned IP address, and the managed certificate still provisioning]
Note the assigned IP address; the certificate status is not green yet.
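How you add the DNS record depends on where your zone is hosted. If it happens to live in Cloud DNS, a sketch of adding an A record might look like the following (example-zone and apigateway.example.com are placeholders, and 35.190.1.198 is the IP from the listing above):

# Sketch: add an A record in Cloud DNS pointing the domain at the LB IP
gcloud dns record-sets transaction start --zone=example-zone
gcloud dns record-sets transaction add 35.190.1.198 \
  --name=apigateway.example.com. \
  --type=A \
  --ttl=300 \
  --zone=example-zone
gcloud dns record-sets transaction execute --zone=example-zone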

The certificate status will change to "ACTIVE" once Google validates that the DNS entry for your domain points to the assigned IP address.
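One convenient (but optional) way to keep an eye on provisioning is to describe the certificate and read its managed status:

gcloud beta compute ssl-certificates describe apigateway-ssl \
  --format="value(managed.status)"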

Testing

Assuming you have assigned the IP address to a domain like apigateway.example.com, requests to that URL will be proxied to the API Gateway backend host via the load balancer.

Hitting https://apigateway.example.com/healthcheck should return the alive message from the previous tutorial.
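For example, a quick check from the command line (apigateway.example.com being the placeholder domain used above) could be:

# Should return the alive/health response from the API Gateway backend
curl -i https://apigateway.example.com/healthcheck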

Sometimes the very first request returns a bad gateway response (HTTP 502). That is okay; the newly created components are still initializing, and subsequent requests should be fine.

If your service response meets the CDN cache criteria, subsequent requests for the same resource will be served from Google's edge cache locations.

To verify whether responses are being served from the cache, go to the logs and filter by the load balancer's forwarding rule name.
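As a rough sketch of that check, assuming logging is enabled on the backend service, you could repeat a request and then read the load balancer logs, looking at the cacheHit field (the filter below uses the forwarding rule name created earlier):

# Repeat the same request a couple of times first
curl -s -o /dev/null -D - https://apigateway.example.com/healthcheck

# Then inspect the HTTP(S) load balancer logs for cache hits
gcloud logging read \
  'resource.type="http_load_balancer" AND resource.labels.forwarding_rule_name="apigateway-forwarding-rule"' \
  --limit=5 \
  --format="value(httpRequest.requestUrl, httpRequest.cacheHit)"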

More info about load balancer logging can be found here. This concludes the exercise of exposing an API or another external HTTP resource via the Google Load Balancer and caching the content where applicable. If you have set this up differently, please leave a comment; I would like to explore it.

Happy Exploring ; )
