Geofencing a Globally Load Balanced service on GCP using Cloud Armor

Dan Peachey
Google Cloud - Community
Jan 6, 2022

In this article we will look at using Cloud Armor to geofence a website or service running on GCP with Cloud Run, Google Cloud Storage (GCS) and the Global HTTP(S) Load Balancer.

Geofencing is where we create a virtual perimeter to stop a service from being accessed outside of that perimeter. This is often used for services delivering media content, where the content is only licensed for particular geographies/territories. For example, an OTT (Over the Top) streaming service may only have rights to stream content within the US and would need to prevent users from accessing the service from outside of the US.

To follow along with the article you will need a GCP account and a domain name that you own to test with. This article assumes that you are already familiar with the GCP Cloud Console and tools such as Cloud Shell, gcloud and gsutil.

Our architecture will look like the one in the diagram below. A Load Balancer is actually a set of components (front end, forwarding rule, HTTP(S) proxy, URL map + backends) that essentially uses a URL map to direct traffic to specific backends to handle the request. We will run a Cloud Run service that serves dynamic content, and a GCS bucket that serves static content and a “Service Unavailable” page. The Cloud Armor policy is attached to our backend service and will redirect any request not originating from the US to the “Service Unavailable” page. I’m using “US” for the example as that is where I am located. For testing, use the country code of the country you are located in.

Step 1: Create the Project and Deploy the Cloud Run Service

First we’ll create a project and a simple Cloud Run service.

Log into GCP Cloud Console and open Cloud Shell and run the following command, replacing <project-name> with your own project name:

gcloud projects create <project-name>
gcloud config set project <project-name>

In Cloud Console, select the newly created project and then choose Billing from the navigation to associate a billing account with the project.

From Cloud Shell run the following to clone the sample code from GitHub:

git clone https://github.com/danpeachey/geofence-example

Now, we’ll enable the necessary GCP services and deploy our example to Cloud Run:

gcloud services enable artifactregistry.googleapis.com \
cloudbuild.googleapis.com \
run.googleapis.com \
compute.googleapis.com
cd ./geofence-example/geofence-service
gcloud run deploy geofenceservice --source . --region us-central1

Respond Y to any prompts during deployment. Make a note of the Service URL provided as output and click it to view our newly deployed service.

You should see a simple web page with a Welcome message.

Step 2: Create the Global HTTPS Load Balancer

Next we’ll set up an external Global HTTP(S) Load Balancer to front the service. First we need to create an external IP address. In Cloud Shell run the following:

gcloud compute addresses create geo-service-lb-ip \
--ip-version=IPV4 \
--network-tier=PREMIUM \
--global

Now run the following command and make a note of the IP address that was created:

gcloud compute addresses describe geo-service-lb-ip \
--format="get(address)" --global

Now that we have an external IP address, we need to create a DNS A record that points to it. Depending on which service you used for registering your domain name, this will be different. Here are instructions for Google Domains and GoDaddy.

For example, if your test domain is example.com, create an A record for a sub-domain that we will use for testing the service, such as app.example.com, and point it to the IP address we just created.
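While waiting for the record to propagate, a quick resolver check can be scripted. This is a minimal sketch (the sub-domain and IP below are placeholders for your own values):

```python
import socket

def dns_points_to(hostname: str, expected_ip: str) -> bool:
    """Return True if the hostname currently resolves to the expected IP."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        # Name not resolvable yet, e.g. the A record hasn't propagated.
        return False

if __name__ == "__main__":
    # Replace with your sub-domain and the LB IP from the previous step.
    print(dns_points_to("app.example.com", "203.0.113.10"))
```

Run it periodically until it reports `True` before moving on to the certificate step.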

Next we will create an SSL certificate for use with the Load Balancer, replacing <my-domain> with your full sub-domain (e.g. app.example.com):

gcloud compute ssl-certificates create geo-service-cert \
--domains=<my-domain> \
--global

The certificate will not become active until we create a Load Balancer and associate the certificate with it. Run the following to view the certificate’s status:

gcloud compute ssl-certificates describe geo-service-cert \
--global \
--format="get(name,managed.status,managed.domainStatus)"

Load Balancers have several components: forwarding rules send traffic from the IP/port front end to an HTTP(S) proxy, which then uses a URL map to determine which backend service or backend bucket the traffic should be sent to. The GCP docs have a good overview of Load Balancer architecture, as shown in the diagram below (from the GCP documentation):

First we create a Serverless Backend (Network Endpoint Group/NEG) based on our Cloud Run service:

gcloud compute network-endpoint-groups create geo-service-cr-neg \
--region=us-central1 \
--network-endpoint-type=serverless \
--cloud-run-service=geofenceservice

We then create a Backend Service that will contain the Serverless Backend. If we wanted to provide HA/Redundancy we would create multiple instances of our Cloud Run service in different regions and add them all to the Backend Service, but for this example, we are just using one.

gcloud compute backend-services create geo-service-backend-service \
--load-balancing-scheme=EXTERNAL \
--global

Add the Serverless Backend to the Backend Service:

gcloud compute backend-services add-backend \
geo-service-backend-service \
--global \
--network-endpoint-group=geo-service-cr-neg \
--network-endpoint-group-region=us-central1

Create the URL map which will route incoming traffic to the correct Backend Service. For now, we can just use the default routing:

gcloud compute url-maps create geo-service-lb-url-map \
--default-service geo-service-backend-service

Create the HTTPS proxy:

gcloud compute target-https-proxies create \
geo-service-lb-https-proxy \
--ssl-certificates=geo-service-cert \
--url-map=geo-service-lb-url-map

Create a Forwarding Rule that forwards traffic to the HTTP(S) proxy:

gcloud compute forwarding-rules create geo-service-https-fw-rule \
--load-balancing-scheme=EXTERNAL \
--network-tier=PREMIUM \
--address=geo-service-lb-ip \
--target-https-proxy=geo-service-lb-https-proxy \
--global \
--ports=443

We have now set up our External Global Load Balancer. Once the DNS change made earlier has propagated and the certificate status has become active we should be able to visit https://<my-domain> and view the same Welcome page that was served directly by Cloud Run. You can check the certificate status with the command below:

gcloud compute ssl-certificates describe geo-service-cert \
--global \
--format="get(name,managed.status,managed.domainStatus)"

It can take up to several hours for everything to complete. Even if the certificate looks active and the DNS is pointing to the right IP, if you’re still seeing a 404 or similar error, give it a bit more time. My experience has been that everything completes within 30–45 minutes; yours may vary.

Step 3: Add Custom Headers and the Cloud Armor Policy

Since we are interested in geofencing the service, let’s add a custom header to our backend service that will insert the user’s region and city into the HTTP request headers. We can then use those values in our code to present different views based on location.

Run the following in Cloud Shell to add a custom header:

gcloud compute backend-services update geo-service-backend-service \
--global \
--custom-request-header 'X-Client-Geo-Location:{client_region},{client_city}'

Give it a minute or two, then revisit your service at https://<my-domain>. You should now see that the Welcome message includes your country and city.
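The sample service presumably reads this header to build that message; a handler might parse it along these lines. This is a sketch, not the repo's actual code, and note that either field can be empty when the load balancer cannot determine it:

```python
def parse_geo_header(value: str) -> tuple[str, str]:
    """Split the 'X-Client-Geo-Location: {client_region},{client_city}'
    header value into (region, city). Either field may be empty."""
    region, _, city = value.partition(",")
    return region.strip(), city.strip()

# Example: a header value the load balancer might inject.
region, city = parse_geo_header("US,Austin")
print(f"Welcome! You appear to be in {city}, {region}.")
```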

Currently our service is available to everyone globally. Let’s change that by adding a Cloud Armor policy. In this sample I’m going to deny access if you are located in the US (since that is where I am located). You can change the value ‘US’ to the two-character ISO 3166 country code of the country you are located in to test yourself.

gcloud compute security-policies create geofence-armor-policy
gcloud compute security-policies rules create 100 \
--security-policy geofence-armor-policy \
--expression "origin.region_code=='US'" \
--action deny-403
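The rule’s CEL expression is just a comparison against the request’s region code. In Python terms the logic is equivalent to the following (illustrative only; the real evaluation happens inside Cloud Armor):

```python
BLOCKED_REGIONS = {"US"}

def is_denied(region_code: str) -> bool:
    """Mirror of the CEL expression origin.region_code == 'US':
    deny when the caller's ISO 3166 region code is in the blocked set."""
    return region_code in BLOCKED_REGIONS

def is_denied_outside(region_code: str, allowed=("US",)) -> bool:
    """Negated form (origin.region_code != 'US'):
    deny everyone outside the allow list instead."""
    return region_code not in allowed
```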

Now we link the Cloud Armor policy to our backend service:

gcloud compute backend-services update geo-service-backend-service \
--global \
--security-policy geofence-armor-policy

Again, give the change several minutes to propagate. Now when you visit https://<my-domain> you should get a 403 denied error. We’ve successfully blocked users in specific locations from reaching our service, but it’s not the best experience. In a moment we’ll change the deny action to a redirect to a friendly HTML page letting the user know the service is not available in their area.

Step 4: Set up the GCS Bucket for Static Content and Create the Redirect Rule

First, let’s remove our Cloud Armor policy so we can see our service again. We update the backend service passing in an empty string for the security policy name:

gcloud compute backend-services update geo-service-backend-service \
--global \
--security-policy ""

Now, let’s set up a GCS bucket to host static content (e.g. images, videos) for our service, as well as to host our ‘Service Unavailable’ page. Then we’ll add backends to the load balancer and update our URL map to route specific requests to the backend buckets.

Let’s create the bucket for hosting the website, but before we do that you will need to verify your domain with Google. Once that is done run the following to create the bucket and copy up the content:

gsutil mb -l US -b on gs://<my-domain>
cd ~
gsutil cp -r ~/geofence-example/web/* gs://<my-domain>

We need to make the files public, and we’ll lower the default TTL so changes aren’t cached too long:

gsutil iam ch allUsers:objectViewer gs://<my-domain>
gsutil setmeta -h "cache-control: max-age=60" \
gs://<my-domain>/content/*
gsutil setmeta -h "cache-control: max-age=60" \
gs://<my-domain>/denied/*

Now we create the backend buckets:

gcloud compute backend-buckets create geofence-bucket-content \
--gcs-bucket-name=<my-domain>
gcloud compute backend-buckets create geofence-bucket-denied \
--gcs-bucket-name=<my-domain>

And update the URL map to route requests to /content to the content bucket, requests to /denied to the denied bucket, and all other requests to our default service. (Remember to replace <my-domain> with your domain name):

gcloud compute url-maps add-path-matcher geo-service-lb-url-map \
--path-matcher-name geo-fence-matcher \
--default-service geo-service-backend-service \
--backend-service-path-rules='/*=geo-service-backend-service' \
--backend-bucket-path-rules='/content/*=geofence-bucket-content,/denied/*=geofence-bucket-denied' \
--new-hosts=<my-domain>
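The resulting routing behaves like a prefix match over the path rules, falling through to the default service. A rough Python sketch of the decision (the backend names match the ones we created; the matcher itself lives inside the load balancer):

```python
# Path rules from the URL map; the default service catches everything else.
PATH_RULES = [
    ("/content/", "geofence-bucket-content"),
    ("/denied/", "geofence-bucket-denied"),
]
DEFAULT_BACKEND = "geo-service-backend-service"

def route(path: str) -> str:
    """Pick the backend a request path would be sent to."""
    for prefix, backend in PATH_RULES:
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND
```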

Let’s test our changes. If you visit https://<my-domain> you should now see the same Welcome message along with a content image that covers the background.

If you visit https://<my-domain>/denied/noservice.html you should see a page informing you that the service is not available in your area.

Now let’s update our Cloud Armor policy to redirect any attempts to access our service to our “Service Unavailable” page, instead of just giving a 403 denied status.

First let’s delete our current deny rule from the policy:

gcloud compute security-policies rules delete 100 \
--security-policy geofence-armor-policy

And replace it with one to do the redirect:

gcloud beta compute security-policies rules create 100 \
--security-policy geofence-armor-policy \
--expression "origin.region_code=='US'" \
--action redirect \
--redirect-type external-302 \
--redirect-target 'https://<my-domain>/denied/noservice.html'

Finally, re-associate the policy with the backend service:

gcloud compute backend-services update geo-service-backend-service \
--global \
--security-policy geofence-armor-policy

Give the changes a few minutes to propagate and then try going to https://<my-domain>. You should get automatically redirected to the “Service Unavailable” page. To reset, unlink the policy from the backend service. You can also update the rule to negate the logic (e.g. "origin.region_code!='US'") to test allowing access from the US (and blocking access attempts from outside the US).

Conclusion

Hopefully this article has been useful and helped demonstrate how Cloud Armor can be used to restrict services to specific regions. Remember to delete the project so you do not continue to get charged for it, and remove any unnecessary DNS entries you created during this exercise.

Also, a few caveats and observations before we go. Currently our original Cloud Run service is still available at its original run.app URL. This would allow users to bypass our Cloud Armor policy and access the service directly. It’s an easy fix: update your Cloud Run service to only allow traffic from “internal and load balancer” so that it can only be accessed via the Load Balancer:

gcloud run services update geofenceservice --platform managed \
--ingress internal-and-cloud-load-balancing \
--region us-central1

Also, we haven’t set up any Cloud Armor policies to protect direct access to static content.

E.g. https://<my-domain>/content/cloud.jpg is still available if you have the direct URL. You could set up a Cloud Armor edge policy to deny access (backend buckets only support edge policies, which support allow/deny actions but not redirects). However, since we made the GCS bucket public, users could still go to the public GCS URL and gain access that way. If you need to lock down the static content, it may be better to use a backend service (such as Cloud Run, GKE or Managed Instance Groups) that uses a service account to securely read from the bucket and passes the object back, then use Cloud CDN to scale out and reduce traffic to the origin. You could then apply Cloud Armor edge policies to the CDN and backend policies to the backend service to fully protect access to the content.
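The proxy pattern described above might look something like this. This is a sketch under stated assumptions, not code from the repo: `fetch_object` is a hypothetical helper, and it assumes the google-cloud-storage client library is installed and the service account has read access to a now-private bucket:

```python
import mimetypes

def content_type_for(object_path: str) -> str:
    """Guess the Content-Type to return alongside a proxied object."""
    ctype, _ = mimetypes.guess_type(object_path)
    return ctype or "application/octet-stream"

def fetch_object(bucket_name: str, object_path: str) -> tuple[bytes, str]:
    """Read an object from a private GCS bucket using the service's own
    service-account credentials and return (data, content_type).
    Requires the google-cloud-storage package."""
    from google.cloud import storage  # imported lazily; third-party dependency
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_path)
    return blob.download_as_bytes(), content_type_for(object_path)
```

A Cloud Run handler would call `fetch_object` for each request path and return the bytes, with Cloud CDN in front to cache responses and keep traffic off the bucket.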

That about wraps things up for now! Thanks for reading and I hope it was useful. Look out for more OTT related posts coming soon!
