Limiting Access to Google Cloud Storage by IP address

Quinlan Jung
Google Cloud - Community
3 min read · May 18, 2021

At the time of writing, there is an open feature request to restrict access by IP address in the bucket policy, but it has not yet shipped to production.

At Expo, we’re building a new service for our customers that requires us to limit all GCP bucket access to a set of IP addresses. Folks at Google are tracking a feature request to specify IP allowlists in the bucket policy, but it’s not available to the general public yet. The way to implement this today is with VPC Service Controls: you create a separate project containing just the bucket and configure a Service Perimeter around it.

A basic architecture using service perimeters

Here is a step-by-step tutorial on how to do this:

0. Create A Separate GCP Project

This project should only contain the buckets you want to restrict access to.
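If you manage infrastructure as code, this step can be sketched in Terraform with the standard `google_project` and `google_storage_bucket` resources. The names, IDs, and `var.org_id` below are placeholders, not values from this setup:

```hcl
# Dedicated project that will sit inside the service perimeter.
resource "google_project" "restricted" {
  name       = "restricted-buckets"
  project_id = "restricted-buckets-example" # must be globally unique
  org_id     = var.org_id
}

# The bucket whose access we want to limit by IP.
resource "google_storage_bucket" "restricted" {
  project  = google_project.restricted.project_id
  name     = "my-restricted-bucket" # must be globally unique
  location = "US"
}
```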

1. Create An Access Level

Conditions can be chained with OR/AND operators

First you must create an Access Level that specifies the IP addresses you’d like to allow. The easiest way to experiment with Access Level policies is to use the Access Context Manager UI. Access Levels will not be enforced until you apply them to a live service perimeter.

You’ll also want to grant a GCP admin or service account access to your IP-restricted project. This is useful when you need to create new resources in the project: permission must go either to the human creating the resources, or to the service account that automates the process for you.

It’ll be easier to manage your configurations in Terraform eventually, but it is not required for this setup. In order to codify both the IP-address and GCP-member allowlists in Terraform, you’ll need to chain them with condition resources like this:
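A minimal sketch of such an Access Level, assuming your access policy ID is available as `var.policy_id` and using a placeholder IP range and service account (both hypothetical):

```hcl
resource "google_access_context_manager_access_level" "ip_allowlist" {
  parent = "accessPolicies/${var.policy_id}"
  name   = "accessPolicies/${var.policy_id}/accessLevels/ip_allowlist"
  title  = "ip_allowlist"

  basic {
    # OR: a request is allowed if it matches ANY condition below.
    combining_function = "OR"

    # Condition 1: requests from the allowed IP range.
    conditions {
      ip_subnetworks = ["203.0.113.0/24"] # placeholder CIDR
    }

    # Condition 2: requests from the automation service account,
    # so it can manage resources from outside the IP range.
    conditions {
      members = ["serviceAccount:deployer@my-project.iam.gserviceaccount.com"] # placeholder
    }
  }
}
```

With `combining_function = "OR"`, either condition is sufficient; switch to `"AND"` if you want requests to satisfy every condition.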

2. Create A Service Perimeter

Create your test perimeter in dry-run mode first!

Before codifying your perimeter in Terraform, I’d highly recommend creating a service perimeter in dry-run mode from the UI first. You can choose the projects and the services you want to protect. Once you apply your dry-run policy, check the audit logs in Logs Explorer to confirm it has the expected behavior.

In this basic configuration, we’ve created an organization-wide service perimeter that only protects our project with the buckets. We’ve further restricted it to apply to the Google Cloud Storage service.
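That perimeter can be sketched in Terraform as follows, assuming `var.policy_id` holds the access policy ID and `var.restricted_project_number` holds the project number of the bucket project (both placeholders):

```hcl
resource "google_access_context_manager_service_perimeter" "storage_perimeter" {
  parent = "accessPolicies/${var.policy_id}"
  name   = "accessPolicies/${var.policy_id}/servicePerimeters/storage_perimeter"
  title  = "storage_perimeter"

  status {
    # Only the project containing the restricted buckets.
    resources = ["projects/${var.restricted_project_number}"]

    # Only protect Google Cloud Storage.
    restricted_services = ["storage.googleapis.com"]

    # Requests must satisfy the Access Level from step 1.
    access_levels = [google_access_context_manager_access_level.ip_allowlist.name]
  }
}
```

To replicate the UI’s dry-run mode in Terraform, the same resource supports `use_explicit_dry_run_spec = true` with the configuration placed in a `spec` block instead of `status`.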

3. Create A No-Op Perimeter and Bridge

This step is necessary if you are moving data from another project into your restricted project. For example, if you are moving files from a bucket located in another project to the bucket in your IP-restricted project, you’ll need to complete this step. If the data being moved can be attributed to another Google Cloud project, the perimeter will block requests even though you’ve added the appropriate members to your Access Level. You’ll need to explicitly enable communication with a bridge connection to your IP-restricted perimeter.

In the most basic case, you can accomplish this by putting all your other projects in a no-op perimeter. This will let you create a bridge to connect projects from the no-op perimeter to the project in your IP-restricted perimeter.
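A sketch of the no-op perimeter and the bridge, again assuming placeholder variables for the policy ID and the two project numbers:

```hcl
# No-op perimeter: holds the other project but restricts no services,
# so it blocks nothing on its own.
resource "google_access_context_manager_service_perimeter" "noop_perimeter" {
  parent = "accessPolicies/${var.policy_id}"
  name   = "accessPolicies/${var.policy_id}/servicePerimeters/noop_perimeter"
  title  = "noop_perimeter"

  status {
    resources = ["projects/${var.other_project_number}"]
    # No restricted_services listed.
  }
}

# Bridge: allows communication between projects in the two perimeters.
resource "google_access_context_manager_service_perimeter" "bridge" {
  parent         = "accessPolicies/${var.policy_id}"
  name           = "accessPolicies/${var.policy_id}/servicePerimeters/bridge"
  title          = "bridge"
  perimeter_type = "PERIMETER_TYPE_BRIDGE"

  status {
    resources = [
      "projects/${var.restricted_project_number}",
      "projects/${var.other_project_number}",
    ]
  }
}
```

Note that a bridge perimeter only lists resources; it cannot carry its own restricted services or access levels.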

After you’ve applied all the Terraform configurations, you should now have an IP-restricted bucket that plays well with all your other GCP projects.

Got any questions? Feel free to DM me @quinlanjung on Twitter :)
