Landing Zone Deployment (Google Cloud Adoption Series)

Dazbo (Darren Lester)
Google Cloud - Community
21 min read · May 27, 2024

Welcome back to the Google Cloud Adoption and Migration series. Sorry this one has taken me so long to publish. I ran into a bit of a technical blocker! (More on that later.)

Previously I covered how to establish your core LZ team, how to get the support you need, the workshops you should run, and how to capture and document your design.

Today, we’re into the fun stuff! Let’s actually deploy that LZ!

Me playing Elite Dangerous: A spaceship in a hangar. Well, it’s sort of landing zone deployment, right?

Four Ways to Deploy the LZ

There are broadly three… maybe four approaches to deploying your Google Cloud landing zone. These are:

  1. “ClickOps and Deploy” — This is where you follow the guided click-by-click process in the Google Cloud Console, supported by the Google Cloud Foundation Setup Checklist. A button-press then performs the actual deployment.
  2. “ClickOps, Download and Deploy” — Here, we still follow the UI-based process in the Console. But rather than running the deployment from the Console, we download the Terraform configuration generated by the process. This allows us to store, tweak and run the deployment separately. It also allows us to undeploy!
  3. Cloud Foundation Fabric FAST — an enterprise-ready landing zone blueprint implementation, built by pre-aggregating a set of Fabric reference blueprints. It is a Terraform-based solution for bootstrapping and building the GCP LZ from scratch.
  4. Roll Your Own Terraform — Sure, you can do this. But I wouldn’t recommend it. Google have built FAST from the ground up, and crowd-sourced a huge amount of input from various enterprises. So I’m not going to cover this option any further in this article.

Pre-Reqs

Regardless of the approach you take, it is essential that you have completed the LZ design phase. There’s a bunch of decisions you need to have made.

1 — ClickOps and Deploy

This method uses the guided click-by-click process, through the Google Cloud Console.

Who Is This For?

This is typically aimed at small organisations with a very small Google Cloud platform team, and with little or no Terraform skills.

Pros

  • The overall process is very fast. You can literally configure and deploy your Google organisation and landing zone within an hour or two.
  • No Terraform skills required; the entire process can be executed through the Google Cloud Console.
  • Simple and easy to follow, with a limited set of configuration options.

Cons

  • Limited configurability of the final LZ.
  • No repeatability.
  • No creation of automated CI/CD pipelines, tenant factory or project factory.

Steps Summary

  1. Organisation setup — create the organisation; setup a Cloud Identity tenant linked to your organisation; verify your domain; and setup super admin accounts.
  2. Configure Users and Groups in Cloud Identity — including any synchronisation using GCDS; setup SSO if required; setup admin groups (e.g. for organisation, billing, network, security, logging).
  3. Setup Administrative Access with IAM — i.e. mapping users to the groups we established previously; assign Google Cloud IAM roles to the groups.
  4. Setup Billing — billing account; budgets and billing alerts; billing exports.
  5. Create the Resource Hierarchy and Access — the initial resource hierarchy structure, including folders and common projects.
  6. Centralise Logging including export to BigQuery.
  7. Networking — deploy shared VPCs; configure any hybrid connectivity; initial firewall configuration; any egress routes and NAT.
  8. Hybrid Connectivity — e.g. IPsec VPN to your on-premises network.
  9. Monitoring — configure centralised Cloud Monitoring.
  10. Security — configure organisational policies; enable SCC dashboard.
  11. Support — select the preferred support option, e.g. Basic or Premium support.

The actual deploy stage only happens after step 8 in the setup. Until this point, everything is just configuration to be applied.

Before we look at the steps in detail, first launch the Google Cloud setup checklist. This provides you with step-by-step guidance for completing the process. Each numbered item in the checklist expands to show more detail.

1 — Organisation Setup

First we need to setup Google Cloud Identity (if you’re not already a Cloud Identity or Google Workspace customer), and then link your Cloud Identity account to your Google Cloud organisation.

Assuming you’re not yet a Cloud Identity customer, we will proceed by opening the Cloud Identity Sign-Up page. There are free and premium editions of Cloud Identity, but for this process, I’ll guide you through using the free tier.

Here’s something you might find useful! Let’s say that — like me — you already have a Cloud Identity account and you’ve already created a Google Cloud organisation. But you’d like to create a new Cloud Identity account and a separate organisation, for experimenting with this process. If so, you can do that using subdomains! For example, I’m already using the domain just2good.co.uk as a Google Cloud organisation. But I’ve just created the subdomain gcp-demos.just2good.co.uk for demo purposes.

The Cloud Identity sign-up starts with a screen that looks like this:

Signing up for Cloud Identity

Then you must specify the business domain name. This will also become our Google Cloud organisation name.

Provide business domain name

Then we enter our domain name:

Here I’ve used a subdomain to create a Cloud Identity account

Now we’re prompted for our super admin account details. This is the email address that will be the super-admin for Cloud Identity. It is the email address that our super admin will use to sign in to the Google Cloud Identity Admin Console. (Not to be confused with the Google Cloud Console.) It must be an email associated with the business domain we’ve just specified. For example: if Bob at mydomain.com will be the super admin, I like using an address like: super-bob@mydomain.com.

Note: this is a powerful account, and it is entirely separate from any email addresses or groups that will be associated with the Google Cloud organisation.

Sign-on to the Cloud Identity Admin Console:

Sign on to the Cloud Identity Admin Console with your Super Admin

Now you must verify your domain name.

Verify you own the domain

The domain verification process is quick and easy. Typically, you do this by creating a DNS TXT record, using a value that Google supplies. The Admin Console guides you through the process.
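Once you've added the TXT record at your DNS provider, you can check that it has propagated before asking Google to verify. Here's a quick sketch, using my demo subdomain (substitute your own):

```shell
# Query the TXT records for the domain; you should see the
# google-site-verification value that Google supplied
dig TXT gcp-demos.just2good.co.uk +short
```

DNS propagation can take a few minutes to a few hours, depending on your provider and TTLs.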

Once your domain is verified, a Google Cloud organisation resource is automatically created. This is your top-level organisation in Google Cloud.

The Admin Console now suggests creating users. However, we want to proceed in Google Cloud Console, rather than the Cloud Identity Admin Console. So click the link to “Setup in Google Cloud Console”.

Logging in to Google Cloud Console with your new Super User account

And we can now proceed with the Cloud Foundation Setup checklist from the Google Cloud Console.

However, at this point — back in the Cloud Identity Admin Console — I would also recommend:

  • Adding one or two additional super admin accounts. (What happens if your one-and-only super admin falls down a giant sinkhole?)
And that was the last I saw of my Super Admin
  • Setting up two-step verification (MFA) for your super admins.
  • Setting up account recovery.
  • Defining a password policy, including password expiration.

Step 1 is now complete!

2 — Configure Users and Groups in Cloud Identity

Here we create user groups and add members to these groups. At this stage of the Checklist, it is recommended to add any users that will be involved in any stages of the Cloud Setup.

Back in the Google Cloud Setup, we can move on to Step 2:

Step 2 in the Google Cloud Setup ClickOps flow

The Cloud Identity Admin Console allows super admins to provision administrative groups, and then to set up initial users. We can do this by:

  • Manually adding groups (and users) in the Cloud Identity Admin Console.
  • By setting up Google Cloud Directory Sync (GCDS), to replicate identities from an existing LDAP or Active Directory (AD) system. This is a free tool which performs one-way synchronisation of users and groups to Google Cloud. Your existing LDAP/AD system remains your golden source. I describe this in more detail in a previous article in the series.
GCDS
  • But the easiest way: create our groups using the “Create all groups” button from Google Cloud Setup. This automatically provisions the recommended groups in Cloud Identity.
Creating our identity groups

After completing this step, the groups will be provisioned and visible in the Google Admin Console.

Now we can add users (members) to these groups. Click on “Go to the Google Admin Console”:

Then go ahead and add users to these groups. I’ve created some sample users:

Create users in the Admin Console

These users will receive automatic emails. Of course… only if they have valid mailboxes! (For the purposes of a demo like this, I usually set up a forwarding rule, e.g. from *@my-domain to a valid email account that I already have.)

And next, I’ll add them to the appropriate groups:

Add users to groups

Now, switch back to the Google Cloud Console. You’ll see that your groups now have members:

Our groups now have members

We can now click on “Continue to Administrative Access.”

3 — Administrative Access

Next we assign Google Cloud IAM roles to the various groups we created in the previous step. This confers appropriate Google Cloud permissions to our Cloud Identity groups.

The Google Cloud Setup will then propose the roles that will be assigned to each of the previously created groups. You can go ahead and click on “Save and grant access”.

4 — Billing

In this step you will create a billing account (assuming you don’t already have one) and associate it with your new Google Cloud organisation. You can also optionally setup budgets, billing alerts, and billing exports.

The billing account is used to pay for all the Google Cloud resources you consume. Resource consumption (and costs) are accumulated at project level, and each project is associated with the billing account.
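You can inspect this project-to-billing-account relationship from the CLI, too. A small sketch (the project ID here is a placeholder):

```shell
# List the billing accounts visible to your account
gcloud billing accounts list

# Show which billing account (if any) a given project is linked to
gcloud billing projects describe my-project-id
```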

Select billing account type

For the purposes of this demo, I’ll proceed with the default billing account type, which is the online (“self-serve”) type. Eligible organisations can always switch to an invoiced billing account later.

Select “Online billing account”, then “Continue”, and then “Create billing account”:

Create the billing account

To complete the billing account setup, you’ll need to enter your payment card information. Don’t worry: you won’t be charged anything. Note: a new Google Cloud setup comes with $300 of free credit.

Billing account setup complete

If you now click on “My Billing Account”, you can go ahead and set up budgets and billing alerts. (You can always do this later.)

Billing Account view in the Google Console

Click on Budgets & alerts, then Create a budget. Name your budget, set a time range, and choose which projects the budget applies to (the default is all projects).

Creating a Budget

Next we specify the thresholds that should trigger alerts, and where the alerts should be sent. In this example, I’ll just send emails to the billing admins group. But we can do more sophisticated things, such as writing to a Pub/Sub topic.

Defining budget alert thresholds
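The same budget and thresholds can also be created from the CLI. A rough equivalent of what I configured above (the billing account ID and display name are placeholders):

```shell
# Create a $100/month budget with alerts at 50%, 90% and 100% of spend
gcloud billing budgets create \
  --billing-account=XXXXXX-XXXXXX-XXXXXX \
  --display-name="monthly-budget" \
  --budget-amount=100USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0
```

By default, alert emails go to billing account admins and users; Pub/Sub notifications can be wired in separately.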

5 — Resource Hierarchy and Access

This is where we start to leverage those early design decisions. In this step, we setup the organisation resource hierarchy.

At this point, you might need to request an increase to the project quota associated with your billing account. This is because the Cloud Setup creates a number of projects during the hierarchy stage. The Setup will guide you through how to create the quota request. Do make sure that the account you’re using to perform the quota request has a valid email address!

You may need to request a project quota increase

It was the quota request step that blocked me from completing this article for a while. The request should be actioned by Google within 2 working days. But at the time of writing, the Google process for actioning the quota uplift is a bit broken, so I ended up requiring a bit of manual intervention! My friends at Google tell me this will be fixed very soon.

Okay, back to the Cloud Setup… We can choose from one of four preset hierarchy blueprints.

Hierarchy blueprints

When you select a hierarchy from the list, the Console shows a preview of what the created hierarchy will look like. Let’s compare them…

Simple, environment-oriented hierarchy:

This is a good one to use if you have a very small organisation. Maybe just a handful of developers.

Org/
├── Common/
│ ├── vpc-host-prod 📦
│ ├── vpc-host-nonprod 📦
│ ├── logging 📦
│ ├── monitoring-prod 📦
│ ├── monitoring-nonprod 📦
│ └── monitoring-dev 📦
├── Prod/
│ ├── app1-prod-svc 📦
│ └── app2-prod-svc 📦
├── Nonprod/
│ ├── app1-nonprod-svc 📦
│ └── app2-nonprod-svc 📦
└── Dev/
  • Projects are shown with the 📦 icon. The other entries are folders.
  • This hierarchy is organised into Common, plus three environments at the top level: Prod, Nonprod, Dev.
  • Note how this is implementing a dual shared VPC design, with one in prod and one in non-prod. In Common, we have a host project for each.
  • This design implements the monitoring design where we have one metrics scope per environment. We have a project to host each metrics scope.

Simple, team-oriented hierarchy:

Org/
├── Common/
│ ├── vpc-host-prod 📦
│ ├── vpc-host-nonprod 📦
│ ├── logging 📦
│ ├── monitoring-prod 📦
│ ├── monitoring-nonprod 📦
│ └── monitoring-dev 📦
├── team-huey/
│ ├── Prod/
│ │ └── huey-prod-svc 📦
│ ├── Nonprod/
│ │ └── huey-nonprod-svc 📦
│ └── Dev/
└── team-dewey/
├── Prod/
│ └── dewey-prod-svc 📦
├── Nonprod/
│ └── dewey-nonprod-svc 📦
└── Dev/
  • Here, the top-level categorisation is by team, rather than environment.
  • Each team is then divided into folders for each of the three environments.

Environment-oriented hierarchy:

Org/
├── Common/
│ ├── vpc-host-prod 📦
│ ├── vpc-host-nonprod 📦
│ ├── logging 📦
│ ├── monitoring-prod 📦
│ ├── monitoring-nonprod 📦
│ └── monitoring-dev 📦
├── Prod/
│ ├── retail-banking/
│ │ ├── huey/
│ │ │ └── rb-huey-prod-svc 📦
│ │ └── dewey/
│ │ └── rb-dewey-prod-svc 📦
│ ├── wealth-mgmt/
│ └── mortgages/
├── Nonprod/
│ ├── retail-banking/
│ │ ├── huey/
│ │ │ └── rb-huey-nonprod-svc 📦
│ │ └── dewey/
│ │ └── rb-dewey-nonprod-svc 📦
│ ├── wealth-mgmt/
│ └── mortgages/
└── Dev/
├── retail-banking
├── wealth-mgmt
└── mortgages
  • This organises around environment at the top level, but then also splits each environment into separate business units.
  • Each business unit contains a number of teams.

Business-unit oriented hierarchy:

Org/
├── Common/
│ ├── vpc-host-prod 📦
│ ├── vpc-host-nonprod 📦
│ ├── logging 📦
│ ├── monitoring-prod 📦
│ ├── monitoring-nonprod 📦
│ └── monitoring-dev 📦
├── retail-banking/
│ ├── huey/
│ │ ├── Prod/
│ │ │ └── rb-huey-prod-svc 📦
│ │ ├── Nonprod/
│ │ │ └── rb-huey-nonprod-svc 📦
│ │ └── Dev/
│ └── dewey/
│ ├── Prod/
│ │ └── rb-dewey-prod-svc 📦
│ ├── Nonprod/
│ │ └── rb-dewey-nonprod-svc 📦
│ └── Dev/
├── wealth-mgmt/
│ ├── huey/
│ │ ├── Prod/
│ │ ├── Nonprod/
│ │ └── Dev/
│ └── dewey/
│ ├── Prod/
│ ├── Nonprod/
│ └── Dev/
└── mortgages/
├── huey/
│ ├── Prod/
│ ├── Nonprod/
│ └── Dev/
└── dewey/
├── Prod/
├── Nonprod/
└── Dev/
  • Here we categorise by business unit at the top level, then team, then environment.

Note that regardless of our chosen hierarchy, we have the option to configure:

  • The number of business units, and their names. (In this demo, I’ve decided that my organisation is a financial institution / bank, and I’ve created top-level business units for: retail banking, wealth management, and mortgages.)
  • The number of teams, and their names.
  • The names of the three environments.
  • The names of service projects that will be associated with a shared VPC host project.
  • Additional custom projects.

For organisations of any significant size, I tend to like the environment-oriented hierarchy (environment → business unit → team), so I’m going to use this hierarchy for the demo.

Now we need to apply IAM roles to the folders and projects in our hierarchy. The Google Cloud Setup recommends additional roles that should be added to each of our groups. But this time, rather than only applying at the organisational level, we are applying to the resource hierarchy. Recall that IAM policies are inherited down the hierarchy, and permissions are additive. Therefore, the effective access is the union of the policies inherited at each level, plus the policy at the lowest level.
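To illustrate the inheritance: a role granted on a folder flows down to every project beneath it. A hypothetical example, using a placeholder folder ID and one of the groups we created earlier:

```shell
# Grant Viewer on a folder; every project under this folder
# inherits the binding (folder ID is a placeholder)
gcloud resource-manager folders add-iam-policy-binding 123456789 \
  --member="group:gcp-devops@gcp-demos.just2good.co.uk" \
  --role="roles/viewer"
```

A project under that folder could then grant the same group a broader role (e.g. editor), and the effective access would be the union of the two.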

The Google Cloud Setup recommendations will look something like this:

Recommended access policies for the hierarchy

We can go ahead and click “Confirm draft configuration.”

6 — Centralise Logging

Here, Cloud Setup helps us setup centralised logging.

Setting up centralised logging

Click on Start Logging Configuration:

Logging configuration in the setup

First, we setup aggregated logging of all audit logs to a centralised logging bucket, which is stored in the logging project we created earlier. (This is as per the best practice I documented here.)

You can also setup logs routing to BigQuery, and routing of archive logging (e.g. for compliance purposes) to a cheaper GCS bucket.

Go ahead and “Confirm draft configuration.”

Logging configuration complete

7 — VPC Networks

In this step, we set up a pair of shared virtual private cloud (VPC) networks, as per the dual shared VPC pattern: one in prod, and one in non-prod.

Shared VPC network setup

The Google Cloud Setup requires us to configure a pair of subnets for each VPC. This is the minimum; you can configure additional subnets in this phase.

You must configure the subnets for each VPC

I would recommend:

  • Create the first prod subnet in one region, and the second prod subnet in another region. This allows you to deploy dual region architectures, where you require the regional redundancy. It also allows you to configure high availability hybrid connectivity with a 99.99% SLA, as described in my article here.
  • Configure a similar pair of subnets in non-prod. Note: you can choose to use the same IP CIDR ranges for your non-prod VPC as you’ve chosen for your prod VPC. This is because IPs only need to be unique within a given VPC. However, this means you would not be able to peer your prod subnets to your non-prod subnets. But you might want to enforce this separation anyway.
  • Enable Private Google Access on each subnet. This allows VMs without external IP addresses to access the public IP addresses of Google APIs and services.
  • Enable Cloud NAT on each subnet. This allows outbound connections to the Internet for: VMs without an external IP address, Private GKE clusters, Cloud Run through serverless VPC access, and Cloud Functions through serverless VPC access.
  • Leave the recommended VPC firewall rules configured. By default, Google applies firewall rules which allow ICMP from anywhere inside your VPC, and allows SSH or RDP only from the Cloud Identity-Aware Proxy (IAP) range (35.235.240.0/20).
  • By default, the Setup enables firewall rules logging. However, this can be potentially expensive. I would consider only enabling logging on specific rules.
  • By default, the Setup enables VPC flow logs. This records samples of network flows sent or received by VMs in your VPC (including GKE nodes). This is useful for network analysis and forensics. But again, it can be expensive. My recommendation is to leave them on, but then — when you’ve completed the Setup — tune the flow logs to limit the volume of logging. (I will cover this in my FinOps recommendations, later in the series.)
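If you later need to reproduce these subnet settings outside the Setup, the gcloud equivalents look roughly like this. (Network, subnet, router and region names are all illustrative; run these in the relevant shared VPC host project.)

```shell
# Enable Private Google Access on an existing subnet
gcloud compute networks subnets update prod-subnet-1 \
  --region=europe-west2 \
  --enable-private-ip-google-access

# Cloud NAT requires a Cloud Router; one NAT gateway serves a region
gcloud compute routers create prod-router \
  --network=vpc-prod \
  --region=europe-west2

gcloud compute routers nats create prod-nat \
  --router=prod-router \
  --region=europe-west2 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```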

Here is my configuration. (Logging is off to keep this as cheap as possible, for the purposes of this demo).

Sample dual shared VPC config

Go ahead and click “Continue to link service projects”. This takes us to a screen where we can associate projects that we configured earlier, as service projects. This means that these projects will be able to consume the shared VPCs.
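For reference, the same host-project/service-project association can be made with gcloud (both project IDs here are placeholders):

```shell
# Designate the host project, then attach a service project
# so it can consume the shared VPC
gcloud compute shared-vpc enable vpc-host-prod-id

gcloud compute shared-vpc associated-projects add app1-prod-svc-id \
  --host-project=vpc-host-prod-id
```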

Finally, click on “Confirm draft configuration.”

8 — Hybrid Connectivity

Next, the Cloud Setup takes us through hybrid connectivity. At the time of writing, this setup task is in Preview. It allows you to configure hybrid connectivity using IPsec VPN.

Setup hybrid connectivity

When we start, we’ll see a screen like this:

Hybrid connectivity overview

For the purposes of this demo, I won’t be setting up any hybrid connectivity.

Deploy or Download

Now we have a significant choice. We can deploy directly from the console, which will apply everything we’ve configured so far.

Alternatively, we can download the Terraform configuration we’ve built. If we download the Terraform, then this takes us to…

2 — “ClickOps, Download and Deploy”

Here, all the steps above remain identical. But rather than deploying from the Console, we instead download the Terraform.

Download or deploy your Terraform configuration

Who Is This For?

This is for small to medium sized organisations, who:

  • Want to manage their ongoing LZ and Google Cloud infra resources with Terraform. (And this is always a good idea!)
  • Want to be able to add additional customisation and configuration to the basic Click-Ops setup.
  • Do not necessarily have a strong enough Platform Team to proceed with a more sophisticated enterprise LZ deployment, such as Fabric FAST.

Pros

  • The Terraform configuration is built by the Cloud Console “Click-Ops” process I described above. So this can easily be done in a day.
  • We can then tweak the Terraform, as desired.
  • The Terraform can (and should) be placed under source control, e.g. in GitHub.
  • We can work with the Terraform configuration collaboratively.
  • If we make any future changes to our Terraform configuration, we can simply apply it.
  • If we want to delete our entire LZ, we can do it in seconds, with one line.

Cons

  • Requires a bit of Terraform skill.
  • No creation of automated CI/CD pipelines, tenant factory or project factory.

Download the Terraform from Setup

Let’s begin!

First, click on “Download as Terraform”. Go ahead and select the region for the bucket where we’ll store the Terraform state. This doesn’t actually create the bucket just yet; instead, it generates a unique bucket identifier in the backends.tf file that we’ll download later.

Then download the Terraform configuration. It will be downloaded to your local machine as terraform.tar.gz.

Upload the Terraform to Cloud Shell

Once downloaded, we can run the Terraform in a couple of ways:

  1. From the Google Cloud Shell.
  2. From any machine where the Cloud SDK is installed.

For the purposes of this demo, I’ll use the Cloud Shell for simplicity. It has everything pre-installed that we need to deploy our foundation with Terraform.

Let’s authenticate to Cloud Shell, and make a folder for our Terraform configuration:

# check which account we're authenticated as
gcloud auth list

# make a folder for our Terraform config
mkdir tf-foundation-setup
cd $_

Now upload our Terraform config (the .gz file) through the Console, then extract it to the folder we created before:

# Extract the file we just uploaded
tar -xzvf ../terraform.tar.gz

So now we’ve got these files in our folder:

The Terraform config has been extracted to a folder in Cloud Shell

Create a Seed Project for our Terraform State

# create a seed project for storing our Terraform state
SUFFIX=$RANDOM
PROJECT_ID=seed-project-$SUFFIX
gcloud projects create $PROJECT_ID
gcloud config set project ${PROJECT_ID}

# Link billing account
gcloud billing projects link $PROJECT_ID --billing-account <YOUR_BILLING_ACCOUNT_ID>

# Enable APIs we'll need to deploy with Terraform
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable serviceusage.googleapis.com
gcloud services enable cloudbilling.googleapis.com
gcloud services enable cloudidentity.googleapis.com
gcloud services enable orgpolicy.googleapis.com

Let’s have a quick look at the projects we now have in our organisation:

Our new seed-project has been created

You can see the seed-project-28844 we’ve just created.

Now let’s look at the bucket that will be used for persisting state, in backends.tf. I’ve actually changed the name of my bucket to be something more meaningful. Note that it also needs to be a globally unique name.

Configuration of our Terraform backend state, using GCS
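I made that rename by editing backends.tf directly; if you prefer the command line, a sed substitution works too. (Both the match pattern and the new bucket name here are illustrative — check the exact line in your own backends.tf first.)

```shell
# Replace the generated bucket name with a more meaningful,
# globally unique one (names are placeholders)
sed -i 's/bucket = ".*"/bucket = "tfstate-28844"/' backends.tf
```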

We’ll need to create this bucket before we can use it. (Pass -l to create it in the region you selected earlier, rather than the default multi-region.)

# Create the state bucket in your chosen region
gsutil mb -l <YOUR_REGION> gs://tfstate-28844

We can check it has been created:

Our state bucket has been created
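One optional hardening step the Setup doesn’t do for you: enable object versioning on the state bucket, so a corrupted or accidentally deleted state file can be recovered. (Bucket name as per my demo.)

```shell
# Keep old generations of the Terraform state file
gsutil versioning set on gs://tfstate-28844

# Confirm versioning is enabled
gsutil versioning get gs://tfstate-28844
```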

Store in GitHub

Now would be a good time to store your Terraform config in GitHub, or perhaps Google Cloud Source Repositories. Here is the process you might follow to store it in a private GitHub repo:

# Assuming we're in the tf-foundation-setup folder we created earlier

# Setup git in Cloud Shell, if you haven't done so before
git config --global user.email "bob@wherever.com"
git config --global user.name "Bob"

# Create local git repo.
# Before proceeding, make sure you have created .gitignore file
# to ignore .terraform dirs and local state, plans, etc.
git init
git add .
git commit -m "Initial commit"

# Let's authenticate the GitHub command line tool
# It is already installed on Cloud Shell
gh auth login

# Now let's use gh cli to create a remote private repo in GitHub
gh repo create gcp-demos-foundation-setup --private --source=.
git push -u origin master

Great. Now our code is safely tracked, and available to our team.
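The .gitignore mentioned in the comments above might look like this — a minimal sketch covering the usual local Terraform artefacts; extend it to taste:

```shell
# Create a .gitignore before the initial commit, so that local
# Terraform artefacts never end up in the repo
cat > .gitignore <<'EOF'
.terraform/
*.tfstate
*.tfstate.backup
*.tfplan
plan.out
crash.log
EOF
```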

Ensure We Have Permission

You’ll need to ensure your organization-admins group has appropriate roles granted in the seed project:

gcloud organizations add-iam-policy-binding $ORG_ID \
--member="group:gcp-organization-admins@gcp-demos.just2good.co.uk" \
--role="roles/storage.admin"

gcloud organizations add-iam-policy-binding $ORG_ID \
--member="group:gcp-organization-admins@gcp-demos.just2good.co.uk" \
--role="roles/compute.xpnAdmin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="group:gcp-organization-admins@gcp-demos.just2good.co.uk" \
--role="roles/serviceusage.serviceUsageConsumer"
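The commands above assume $ORG_ID (and $PROJECT_ID, from the seed project step) are set in your shell. If not, you can look up the organisation ID like so:

```shell
# List the organisations visible to your account, and note
# the numeric ID column
gcloud organizations list

# Substitute your own values - these are illustrative
export ORG_ID=123456789012
export PROJECT_ID=seed-project-28844
```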

Terraform

Finally, we’re ready to get Terraforming!!

# Initialise Terraform
terraform init

Here’s the output of the command.

If you refresh the Console view of the GCS bucket, you’ll now see that the Terraform state file has been created in the bucket:

Our Terraform state is now stored in GCS

So far, so good.

# Create (and check) our Terraform plan
terraform plan -out=plan.out

The output looks like this:

And now, the moment of truth. We can finally deploy our landing zone!

# Apply the plan!
terraform apply plan.out

It takes a few minutes. And it works!! Check out the folders and projects that have been created:

Now, if you want to destroy everything you’ve created, you can just do this:

terraform destroy
Terraform destroy has begun

A word of caution: the Terraform config from the Cloud Setup includes a number of hard-coded project IDs. If you destroy your LZ as described above, these project IDs are not immediately released. As a result, you cannot simply reapply your Terraform config; you would need to change your project IDs before you can reapply.
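If you do need to hunt down those hard-coded project IDs before re-applying, a quick grep over the config is a reasonable starting point (the search pattern is just illustrative):

```shell
# Locate hard-coded project IDs in the generated Terraform,
# so you know what to change before re-applying
grep -rn --include="*.tf" "project_id" .
```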

3 — Cloud Foundation Fabric FAST

Who Is It For?

Larger organisations that want a highly configurable, Terraform-based LZ, allowing for separation of duties, multiple tenants, and with out-of-the-box GitOps and CI/CD pipelines.

How?

Actually, I’ve covered this topic extensively in previous articles, so I won’t cover it again here.

Wrap-Up

So there you have it! We’ve finally completed this section of articles on the topic of landing zones. We’ve previously covered:

  • LZ design
  • How to establish an LZ core team, for LZ “technical onboarding”

And in this article: I’ve shown you how to actually deploy the LZ.

Now we have a working, enterprise-ready Google Cloud foundation, and we’re ready to start deploying our workloads!

See you in the next installment.

Before You Go

  • Please share this with anyone that you think will be interested. It might help them, and it really helps me!
  • Please give me claps! You know you clap more than once, right?
  • Feel free to leave a comment 💬.
  • Follow and subscribe, so you don’t miss my content. Go to my Profile Page, and click on these icons:
Follow and Subscribe

