Getting Started with GCloud SDK: Part 1
Path toward Infrastructure As Code with Bash
So you just downloaded the GCloud SDK and followed the instructions to authorize your shell environment, ready to start down the path to automation funky town.
This is a small guide on how to create 3 GCE systems (instances) using the gcloud tool and some basic automation with GNU Bourne Again Shell (bash) version 4.
Prerequisites: These scripts work with the Bash v4 that comes with any current Linux distro. Mac users can get Bash v4 from Homebrew by running brew install bash. Windows users can get Bash v4 by installing MSYS2, which can be downloaded directly or installed with Chocolatey by running choco install msys2.
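If you are not sure which Bash you are actually running (macOS in particular ships an old 3.2 by default), a quick check looks like this:

```shell
# Quick sanity check: BASH_VERSINFO[0] holds the running shell's major version.
if (( BASH_VERSINFO[0] >= 4 )); then
    echo "Bash ${BASH_VERSION} is new enough."
else
    echo "Bash ${BASH_VERSION} is too old; version 4+ is required." >&2
fi
```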
Set the default project and service account:
export GCP_SERVICE_ACCT=$(gcloud iam service-accounts list \
    --filter='email ~ [0-9]*-compute@.*' \
    --format='value(email)')
export GCP_PROJECT=$(gcloud config list \
    --format='value(core.project)')
The above snippet assumes that the current Google Cloud project and service account are the defaults. If they are not, set those environment variables to whatever is appropriate in your environment.
The Basic Script
These are the pieces of the basic script.
Step 1: Verify Environment Setup
First let’s check the required environment variables that we set earlier:
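A minimal check might look like the following; the function name and error messages are my own:

```shell
# Fail fast if the variables from the setup step are missing.
require_env() {
    local var
    for var in GCP_PROJECT GCP_SERVICE_ACCT; do
        # ${!var} is indirect expansion: the value of the variable named by var.
        if [[ -z "${!var:-}" ]]; then
            echo "ERROR: $var is not set" >&2
            return 1
        fi
    done
}

if require_env; then
    echo "Environment looks good."
else
    echo "Set the variables above before continuing." >&2
fi
```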
Step 2: Set the Scopes
This simply constructs a list of scope URLs that we use to grant permissions to the virtual instances we will create.
It is easy to maintain an array of scope URL parts, and then use some bashisms to construct the list of scope URLs that the gcloud tool expects. With the scope parts, we can use parameter expansion along with an auto-join on the array via the internal field separator (IFS) to generate a comma-separated values string.
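Here is a sketch of that technique; the particular scope parts below are assumptions, so pick the scopes your instances actually need:

```shell
# Scope URL parts; these four are examples, not a recommendation.
scope_parts=(
    compute
    logging.write
    monitoring.write
    devstorage.read_only
)

# Parameter expansion prefixes every element with the base URL.
scope_urls=("${scope_parts[@]/#/https://www.googleapis.com/auth/}")

# Auto-join: in a subshell, set IFS to a comma so "${array[*]}" joins with it.
SCOPES=$(IFS=','; echo "${scope_urls[*]}")
echo "$SCOPES"
```

Setting IFS inside the command substitution keeps the change from leaking into the rest of the script.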
Step 3: Distributing Instances across Zones
In this example, one instance will be placed in each zone of the current region. This gives us high availability: should one zone (data center) go down, the other two zones remain available. Distributing instances across zones in a region is a common practice.
This example shows three ElasticSearch instances, one in each zone of the current region.
To facilitate this, we create a lookup table: an associative array (aka hash, map, or dictionary in other languages), indexed by our designated instance names.
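A sketch of such a table follows; the instance names and zones are placeholders, so substitute the zones of your own region:

```shell
# Associative arrays require Bash v4+.
declare -A ZONE_FOR
ZONE_FOR=(
    [elasticsearch-1]='us-central1-a'
    [elasticsearch-2]='us-central1-b'
    [elasticsearch-3]='us-central1-c'
)

# Iterate over the instance names (the keys) to see the mapping.
for name in "${!ZONE_FOR[@]}"; do
    printf '%s -> %s\n' "$name" "${ZONE_FOR[$name]}"
done
```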
Step 4: Creating Systems
Now we can create three systems using the gcloud compute instances create command:
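A sketch of that loop is below. The instance names, zones, machine type, and image are all assumptions; tailor them to your project. The echo guard prints each command for review rather than running it:

```shell
# Lookup table of instance name -> zone (placeholder values).
declare -A ZONE_FOR=(
    [elasticsearch-1]='us-central1-a'
    [elasticsearch-2]='us-central1-b'
    [elasticsearch-3]='us-central1-c'
)

create_instance() {
    # 'echo' prints the full command for review; remove it to actually create.
    # Add --scopes=... with the comma-separated scope list from Step 2.
    echo gcloud compute instances create "$1" \
        --zone="$2" \
        --machine-type=e2-medium \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --service-account="$GCP_SERVICE_ACCT" \
        --tags=elasticsearch
}

for name in "${!ZONE_FOR[@]}"; do
    create_instance "$name" "${ZONE_FOR[$name]}"
done
```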
This creates three systems, one in each zone of the region, and all of the systems are tagged elasticsearch. Tagging instances is a common practice, as it allows configuration-management solutions like Ansible and Puppet to identify which systems to configure.
This also adds some metadata that tells these instances to use project-wide SSH keys, the Google Cloud facility for SSH deploy or admin keys that can be used for configuring the instances at a later point.
Before creating the systems, you may wish to first install an SSH key into your project; otherwise the instances will not pick up the key when they are created. The SSH key allows remote-execution tools, like Bolt, Knife, or Ansible, to access the instances.
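As a sketch, a project-wide SSH key is a USERNAME:PUBLIC_KEY line pushed into project metadata. The username and key path below are placeholders for illustration:

```shell
# Build a project-wide ssh-keys metadata line ("USERNAME:PUBLIC_KEY").
make_ssh_metadata() {
    local username=$1 pubkey_file=$2
    printf '%s:%s\n' "$username" "$(cat "$pubkey_file")"
}

# Example usage (uncomment and adjust the path before running):
# make_ssh_metadata deploy ~/.ssh/id_ed25519.pub > ssh-keys.txt
# gcloud compute project-info add-metadata \
#     --metadata-from-file=ssh-keys=ssh-keys.txt
```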
To Be Continued…
This is a small introduction to what you can do with the gcloud tool and bash. Though this is some automation, it is not quite IaC (Infrastructure As Code) yet, because the code is hardwired for three explicitly named instances.
In a follow-up article, I want to take you the rest of the way to automation funky town, where we dynamically spin up systems based on instances specified in a descriptive JSON configuration file.
That type of solution does give you IaC, as you manage the creation of systems in a deterministic, repeatable way (via the JSON file), rather than statically burying it in code logic.