Google Cloud SDK Dockerfile

The Cloud SDK is the command-line interface to Google Cloud Platform. It is a flexible utility that uses GCP's own Cloud APIs to perform many different tasks, such as deploying code to App Engine, creating Compute Engine VMs, and checking IAM permissions. It is the command-line interface to pretty much every Google Cloud Platform API and service.

This article describes some more ways customers can use it by demonstrating several use cases that involve containerizing the Cloud SDK. The following shows Dockerfiles targeted at various operating systems and some sample use cases. (It's nothing new, just a sample set documenting the baseline Dockerfiles.)

EDIT 4/24: Updated blog post with changes from the source GitHub repo.
EDIT 9/4: Updated content to point to the official image.

To use any of the images above, simply pull the image directly (except Alpine, which you have to build yourself, as shown later):

docker run -t google/cloud-sdk gcloud info
docker run -t google/cloud-sdk:slim gcloud info

If you want to use a prebuilt image, see


docker run -ti google/cloud-sdk:latest

or via tagged version of the SDK:

docker run -ti google/cloud-sdk:159.0.0
docker run -ti google/cloud-sdk:159.0.1
NOTE: with the latest image, all components are installed by default. If you would rather have a minimal image, try the slim image.



The Alpine-based images do not yet exist in the official Google Cloud SDK repository, but I am listing the Dockerfile here in case you want to create one.

The SHA256 checksum is listed on the SDK documentation page.
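Before building, it is worth verifying the downloaded SDK archive against that published checksum. A minimal sketch of such a check follows; the filename and checksum in the usage comment are placeholders, not values from the SDK docs.

```shell
#!/bin/bash
# verify_sha256 FILE EXPECTED_DIGEST
# Compares a file's SHA256 digest against the checksum published on the
# SDK documentation page. Returns non-zero on mismatch.
verify_sha256() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $file"
  else
    echo "checksum MISMATCH: $file" >&2
    return 1
  fi
}

# Example usage (placeholder filename/checksum -- substitute your own):
# verify_sha256 google-cloud-sdk-151.0.1-linux-x86_64.tar.gz \
#   26b84898bc7834664f02b713fd73c7787e62827d2d486f58314cdf1f6f6c56bb
```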

Note: you can pass in ARG values for the SDK version and checksum as overrides:
docker build --build-arg CLOUD_SDK_VERSION=151.0.1 \
--build-arg SHA256SUM=26b84898bc7834664f02b713fd73c7787e62827d2d486f58314cdf1f6f6c56bb -t alpine_151 --no-cache .
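A Dockerfile matching those ARG names might look like the following sketch. The download URL pattern is an assumption based on the SDK's published Linux archives, and the package names (python2/bash) vary by Alpine release, so verify both against the SDK documentation before use.

```dockerfile
# Sketch of an Alpine-based Cloud SDK image. ARG names match the build
# command above; the download URL and apk package names are assumptions.
FROM alpine:3.5
ARG CLOUD_SDK_VERSION=151.0.1
ARG SHA256SUM

# gcloud is a bash launcher around a Python tool; curl fetches the archive.
RUN apk add --no-cache bash python2 curl

RUN curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
    echo "${SHA256SUM}  google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz" | sha256sum -c - && \
    tar xzf google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz -C / && \
    rm google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz

ENV PATH /google-cloud-sdk/bin:$PATH
CMD ["gcloud"]
```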

Alpine Package

The following describes building and running the Cloud SDK as an Alpine package. In other words, it's a package you can install with Alpine's package manager instead of starting from a base image and then installing the Cloud SDK via Docker commands.

For more information see:

At the time of writing (3/16), the package is not in the official Alpine community repository. Building a version of the package is pretty straightforward, though, and you can also find it on the private repository shown below.

Note: the package below is pinned to SDK 147.0.0. You can either update gcloud post-install or regenerate a new package.


Running the SDK using an untrusted private repository

The following shows the abridged steps to build the package locally inside an Alpine container:

$ docker run -ti alpine sh
$ apk update
$ apk add alpine-sdk
$ adduser bob
$ vi /etc/sudoers
bob ALL=(ALL) ALL
$ addgroup bob abuild
$ mkdir -p /var/cache/distfiles
$ chmod a+w /var/cache/distfiles
$ su - bob
c49f24da743c:~$ id
uid=1000(bob) gid=1000(bob) groups=300(abuild),1000(bob)
$ abuild-keygen -a -i
(you can use the example APKBUILD in this doc)
$ abuild checksum
$ abuild -r
c49f24da743c:~$ ls packages/home/x86_64/
APKINDEX.tar.gz google-cloud-sdk-147.0.0-r0.apk
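The transcript above refers to an example APKBUILD. The original file isn't reproduced here, so the following is a hypothetical sketch of what it could look like; the field values, dependency names, and download URL are illustrative assumptions, not the exact file from the original post.

```sh
# Hypothetical APKBUILD sketch -- values are illustrative.
pkgname=google-cloud-sdk
pkgver=147.0.0
pkgrel=0
pkgdesc="Google Cloud SDK command line tools"
url="https://cloud.google.com/sdk/"
arch="x86_64"
license="Apache-2.0"
depends="python2 bash"
source="https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-$pkgver-linux-x86_64.tar.gz"
sha256sums=""  # populated by 'abuild checksum', per the transcript above

package() {
  mkdir -p "$pkgdir/opt"
  cp -r "$srcdir/google-cloud-sdk" "$pkgdir/opt/"
}
```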


Containerize local development environment

You’re an App Engine developer and you want to keep your workstation in as consistent a state as possible. That means you would rather not install the Cloud SDK and run it directly on your laptop. What you would rather do is spin up any components for local development without having the Cloud SDK installed on the laptop at all.

What you’d like to do is run the Cloud SDK in a container. To do that, run the Cloud SDK Docker image but map your deployable sources into that container.

For example, with Python I’m mapping my current source directory into the container (under /apps) and instructing it to run the development server (dev_appserver.py):

docker run \
-p 8080:8080 \
-p 8000:8000 \
-v path_to_your_app:/apps google/cloud-sdk \
dev_appserver.py /apps/app.yaml
INFO     2016-10-28 19:39:31,206] Skipping SDK update check.
WARNING  2016-10-28 19:39:31,269] Could not read search indexes from /tmp/appengine.None.root/search_indexes
INFO     2016-10-28 19:39:31,272] Starting API server at: http://localhost:44119
INFO     2016-10-28 19:39:31,276] Starting module "default" running at:

If you’d like to use Maven, you can extend the container image, but you will need to install the dependencies into the extended image itself, as shown in the following Dockerfile that sets up your Maven execution environment:

FROM google/cloud-sdk
RUN apt-get update && apt-get install -y maven default-jdk

Build your containerized runtime environment:

docker build -t google/cloudsdk-java .

Then just launch with your app:

docker run -p 8080:8080 -v path_to_your_project:/apps -w /apps google/cloudsdk-java mvn appengine:run

(Note: you’ll need to specify the host/port for the dev_appserver in the pom.xml section for the appengine-maven-plugin)
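A plugin configuration along the lines of the following sketch should work; the plugin version and the exact parameter names are assumptions to be checked against the appengine-maven-plugin documentation. Binding the dev server to 0.0.0.0 makes it reachable through the mapped Docker port.

```xml
<!-- Hedged sketch: verify coordinates, version, and parameter names
     against the appengine-maven-plugin docs before relying on this. -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.3.1</version>
  <configuration>
    <host>0.0.0.0</host>
    <port>8080</port>
  </configuration>
</plugin>
```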

Run the gcloud CLI without installing the SDK locally

Don’t want to install, update, and maintain a local SDK install? Installing the SDK requires Python on your machine; if you containerize the SDK, all you need is Docker. If you pull the published google/cloud-sdk:latest image, you are guaranteed to have the latest release of the SDK. Note that once you pull an image down, it remains cached in your local repository until you delete the local image and pull again. Alternatively, if you want to remain on a given version, you can always specify which tagged image to pull. Note: only the last two releases of the SDK behind ‘latest’ are supported.

For example, first initialize the volume by authorizing it with your credentials:

docker run -t -i --name gcloud-config google/cloud-sdk \
gcloud auth login

Then reuse that volume but now use any gcloud command:

docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud compute instances list --project your_project
instance-1 us-central1-a n1-standard-1 RUNNING
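If you use this pattern often, the docker invocation can be wrapped in a small shell function so that plain gcloud commands on the host transparently run in the container. This is a convenience sketch, not part of the SDK; the image and container names follow the article, and defining a function named gcloud will shadow any locally installed binary.

```shell
# Wrapper so "gcloud ..." on the host runs inside the containerized SDK,
# reusing the credentials stored in the gcloud-config volume created above.
# Adjust the image tag and volume container name to your setup.
gcloud() {
  docker run --rm -ti \
    --volumes-from gcloud-config \
    google/cloud-sdk gcloud "$@"
}

# Usage: gcloud compute instances list --project your_project
```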

You can also use this technique to initialize a volume with a service account. This is useful if you want to have several service accounts handy as volumes where each service account is scoped with a different IAM Role.

To set up a service account credential in a volume, first download the JSON certificate file and map a volume into the Cloud SDK container.

In the following example, my service account certificate file is stored on my local system at $HOME/certs/serviceAccountFile.json

docker run -t \
-v $HOME/certs:/data -i \
--name gcloud-config \
google/cloud-sdk \
gcloud auth activate-service-account \
--key-file /data/serviceAccountFile.json
Activated service account credentials for: []
Warning: the volume gcloud-config now has your credentials/JSON key file embedded in it; carefully control access to it!

Then run a new container but specify the volume. You’ll see that the configured credentials already exist:

docker run --rm -ti \
--volumes-from gcloud-config google/cloud-sdk \
gcloud config list
Your active configuration is: [default]
disable_update_check = true
account =
disable_usage_reporting = False
project = your_project

Now run some gcloud commands on behalf of that service account:

docker run --rm -ti \
--volumes-from gcloud-config \
google/cloud-sdk \
gcloud compute instances list
NAME                              ZONE           MACHINE_TYPE               PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gae-default-20161011t124615-9sc2 us-central1-c custom (1 vCPU, 1.00 GiB) RUNNING

You can continue to do this with other restricted service accounts in volumes. This allows you to easily control which service accounts and capabilities you use by having them already defined in a redistributable container image (vs. using gcloud’s --configuration= parameter in each command).

Run emulators in containers

Running emulators in a container provides an easy, predictable configuration for an emulator. You can always reuse a given configuration without needing to initialize the SDK with credentials. For example, the following starts up the Pub/Sub emulator from a baseline SDK image:

docker run --rm -t \
-p 8283:8283 \
google/cloud-sdk \
gcloud beta emulators pubsub start --host-port=0.0.0.0:8283
Executing: /google-cloud-sdk/platform/pubsub-emulator/bin/cloud-pubsub-emulator --host=0.0.0.0 --port=8283
[pubsub] This is the Google Pub/Sub fake.
[pubsub] Oct 25, 2016 4:29:31 AM main
[pubsub] INFO: Server started, listening on 8283

You can even link code running in another container to this emulator. In this mode, you run your emulator in one container, acquire that container’s internal address, then separately run your application in another container and link it to the emulator via Docker networking. There are several ways to do this securely with Docker custom networks, too.

First run the emulator:

docker run -tid --name pubsubemulator google/cloud-sdk gcloud beta emulators pubsub start --host-port=0.0.0.0:8283

We need to pass the emulator’s internal IP address into the application container. The following command returns that internal IP address, which we will use later:

docker inspect -f "{{ .NetworkSettings.IPAddress }}" pubsubemulator

Now run your application container with credentials, but link it back to your emulator by passing in an environment variable with the emulator’s internal IP address:

docker run -ti \
-e PUBSUB_EMULATOR_HOST=`docker inspect -f "{{ .NetworkSettings.IPAddress }}" \
pubsubemulator`:8283 \
-e GOOGLE_APPLICATION_CREDENTIALS=/certs/your_service_account.json -v ~/certs/:/certs/ myapp

Note: you need to pass in GOOGLE_APPLICATION_CREDENTIALS because the Pub/Sub client library tries to acquire an access token before contacting the emulator.

Automate simple DevOps tasklets

Suppose you want to run different automated tasks with a service account with restricted access. First create a service account and initialize a volume as shown in the “Reuse service account credential” use case. Once you’ve done that, you can invoke any script that uses the gcloud CLI. We described some of the scripting you can do with gcloud in a previous blog post.

As a concrete example, suppose you have a script which lists out the service accounts and their keys:


#!/bin/bash
for project in $(gcloud projects list --format="value(projectId)"); do
  echo "ProjectId: $project"
  for robot in $(gcloud beta iam service-accounts list --project $project --format="value(email)"); do
    echo " -> Robot $robot"
    for key in $(gcloud beta iam service-accounts keys list --iam-account $robot --project $project --format="value(name.basename())"); do
      echo "    $key"
    done
  done
done

If you want to run that script using credentials initialized and attached to a volume in a container, simply run the Cloud SDK container, reference the volume with the credentials, and map your script into the running container. In this example, I initialized gcloud-config with a given service account and then mapped the script from my local workstation into the google/cloud-sdk image. The entrypoint is the script itself:

docker run --rm -ti \
-v path_to_your_script:/scripts/ \
--volumes-from gcloud-config google/cloud-sdk \
/scripts/your_script.sh

ProjectId: your_project
-> Robot
-> Robot
-> Robot
-> Robot

For more information on scripting gcloud, see the previous blog post on this topic.

Hopefully, these basic use cases have given you some ideas on how to containerize the Cloud SDK and extend and customize the image to suit your needs.
