Deploying Spinnaker to Google Kubernetes Engine

At ShareChat, we run a micro-service architecture with a very large number of internal and external services running at scale, serving billions of requests daily. With systems running at such a large scale, so many dependent components interacting with each other, and developers pushing changes and deploying services multiple times a day, continuous delivery is a challenge. The deployment infrastructure needs to be reliable and scalable enough to keep up with both the pace of development and the scale at which we run applications.

With the intent of making all deployments fully CD-compatible, the Infra team at ShareChat started to experiment with Spinnaker, and we are happy to share our journey with you.

What is Spinnaker?

Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. Released by Netflix Open Source Software Center, Spinnaker is maintained by a community consisting of Netflix, Google, Microsoft, Veritas, Target, Kenzan, Schibsted, and many others who are actively working to improve it.

Though Spinnaker boasts a long list of features required in any CD tool, its biggest advantage over other tools is the ability to integrate across multiple cloud providers; in other words, it is a cloud-provider-aware continuous delivery tool.

Even though it doesn’t include CI, it can be integrated with popular CI systems such as Jenkins, Travis CI, Google Cloud Build, etc. Using Spinnaker with a good CI system helps you build end-to-end continuous delivery pipelines that begin with changes in the source code and cover artifact creation (building images), unit testing, functional testing, and production rollout. You can also customize pipelines to add stages for QA (manual judgment) and canary deployments (releasing new versions to only a subset of users).


In this guide, we’re going to outline the steps to set up Spinnaker in Google Kubernetes Engine (GKE) with the following goals in mind:

  • Install Spinnaker in a GKE cluster spanning multiple zones
  • Use Google Cloud Storage (GCS) as persistent storage for Spinnaker
  • Access Spinnaker publicly


Since this guide uses billable components of Google Cloud Platform (GCP), please make sure you clean up resources afterwards to avoid incurring charges to your GCP account. Before starting, make sure that:

  • You have access to a GCP project
  • You’ve enabled billing and access to GKE
  • You’ve set up the Google Cloud SDK on your workstation so you can use the gcloud command-line tool
  • You’ve installed kubectl on your workstation (guide)
  • You have a domain in which you can add DNS records

Prepare your environment

Create a GKE cluster (at the time of writing the latest version available was 1.12.7-gke.10).

gcloud container clusters create spinnaker --cluster-version=1.12.7-gke.10 --machine-type=n1-standard-2 --region asia-south1 --num-nodes=1 --disk-size=20GB --disk-type=pd-standard

Create a service account to allow Spinnaker to store data in a GCS bucket.

gcloud iam service-accounts create spinnaker-account --display-name spinnaker-account

Store the project name, bucket name, and service account email address in environment variables. Note that bucket names are globally unique, so change the bucket name to one of your choice.

export SA_EMAIL=$(gcloud iam service-accounts list --filter="displayName:spinnaker-account" --format='value(email)')
export BUCKET=spinnaker-config-20190514
export PROJECT=$(gcloud info --format='value(config.project)')
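Before moving on, it can help to confirm that all three variables actually got values (the service account filter can match nothing, for example). The `check_env` helper below is a hypothetical convenience sketch, not part of the guide:

```shell
# Hypothetical helper (not part of the guide): complain loudly if any of
# the exported variables above came back empty.
check_env() {
  for var in "$@"; do
    if [ -z "${!var:-}" ]; then      # bash indirect expansion
      echo "ERROR: $var is not set" >&2
      return 1
    fi
  done
  echo "ok"
}

check_env SA_EMAIL BUCKET PROJECT || echo "set the variables above before continuing"
```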

Create a GCS bucket and grant Spinnaker access to it via the service account.

gsutil mb -p $PROJECT -c Standard -l Asia -b on gs://$BUCKET/
gsutil iam ch serviceAccount:$SA_EMAIL:roles/storage.admin gs://$BUCKET

Create a service account key; we’ll need this key later when we install Spinnaker in GKE.

gcloud iam service-accounts keys create spinnaker-sa.json --iam-account $SA_EMAIL
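The downloaded key is ordinary service-account JSON, which always carries a top-level `"type": "service_account"` field. A quick sanity check (the helper name is ours, not part of the guide) can be sketched as:

```shell
# Hypothetical helper: check that a file looks like a GCP service-account
# key by looking for its "type" field.
looks_like_sa_key() {
  grep -q '"type": "service_account"' "$1"
}

# Usage (after running the gcloud command above):
#   looks_like_sa_key spinnaker-sa.json && echo "key file looks valid"
```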

Deploying Spinnaker to GKE

First, we need to install helm (the package manager for Kubernetes). Since I’m using a macOS workstation, I can install helm directly using Homebrew as follows:

brew install kubernetes-helm

If you’re not using Homebrew, you can refer to this guide for complete installation instructions. After setting up helm, we need to create cluster roles and a tiller service account (tiller is the server portion of Helm and typically runs inside your Kubernetes cluster).

Since we created our GKE cluster using the gcloud CLI, it has already added a context entry to your kubeconfig, so we can start using kubectl directly.

kubectl create clusterrolebinding user-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

We need to initialize helm and install tiller in the GKE cluster.

helm init --service-account=tiller
helm repo update

After this, you should see the following output:

$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Note that the same version, v2.13.1, is reported for both the client and the server.
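If you want to check the match programmatically, one hedged approach is to extract the SemVer fields and count the distinct values; the printf below simply simulates the `helm version` output shown above:

```shell
# Count distinct SemVer values in `helm version` output; 1 means the
# client and server versions match.
printf 'Client: &version.Version{SemVer:"v2.13.1"}\nServer: &version.Version{SemVer:"v2.13.1"}\n' \
  | grep -o 'SemVer:"[^"]*"' | sort -u | wc -l
# prints 1 when the versions match
```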

Now, we’re ready to deploy the Spinnaker helm chart from the official Helm charts repository. We need to create a config file that defines the initial configuration for the Spinnaker installation.

export SA_JSON=$(cat spinnaker-sa.json)
cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'

# Disable minio as the default storage backend
minio:
  enabled: false

# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.13.6
  image:
    tag: 1.19.2
EOF

Then use helm to install Spinnaker using the above config file:

helm install -n spinnaker stable/spinnaker -f spinnaker-config.yaml --timeout 600 --wait

After some time, you’ll see a number of pods running for each micro-service within Spinnaker.

$ kubectl get pods
NAME                                READY   STATUS      RESTARTS   AGE
spin-clouddriver-6cd45d4557-6xf89   1/1     Running     0          5m30s
spin-deck-585f6bcf84-xbwn2          1/1     Running     0          5m31s
spin-echo-75cd9d4b76-66tw6          1/1     Running     0          5m34s
spin-front50-7dddd49885-pkxvb       1/1     Running     0          5m28s
spin-gate-677758b98c-p54h7          1/1     Running     0          5m32s
spin-igor-548b477b64-8l8bv          1/1     Running     0          5m31s
spin-orca-686b567659-jsl24          1/1     Running     0          5m29s
spin-rosco-5c55d5c7bf-jf7z9         1/1     Running     0          5m27s
spinnaker-install-using-hal-l5brh   0/1     Completed   0          9m40s
spinnaker-redis-master-0            1/1     Running     0          11m
spinnaker-spinnaker-halyard-0       1/1     Running     0          11m

Now that Spinnaker is deployed and running in GKE, you can access it using port forwarding as described in the helm installation output above. Next, we need to make it publicly accessible.

Publicly accessing Spinnaker

We first need to change the spin-deck and spin-gate services from type ClusterIP to NodePort as follows:

kubectl patch svc spin-deck --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
kubectl patch svc spin-gate --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
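To confirm the patches took effect, you can check the TYPE column of `kubectl get svc spin-deck spin-gate`. The helper below is a hypothetical sketch (not part of the guide) that works on that tabular output; the printf simulates the expected kubectl output:

```shell
# Hypothetical check: succeed only if every service row reports NodePort.
# Reads `kubectl get svc` style output on stdin; column 2 is TYPE.
all_nodeport() {
  awk 'NR > 1 && $2 != "NodePort" { exit 1 }'
}

printf 'NAME       TYPE       CLUSTER-IP   PORT(S)\nspin-deck  NodePort   10.0.0.10    9000:30900/TCP\nspin-gate  NodePort   10.0.0.11    8084:30084/TCP\n' \
  | all_nodeport && echo "both services are NodePort"
```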

Since both the deck and gate services are now of type NodePort, we can use a single ingress to route traffic to them using hostname-based routing rules. First, let’s create an ingress:

# ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spinnaker-ingress
spec:
  rules:
  - host: <domain-pointing-to-spinnaker-ui>
    http:
      paths:
      - backend:
          serviceName: spin-deck
          servicePort: 9000
  - host: <domain-pointing-to-spinnaker-api-gateway>
    http:
      paths:
      - backend:
          serviceName: spin-gate
          servicePort: 8084

Apply the yaml manifest above to create the ingress:

kubectl apply -f ingress.yml

After some time, GKE will provision a load balancer for the ingress we just created. To get the public IP of the load balancer, you can use kubectl as follows:

$ kubectl get ingress
NAME                HOSTS                      ADDRESS             PORTS   AGE
spinnaker-ingress   <ui-domain>,<api-domain>   <public-ip-of-lb>   80      5m8s
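For scripting, it’s often handier to pull out just the load balancer address. The helper below is our own hypothetical sketch that parses the tabular output (the printf uses made-up example values in place of your actual domains and IP):

```shell
# Hypothetical helper: print the ADDRESS column from `kubectl get ingress`
# output (column 3 of the first data row).
extract_lb_ip() {
  awk 'NR == 2 { print $3 }'
}

printf 'NAME HOSTS ADDRESS PORTS AGE\nspinnaker-ingress ui.example.com,api.example.com 35.200.10.20 80 5m8s\n' \
  | extract_lb_ip
# prints 35.200.10.20
```

In practice, `kubectl get ingress spinnaker-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` achieves the same thing without text parsing.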

Now that you have the public IP of your load balancer, you can create two A records pointing both domain names to that same IP.

Note: by default, GCP creates health checks against “/” and expects a 200 status code in response. The Spinnaker API Gateway, however, returns a 302 status code from “/”, so you need to change the health check path to “/health” to get a 200; only then will the load balancer start routing traffic to the API Gateway service.

Since we’ve not enabled any sort of authentication, making Spinnaker publicly accessible leaves it open to anybody who happens to know the domain names, which is a security risk. It’s recommended that you add a proper authentication layer on top of Spinnaker so that only authorised people can access it. Refer here.

Finally, you need to tell Spinnaker that the base URLs for the Dashboard and API Gateway have changed. To make configuration changes in Spinnaker, we need to use halyard, a command-line administration tool that manages the lifecycle of your Spinnaker deployment.

kubectl exec --namespace default -it spinnaker-spinnaker-halyard-0 bash
# the above command will open a shell into the halyard pod which was
# used by helm to deploy spinnaker within the Kubernetes cluster,
# run the following commands within the halyard container
hal config security ui edit --override-base-url http://<ui-domain>
hal config security api edit --override-base-url http://<api-domain>
hal config security api edit --cors-access-pattern http://<ui-domain>
hal deploy apply

You can check your Spinnaker installation by going to the respective domain names: for the Spinnaker UI, go to http://<ui-domain>, and for the Spinnaker API, go to http://<api-domain>/health, which should report a healthy status.


This concludes the setup of Spinnaker in GKE. You can use this setup to deploy applications to multiple cloud providers by adding cloud-provider-specific account details and then creating applications with corresponding deployment pipelines.

We hope that you enjoyed going through this guide. We’ll be adding follow-up articles covering advanced aspects of CI and CD integration with cloud providers and end-to-end deployment pipelines for applications.