Spinnaker and Kubernetes for Continuous Delivery

Bobby Tables
Namely Labs

--

Here at Namely we’ve successfully adopted Spinnaker to handle deployments for all of our container-based applications across three environments (integration, staging, and production). Spinnaker is a popular open-source continuous delivery tool from Netflix. Paired with the versatile Kubernetes orchestration platform, it’s an amazing combination for pushing new functionality out to a microservice platform. Along the way we learned a lot about what works and what doesn’t, and we now have a process and tooling that let teams reliably deploy to Kubernetes using Spinnaker.

We decided to go with Spinnaker because it has native Kubernetes support, supports pipeline executions for complex automatic and manual deploy steps, and has blue/green deployments and rollbacks baked in for safe releases and recovery. Previously we had a half-baked solution: we were simply running “kubectl apply” from a Jenkins job via a Jenkinsfile. Every project had a templated Kubernetes manifest checked into its repository, and we would just swap in the name of the freshly built image. We called this “SimpleCD”, and even though it was conceptually simple it caused a lot of headaches: rollbacks weren’t a thing, deployment logs were scattered, there was no way to audit our current state, people would modify their scripts leading to inconsistency, and the deployment task itself was prone to failure. Jenkins isn’t great at deploying things, especially when they’re complex.
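For context, a minimal sketch of what a SimpleCD deploy step looked like (the {{IMAGE}} placeholder and manifest path here are illustrative, not our exact setup):

// Render the templated manifest by swapping in the freshly built image,
// then apply it straight to the cluster. No rollbacks, no audit trail.
sh """sed 's|{{IMAGE}}|${DOCKER_REPOSITORY}:${COMMIT_HASH}|' k8s/deployment.yml > deploy.yml"""
sh """kubectl apply -f deploy.yml"""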

We’ve deployed Spinnaker to Kubernetes in a separate AWS environment called “ops”, which is peered with our other AWS environments. We used Halyard to configure most of the Spinnaker deployment. There are a few little things that don’t work via the Halyard tool, such as configuring certificate authentication, so for those we patch the secret manifests deployed to Kubernetes that contain all of the Spinnaker configuration. Spinnaker is then given Kubernetes credentials for our three Kubernetes environments: integration, staging, and production.
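Roughly, that workflow looks like the following (the secret name below is a placeholder; the actual objects and namespace depend on how Halyard deploys Spinnaker):

# Apply the Halyard-managed configuration first.
hal deploy apply
# Then layer on the settings Halyard can't express, by patching/editing
# the Kubernetes secret that holds the rendered Spinnaker configuration.
kubectl -n spinnaker edit secret <spinnaker-config-secret>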

The primary reason we chose Spinnaker was its pipeline functionality for deploying our applications. At first we tried using Docker registry pushes as triggers for Spinnaker pipelines, but this was slow due to the polling mechanism used to detect new images in our internal Quay registry. We quickly pivoted to using Jenkins jobs as triggers for our pipelines instead.

This was a much better solution than SimpleCD, as it allowed us to parameterize our pipelines using the “Property File” configuration option.

Jenkins config with a property file

We then made a simple modification to our Jenkins jobs so they emit a properties file that Spinnaker can pick up and read from:

sh """echo 'docker_image = ${DOCKER_REPOSITORY}:${COMMIT_HASH}' >> .ci-properties"""
sh """echo 'docker_tag = ${COMMIT_HASH}' >> .ci-properties"""
archiveArtifacts '.ci-properties'
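For a build of, say, commit abc123f (a made-up hash), the archived .ci-properties file would then contain something like:

docker_image = registry.namely.land/namely/platform-nginx:abc123f
docker_tag = abc123f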

This allowed us to easily configure our Spinnaker pipelines using the .ci-properties values. Our JSON configuration for the image declaration changed to this:

"imageDescription": {
  "account": "namely-registry",
  "fromTrigger": false,
  "imageId": "${ trigger.properties['docker_image'] }",
  "organization": "namely",
  "registry": "registry.namely.land",
  "repository": "namely/platform-nginx",
  "tag": "${ trigger.properties['docker_tag'] }"
},

This works very well, but it does break the Spinnaker UI, meaning we had to deal with the raw JSON configuration. No one likes writing JSON by hand, so this led us to create a tool to handle the problem for us.

k8s-pipeliner

Spinnaker allows you to configure a pipeline with just JSON, which is great because it means we can create a tool to render that JSON for us. But we needed a format to drive it.

Since all of our projects were already defined in Kubernetes deployment manifest files, we just needed something to glue Spinnaker pipeline configuration and Kubernetes-specific semantics together. So we created the k8s-pipeliner tool to do just that.

The basic premise of the tool was “let’s use Kubernetes manifests to define how a project runs” and “let’s use Spinnaker to define how a project deploys”. The tool uses a relatively simple (but improvable) format to reference manifests and describe how they fit into Spinnaker pipeline stages. After defining a pipeline.yml and the manifest definitions, engineers could run the tool locally to print out the Spinnaker pipeline JSON and paste it into the UI to configure an entire execution.
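To give a feel for the idea, a hypothetical pipeline.yml might look something like this (the field names are illustrative, not the tool’s exact schema):

name: Deploy platform-nginx
application: platform-nginx
stages:
  - name: Deploy to integration
    account: int-k8s
    manifest: k8s/deployment.yml
  - name: Deploy to production
    account: prod-k8s
    manifest: k8s/deployment.yml
    manualJudgment: true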

After you generate this JSON, you can create an entire pipeline in just a few clicks.

Manual copy and paste isn’t great though, right? We can do better than that.

Estuary

Having engineers copy and paste JSON is error-prone and just feels gross, so we built another service to automate the process. Since k8s-pipeliner is written in Go, we could import the packages that generate this JSON into a new service we call “Estuary”.

Using GitHub webhooks, we’ve configured all of our repositories to hit an endpoint in Estuary whenever a push to the main branch (master in our case) occurs. Estuary then clones the repository, generates the Spinnaker pipeline JSON, and sends a POST request to the Spinnaker API:

level=info msg="handling github push event" clone_url="https://github.com/namely/namely.git"
level=info msg="looking for pipeline" total=2
level=info msg="found pipeline in Spinnaker" pipeline_id=e2ff76c7-03af-4fce-bc58-964eb83abded
level=info msg="received status code" response_code=200
level=info msg="updated pipeline" application_name=hcm pipeline_id=e2ff76c7-03af-4fce-bc58-964eb83abded pipeline_name=hcm
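That last step talks to Spinnaker’s API gateway (Gate). As a curl-equivalent sketch, assuming Gate is reachable at its default port 8084 under a hypothetical hostname and that authentication is already handled:

# Save (create or update) a pipeline definition through Gate.
curl -X POST \
  -H "Content-Type: application/json" \
  -d @pipeline.json \
  http://spinnaker-gate.ops:8084/pipelines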

This means that if you modify your pipeline.yml or manifest definitions, your pipeline is automatically updated via Estuary upon merge. There are other benefits as well: for example, we can inject configuration for all pods being deployed so that pipeline.yml files don’t need to include it themselves, keeping pipeline definitions smaller per project.

This project will be available as open source soon as well.

Deployment Events

Whenever a project deploys, it’s nice to know about it. Spinnaker has a small but powerful feature that dispatches webhooks whenever a task completes. We use this to emit DataDog events any time a pipeline completes, which allows us to graph events over metrics: if a deploy breaks or slows something down, we can easily see it.

The tool that accomplishes this is a simple DataDog bridge service called spinnaker-dd-bridge. It accepts all webhooks from Spinnaker and renders Go-templated events that are sent to DataDog. Tags for the application are also included in each event so we can easily overlay them on our per-app graph dashboards.
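For illustration, such an event template might look something like the following (the field names are hypothetical; the real ones depend on Spinnaker’s webhook payload):

{{/* Rendered into the DataDog event body for each completed pipeline. */}}
Deploy of {{ .Application }} finished with status {{ .Status }} (pipeline: {{ .PipelineName }})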

All in all, Spinnaker is a great tool for deployments, but we found it needed some tooling around it to become useful for our organization. Manual configuration of pipelines via the UI was a nonstarter because there’s no log of who changed what. By moving pipeline configuration into repositories, we get that audit trail of changes.

Closing Up

We’re pretty happy with how Spinnaker and Kubernetes work together. We’re actively watching the development of the V2 provider in Spinnaker so we can move all of our deployments to be manifest-only, which will also let us deploy things such as ConfigMaps and Ingress rules from Spinnaker. The k8s-pipeliner tool will support this in the future as the provider becomes more stable.

We had a great time building this infrastructure. Maybe you would too. Check out our careers page to come help us process over $12 billion in payroll a year, provide benefits to top-tier companies, and much more.
