Building a Kubernetes-Based Development Environment for Services

Jason Yoo
Hootsuite Engineering
Aug 16, 2018 · 6 min read

Introduction

Over the past few years, Hootsuite has been diving into the world of microservices. One microservice became ten, and ten soon became more than a hundred. This explosion led us to adopt Kubernetes, a fantastic container orchestration system. However, these changes came with some problems of their own, including the problem of a local development environment.

Local development for a monolithic codebase is as simple as starting a single server. Local development for microservices is a lot more complicated, since all inter-service dependencies need to be met before the service of interest starts functioning correctly. We at Hootsuite wanted the ability to develop efficiently on microservices as well as on our older monolith. Hosting hundreds of different services on a single MacBook or giving every developer a copy of Hootsuite’s infrastructure stack in the cloud wasn’t feasible. After investigating four potential solutions (pure off-line, proxied, live, and pure on-line), we decided to go with the second option: proxied, a local-remote hybrid setup.

Different Options for Local Development Environment

Read more to find out about Hootsuite’s journey to building a Kubernetes-based development environment.

Building the Development Environment

The very first step was determining the structure of the new development environment. We chose a Minikube-based hybrid local-remote architecture for latency and cost reasons. Minikube is an open-source Kubernetes development environment where both the master and worker Kubernetes nodes live in a single virtual machine. The term local-remote refers to the routing logic that runs when an HTTP or gRPC call is made from a microservice inside the local environment. The request first looks within the local environment to see if the microservice it wants to hit is present. If not, it routes to Hootsuite’s global development environment, where it finds the target after more routing magic. This lets us test service interactions locally without deploying to the global development environment, which shortens iteration time. It also means a breaking change can’t take out the development environment for every other developer.
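To make the local-first lookup concrete, here is a minimal sketch of the idea expressed against Consul’s HTTP catalog API. The service name, port, and remote datacenter name are illustrative assumptions, not our actual configuration:

# Ask the laptop's Consul agent whether "member-service" is registered locally.
# If it is, route to the local Minikube instance; otherwise fall back to the
# federated EC2 datacenter (assumed here to be called "development").
if curl -sf "http://127.0.0.1:8500/v1/catalog/service/member-service" | grep -q '"ServiceName"'; then
  echo "member-service found locally: routing inside Minikube"
else
  echo "member-service not found locally: routing to the remote datacenter"
  curl -sf "http://127.0.0.1:8500/v1/catalog/service/member-service?dc=development"
fi

In practice this decision is made by the Nginx configuration that Consul Template renders (described below), so application code never performs this check itself.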

After the rough plan was laid out, it was time for us to start building. The following sections describe, in simplified form, the steps we took to realize that plan.

1: Creating a Kubernetes Manifest for a Service

We first containerized the components of our biggest service. Once the containers were operational, we grouped them into a Kubernetes pod and tested their behaviour in Minikube while ironing out the small details.
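As an illustration, a stripped-down manifest for one containerized component might look like the following when applied into Minikube; the names, image, and port are placeholders rather than our real values:

# Apply a minimal Deployment into the Minikube cluster (illustrative only).
cat <<'EOF' | kubectl --context minikube apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: member-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: member-service
  template:
    metadata:
      labels:
        app: member-service
    spec:
      containers:
      - name: member-service
        image: registry.example.com/member-service:latest   # placeholder image
        ports:
        - containerPort: 8080
EOF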

2: Setting up Routing Logic

How routing occurs when the Member service is present on the MacBook vs. when it isn’t

Hootsuite has an in-house service mesh, built from Consul, Consul Template, and Nginx, that was created during the migration to microservices. Though we were leaving our old Vagrant setup behind, we decided to leverage this service mesh to set up communication between local services being developed on laptops and services running in Amazon EC2 instances. The routing would be set up as follows:

  • Each laptop runs a Consul datacenter daemon that is federated with EC2’s Consul datacenter.
  • Helm renders a chart that adds routing and service-registration sidecar containers to the service manifest.
  • Kubernetes creates a deployment inside Minikube based on the manifest.
  • The service registers itself with the local Consul datacenter.
  • Consul Template re-renders the Nginx configuration of all local pods so they know about the new local service as well as the services running remotely in EC2.

This setup enables developers to work with local and remote services at the same time.
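For a flavour of how the last two steps above fit together, the routing sidecar essentially runs something like the command below, re-rendering the Nginx upstream config and reloading Nginx whenever Consul’s view of the world changes. The file paths and template name are assumptions for illustration:

# Watch Consul and re-render the local Nginx upstream config whenever
# services are registered or deregistered, then reload Nginx.
consul-template \
  -consul-addr "127.0.0.1:8500" \
  -template "/etc/consul-template/upstreams.ctmpl:/etc/nginx/conf.d/upstreams.conf:nginx -s reload"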

3: Creating Additional Kubernetes Manifests for Services (and more)

This step was the most time-consuming and is yet to be finished. Using Helm to abstract some information away, we began writing Kubernetes manifests for each environment (Minikube, Development, Staging, and Production) for every service. Makefile and Jenkinsfile templates allowed fast deployment to any environment and set up a deployment pipeline.
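The pattern per service is roughly one Helm chart plus one values file per environment, wrapped in Makefile targets. The targets effectively wrap commands along these lines; the chart path and value-file names are assumptions for illustration:

# Deploy the same chart into Minikube with environment-specific values.
helm upgrade --install member-service ./charts/member-service \
  --values ./charts/member-service/values.minikube.yaml \
  --kube-context minikube

# The same chart with a values.production.yaml would be deployed from the
# Jenkins pipeline against the production cluster instead of a laptop.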

4: Packaging Resources

The next step was wrapping Minikube and the other basic setup logic into a Homebrew package. This allows the local development environment to be set up with a single shell command. The package performs auxiliary tasks such as ensuring that Minikube boots properly, pre-loading cached Docker images, enabling feature flagging for local services, setting up Vault authentication for services, and more.
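A rough sketch of the kind of work such a wrapper does under the hood is shown below; the resource flags and exact steps are assumptions rather than the contents of the actual package:

# Boot Minikube with enough resources for several services.
minikube start --memory 8192 --cpus 4

# Point the local Docker client at Minikube's Docker daemon so images
# built or pre-loaded on the laptop are immediately usable by pods.
eval "$(minikube docker-env)"

# Confirm the cluster's system pods are up before deploying anything.
kubectl --context minikube get pods -n kube-system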

5: Beta Release

Once we decided that the new development environment was stable enough, we gave a company-wide demo and released the environment to all developers as an alternative way of developing services. By leaving the environment in a beta state and having developers use it, we patched many lurking bugs and made improvements based on feedback.

6: Full Release

Once people at Hootsuite had had time to adjust to the new setup, we deprecated the Vagrant-based setup and made the Kubernetes-compatible local-remote setup the official development environment for Hootsuite.

Done!

End User Experience

Let’s say you just joined Hootsuite and are very excited to work on a service. This is a slightly simplified version of what you have to do to start contributing:

  • Get access to resources such as Artifactory, the Docker Registry, GitHub, etc…
  • Set up the tools that give you access to encrypted data and secrets
  • Run a shell command to install the Kubernetes / service mesh logic on your MacBook
  • Start the development environment with ‘hs-minikube start’ from your Terminal
  • Go to the Git repository of a service and run 3–4 Makefile commands from your Terminal (sketched below)
  • You are good to go
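Concretely, the day-to-day part of that list might look something like this; the Homebrew package name, repository path, and Makefile target names are illustrative assumptions:

# One-time setup: install and boot the packaged local environment.
brew install hs-minikube        # assumed Homebrew package name
hs-minikube start

# Per service: build an image and deploy it into Minikube.
cd ~/src/member-service         # placeholder repository path
make build
make deploy-minikube            # assumed Makefile target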

Takeaways

1: A “slim client / fat middleware” approach is good

One key factor that greatly facilitated the buildout of the new environment was our service mesh’s “slim client / fat middleware” model. This model abstracts all the routing logic away from each microservice and pushes it into the service mesh, enabling things like circuit breaking, smart redirects, and more. Another key benefit is that when network settings change, very few changes need to be made to any individual microservice.

2: Kubernetes is great but is still changing and isn’t always backward compatible

We had to do a lot of ‘hacks’ to get Minikube working and loading properly. An example of this was waiting until all kube-system pods finished booting before initializing any of our resources. In addition, newer versions of Kubernetes and Minikube came out as we were working on the project, so the team had to make adjustments, such as creating Roles, along the way.
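For instance, that kube-system wait amounted to a loop along these lines (a simplified sketch, not our exact check):

# Block until no kube-system pod is in a non-Running phase.
until [ -z "$(kubectl --context minikube -n kube-system get pods \
    --field-selector=status.phase!=Running -o name 2>/dev/null)" ]; do
  echo "waiting for kube-system pods to come up..."
  sleep 5
done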

3: Transitions are easier when developers help out

Developers on various teams creating Kubernetes manifests for their own services expedited the process. Their willingness to beta test the local development environment helped weed out bugs and problems. This project gave me first-hand experience of the benefits of DevOps.

Thank you for reading this blog. Make sure to check out more Hootsuite blogs here if you are interested!

Jason spent four months at Hootsuite (May–August 2018), where he joined the Production Delivery team. He helped build Hootsuite’s Kubernetes-based development environment and a serverless microservice for managing deployments.

Connect with me on LinkedIn!

