An Introduction to Developing Microservices on Kubernetes (in Pipedrive)

Ragnar Paide · Pipedrive R&D Blog · Feb 10, 2021 · 7 min read
Orchestrating containers. Photo by ammiel jr on Unsplash

To better understand the main structure of Pipedrive, you need to know that it is built upon a microservice architecture. There are hundreds of different microservices deployed on clusters of servers within Pipedrive, each performing well-defined tasks and offering services to other microservices.

This creates an integrated network of small and easily manageable services that together form the Pipedrive application. These microservices are packaged (containerized) into Docker images. Pipedrive chose to manage these microservices as containerized workloads through a Kubernetes container orchestration platform.

The following is the first article in a series about the past, present, and future of microservices development in Pipedrive.

From Docker Swarm to Kubernetes

At the end of 2019, Pipedrive started migrating microservices from Docker Swarm to Kubernetes in two phases. The decision to switch platforms meant that everyone in the engineering organization would be impacted, either immediately or at some point in the future.

The rationale behind migrating:

  • Docker Swarm was no longer being developed actively (no significant developments in the roadmap)
  • Sizable and active community for Kubernetes
  • Container orchestration features and ecosystem for Kubernetes
  • Problems with scaling services on Docker Swarm
  • Questionable value for using Docker Enterprise Edition

The first phase of migration:

  • Develop tooling and processes to adopt Kubernetes in a backward-compatible way (for example, a docker-compose-to-Kubernetes-manifest converter)
  • Migrate microservices to Kubernetes with minimal impact on product development
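The converter idea from the first phase can be illustrated with a minimal sketch in plain Python dictionaries. This is not Pipedrive's actual tooling, just the core of the translation: one compose-style service entry mapped onto a Kubernetes Deployment manifest.

```python
# Minimal sketch of a compose-to-Kubernetes translation (illustrative only,
# not Pipedrive's converter): map one docker-compose service definition onto
# a Kubernetes Deployment manifest expressed as plain dictionaries.

def compose_service_to_deployment(name, service):
    """Translate a docker-compose service entry into a Deployment manifest."""
    container = {
        "name": name,
        "image": service["image"],
        # Compose "ports" entries look like "8080:80"; the Deployment only
        # needs the container-side port (a Service handles the host side).
        "ports": [
            {"containerPort": int(p.split(":")[-1])}
            for p in service.get("ports", [])
        ],
        "env": [
            {"name": k, "value": str(v)}
            for k, v in service.get("environment", {}).items()
        ],
    }
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": service.get("replicas", 1),
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [container]},
            },
        },
    }

compose = {"image": "nginx:1.19", "ports": ["8080:80"],
           "environment": {"ENV": "dev"}}
manifest = compose_service_to_deployment("web", compose)
print(manifest["kind"], manifest["metadata"]["name"])
```

A real converter has far more to deal with (volumes, networks, health checks, Services, ConfigMaps), which is exactly why maintaining this extra translation layer became a burden.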

In parallel with the first phase, a team of engineers from Infra and DevOps was put together to prepare for the second phase. They were tasked with developing a solution that would support developing microservices on Kubernetes from the ground up (locally, on a developer’s machine). Until then, the local development environment for microservices was based on a tool backed by a Bash script and Docker Compose files. In the new solution, there is no room for compose files; the only things that remained from the Docker world were Docker images and Docker Hub.

The second phase of the migration:

  • Deeply integrate Kubernetes into the development processes, tooling, and CI/CD
  • Remove technical debt related to Docker Swarm

In the second phase, the plan was for all of Pipedrive’s product development to onboard to Kubernetes by converting the old Docker Compose-based development environment to a Kubernetes (and friends) based solution. The reason was simple: the migration to Kubernetes broke the parity between development, test, and production environments. Back in the Docker Swarm days, everything was based on Docker Compose files, but now the compose files were only relevant to the development environment. This added an extra layer of complexity for converting compose files into Kubernetes manifests. Developers were stuck with compose files and knew nothing about the deployment model of their microservices on Kubernetes. This gap had to be filled.

Local Development Environment for Kubernetes

The team of engineers tasked with this problem researched the existing tools for development on Kubernetes to better understand the features offered and whether any of them could be adopted. As of spring 2020, there were quite a few open-source solutions for developing applications on Kubernetes.

We researched several open-source solutions (DevSpace among them), and the features (requirements) we were looking for were:

  • port forwarding
  • hot reloading
  • multi-repo support
  • Helm support
  • Kustomize support
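To make the first requirement concrete, here is a toy TCP relay in Python that forwards a local port to a target address, roughly what `kubectl port-forward` does for an in-cluster service. It is a bare sketch for illustration only, not how kubectl or any of the researched tools actually implement it.

```python
# Toy illustration of the "port forwarding" requirement: listen on a local
# port and relay each connection to a target host/port, similar in spirit to
# `kubectl port-forward`. Illustrative sketch only.

import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until src closes its end."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(local_port, target_host, target_port, once=False):
    """Listen on local_port and relay each connection to the target."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", local_port))
    listener.listen(5)

    def serve():
        while True:
            client, _addr = listener.accept()
            upstream = socket.create_connection((target_host, target_port))
            # Shuffle bytes in both directions on daemon threads.
            threading.Thread(target=_pipe, args=(client, upstream),
                             daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client),
                             daemon=True).start()
            if once:
                break

    threading.Thread(target=serve, daemon=True).start()
    return listener
```

The real feature also has to survive pod restarts and reconnects, which is where the ready-made tools differed the most in quality.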

Even though DevSpace matched most of our requirements, we still opted to develop the solution ourselves. The main reason was the need for full control over the solution (knowledge, making changes, etc.) and the wish not to take on any external dependencies in terms of decision-making and available resources.

In March 2020, the team started development of the tool. Since most Pipedrivers work on MacBooks, the obvious choice was to base the Kubernetes solution on Docker Desktop for Mac (the old development environment was also based on it). We also experimented with other minimal Kubernetes providers like minikube, microk8s, and k3s, but found them a little less convenient for our use case.

Don’t get me wrong though, there was no plan to couple the solution to macOS only. On the contrary, one of the key requirements for the solution was extensibility, so that support for other operating systems (Linux and Windows) could easily be added. Even the Kubernetes provider was abstracted away so that providers like minikube, microk8s, and k3s could easily be plugged in.

Since the old solution was called docker workstation, we decided to follow that tradition and dubbed the new tool “kubernetes workstation”, a.k.a. KWS.

During development, we took a long look into the future and made another big decision: bringing Helm into play. We needed a tool and an industry-accepted format for packaging Pipedrive’s microservices for Kubernetes, and Helm became our weapon of choice.
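In practice, packaging with Helm means each microservice ships a chart. As a flavor of what that looks like, here is a minimal, hypothetical chart for a service called "web" (all names and values invented for illustration, not Pipedrive's actual chart layout):

```yaml
# Chart.yaml - chart metadata for a hypothetical "web" microservice
apiVersion: v2
name: web
version: 0.1.0
appVersion: "1.0.0"
---
# values.yaml - defaults that the chart's templates substitute in
image:
  repository: example/web
  tag: "1.0.0"
replicaCount: 1
service:
  port: 80
```

A templates/ directory alongside these files would hold the Deployment and Service manifests with `{{ .Values.* }}` placeholders, so the same chart can be installed with different values per environment.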

Take-off. Photo by SpaceX on Unsplash

In June of 2020, the first version of KWS was released. It contained support for Kubernetes on Docker Desktop for Mac and all the everyday practices that developers need, such as:

  • centralized packages of microservices for production mode in a local development environment
  • starting a service in development and production mode
  • deploying stacks of services
  • easy and self-explanatory bootstrap of environment
  • fast cleanup
  • application hot reloading
  • connecting debugger
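As a flavor of what application hot reloading boils down to, here is a tiny polling sketch in Python that detects when files in a source tree change. Real tools (KWS included) use richer file-watching mechanisms; these helper names are invented for illustration.

```python
# Toy illustration of the "hot reloading" idea: snapshot a source tree's
# modification times and diff snapshots to decide when a restart/redeploy
# should be triggered. Helper names are invented; real tools use OS-level
# file-watching APIs instead of polling.

import os

def snapshot(root):
    """Map each file under root to its last-modified timestamp."""
    times = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            times[path] = os.stat(path).st_mtime
    return times

def changed_files(before, after):
    """Return paths that were added, removed, or modified between snapshots."""
    paths = set(before) | set(after)
    return sorted(p for p in paths if before.get(p) != after.get(p))
```

A reload loop would simply take a snapshot, sleep, take another, and restart the service's process inside the container whenever `changed_files` returns anything.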

In the list of features above, you won’t find support for running functional tests. Developing and running functional tests in a development environment was one of the requirements for the solution, but it was missing in the first iteration (it has since been solved, implemented in autumn 2020). If you are interested in how functional testing was done at Pipedrive (the old way), take a look at another article from my colleague Valeriy Kassenbayev.

In future articles, we’ll dig deeper into the features of KWS, but for now, let me tease you with the list of commands for the KWS CLI (in-depth discussions will follow in future articles).

kws analyze
kws init
kws cluster start
kws cluster stop
kws cluster status
kws cluster reset
kws service start
kws service restart
kws service status
kws service remove
kws service logs
kws service exec
kws service list
kws service up
kws service down
kws stack start
kws stack remove
kws stack list
kws open

Adoption

For those who had the time and interest, the new interface of the local development environment was really appealing compared to the old one. Unfortunately, it was the summer of 2020, and most people were out enjoying vacations and the sun. Also, the first release of KWS lacked support for running functional tests locally (a conscious decision not to implement it immediately, since some pieces were still missing from the bigger puzzle: CI/CD knew nothing about Helm).

The adoption wasn’t anything spectacular at the time, which we had expected. Interest in moving away from the old solution was rather low, since everything still kind of worked the old way. After missions to adjust CI/CD to use Helm and to implement support for running functional tests locally with KWS, interest began to pick up. As of today, more than half of Pipedrive’s 500+ microservices can be developed on KWS, and this number continues to grow every week. We expect to complete the migration from docker workstation to KWS by the end of spring 2021.

Challenges

The biggest challenge has been the performance and stability of Docker Desktop for Mac. While we appreciate the hard work the Docker people are pouring into Docker Desktop for Mac, we struggle with occasional high CPU usage and bugs.

A few weeks ago, we merged experimental support for Linux + microk8s. To be fair, KWS could be used on Linux from the very beginning, since the engineers who built the foundations of KWS developed integration tests that ran on Linux + k3d. But since that interface was semi-open and we have a low presence of Linux workstations in-house, we didn’t promote it much. Just recently, we also added experimental support for Docker Desktop for Windows + WSL2.

From the very beginning, open-sourcing KWS has been another challenge on our minds. We have to admit that a few very Pipedrive-specific details have managed to slip through the pull requests. Unfortunately, these details currently block us from open-sourcing the project, but we remain confident that one day we will extract the core of KWS and make it public for the world.

Summary

Working from home. Photo by Charles Deluvio on Unsplash

The first version of kubernetes workstation was developed by a group of four engineers in a little less than three months, during the first COVID-19 shutdown in Estonia, each one working remotely from home. The challenge was enormous, as the mission group tasked with the problem didn’t have any prior experience developing such a tool for Kubernetes. Nevertheless, the challenge was accepted and the goal was achieved. The feedback from our developers has been positive, and developers at Pipedrive are getting more familiar with Kubernetes every day. Pipedrive’s development processes and tooling are constantly being improved to be more streamlined and to offer fast delivery cycles with fewer errors and software bugs.

The next article in the KWS series will focus on KWS commands and try to set up a demo project.

Signing off,

Ragnar Paide (proud DevOps engineer at Pipedrive)
