Why and how we migrated Preply to Kubernetes

Amet Umerov
Aug 19, 2019 · 7 min read

In this article, I’ll share our experience migrating the Preply platform to Kubernetes: how and why we did it, the difficulties we faced, and the benefits we’ve seen since the migration.


My name is Amet Umerov and I’m a DevOps Engineer at Preply. Let’s get started!

Kubernetes for Kubernetes? No, for business requirements!

There’s a lot of hype around Kubernetes. While many people say it will solve all your problems, there’s also much discussion: some say you should avoid it because it’s not a silver bullet.


But that’s a discussion for another article. Let’s talk a little bit about business requirements and how Preply worked before the Kubernetes era:

  • When we used our Skullcandy flow, we had a pool of features merged into the stage-rc branch, which was deployed to the staging environment. The QA team tested in this environment, the branch was merged into master, and then the deploy to production started. Testing and deploying took 3–4 hours, and we were able to deploy 0–2 times a day.
  • When we deployed broken code on production we had to revert all the features in the scope. It was also hard to find which task was causing the problem that broke production.
  • We used AWS Elastic Beanstalk for application hosting, and every Beanstalk deploy took 45 min (the whole pipeline with tests took 90 min). Rolling back to the previous app version also took 45 min.

To improve our product and processes, we wanted to:

  • Migrate to microservices
  • Deploy faster and more often
  • Be able to roll back faster
  • Change our current development flow because our old one wasn’t effective anymore

Our needs

Changing the development flow

To implement features using our previous development flow, we had to create a dynamic staging environment for every feature branch. In our old Elastic Beanstalk configuration, this was complicated and expensive. We needed to create environments that:

  • Were easy and quick to deploy (preferably containers)
  • Worked with Spot Instances
  • Were as close as possible to production

We decided to switch to Trunk-Based Development. With Trunk-Based Development, each feature lives in a short-lived branch that can be merged directly into master, and master can be deployed at any time.
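The flow above can be sketched with plain git; the repo setup and branch names here are illustrative, not our actual repository:

```shell
set -e
# Minimal sketch of trunk-based development in a throwaway repo
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "initial"
trunk=$(git symbolic-ref --short HEAD)     # the trunk (master) is always deployable
git checkout -q -b feature/search-filters  # one short-lived branch per feature
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "feature work"
git checkout -q "$trunk"
git -c user.email=dev@example.com -c user.name=dev merge -q --no-ff -m "merge feature" feature/search-filters
git branch -d feature/search-filters       # delete the branch right after merging
```

Because each feature merges to master on its own, a broken change can be reverted individually instead of unwinding a whole release branch.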

Deploying faster and more often

The new Trunk-based flow allowed us to deliver features to the master one by one. Because of this, we could find broken code quickly and revert to functional code easily. However, we still had long deploy (90 min) and rollback (45 min) times. That gave us a limit of 4–5 deploys per day.

We also faced challenges using SOA with Elastic Beanstalk. The most obvious solution was containers with a container orchestrator. We already used Docker and docker-compose for local development.

Our next step was to research popular container orchestrators:

  • AWS ECS
  • Swarm
  • Apache Mesos
  • Nomad
  • Kubernetes

We decided to use Kubernetes. Every other orchestrator had drawbacks: ECS is a vendor-locked solution, Swarm lags behind Kubernetes, and Apache Mesos is like a spaceship with its ZooKeepers. Nomad sounded interesting, but it’s inefficient without an infrastructure built on HashiCorp products, and there are no namespaces in Nomad’s free version.

Despite the steep learning curve, Kubernetes is the de facto standard in container orchestration. It can be used as a service on every large cloud provider. It’s in active development with a huge community and strong documentation.

We expected to complete our migration to Kubernetes in 1 year. Two platform engineers without any Kubernetes experience worked half-time on the migration.

Starting to use Kubernetes

We started with a Kubernetes proof of concept, created a testing cluster, and documented all of our work. We decided on a self-hosted cluster managed with kops, since Amazon’s EKS support in Europe was only just becoming available.

We tested many things, including Prometheus, HashiCorp Vault, and Jenkins integration. We also experimented with rolling-update strategies for the self-hosted cluster while operating our test cluster, and worked through a few network issues related to AWS as well as general cluster troubleshooting.

For cost optimization, we used Spot instances, along with tooling to watch for Spot instance issues. We also found we could use the AWS Spot Instance Advisor to check the frequency of Spot instance interruptions.

We started by migrating from the Skullcandy flow to Trunk-Based Development, running a separate stage in Kubernetes for every pull request. This reduced feature delivery time to production from 4–6 hours to 1–2 hours.

GitHub hook triggers the stage environment creation

We used a testing cluster for these dynamic environments, and every dynamic environment was in a separate namespace. Developers had access to the Kubernetes dashboard for debugging.
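As a rough sketch of what such a hook-driven job might do (the namespace naming scheme, release name, and chart path are hypothetical, not our actual pipeline):

```shell
set -e
# Each pull request gets its own isolated namespace in the testing cluster.
PR_NUMBER=1234                      # would come from the GitHub webhook payload
NS="stage-pr-${PR_NUMBER}"

kubectl create namespace "$NS"
helm upgrade --install "app-pr-${PR_NUMBER}" ./helm/microservice1 \
  --namespace "$NS" \
  -f ./helm/microservice1/values.stage.yaml

# When the PR is closed, deleting the namespace tears down the whole environment:
# kubectl delete namespace "$NS"
```

Namespaces make cleanup trivial: everything the dynamic environment created disappears with a single delete.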

We started to get value from the testing cluster 1–2 months after launching our proof of concept, a result we’re proud of!

Staging and production clusters

Here is the setup of our stage and production clusters:

  • kops and Kubernetes 1.11 (the latest version of kops available at the time of setup)
  • 3 master nodes in different availability zones
  • Private network topology with a dedicated bastion host and a CNI plugin
  • Prometheus on the same cluster for metrics with PVC (we don’t need long-term storage for our metrics)
  • Datadog for the APM
  • Tooling to provide developers with access to the stage cluster
  • Staging nodes run on the Spot instances
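A kops invocation matching this setup could look roughly like the following; the cluster name, state bucket, region, and CNI choice (Calico shown here) are placeholders, since the article only specifies kops with a private topology and a bastion:

```shell
# Illustrative kops command: 3 masters across availability zones,
# private topology with a bastion host, pinned Kubernetes version.
kops create cluster \
  --name=k8s.example.com \
  --state=s3://example-kops-state \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-count=3 \
  --master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --topology=private \
  --bastion \
  --networking=calico \
  --kubernetes-version=1.11.9
```

With the private topology, nodes get no public IPs and all operator SSH access goes through the bastion host.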

During the operation of the clusters, we ran into some problems. For example, the versions of the Nginx Ingress and Datadog agent were different on the clusters. Because of this, the staging version worked fine but there were problems on production. To solve the problems, we made the staging and production clusters exactly the same.
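One simple way to keep add-on versions in lockstep across clusters is to pin the chart version and apply the same pin under both kube contexts; the chart, version, and context names below are illustrative:

```shell
# Pin the same chart version for staging and production so the
# ingress controller cannot drift between environments.
CHART_VERSION=1.6.0
for ctx in stage prod; do
  helm upgrade --install nginx-ingress stable/nginx-ingress \
    --version "$CHART_VERSION" \
    --kube-context "$ctx"
done
```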

Migrating production to Kubernetes

Now that staging and production clusters were ready, we began the migration. Here is the simplified structure of our monorepo:

├── microservice1
│   ├── Dockerfile
│   ├── Jenkinsfile
│   └── ...
├── microservice2
│   ├── Dockerfile
│   ├── Jenkinsfile
│   └── ...
├── microserviceN
│   ├── Dockerfile
│   ├── Jenkinsfile
│   └── ...
├── helm
│   ├── microservice1
│   │   ├── Chart.yaml
│   │   ├── ...
│   │   ├── values.prod.yaml
│   │   └── values.stage.yaml
│   ├── microservice2
│   │   ├── Chart.yaml
│   │   ├── ...
│   │   ├── values.prod.yaml
│   │   └── values.stage.yaml
│   └── microserviceN
│       ├── Chart.yaml
│       ├── ...
│       ├── values.prod.yaml
│       └── values.stage.yaml
└── Jenkinsfile

The main Jenkinsfile contains a map of microservice names to their directories. When a developer merges a PR into master, a tag is created in GitHub, and that tag is deployed by Jenkins according to the Jenkinsfile.

There are Helm charts for every microservice in the helm directory, with separate values files for production and staging. We use Skaffold to deploy multiple Helm charts to the stage. We also tried an umbrella chart, but it didn’t scale for us.
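In Helm terms, the per-environment values files boil down to commands like these (the release names and the Skaffold profile name are assumptions, not our exact configuration):

```shell
# Stage and production use the same chart with different values files:
helm upgrade --install microservice1 ./helm/microservice1 \
  -f ./helm/microservice1/values.stage.yaml
helm upgrade --install microservice1 ./helm/microservice1 \
  -f ./helm/microservice1/values.prod.yaml

# Skaffold can roll out every chart for the stage in one command,
# assuming a "stage" profile is defined in skaffold.yaml:
skaffold run -p stage
```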

Every new microservice we run on production writes logs to stdout, reads secrets from Vault, and has basic alerts (replica count, 5xx errors, and latency checks on the ingress).

Whether or not we deliver a new feature broken up into microservices, some core functionality remains in Django, and it still runs on Elastic Beanstalk.

Breaking up the monolith into microservices // The Vigeland Park in Oslo

We used AWS CloudFront as our CDN because it made canary deploys easy during the migration. We started migrating the monolith and tested it on some language versions of our site and on admin pages.

This smooth canary migration allowed us to find and fix bugs on production and polish our deploys in a few iterations. Over several weeks, we watched the new platform, load, and monitoring. Eventually, we switched 100% of our traffic to Kubernetes.


After that, we stopped using Elastic Beanstalk.

UPD (Nov 2019): we started using Skaffold for production deploys as well, instead of nested Jenkinsfiles.


It took us 11 months to complete the full migration, a good result given that we had expected it to take a year. Here’s what we achieved:


  • Deploy time reduced from 90 min to 40 min
  • Deploy count increased from 0–2/day to 10–15/day (and still growing!)
  • Rollback time decreased from 45 min to 1–2 min
  • We can easily deliver new microservices to production
  • We changed our monitoring, logging, and secret management infrastructure to be centralized and written as code

It was an awesome experience working on the migration, and we are still making improvements.

Don’t forget to read this post on Kubernetes written by our former colleague Yura, a YAML engineer who helped make Kubernetes at Preply possible.

Also, subscribe to the Preply Engineering Blog for more interesting articles about engineering at Preply.

See ya!

Preply Engineering Blog

We are hiring in Barcelona/Kyiv: https://preply.com/en/caree
