Turbine - An in-house CD platform that enables every team to deploy in the cloud

Guillaume Desmidt
Published in ADEO Tech Blog
May 20, 2021 · 7 min read

My name is Guillaume Desmidt, and I’m the Product Owner of Turbine. If you’ve never heard of it, that’s normal: it’s our in-house CD application, based on Kubernetes. Today it’s an ADEO Common Product, which means it’s there to be used by every team in every Business Unit of the ADEO Group. It’s now used by 160 teams from 4 different business units, and they are doing 23K deployments a month. The servers behind LeroyMerlin.fr are deployed multiple times a day, seamlessly, thanks to Turbine.

The modernity of your CD tooling is a reflection of the modernity of your IT 😃
It’s about enabling developers to seamlessly ship features and provide value for their users.

As a foreword, I would like to thank the whole Turbine team, especially Nicolas Lassalle (who was on stage with me at Cloud Nord to present Turbine - Replay on Youtube) and Frédérique, our frontend dev and UX/UI designer, who provides our users with a very friendly and explicit interface.

It’s not only about moving to the cloud

Turbine was born inside Leroy Merlin France (and I’m still part of Leroy Merlin FR). The whole ADEO group had already committed to a replatforming: a progressive move to the cloud was decided for every team, product and application. Replatforming leroymerlin.fr’s website was definitely one of the most challenging achievements. We decided to split the existing monolith into pieces, multiple microservices, even microfrontends, and to deploy them with Kubernetes, on OpenShift at first.

Even though multiple teams had already adopted DevOps practices, some of them weren’t able to deploy Docker images with Kubernetes. We were looking for a turnkey solution that would allow everyone to do so without having to dive deep into those new technologies.

Another goal was to have a common way to deploy, shared by every team. It enables us to share good practices and to invest in our tool over time, since more and more people are using it. And teams remain autonomous, able to deploy as much as they want.

Demo of Turbine (in French)

Under the hood: from an MVP to 23K deployments a month

This part is mainly explained by Nicolas in our presentation at Cloud Nord (seriously, go watch the replay!). I’m giving you the main parts here, so it’s easy to follow 😀

The first MVP was built with Angular and Python

There is a frontend, `turbine-ui`, built with Angular and Bootstrap CSS. Then there is the backend, `turbine-engine`, built with Python: a REST API with Flask, plus a custom job queue and job engine, with task logs persisted in MongoDB.

The first architecture

The distributed job design was key because third-party services can have slow response times, and can even be out of service. This design was fault tolerant and had a logging system enabling us to monitor and replay failed tasks.
Some tasks are simply long to execute: creating a namespace in OpenShift, for example, can take a while.
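To make the idea concrete, here is a minimal Go sketch of such a fault-tolerant worker (the MVP was in Python, but Go matches the rest of this post; this is not Turbine’s actual code and all names are hypothetical). Failed jobs are logged and re-queued so they can be replayed instead of being lost.

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// Job is a hypothetical unit of work, e.g. "create a namespace".
type Job struct {
	ID      string
	Name    string
	Retries int
	Run     func() error
}

const maxRetries = 3

// worker consumes jobs from the queue; failed jobs are logged and
// re-queued so they can be replayed instead of being lost.
func worker(queue chan Job, done chan<- string) {
	for job := range queue {
		log.Printf("running job %s (%s), attempt %d", job.ID, job.Name, job.Retries+1)
		if err := job.Run(); err != nil {
			job.Retries++
			if job.Retries < maxRetries {
				// Persist the failure (MongoDB in the real system) and replay it.
				log.Printf("job %s failed (%v), re-queuing", job.ID, err)
				queue <- job
				continue
			}
			log.Printf("job %s failed permanently: %v", job.ID, err)
		}
		done <- job.ID
	}
}

func main() {
	queue := make(chan Job, 10)
	done := make(chan string)

	go worker(queue, done)

	// A slow third-party call: creating a namespace can take a while.
	queue <- Job{ID: "42", Name: "create-namespace", Run: func() error {
		time.Sleep(500 * time.Millisecond)
		return nil
	}}

	fmt.Println("job finished:", <-done)
}
```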

This MVP allowed us to scope the 40+ jobs that were essential, such as:
- Creating a namespace
- Launching a provisioning
- Deploying an image

Getting turbine-ui to the next level

On the technical side, there was no actual revolution. We followed Angular’s new releases (from Angular 2 to 8) and took advantage of modularisation to benefit from lazy-loaded modules.

But the game changer is definitely the UI/UX work. Frédérique took over this part and added a strong visual identity and an actual icon set (we used to have chess icons, which obviously weren’t explicit at all). There have been a lot of iterations: changing every label and text, and working with our users to make sure that every screen and dashboard was easy to use.

About Python for turbine-engine

Python was a great idea for the MVP. It’s easy to learn and to set up, and we were able to build and deliver a prototype in a very short time window. This project required launching sub-processes easily, and Python has a strong ecosystem for that.

Eventually we had to drop it. Python is not a statically typed language, and we had to spend a lot of time dealing with regressions. The app’s behaviour at runtime could be unexpected, and we had several bugs in production, with stack traces sent in by the users.

Also, at scale, having a synchronous REST API and an asynchronous job engine in the same app was tricky. It also consumed a lot of memory, especially because we were using subprocesses. We had multiple Pods killed by the orchestrator because of OOM errors. This wasn’t what we were looking for.

Moving to Golang

We decided to migrate turbine-engine’s codebase to Golang.
Golang had several advantages:

  • Strongly typed language
  • Still easy to learn!
  • Native support for concurrency with channels (see the sketch below)
  • Native fit with the Kubernetes toolchain: Docker and Helm are built with Go
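To illustrate the concurrency point, here is a tiny, hypothetical example: fanning deployment tasks out to a pool of workers takes nothing more than channels and a WaitGroup from the standard library (the component names are made up).

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	tasks := make(chan string)
	var wg sync.WaitGroup

	// Start a small pool of workers, each reading tasks off the same channel.
	for w := 1; w <= 3; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for task := range tasks {
				fmt.Printf("worker %d: deploying %s\n", id, task)
			}
		}(w)
	}

	// Hypothetical components to deploy concurrently.
	for _, c := range []string{"catalog-api", "cart-api", "search-front"} {
		tasks <- c
	}
	close(tasks)
	wg.Wait()
}
```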

Also, the team was enthusiastic about Golang. It may seem barely relevant, but it actually makes a difference every morning 😃

Development with Golang is a very different workflow. With Python, we would spend 30 min coding a feature and 3h30 debugging it. With Golang, we spend 3h30 implementing the feature and 30 min debugging it (and even that is more of a light check).

Adopting a microservices architecture for this kind of migration was a good idea, since the codebase was very large and we didn’t want a big-bang migration. Also, with our experience from the MVP, we knew that some services would require more resources than others.

The current architecture

The main benefit of the microservices architecture is isolation: when one service is down, it doesn’t compromise the whole Turbine engine.

The downside is keeping uniformity across 15+ microservices. We first used a bootstrap codebase to quickly generate each service, but that was actually a lot of duplicated code to maintain. That’s why we built a shared library used by all of them, the `go-lib`.
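We can’t show the real `go-lib`, but the principle can be sketched like this: every microservice gets its cross-cutting plumbing (logging, health endpoint, HTTP server) from one shared package instead of duplicating it. Everything below is illustrative, not the actual library.

```go
// Package golib is an illustrative sketch of a shared library: each
// microservice gets logging, a health endpoint and an HTTP server from
// one place instead of duplicating that boilerplate 15 times.
package golib

import (
	"log"
	"net/http"
)

// Service wraps the shared plumbing for one microservice.
type Service struct {
	Name string
	Mux  *http.ServeMux
}

// New sets up the common routes every service must expose.
func New(name string) *Service {
	mux := http.NewServeMux()
	// A uniform health check, used by Kubernetes liveness probes.
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return &Service{Name: name, Mux: mux}
}

// Run starts the HTTP server with consistent logging across services.
func (s *Service) Run(addr string) error {
	log.Printf("%s listening on %s", s.Name, addr)
	return http.ListenAndServe(addr, s.Mux)
}
```

A service then only adds its own routes on top of this shared base, e.g. `svc := golib.New("turbine-deploy")`, register its handlers on `svc.Mux`, then `svc.Run(":8080")`.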

At the end of the day, it’s actually efficient and consistent in production. Being able to take advantage of the native SDKs is also a real game changer. The main difficulty is keeping the common dependency up to date in each microservice and having consistent releases.

Provisioning with Helm 3 Charts, and keeping them flexible

In earlier versions of Turbine we were using Helm 2, but without its templating engine. Instead, a custom Python process generated a dynamic Helm chart containing the final Kubernetes manifests. Using Python code for this was very flexible, but the drawback was that the system wasn’t standard Helm, and our users were rejecting it.

But now we are using Golang, and Helm 3 has been released (goodbye Tiller), so we replaced our Python subprocesses with standard Helm charts and their templating system. We still wanted some flexibility, so we designed a system that enables our Golang app to generate a `values.yaml` file as an input for Helm.
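As a hedged sketch of that idea (not Turbine’s actual code): the Go app serialises the settings computed for one component into a `values.yaml`, then hands it to a perfectly standard `helm upgrade --install`. The struct fields, chart path and release name are hypothetical.

```go
package main

import (
	"log"
	"os"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// Values mirrors the chart's expected values; the fields are hypothetical.
type Values struct {
	Image    string `yaml:"image"`
	Replicas int    `yaml:"replicas"`
	Env      string `yaml:"env"`
}

func main() {
	// The Go app computes the values for one component...
	v := Values{Image: "registry.example/cart-api:1.4.2", Replicas: 3, Env: "production"}

	data, err := yaml.Marshal(v)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("values.yaml", data, 0o644); err != nil {
		log.Fatal(err)
	}

	// ...and feeds them to a perfectly standard Helm chart.
	cmd := exec.Command("helm", "upgrade", "--install",
		"cart-api", "./charts/turbine-standard-api", "-f", "values.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```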

Turbine’s provisioning

There is a set of Charts managed by the Turbine team, which means that any breaking change on them has to be endorsed by our team. They are therefore released very carefully, and with backward compatibility most of the time.

All Charts produced by the Turbine team support OpenShift and GKE out-of-the-box.

One of the most used charts is called `turbine-standard-api`. It is used by 800+ components, around 70% of the components deployed with Turbine. This chart is regularly updated by Turbine’s users thanks to ADEO’s innersource strategy. It is automatically tested, and the release system ensures that it always produces valid Kubernetes objects.
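The actual test pipeline isn’t public, but the principle can be sketched as follows: render the chart with `helm template` and check that every document in the output parses as YAML. A real pipeline would go further and validate against the Kubernetes schemas. The chart path and release name are illustrative.

```go
package charts_test

import (
	"bytes"
	"errors"
	"io"
	"os/exec"
	"testing"

	"gopkg.in/yaml.v3"
)

// TestChartRenders renders the chart and checks that every manifest in
// the output is parseable YAML.
func TestChartRenders(t *testing.T) {
	out, err := exec.Command("helm", "template", "demo", "./turbine-standard-api").Output()
	if err != nil {
		t.Fatalf("helm template failed: %v", err)
	}

	dec := yaml.NewDecoder(bytes.NewReader(out))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			t.Fatalf("invalid manifest in rendered output: %v", err)
		}
	}
}
```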

Turbine’s promotion and adoption

Turbine’s team is very user-centric, and there have been several key moments in Turbine’s adoption.

At first, some other teams at Leroy Merlin France started reaching out to us to use Turbine for their own deployments, and we were very happy about that.

We even organised a “Turbine Tour”, spending time with every team at Leroy Merlin to show them what Turbine could enable them to do and to gather feedback (Vedette IPA was a fast track for feedback).

The second milestone for us was being approached by the Cloud Partners team from ADEO Service. Their team aims to support cloud adoption, and they wanted to support Turbine’s adoption to do so.

Now Turbine is an ADEO Common Product. That means it’s supported and built for the whole ADEO Group, and it’s meant to be used by any team, in any country where ADEO has a brand and an IT team.

What’s next?

If you are in an IT team in one of ADEO’s BUs and want a demo of Turbine, or want to think with us about the next evolutions and features you would like us to implement, feel free to reach out to me!

If you are working at a different company with a painful deployment process, check out our careers website; we have dozens of open positions 😃
