A Micro Open-Source Community with Helm

Katie Gamanji
Nov 20

Earlier this week, I was one of the speakers at KubeCon + CloudNativeCon North America in San Diego. During my session, I addressed how Condé Nast established a culture that enables initiatives similar to those found in the open-source community. In this blog post, I would like to highlight the methods and support topologies that contributed to the creation of internal micro open-source communities around Helm.


Preamble

Two years ago, Condé Nast began the rollout of a centralised platform across the globe to eliminate inconsistencies and duplication in deployment mechanisms. To date, 22 websites have been migrated, and more than 265 million requests are served by our backend systems every month.

Note: To find out more about the global platform at Condé Nast, read my blog post on A Centralised Globally Distributed Platform.

Developer Experience (DevExp)

The cloud platform team are dedicated advocates for a self-service deployment process and the continuous enhancement of the developer experience. We provide the infrastructure and associated tooling for 40 developers across 8 teams. In line with business objectives, around 40 microservices have been developed so far. Currently, we reach around 15 production deployments a week, reflecting a high rate of feature delivery.

To deploy a new application to the Kubernetes clusters, a team needs a Dockerized version of their application. Images are stored in Quay.io, our Docker image registry of choice. Additionally, a template Helm chart is required, allowing services to be built in a customised manner within the Kubernetes ecosystem. A configuration file can be supplied to override the default Helm chart values (e.g. number of replicas, CPU, memory, HPA, etc.). These settings provide the flexibility to tailor the application deployment in each region.
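As an illustration, a configuration file of this kind might look as follows; the exact keys depend on the base chart's schema, so the names below (`replicaCount`, `resources`, `hpa`) are assumptions rather than our actual values layout:

```yaml
# values.yaml — hypothetical override file for the base Helm chart
replicaCount: 3

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

hpa:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

A per-region file (e.g. a hypothetical `values-us-east-1.yaml` with a different `replicaCount`) could then be layered on top with `helm upgrade --install my-app ./base-chart -f values.yaml -f values-us-east-1.yaml`.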

Insights into Helm

At Condé Nast, Helm is the de facto deployment manager for Kubernetes. Historically, the cloud platform team created a suite of in-house-built Helm charts that abstract the application deployment down to a simple configuration file.

The base chart creates a plain deployment and service account within the clusters. At this stage, the application is not accessible to users, as no ingress rules have been set. To achieve this, conditional dependencies on sub-charts are available, generating an umbrella chart effect. Developers are able to tune the service, ingress and HPA configuration.
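Conditional sub-chart dependencies are declared in the chart metadata; a minimal sketch (chart names and versions here are hypothetical) could look like the following. In Helm 2 this block lives in `requirements.yaml`, while in Helm 3 it moves into `Chart.yaml`:

```yaml
# requirements.yaml — hypothetical umbrella chart dependencies
dependencies:
  - name: ingress
    version: 0.1.0
    repository: "file://../ingress"
    condition: ingress.enabled
  - name: hpa
    version: 0.1.0
    repository: "file://../hpa"
    condition: hpa.enabled
```

Setting `ingress.enabled: true` in the supplied values file then pulls the ingress sub-chart into the release, without changing the base chart itself.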

Overall, around 10 custom Helm charts have been built internally. For instance, for scheduled workloads, developers can use a CronJob chart, while for ingress whitelisting a network policies Helm chart is available.
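An ingress-whitelisting policy produced by such a chart might render to something like the following NetworkPolicy; the labels and port are illustrative, not our actual configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app-whitelist        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: ingress-controllers   # assumed namespace label
      ports:
        - protocol: TCP
          port: 8080
```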

Support Topology

In the course of two years, the cloud platform team at Condé Nast has reshaped its identity multiple times. This was a key factor in enhancing the developer experience while creating a coherent ecosystem that instigates change.

Originally, the cloud platform team constituted a single function, with the purpose of provisioning the core infrastructure elements. At the time, a bi-weekly rota was in place for developer-specific queries: on a cyclical basis, each team member would rotate into supporting the buildout of microservices across the various development functions.

With the growth of platform functionality, the cloud platform team received a considerable number of on-demand queries, which subsequently impacted the velocity of delivering infrastructure features. There was a need to transition from a kanban to a scrum workflow. Additionally, at this stage, two core functions became distinguishable: cloud platform and cloud management. The latter is focused on upskilling developers and advocating for best practices in product ownership.

More recently, we have introduced a third core function to the cloud platform team: SRE. With this addition, we reached our target support model, where site reliability is a distinct and central area of ownership.

Support topology evolution

The cloud platform team’s identity underwent a gradual transformation, with the continuous improvement of the developer experience always on the radar. Overall, in terms of DevOps topologies, we transitioned from a type 3 (infrastructure-as-a-service) model to a type 7 (Google) model, which moves Condé Nast in the direction of organisational maturity.

Helm μOSC

The open-source model focuses on establishing decentralised development practices while encouraging open contribution and collaboration. At Condé Nast, similar practices are found in the maintenance and development of our Helm charts. The application teams started contributing new features themselves, instead of going the traditional route of creating a ticket in the backlog. This laid the groundwork for inner-source practices.

The following use cases are contributions from the application teams to the feature list of the base Helm chart:

Case 1: Graceful termination with preStop lifecycle hooks

Previously, the application pods would trigger 502 and 504 errors: while pods were terminating, there was a delay before the endpoints were updated, so traffic was still being routed to pods that were shutting down. preStop handlers were necessary to ensure the graceful shutdown of the containers and prevent these 5xx errors.
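A minimal sketch of the fix, as it might appear in the rendered Deployment template (the sleep duration is an assumption and should match the load balancer's endpoint propagation delay):

```yaml
# pod spec fragment — illustrative values
containers:
  - name: my-app
    image: quay.io/example/my-app:1.0.0
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 15"]
terminationGracePeriodSeconds: 45   # must exceed the preStop delay plus shutdown time
```

The `sleep` keeps the container serving in-flight requests while its endpoint is removed from the load balancer; only afterwards is SIGTERM delivered, so the application can shut down cleanly.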

Case 2: Horizontal scaling based on external metrics

The introduction of external metric support for the HPA chart was necessary to ensure the scalability of the application based on the average external metric value across all pods. This guarantees the addition of new pods while keeping the per-pod share of the current workload at the target level.
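In the rendered manifest, an external-metric HPA of this kind could look roughly as follows; the metric name and target value are hypothetical, and the `autoscaling/v2beta2` API requires an external metrics adapter to be installed in the cluster:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_ready   # assumed external metric
        target:
          type: AverageValue           # divides the metric value across all pods
          averageValue: "30"
```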

Case 3: Cronjob annotations

Annotations were added to the CronJobs in order to propagate AWS permissions to the pods. This generates a controlled environment in which pods can interact with AWS APIs.
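Assuming an identity broker in the style of kiam or kube2iam (the exact mechanism is not covered here), the chart change could render to a pod-template annotation along these lines; the role ARN, schedule and image are illustrative:

```yaml
# CronJob with a hypothetical AWS role annotation
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            iam.amazonaws.com/role: arn:aws:iam::123456789012:role/report-role
        spec:
          containers:
            - name: report
              image: quay.io/example/report:1.0.0
          restartPolicy: OnFailure
```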

Case 4: Support for secret volumes

The addition of secret volumes to the base Helm chart enabled the mounting of sensitive information to the containers as a volume.
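A sketch of the rendered pod spec fragment (the Secret name and mount path are illustrative):

```yaml
# pod spec fragment
volumes:
  - name: app-secrets
    secret:
      secretName: my-app-secrets   # hypothetical Secret
containers:
  - name: my-app
    image: quay.io/example/my-app:1.0.0
    volumeMounts:
      - name: app-secrets
        mountPath: /etc/secrets
        readOnly: true
```

Each key in the Secret appears as a read-only file under the mount path, so credentials never need to be baked into the image or exposed as environment variables.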

Epilogue

Inner-source practices, or micro open-source communities, have been the result of close and sustained collaboration between the application and infrastructure teams. Over time, it was necessary for the cloud platform team to evolve into a structure that creates a cohesive community, further enabling technological initiatives while instigating open collaboration and change.


