
How your delivery pipeline will become your next big legacy-code challenge

Alois Reitbauer
Mar 15, 2019

Legacy code is a phenomenon that all software professionals must face at some point in their careers. Every organization maintains a few applications that, while still useful, have evolved into Frankenstein-type monsters — no one wants to touch them any longer because the risk of breaking them is so high.

Over the last decade we’ve worked hard to build code that’s easier to maintain over the long term. This has been achieved by breaking applications up into smaller microservices and developing descriptive interface definitions. We like to think that the code we’re writing today will make things easier to manage 10 years from now, but only time will tell whether we’ve been successful.

While it’s great that we now emphasize the quality and maintainability of application code, we often forget about an equally important goal: intuitive integration code that can be used to seamlessly build and ship our highly distributed and modern applications.

The DevOps movement has helped us mature as an industry. Ad-hoc manual tasks are now automated, and we’re moving toward an “everything-as-code” approach. Build scripts are now code. Jenkins pipelines are code. Infrastructure provisioning is code — we’re even moving toward translating operations routines into code. Long story short, we now create a lot of code around our application source code.

Usually such code starts out small, for example, when integrating tools like a build system with deployment automation or by integrating a monitoring tool with an issue management system. As developers see these integrations add value, they build more of them. More and more tools become integrated and additional code is required to implement the small workflows that ensure robust tool interoperability.

Such code is often viewed as an add-on that doesn’t have the same requirements for testing, documentation, and long-term maintainability. Worse still, such code is often spread across an entire delivery tool landscape in the form of plugins or custom services that require multiple tools to maintain.

What started out as a solution for automating and improving software delivery processes has transformed into a potential risk to software delivery itself. Full end-to-end delivery pipelines now commonly consist of seven or more individual tools (version control, build management, issue tracking, monitoring, deployment automation, artifact management, incident management, and team communication). These tools are somehow glued together and usually work fine… until they don’t. Organizations can then find themselves confronted with systems that have evolved and grown over time in an ad-hoc fashion with no clear architecture or governance.

The automation toolchain that helped your company become more efficient in application delivery now represents a business risk because the toolchain is no longer maintainable — it’s placed your company at risk of no longer being able to ship software.

What makes the situation worse is that, in many cases, different development teams have built different toolchains for each application. When talking to companies, I often hear that there is no central definition of what a delivery pipeline even is, and design decisions are left to individual teams. In the process of reverse engineering these delivery pipelines to assess the status quo, it becomes obvious that no two delivery pipelines are exactly the same.

As companies continue to march toward a “you build it, you run it” approach, a lot of poorly documented code is being built to ship applications. This issue is often overlooked because, as long as everything works fine, such code isn’t perceived to be business-critical.

This problem became obvious to us at Dynatrace as we were rolling out Autonomous Cloud Management to our customers and assisting them with the implementation of self-managing application infrastructure. We realized that, while we were consistently implementing the same approaches and concepts, each customer delivery pipeline was slightly different at the tool and implementation level. Even environments that were technically very similar had differences in their underlying technology stacks. We needed a way to solve this challenge so that we could help our customers avoid problems with their delivery pipelines.

Luckily, this is a challenge that has already been solved in our industry. We borrowed key concepts from the networking space. Networks are, to some extent, quite similar to delivery pipelines in that they must handle complex artifact (packet) delivery across heterogeneous sets of devices.

In the networking space, delivery is organized into three layers:

· The application layer defines what needs to be done at the application level. It expresses application logic by configuring the control layer via APIs.

· The control layer takes this information and acts as the orchestrator of lower-level components that do the actual transport.

· The data layer is responsible for the actual networking. The data layer is configured by the control layer — ideally via a standardized protocol, such as OpenFlow.

Communication between the application and control layers is often referred to as “northbound,” while communication between the control and data layers is commonly referred to as “southbound.”

Keptn — A northbound control plane for continuous delivery

We took these concepts and applied them to continuous delivery and automated cloud operations. If the networking of an entire datacenter can be managed across hundreds of devices (from different vendors) using this approach, it should be good enough to gain control over ten different tools in a cloud-native delivery pipeline.

The application layer — shipyard files

The first concept of keptn is shipyard files. Shipyard files are used to describe the stages that an environment consists of (for example, dev, staging, and production) as well as the deployment strategies of the individual stages (dark deployment, blue/green, etc.). Shipyard files also describe the core workflows that control how changes are propagated and problems are resolved. In addition to environment definition, shipyard files also specify how components should be monitored, and they define the acceptance gates for entering the various deployment stages.
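As a sketch, a shipyard file for a three-stage environment might look like this. The keys and values below are illustrative, modeled on keptn’s early shipyard format; consult the keptn documentation for the exact schema.

```yaml
# Hypothetical shipyard file for a three-stage environment.
stages:
  - name: "dev"
    deployment_strategy: "direct"         # deploy straight into dev
    test_strategy: "functional"           # functional tests gate promotion
  - name: "staging"
    deployment_strategy: "blue_green_service"
    test_strategy: "performance"          # performance tests as acceptance gate
  - name: "production"
    deployment_strategy: "blue_green_service"
    remediation_strategy: "automated"     # self-healing on detected problems
```

Note that nothing in this file names a concrete tool; it only describes stages, strategies, and gates, which is what makes it portable across projects.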

Shipyard files are designed to increase the reusability of continuous delivery and operations strategies. The details of how the “ship” is built are abstracted away, so shipyard files are reusable across projects and even technology stacks.

If you want to change how your applications are shipped, you simply point your keptn instance at a different shipyard file. This makes switching, for example, from a three-stage to a four-stage pipeline a matter of minutes rather than days or weeks.

With shipyard files, your application delivery process is also well documented in code and can be fully version controlled, which is great for auditing and long-term maintainability.

Definition of the control plane — uniforms

The next core concept in keptn is uniforms. Shipyard files define what you want to do, but not which components are to be used. This is what uniforms are for. A uniform defines which tools are used to implement the functionality defined in shipyard files. You can specify that a standard GitOps provider or a solution like Weave Cloud be used. And you can specify that your continuous delivery solution is to be Argo, Spinnaker, or AWS CodePipeline. You can also specify which Kubernetes distribution you want to use — OpenShift, Google Kubernetes Engine, or another.

Because uniforms are decoupled from shipyard files, you can easily exchange tools without rebuilding (or even touching) your delivery and automation logic. The same is true if you want to adjust your delivery pipeline — you don’t need to touch any integration code.
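To make the split between shipyard and uniform concrete, a uniform might look something like the following. This is a hypothetical sketch; the keys and tool identifiers are assumptions for illustration, not a documented keptn schema.

```yaml
# Hypothetical uniform: maps each capability from the shipyard file
# to a concrete tool. Swapping a tool means changing one line here,
# not touching the shipyard file or any integration logic.
uniform:
  gitops: "weave-cloud"
  delivery: "argo"
  kubernetes: "gke"
  monitoring: "dynatrace"
  issue-tracking: "jira"
```

Because the shipyard file above it never mentions these names, replacing, say, Argo with Spinnaker is a change to the uniform alone.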

keptn — A control plane for cloud automation

The keptn services, built using CloudEvents and Knative services, form the control plane of the cloud-automation stack. These services orchestrate the underlying (southbound) components. keptn core is responsible for building proper GitOps structures, creating Kubernetes namespaces, and adjusting deployment files for services and Istio configurations on the fly.

Developers only need to provide a container or Kubernetes service YAML file. Keptn takes care of the rest using the information found in the shipyard and uniform files. The pipelines that are built can include automated quality gates, blue-green deployments, automatic rollbacks, and even support for self-healing.
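For illustration, the developer-supplied input could be as small as the following Kubernetes manifest. The service name, image, and port are hypothetical; keptn derives everything else from the shipyard and uniform files.

```yaml
# Minimal developer input: a Deployment for a hypothetical "carts"
# service. Stages, strategies, and tooling come from keptn, not from
# this file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: carts
  template:
    metadata:
      labels:
        app: carts
    spec:
      containers:
        - name: carts
          image: registry.example.com/carts:0.9.1  # assumed registry/tag
          ports:
            - containerPort: 8080
```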

All the heavy lifting is taken care of by keptn, not developers. The magic behind the scenes is based on a set of well-defined CloudEvents that drive all cloud-automation processes. Individual tools supply Knative services, which translate events into API calls or provide Knative implementations of the required functionality.
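To make the event-driven flow concrete, here is a sketch of the kind of CloudEvents envelope that could drive such a process. Only the envelope attributes (`specversion`, `type`, `source`, `id`) follow the CloudEvents spec; the event type and payload fields are illustrative assumptions, not the exact keptn event schema.

```python
import json

# Illustrative CloudEvents 1.0 envelope for a hypothetical
# "new artifact" event. The "type" and "data" fields are assumptions;
# only the envelope attributes come from the CloudEvents spec.
event = {
    "specversion": "1.0",
    "type": "sh.keptn.events.new-artifact",  # assumed event type
    "source": "ci-pipeline",
    "id": "b7a5c9f0-0001",
    "datacontenttype": "application/json",
    "data": {
        "project": "sockshop",                  # assumed project name
        "service": "carts",                     # assumed service name
        "image": "registry.example.com/carts",  # assumed registry
        "tag": "0.9.1",
    },
}

# A Knative service subscribed to this event type would receive the
# serialized payload and translate it into tool-specific API calls.
payload = json.dumps(event)
print(json.loads(payload)["type"])  # prints "sh.keptn.events.new-artifact"
```

The point of the envelope is that every tool integration consumes the same event shape, so adding or swapping a tool means subscribing a new service to an existing event type rather than writing point-to-point glue code.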

The advantage of using Knative is that it provides a serverless, resource-optimized approach. Many delivery-pipeline and operations tools have a heavy footprint and require large, always-on infrastructure. Even when you’re deploying frequently, a serverless infrastructure is usually preferable.

Keptn — How far along are we?

We at Dynatrace have been implementing the core use cases of keptn for quite a while now. Our focus is currently on extending keptn so that it can easily be used beyond our own internal scope and requirements.

We’re releasing updates to the Dynatrace platform on a bi-weekly basis, providing you with even more flexibility and configurability. Of course, some components are currently still hard-coded. We’ll make these components exchangeable over the next couple of months.

With the upcoming release of keptn 0.2, you will be able to apply keptn and all of its core concepts to your own projects. If you have any questions, feel free to get in touch with us.

