Continuous Delivery without pipelines — How it works and why you need it
Continuous delivery is a key component of cloud-native software development because it aims to develop, test, and release software with greater speed, frequency, and quality. Delivery pipelines have been the tool of choice so far because there was no alternative. However, even simple changes to these pipelines, like replacing a testing tool, can be a Herculean task.
This article explores how we can break up continuous delivery pipelines and forgo them entirely. It also shows how the open-source project Keptn implements these concepts and how you can benefit from them.
Delivery pipelines are monoliths
Delivery pipelines perfectly follow the “everything-as-code” approach, i.e., the code of these pipelines describes which stages are used in the environment (for example, dev, staging, production) and the applied workflows (i.e., the deployment and test strategies used). Furthermore, these pipelines also implement the integrations and calls to external tools; we often see pipelines using seven or more tools.
Let’s consider the sample Jenkins pipeline shown below (which has been heavily oversimplified). This pipeline implements the workflow (i.e., the stages for deploying and testing) as well as calls to external tools (i.e., the pipeline calls helm, selenium, jmeter, and kubectl).
stage('Deploy to dev') {
    steps {
        script {
            sh """helm upgrade --install --namespace dev ... """
        }
    }
}
stage('Dev tests: Functional tests') {
    steps {
        sh """java -jar selenium-server.jar functional_testsuite ... """
    }
}
stage('Deploy to staging') {
    steps {
        script {
            sh """helm upgrade --install --namespace staging ... """
        }
    }
}
stage('Staging tests: Performance tests') {
    steps {
        sh """jmeter -n -t perf_tests.jmx ... """
    }
}
stage('Deploy to production') {
    steps {
        script {
            sh """kubectl apply -f ... """
        }
    }
}
However, implementing both the workflows and the tool integrations within the same pipeline results in heavily coupled code; in this sense, pipelines are monoliths. This coupling causes high interdependencies and leads to complex, unmanageable delivery pipelines that will become your next legacy-code challenge.
Some of our largest enterprise customers have confirmed that their continuous delivery pipelines are already almost unmaintainable. Some have up to 50 different teams maintaining their own flavors of custom-coded pipelines. Especially when multiple teams independently implement their own pipelines, tasks such as replacing, adding, or standardizing on a specific tool across the organization turn into labor-intensive projects that can take weeks or months to complete.
Our approach to breaking up continuous delivery pipelines
To overcome the problems faced by monolithic pipelines, we’ve come up with a completely different approach which doesn’t rely on pipelines at all. Let’s begin by explaining each step in the approach.
Each continuous delivery pipeline consists of phases. Take, for example, the small pipeline shown above: it contains deployment and validation (i.e., testing) phases. However, when we implement the deployment and the validation phases within the same pipeline, we’re mixing unrelated tasks. Instead, it makes more sense to have one pipeline responsible only for the deployment and a separate pipeline for the testing. You probably already recognize a well-known pattern here: the break-up of software monoliths into microservices. This is the first concept in our approach: the break-up of a single pipeline into smaller phases, which may be realized as “micro-pipelines”.
When we break up monolithic pipelines, we always obtain a similar set of phases. These phases realize a deployment of a new configuration (i.e., of a new artifact), a validation, an evaluation, or a release/revert of a configuration. As the implemented phases were similar across all the pipelines we investigated, we defined a common set of phases (influenced by our lessons learned and best practices). To encapsulate each phase, we introduced interfaces in the form of events. Each phase therefore has a start event that contains the necessary data to execute the phase and an end event that represents the result of the phase.
Now we have micro-pipelines with clear responsibilities and clear interfaces in the form of events. What we haven’t talked about yet is how these events are connected to each other. Here, we again follow best practices from the microservices world: our approach uses a publish-subscribe mechanism, which allows each micro-pipeline to subscribe to the event types relevant to it.
However, the code of these micro-pipelines would still contain the definition of the workflow (i.e., the deployment or test strategy used) as well as the integrations and calls to the tooling. Let’s again consider the example pipeline shown above. This pipeline executes functional tests in dev and performance tests in staging. This workflow information should be decoupled from the tools that are used, i.e., from whether it’s a selenium test in dev or a jmeter test in staging.
This brings us to the last concept of our approach: we split the workflow definition from the tooling that’s used. For this separation, we again benefit from the events introduced above. The events that are sent control the workflow, and how the events are processed controls the tooling. This way, our approach doesn’t require a pipeline at all. Instead, arbitrary services (for example, written in Go, Python, or your favorite language) can subscribe to an event and then execute the required functionality.
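To make this concrete, here is a minimal sketch of such a service in Go, assuming events are delivered over HTTP via the CloudEvents Go SDK (github.com/cloudevents/sdk-go/v2); the actual subscription wiring in Keptn may look different:
package main

import (
    "context"
    "log"

    cloudevents "github.com/cloudevents/sdk-go/v2"
)

// handle reacts only to the event types this service is responsible for
// and triggers the wrapped tool; all other events are ignored.
func handle(ctx context.Context, event cloudevents.Event) {
    switch event.Type() {
    case "sh.keptn.events.start-tests":
        log.Printf("running tests for event %s", event.ID())
        // ... call the testing tool of your choice here ...
    default:
        // this service is not subscribed to other event types
    }
}

func main() {
    // receive CloudEvents over HTTP (listens on port 8080 by default)
    client, err := cloudevents.NewClientHTTP()
    if err != nil {
        log.Fatalf("failed to create CloudEvents client: %v", err)
    }
    log.Fatal(client.StartReceiver(context.Background(), handle))
}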
Hopefully you now agree that this no-pipeline approach sounds promising, but you may be asking yourself how it can be implemented. In the following section, we’ll show you how we implemented our approach.
Continuous Delivery without pipelines — How Keptn implements it
In order to bring the no-pipeline approach to life, we started implementing the open-source project Keptn. You can find further information about Keptn in Andreas Grabner’s blog. But for now let’s look at how Keptn implements these concepts — namely using CloudEvents, Shipyard, and Uniform.
CloudEvents
In order to encapsulate the different phases of continuous delivery, we introduce events which serve as input information for a phase or represent the result of a phase. Here, we use the CloudEvents specification. For a detailed description of our events please check out our specification.
In the following listing, you can see an example of a CloudEvent, which represents the start-tests event. In the data block of this JSON object, you can see that this test should target the dev stage using a functional test strategy. Please note that these events can be extended with further custom information required for executing the test.
{
  "type": "sh.keptn.events.start-tests",
  "specversion": "0.2",
  "source": "https://github.com/keptn/keptn-service#start-tests",
  "id": "49ac0dec-a83b-4bc1-9dc0-1f050c7e789b",
  "time": "20190325-15:22:50.560",
  "contenttype": "application/json",
  "shkeptncontext": "db51be80-4fee-41af-bb53-1b093d2b694c",
  "data": {
    "githuborg": "keptn-tiger",
    "project": "sockshop",
    "service": "carts",
    "image": "keptnexamples/carts",
    "tag": "0.7.1",
    "stage": "dev",
    "teststrategy": "functional"
  }
}
Shipyard
Shipyard declaratively describes the stages that an environment consists of (for example, dev, staging, and production). For each stage, the shipyard allows you to specify a deployment strategy (for example, direct, blue/green, canary) as well as a test strategy (for example, functional, performance). Shipyard just defines what should be done but not how it should be done.
The shipyard sample below describes an environment that consists of three stages: dev, staging, and production. In this example, we use a direct deployment strategy for the dev stage and define that the deployment should be validated using functional tests, while in staging we use a blue/green deployment and performance tests. In production, we use a canary deployment and execute end-to-end tests.
stages:
  - name: "dev"
    deployment_strategy: "direct"
    test_strategy: "functional"
  - name: "staging"
    deployment_strategy: "blue_green"
    test_strategy: "performance"
  - name: "production"
    deployment_strategy: "canary"
    test_strategy: "end_to_end"
What would be necessary in the pipeline shown above, or in your own delivery pipelines, to add another environment called security, or to change the deployment strategy in one of your environments? You would have to further extend the pipelines with code, which would again increase the interdependencies. In Keptn, however, you only need to change a few lines in your shipyard file.
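For example, adding the security stage could be as simple as appending another entry to the shipyard shown above (the strategy values here are just illustrative choices):
  - name: "security"
    deployment_strategy: "direct"
    test_strategy: "security"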
Uniform
A uniform enables you to specify the tools used to ship artifacts, independently of the actual delivery workflow. More precisely, a uniform allows you to specify a set of services that implement the integration and calls of external tools. Each of these services can subscribe to the events representing the start/end of the different phases. When a service is triggered by an event, it translates the received data into an API call to the tool it wraps. After completing the task, the service confirms with a done-event.
The example below shows a uniform file that defines a Helm service, Jmeter service, Pitometer service, and a GitHub service. All these services have subscriptions to events. For example, the Jmeter service is triggered when it receives a start-tests event.
apiVersion: sh.keptn/v1alpha
kind: uniform
metadata:
  name: sample-uniform
  namespace: keptn
spec:
  services:
    - name: helm-service
      image: keptn/helm-service:latest
      subscribedchannels:
        - start-deployment
    - name: jmeter-service
      image: keptn/jmeter-service:latest
      subscribedchannels:
        - start-tests
    - name: pitometer-service
      image: keptn/pitometer-service:latest
      subscribedchannels:
        - start-evaluation
    - name: github-service
      image: keptn/github-service:latest
      subscribedchannels:
        - new-artifact
Again, consider what would otherwise be necessary in the pipeline above, or in your custom pipelines, to replace, for example, the testing tool. How many pipelines would you have to change? In Keptn, this replacement takes place in a single place: the uniform. How easy it is to replace tools in Keptn is explained below using a real-life example.
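For instance, to swap the testing tool, only the corresponding entry in the uniform changes; the replacement service below is a purely hypothetical example:
    - name: selenium-service                  # hypothetical replacement service
      image: example/selenium-service:latest  # hypothetical image
      subscribedchannels:
        - start-tests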
Putting everything together
Let’s now look at how Keptn combines these concepts and allows you to, for example, deploy a new artifact (i.e., a new Docker image). The figure below shows an example where continuous delivery is split into deployment, test, and evaluation phases; the strategies used for these phases are defined in the shipyard file. For each phase, Keptn sends CloudEvents, which trigger the services registered in the uniform. These services then implement the integration and the calls of external tools.
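Written out as an event flow for the sample uniform above, this looks roughly as follows (the done-event names for tests and evaluation follow the start/done pattern described earlier and are illustrative):
new-artifact      -> github-service
start-deployment  -> helm-service       -> deployment-done
start-tests       -> jmeter-service     -> tests-done
start-evaluation  -> pitometer-service  -> evaluation-done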
Overall, this example shows you how Keptn implements continuous delivery without a single pipeline. Now let’s see how we can benefit from having no pipelines in real life.
Exchanging Continuous Delivery tools at runtime with Keptn
When we started implementing Keptn, we used three rather small Jenkins pipelines for deploying artifacts (applying a Helm chart), testing them (executing Jmeter tests), and evaluating the KPIs of an artifact to decide whether to promote it or not. At that time, Jenkins pipelines were the right tool, as they allowed rapid implementation. However, running a dedicated Jenkins instance for these three pipelines seemed like overkill, and it felt odd to have a no-pipeline delivery approach that uses pipelines in the background. Fortunately, it turned out that replacing these pipelines is a great use case for demonstrating the benefits of Keptn’s no-pipeline approach.
We implemented dedicated services, in the form of Kubernetes services, for deploying, for running tests, and for evaluating whether an artifact should be promoted. As an example, let’s consider the new service for the deployment in more detail. For managing the deployment, this new service uses Helm; hence, it is called the Helm service. The Helm service listens to start-deployment events, which in Go code appear as follows:
type StartDeploymentEvent struct {
    Service            string `json:"service"`
    Image              string `json:"image"`
    Tag                string `json:"tag"`
    Project            string `json:"project"`
    Stage              string `json:"stage"`
    GitHubOrg          string `json:"githuborg"`
    TestStrategy       string `json:"teststrategy"`
    DeploymentStrategy string `json:"deploymentstrategy"`
}
Using the data of this event (for example, the stage into which a new artifact should be deployed as well as the deployment strategy), the Helm service applies Helm charts, which are stored in a GitHub repository. After finishing the deployment, this service sends a deployment-done event to Keptn. Overall, this new Helm service is only a few hundred lines of Go code, which use the CloudEvents Go-SDK for receiving and sending events.
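A sketch of how such a handler could look in Go, again assuming the CloudEvents Go SDK; the helm invocation and the done-event type name are simplified here (see the Keptn spec for the exact details):
// This handler builds on the StartDeploymentEvent struct above and uses the
// imports "context", "log", "os/exec", and the CloudEvents Go SDK.
func handleStartDeployment(ctx context.Context, event cloudevents.Event) {
    var data StartDeploymentEvent
    if err := event.DataAs(&data); err != nil {
        log.Printf("could not parse event data: %v", err)
        return
    }

    // apply the Helm chart (fetched beforehand from the GitHub repository)
    // into the namespace of the target stage
    cmd := exec.Command("helm", "upgrade", "--install", data.Service, "./chart",
        "--namespace", data.Stage,
        "--set", "image="+data.Image+":"+data.Tag)
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Printf("helm failed: %v\n%s", err, out)
        return
    }

    // confirm the finished phase with a deployment-done event
    // (type name simplified; see the Keptn spec for the exact value)
    done := cloudevents.NewEvent()
    done.SetType("sh.keptn.events.deployment-done")
    done.SetSource("helm-service")
    if err := done.SetData(cloudevents.ApplicationJSON, data); err != nil {
        log.Printf("could not set event data: %v", err)
    }
    // sending the done event back to Keptn via a CloudEvents client is omitted here
}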
Finally, to replace the deprecated Jenkins pipelines with the new, dedicated services, we only had to change the uniform file: we removed the Jenkins service and its event subscriptions, and added the three new services with their subscriptions.
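Sketched out, the removed Jenkins entry looked roughly like this (its name and image are illustrative); the new entries correspond to the helm, jmeter, and pitometer services in the sample uniform shown earlier:
spec:
  services:
    - name: jenkins-service                 # illustrative name
      image: keptn/jenkins-service:latest   # illustrative image
      subscribedchannels:
        - start-deployment
        - start-tests
        - start-evaluation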
In a future release, the Keptn CLI will be able to “wear” (i.e., apply) a uniform using
keptn wear uniform uniform-using-dedicated-services.yaml
or to update a uniform using
keptn update uniform uniform-using-dedicated-services.yaml
And that’s it!
Conclusion
Pipelines for continuous delivery lack a separation between “what should be done” and “which tools are used”. As a result, pipelines contain both the code for the workflows and the code for the integrations and calls of external tools. Such pipelines become monoliths, which are heavily coupled and complex to adapt, extend, or maintain.
Event-driven systems like Keptn instead work without pipelines. To this end, Keptn breaks up continuous delivery into phases and defines clear interfaces between these phases in the form of CloudEvents. Furthermore, Keptn abstracts the “what should be done” in the shipyard and the “which tools are used” in the uniform.
This way, Keptn allowed us to easily implement dedicated services and subscribe them to the relevant events. From then on, only Keptn’s uniform needed to be changed to replace the tools. By applying these concepts, Keptn allows you to exchange continuous delivery tools even at runtime!