The Modern DevOps Manifesto

Andrea C. Crawford
May 27, 2020


By: Christopher Lazzaro, Andrea C. Crawford

DevOps is not new. The first DevOps Days conference was held in Ghent, Belgium, in 2009. Hudson (the precursor to Jenkins) turned 15 in February 2020. Although it's been a while since the early days of DevOps (remember these 2009 throwbacks?), we have learned a lot. There are perennial DevOps themes that are still absolutely relevant. However, technology, architecture, platforms, business, and regulations have moved on in the last decade, and it's time for a Modern DevOps approach.

Photo by Crystal Kwok on Unsplash

What hasn’t changed in DevOps:

  • Increasing velocity and quality of software delivery
  • Bringing Development and Operations together
  • Eliminating the “throw it over the fence” behavior
  • Being accountable for a product from ideation through “Day 2” management
  • Compatibility with Design Thinking, Agile and Lean

What has changed in DevOps:

  • Expanding the stakeholders to include more than Development and Operations (think Security, Auditors, Infrastructure Engineering), and thus expanding DevOps to infrastructure and enterprise assets
  • The Cloud. While the cloud is older than DevOps, companies are accelerating cloud adoption, strangling monoliths, and recasting delivery for cloud-native applications
  • The emergence and maturation of containers and Kubernetes, which standardize the “Cloud Operating System” and allow greater portability across providers and on-premises environments
  • Automate everything. Bring a developer mindset to testing, production deployment, and operations, and avoid manual tasks

Therefore, a “Modern DevOps Manifesto” should be considered when starting or re-invigorating DevOps for your enterprise. It contains elements of what we already know, but they are force-multiplied by the maturation of cloud-native.

The Modern DevOps Manifesto

  1. Everything is code — Infrastructure, configuration, actions, and changes to production — can all be code. When everything is code, everything needs DevOps.
  2. Establish “trusted” resources — Enterprise assets such as images, templates, policies, manifests, and configurations that codify standards should be governed (with a pipeline).
  3. Lean into Least Privilege — New roles are emerging: Cluster Engineer, Image Engineer, Site Reliability Engineer. Define each role with just enough access to the “trusted” resources it needs to get the job done, mitigating risk and limiting exposure.
  4. Everything is observable — Lay the foundation for AI for Pipelines by collecting and organizing data from an instrumented pipeline.
  5. Expand your definition of “everything” — DevOps is not just for application code. It can also apply to machine learning models (MLOps or ModelOps), integration (API lifecycle), infrastructure and configuration (GitOps), and other domains. Expand your stakeholders to include Security and Auditors…the next evolution of breaking down silos.

Everything is code

Code is the blueprint for applications. Source code is stored in a repo and has a pipeline that transforms it and lands it in its runtime environment. With the adoption of cloud, containers, and Kubernetes, configurations for applications, clusters, service bindings, and networks are also being expressed as code (typically YAML). Configurations that used to be applied by hand through a CLI are now first-class citizens in a repo. This approach, known as GitOps, brings the benefits of pipelines, governance, tools, and automation to operations and to this new class of “code.” Welcome to the next step in Infrastructure as Code. When everything becomes code, everything can have its own pipeline, bringing multi-speed IT to a whole new level: a pipeline for applications, a pipeline for application configuration, a pipeline for cluster configuration, a pipeline for images, a pipeline for library dependencies. Each pipeline has its own speed, and they are all decoupled from each other. View the world in pipelines!
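To make this concrete, here is a minimal sketch of application configuration expressed as code. The service name, namespace, and registry are hypothetical; the point is that both the configuration and the workload that consumes it are plain YAML in a repo, each able to flow through its own pipeline.

```yaml
# ConfigMap for an illustrative "orders" service (names are hypothetical).
# Lives in Git and travels through its own configuration pipeline.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config
  namespace: orders
data:
  LOG_LEVEL: "info"
  FEATURE_NEW_CHECKOUT: "false"
---
# The Deployment that consumes it -- also just YAML under source control.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
  namespace: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.4.2   # hypothetical image tag
          envFrom:
            - configMapRef:
                name: orders-config
```

A GitOps controller (Argo CD and Flux are common choices) can then continuously reconcile the cluster against this repo, so a change to production becomes a pull request rather than a CLI session.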

Establish Trusted Resources

There are enterprise resources that are used to assemble cloud applications. The heritage assets of the past (VM images, buildpacks, middleware releases, library dependencies) are evolving into container images, cluster configurations, and policy definitions that are shared across multiple projects. These enterprise assets should have their own lifecycle, pipeline, and governance, and they should be trusted and easy to consume. A trusted asset is managed in a repo with a clearly defined set of pipeline activities that harden, secure, and verify it according to enterprise standards and regulatory compliance. A trusted asset should carry a status that indicates it can be safely consumed. Once an asset is awarded trusted status (by making it through a pipeline), it should be published for consumption (this could be as simple as tagging an image in a registry). Trusted assets should be actively maintained and governed.
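As a sketch of what “awarding trusted status through a pipeline” might look like, here is a hypothetical CI workflow (GitHub Actions syntax with Trivy as the scanner; both are assumptions, and any CI system and image scanner could play these roles). It builds an image from a Dockerfile, blocks on high-severity findings, and only then publishes a trusted tag to a private registry. The registry name and paths are illustrative, and the runner is assumed to be already authenticated to the registry.

```yaml
# Hypothetical "trusted image" pipeline sketch.
name: trusted-image-pipeline
on:
  push:
    paths:
      - "images/base-java/Dockerfile"   # illustrative path owned by Image Engineers
jobs:
  build-scan-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build candidate image
        run: docker build -t registry.example.com/base-java:candidate images/base-java

      - name: Scan for known vulnerabilities
        # Fail the pipeline (and withhold trusted status) on HIGH/CRITICAL findings.
        run: |
          docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
            aquasec/trivy image --exit-code 1 --severity HIGH,CRITICAL \
            registry.example.com/base-java:candidate

      - name: Publish as trusted
        run: |
          docker tag registry.example.com/base-java:candidate registry.example.com/base-java:trusted
          docker push registry.example.com/base-java:trusted
```

A mutable “trusted” tag is the simplest possible signal; signing the image digest (for example with cosign or Notary) gives consumers a stronger guarantee that what they pull is what the pipeline verified.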

Lean into Least Privilege

The Principle of Least Privilege (PoLP) states that systems, processes, and users should have access only to the resources necessary to complete their tasks. With everything as code and trusted assets identified, new roles and responsibilities start to emerge. Consider an image that is a trusted asset: it is sourced from a Dockerfile (managed in a source code repo), and that Dockerfile goes through an automated pipeline that builds an image, executes rigorous scanning and testing, and ultimately pushes and tags the image as “trusted” in an enterprise private container registry. The role of an Image Engineer might emerge as the persona that creates, curates, and manages the Dockerfiles fed into the image pipeline. Only Image Engineers would need “push” authority to the repo where Dockerfiles are managed. If Separation of Duties is a concern, the role of Image Engineer may be restricted to those who are not also Developers, to mitigate the risk of any one person having too much influence over a runtime container. New personas can be defined for Cluster Engineers, Site Reliability Engineers, and so on, each with a clearly defined set of responsibilities and privileges.
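Least privilege can itself be expressed as code. The sketch below is an illustrative Kubernetes RBAC Role for a Cluster Engineer-style persona (the namespace, group name, and exact verb/resource split are assumptions): the role can apply Deployments, ConfigMaps, and Services in one namespace, but it deliberately has no access to Secrets or to RBAC objects themselves.

```yaml
# Illustrative least-privilege role for a "cluster engineer" persona.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-engineer
  namespace: orders
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Bind the role to a group sourced from the identity provider (name is hypothetical).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-engineer-binding
  namespace: orders
subjects:
  - kind: Group
    name: cluster-engineers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cluster-engineer
  apiGroup: rbac.authorization.k8s.io
```

Because the role definition is code, it can live in a repo and flow through its own governed pipeline, just like any other trusted asset.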

Everything is Observable

The mechanics of getting an idea to a running feature in production can be a long-running process. There are significant pipeline events to be collected for the express purpose of building pipeline metrics, calculating delivery measurements, correlating pipeline events to operational events, and establishing a forensic feature lifeline for auditors and IT security. Pipelines should be instrumented with event collection and organization, feeding an event data lake in which analytics and machine learning models can be built and tied to problem, incident, and change management data on “Day 2.” The IBM AI Ladder starts with Collect and Organize, eventually leading to Analyze and Infuse of cognitive capabilities, in this case AI for pipelines. Predicting the quality of a digital product before it exits the pipeline can preserve digital reputation and improve consumer satisfaction.
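What a collected pipeline event might look like is sketched below; the schema and field names are invented for illustration. The value comes from emitting events like this consistently from every stage so they can be organized in a data lake and correlated with Day 2 incident and change records.

```yaml
# One hypothetical pipeline event, as it might land in an event data lake.
event_type: pipeline.stage.completed
pipeline: orders-app-pipeline
run_id: "2020-05-27-1142"
stage: integration-tests
result: passed
commit_sha: 9f2c1ab
image: registry.example.com/orders:1.4.2
started_at: "2020-05-27T11:42:03Z"
finished_at: "2020-05-27T11:49:47Z"
duration_seconds: 464
triggered_by: merge-request
```

With events like this collected and organized, delivery metrics such as lead time and change failure rate fall out of simple queries, and the same data can feed models that predict the quality of a release before it leaves the pipeline.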

Expand the definition of “Everything”

Yesteryear, “everything” meant application code and database scripts. Those further along the maturity curve also put test cases, monitoring scripts, and infrastructure scripts for common tasks under source code control. Now amp it up. Machine learning models, APIs, and even pipelines themselves are code. You will hear terms like ModelOps, API lifecycle management, or the pipeline (PipeOps?), but don't get distracted. That is just the steady march of progress and the desire to bring increased velocity and quality to other parts of the IT ecosystem. DevOps for all!

The Modern DevOps Manifesto is a combination of the heritage and the modern state of delivery today. There will be more changes for DevOps; time does not stand still. We are seeing the emergence and maturation of AI, machine learning, edge, and quantum, and there will be DevOps permutations for these domains as they continue to mature.

How will your enterprise adopt the Modern DevOps Manifesto?
These are exactly the kinds of problems we tackle with clients in the IBM Garage, where DevOps is a fundamental part of how we bring business value to life. Schedule a no-charge visit with the IBM Garage to see how you can co-create with us. Do these ideas and concepts around DevOps resonate with you and your firm's transformation? Let me know in the comments below.


Andrea C. Crawford

Sharing my perspective on things related to implementing DevOps, Internet of Things, Cloud, Agile, Social. Views are my own. I bleed Blue. THINK!