Working in the technology field for the last 20+ years, I have always been passionate about automation, and I am happy to finally see wide-scale adoption across all industries. New automation products and services now dominate the marketplace, and IT leaders are echoing phrases like DevOps, Infrastructure as Code (IaC), and Continuous Integration and Delivery in their boardrooms.
In this blog, I will try to make sense of all of this and add some perspective within the context of DevOps. In future blogs, I will drill down into the automation aspects of DevOps.
DevOps applies time-tested manufacturing principles, originally used to produce physical goods, to developing and releasing software. Henry Ford is widely credited with pioneering the modern manufacturing process and the assembly line. To surpass the American auto industry, Toyota sent Taiichi Ohno to the US to study American manufacturing at Ford. After his visit, Ohno developed the Toyota Production System (TPS) and the Ten Precepts of lean manufacturing.
Both men focused on maximizing the flow of resources (inventory) through their systems while reducing defects. Incidentally, these are the same goals we have when deploying software. If we can build software efficiently, with minimal errors (bugs), by testing before releasing to production, preferably in an automated way, we will not only minimize risk but also reduce our deployment and management costs. Cost savings become a result, not the goal.
Traditional Model for Deploying Software
DevOps is quite different from the traditional software development model, where many developers work on various parts of the application and merge their code at the end to create it. If there is a flaw or a change in requirements, the developers may have to start over from the beginning. Below are some common pain points of the traditional model for deploying software.
- Complex organizational structure requiring several teams (handoffs) to complete a single development project
- Complex software delivery pipeline with many manual operations
- Knowledge drain after developers leave — no updated documentation or expertise is available to support the application. After the project is “completed,” the development teams move on to the next big project.
- During application outages, it is often difficult to identify who needs to be involved or where to find the experts
- Multiyear development projects that do not deliver incremental value
- Perception of slow IT that serves as justification to go around IT
- The impression that information security is a roadblock
- Infighting between teams — us versus them
- Resources cannot focus on new opportunities because they are tied up in multi-year projects
- Teams are terrified to make changes
- Waiting for new environments to be provisioned (development, testing, etc.)
- Overproduction and overprocessing — users provision the largest VMs possible because they fear delays in provisioning new capacity, solutions exceed their specs, teams build extra features, etc.
- Teams work long hours moving code between environments, testing new features and the code base, developing tests, fixing defects, etc.
- While the code is waiting to be shipped, revenue is lost
With DevOps, we can apply those same manufacturing principles to deploying and managing software. We can implement the equivalent of Ford’s assembly line, a CICD (Continuous Integration and Continuous Delivery) pipeline, to build, test, and release software as we will see firsthand in future blogs.
DevOps builds upon lean manufacturing principles such as focusing on customer value, paying attention to time, eliminating waste, shared learning, reducing cycle time, avoiding batching, finding bottlenecks, and identifying and elevating constraints. According to Gene Kim, co-author of The DevOps Handbook: “IT is like a factory floor. It is about adding value and increasing flow, reducing wastage, and reducing friction between developers and operators; thus, making more profits for the firm.”
Implementing DevOps is not a trivial task and will not happen overnight. It is a journey that will change your organization’s culture and the way your teams function, especially developers and operations. Instead of having opposing goals, developers and operators will now share a common goal. With DevOps, the aim is to automate as much as possible, continually measure our progress, and foster an environment of collaboration so we can quickly identify risks and make changes.
An essential requirement for implementing DevOps is your culture. The DevOps changes to IT culture and technology aim to remove friction between developers and operators and accelerate the delivery of new capabilities and services. DevOps needs a culture that fosters teamwork, transparency, empowerment, trust, learning, and accountability. Teams should be empowered, unafraid to fail, and willing to take ownership. They should be able to speak up and use their judgment. Continual learning is also critical for DevOps success: teams should share knowledge as they learn new skills and use postmortems as learning experiences.
In my view, an essential element of DevOps is automation: automation not only of the software development and delivery process, but also of the underlying infrastructure that supports the application, whether public cloud or containers. This automation is made possible by specialized tools that are stitched together into an automation fabric supporting all stages of the software delivery process.
I cannot overstress the importance of transparency in DevOps. Tools such as Trello and Visual Studio Online provide visibility and transparency to DevOps projects across the organization. This visibility will also help create shared visions and break down barriers.
GitHub helps capture the “tribal knowledge” that is important to the DevOps team and turn it into documentation. Git also helps with compliance by providing a complete audit trail of code changes. Using the GitHub “Request review” feature, you can make sure that every required compliance reviewer signs off on any new version. Finally, Git reduces waste because creating documentation follows the same process as creating code, so everyone on the team can update the documentation.
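To make the audit-trail point concrete, here is a minimal shell sketch. The repository, file name, author, and commit message are all hypothetical; the point is that documentation is checked in exactly like code, and Git records who changed what.

```shell
# Hypothetical sketch: documentation versioned like code, with Git's
# history serving as the audit trail. Paths and names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name  "Dev One"

# The runbook lives next to the code and is updated the same way.
echo "# Runbook: how to restart the app service" > RUNBOOK.md
git add RUNBOOK.md
git commit -q -m "docs: add restart runbook"

# The audit trail: who made each change, and what it was.
git log --pretty=format:'%an committed: %s'
```

A pull-request review gate sits on top of this same history, so reviewer approvals are recorded alongside the commits themselves.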
A key factor of collaboration is the ability to share information. Monitoring earlier in the lifecycle can help provide teams with a common data set shared across different departments to optimize application performance and availability.
DevOps teams have visibility into the entire application lifecycle and can address issues before applications are released to production.
With the traditional software development model, there is a high probability of bugs and rework. We find defects late in the process, and integration is manual and dependent on proper documentation.
With Continuous Integration (CI), developers deliver code updates faster and more frequently — sometimes several times per day — into a shared repository, such as Git. Each developer is responsible for checking in their code on a separate branch (instead of a single “branch”) and building regularly. The build also includes automated testing (unit, security, UI, etc.), and the entire build and testing process is automated. With tools like Jenkins or CircleCI, every time a developer checks in code, an automatic build runs, and the build progress and status are visible to everyone.
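As a rough sketch, the stages such a CI server runs on every check-in might look like the following script. The stage names are illustrative, and the `true` placeholders stand in for your real checkout, build, and test commands.

```shell
# Hypothetical sketch of the stages a CI server runs on every check-in.
set -e  # fail the build at the first broken stage

run_stage() {
  name=$1; shift
  echo "STAGE: $name"
  "$@"    # run the stage's command; a non-zero exit stops the build
}

run_stage checkout   true   # e.g. fetch the developer's branch
run_stage build      true   # e.g. make, mvn package, npm run build
run_stage unit-tests true   # e.g. pytest, go test ./...
run_stage security   true   # e.g. dependency or static-analysis scan

status="BUILD SUCCESSFUL"
echo "$status"
```

In Jenkins or CircleCI, each `run_stage` line would be a declared pipeline stage, so the per-stage status is what everyone sees on the build dashboard.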
CI helps teams test more frequently to discover and address defects (bugs) earlier before they become more significant problems. Freeing developers from manual tasks and encouraging behaviors that help reduce the number of errors and defects improves developer productivity. Also, because of the short feedback loops, teams can make changes more often and react to customer needs faster.
With the traditional model, there is a high probability of errors during the deployment process. Installation steps may be inaccurate, they must be executed across multiple environments, and many things can go wrong because the process is manual. It is typically a very stressful process, which leads to slow delivery of new functionality to users.
With Continuous Deployment, every change that passes the automated pipeline is deployed to production automatically. Because in most situations we would like to retain control over when releases go out, we will focus on Continuous Delivery, where software can be deployed to production at any time.
With Continuous Delivery (CD), we take the release produced in the CI phase and feed it into a release pipeline. The release pipeline automates steps you would otherwise perform manually, such as connecting to a server over ssh, copying files, stopping and starting services, and deploying to specific environments (dev, test, prod, etc.).
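To illustrate, here is a dry-run sketch of those steps. It only prints what a real pipeline would execute; the host name, release artifact, and service name are placeholders for your environment (a real pipeline would run these over ssh/scp or through an agent).

```shell
# Dry-run sketch of the manual release steps a CD pipeline automates.
# APP_HOST, the artifact, and the service name are placeholders.
set -e
APP_HOST="app.example.com"
RELEASE="myapp-1.0.tar.gz"

deploy() {
  echo "copy    $RELEASE to $APP_HOST:/opt/myapp/releases"
  echo "stop    myapp service on $APP_HOST"
  echo "unpack  $RELEASE into the release directory"
  echo "start   myapp service on $APP_HOST"
  echo "smoke   test the deployment on $APP_HOST"
}

steps=$(deploy)
echo "$steps"
```

Encoding the steps as a script means every environment (test, staging, prod) gets the identical sequence, which is exactly what the manual process cannot guarantee.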
When it comes to provisioning the release environment for the application, we can use Infrastructure as Code (IaC). Using tools like AWS CloudFormation or HashiCorp Terraform, you can programmatically provision the underlying infrastructure on your public cloud platform. Jenkins or CircleCI can orchestrate both application deployment and the release environments. Additionally, if you are using Docker containers, the process is even more straightforward: a Dockerfile defines a preconfigured image, and a docker-compose.yml file defines your application environment. While the build pipeline is automated, the release pipeline runs on demand.
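For example, a minimal docker-compose.yml sketch defining a web application and its database as one environment might look like this (the service names, ports, and images are illustrative, not from any particular project):

```yaml
# Hypothetical application environment: a web service built from the
# local Dockerfile plus the database it depends on.
services:
  web:
    build: .            # image defined by the Dockerfile in this directory
    ports:
      - "8080:80"       # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker compose up` then brings up the whole environment the same way on a laptop, a test server, or the cloud.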
In future blogs, we will see firsthand how to set up a CICD pipeline using Jenkins and Docker to deploy an application to AWS.
In DevOps, you must continually try to optimize the software delivery process, so it is critical that you collect performance, process, and people metrics as often as you can: if you cannot measure, you cannot improve. Several tools are available to help with metrics collection and analysis, such as Logstash, Kibana, Datadog, and New Relic.
Because operations and developers have a shared responsibility, the collaboration between them and other teams is critical. This increased communication and collaboration helps all parts of the organization to align more closely with goals and projects. Various tools are available to assist with decision making and planning. Some examples are Skype, Lync, Slack, and other real-time chat solutions.
Common Objections to DevOps
While many of the benefits of DevOps are apparent, there are still some who prefer the traditional software delivery approach and argue that DevOps practices will compromise security, make compliance even more challenging, or that we will not be able to reskill our teams.
Concerning security, it is easy to make the case that because DevOps relies so heavily on automation, we can bake security objectives into all stages of the development and operations process (in the application blueprints) and spend significantly less time remediating security issues later. We can also minimize configuration drift using platform policies or scripts; check out articles about DevSecOps for more information. Similarly, with compliance, we document all changes in the Git repository alongside the code. Finally, IT is an ever-changing landscape, and training is a continual process.
DevOps is becoming more than hype. The most challenging part of adoption is changing our culture; however, if we are successful, the short- and long-term benefits will be significant. Cost savings will be a byproduct and not the target, we will attract the best talent, and we will enable our teams to experiment and fail fast.
As mentioned previously, in the next series of blogs I will focus on the automation aspects of DevOps to show you how we can develop and deploy software in a consistent way, with minimal human intervention, using a CICD pipeline and leveraging IaC to automate the underlying infrastructure with the public cloud and Docker.
Until next time.