DevOps and Workflow Automation
DevOps isn’t a tangible product. “You can’t buy DevOps in a box,” explains Kevin Behr, author and Chief Science Officer at Praxis Flow. The culture that can foster the DevOps mindset has to exist throughout a company, not just within a single department, so people can collectively contribute all their skills to solving problems. Otherwise, DevOps can become just a local optimization.
DevOps is often described as being all about monitoring metrics. However, DevOps is not about solving a technical problem; it is about solving a business problem and bringing better value to the end user at a more sustainable pace.
DevOps at Amazon
Here is a DevOps snapshot (deployment statistics for production hosts and environments) for one month at Amazon (data from the keynote "Why We Need DevOps"):
- 11.6 seconds: mean time between deployments (weekday)
- 1,079: maximum number of deployments in a single hour
- 10,000: mean number of hosts simultaneously receiving a deployment
- 30,000: maximum number of hosts simultaneously receiving a deployment
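As a rough sanity check on those numbers, the mean interval alone implies thousands of production deployments per weekday. A back-of-the-envelope sketch (my own arithmetic, not Amazon's accounting):

```python
# Back-of-the-envelope check of the deployment stats above.
SECONDS_PER_DAY = 24 * 60 * 60

mean_interval_s = 11.6  # mean time between deployments (weekday)
deploys_per_day = SECONDS_PER_DAY / mean_interval_s

print(round(deploys_per_day))  # roughly 7448 deployments per weekday
```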
Although DevOps is not about tooling, there are a number of open source tools out there that can help you achieve your goals. Some of those tools will also enable better communication between your development and operations teams.
Most successful DevOps organizations automate using tools in a few core categories, with a variety of specific tools in each (DevOps Best Practices: Finding the Right Tools):
1. Configuration management.
When DevOps aficionados throw around phrases like automated infrastructure, infrastructure as code, and programmable infrastructure, they’re talking about configuration management. That’s the tracking and controlling of changes to the software code base and the archiving of all file versions into a central configuration management database (CMDB), which enables multiple developers to work on the same code base while avoiding version-control issues.
Popular configuration management tools include Puppet, Chef, Ansible, and SaltStack.
However, the real question is what do we need in configuration management? For example, if we want to take in data that other people provide and then do something with it, we need a tool that handles that well.
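To make the "define the desired state" idea concrete, here is a minimal, hypothetical sketch in plain Python (real tools like Puppet or Chef use their own DSLs): a resource declares the state we want, and applying it is idempotent.

```python
import os

def ensure_line(path, line):
    """Declaratively ensure `line` is present in the file at `path`.

    Applying it twice changes nothing the second time (idempotence),
    which is the core property configuration management tools rely on.
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")
        return "changed"
    return "unchanged"

# Start from a clean slate so the demo is reproducible.
path = "/tmp/demo.conf"
if os.path.exists(path):
    os.remove(path)

# The first run converges the system; the second is a no-op.
print(ensure_line(path, "max_connections=100"))  # changed
print(ensure_line(path, "max_connections=100"))  # unchanged
```

Real configuration management tools apply the same convergence loop across hundreds of resource types (packages, services, users) instead of single lines in files.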
2. Application deployment.
Application deployment tools enable the automation of releases, and are at the heart of continuous delivery, one of the primary tenets of DevOps.
For continuous integration and continuous deployment, we need a number of tools to help us. We need to be able to build reproducible artifacts which we can test, and we need a reproducible infrastructure which we can manage in a fast and sane way. To do that we need a continuous integration framework like Jenkins.
- Capistrano, a deployment library, is the most popular standalone tool in this category. Other popular tools for automating application deployment include Ansible, Fabric, and Jenkins.
Again, the key is to find a tool that tracks behavior, from history to change logs, in ways that are meaningful for both dev and ops.
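One concrete meaning of "reproducible artifacts which we can test" is that a build is identified by a content hash, so any host can verify it received exactly the artifact CI produced. A small, hypothetical sketch:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Return the SHA-256 digest a CI job could publish alongside a build."""
    return hashlib.sha256(data).hexdigest()

# The deploy side recomputes the digest and refuses mismatched artifacts.
build = b"pretend these are the bytes of app-1.4.2.tar.gz"
expected = artifact_digest(build)
assert artifact_digest(build) == expected  # same bytes, same artifact
```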
3. Monitoring.
DevOps requires two distinct types of monitoring. At the application level, performance monitoring tools like New Relic APM enable code-level identification and remediation of performance issues. At the infrastructure level, server monitoring tools like Nagios, Icinga, and New Relic Server provide visibility into capacity, memory, and CPU consumption so reliability engineers can fix issues as soon as they appear.
The key is to make sure that everyone can see the data so they can make better decisions.
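As an illustration of the infrastructure side, here is a trivial threshold check of the kind a server-monitoring tool evaluates continuously (the metric names and limits are made up):

```python
def check(metric_name, value, warn, crit):
    """Classify a sampled metric value, Nagios-style: OK / WARNING / CRITICAL."""
    if value >= crit:
        return (metric_name, "CRITICAL")
    if value >= warn:
        return (metric_name, "WARNING")
    return (metric_name, "OK")

print(check("cpu_percent", 45.0, warn=70, crit=90))  # ('cpu_percent', 'OK')
print(check("mem_percent", 93.5, warn=80, crit=90))  # ('mem_percent', 'CRITICAL')
```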
4. Version control.
To achieve the benefits of DevOps, it's essential to version not just our application code but our infrastructure, configurations, and databases. This requires scripting all of our source artifacts, but the payoff is a single source of truth for both our application code and our IT systems and databases, allowing us to quickly identify where things went wrong and to recreate known states with the push of a button. No more playing Sherlock Holmes to figure out which versions of our application code go with which environments or databases. Commonly used version-control tools include Git, Subversion, and Mercurial, but they differ widely in how well they support DevOps-style collaboration.
5. Test and build systems.
These tools automate common developer tasks, including compiling source code into binary code, creating executables, running tests, and creating documentation. Tools in this category include Make, Ant, Maven, and Gradle.
6. Storing metrics.
Here, Graphite is one of the most popular tools for storing metrics. Plenty of other tools in the same area have tried to go where Graphite is going, but in flexibility, scalability, and ease of use, few allow developers and operations people to build dashboards for any metric they can think of in a matter of seconds.
Sending a timestamp and a value to the Graphite platform gives us a large choice of actions for that metric: we can graph it, transform it, or even set an alert on it. Graphite strips away the complexity of similar tools and pairs that with an easy-to-use API, so developers can integrate their own self-service metrics into dashboards to be used by everyone.
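Graphite's plaintext protocol really is that simple: one line of `metric.path value timestamp`, written to the Carbon listener (TCP port 2003 by default). A minimal sketch; the metric name and hostname are placeholders:

```python
import time

def graphite_line(path, value, timestamp=None):
    """Format one sample for Graphite's plaintext protocol."""
    ts = int(timestamp if timestamp is not None else time.time())
    return "%s %s %d\n" % (path, value, ts)

line = graphite_line("web.frontend.response_ms", 123, timestamp=1700000000)
print(line)  # web.frontend.response_ms 123 1700000000

# Shipping it is then a single TCP write (host is a placeholder):
# import socket
# with socket.create_connection(("graphite.example.com", 2003)) as s:
#     s.sendall(line.encode())
```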
Logstash: Logstash started out as just a tool to aggregate, index, and search the log files of our platform, which are often a hugely overlooked source of relevant information about how our applications behave. Logstash and its Kibana + Elasticsearch ecosystem are now quickly evolving into a real-time analytics platform, implementing the Collect, Ship+Transform, Store, and Display pattern we see emerging a lot in the #monitoringlove community. Logstash lets us turn boring old log files, which people used to search only after a failure, into valuable information that product owners and business managers can use to learn about the behavior of their users.
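The Transform step in that Collect, Ship+Transform, Store, Display pattern is mostly structured parsing: turning a raw log line into named fields you can aggregate. A hypothetical grok-style parse, sketched here with a plain regex over a made-up access-log format:

```python
import re

# A made-up access-log format; real Logstash would use a grok pattern.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3})'
)

def transform(line):
    """Turn one raw log line into a dict of fields, or None if it doesn't match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

raw = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /checkout HTTP/1.1" 500'
event = transform(raw)
print(event["path"], event["status"])  # /checkout 500
```

Once every line is an event with fields, counting 500s per endpoint or graphing requests per user becomes a query instead of a grep.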
The right tool chain for DevOps will automate IT services, provide real-time visibility into system and application performance, and give us a single source of truth. More important than an individual tool's capabilities, though, is how closely they all match our organization's strategic goals. That's the way to maximize our chances of achieving DevOps goodness.
Of course, tools are only part of the DevOps equation. We also need to create a culture that gets dev and ops working together towards the same goals.
Relationship between Vagrant, Docker, Chef and OpenStack (or similar products)?
- Chef: Chef is an automation platform that transforms infrastructure into code; in other words, configuration management software. Most such tools use the same paradigm: they allow you to define the state you want a machine to be in, with regard to configuration files, installed software, users, groups, and many other resource types. Most of them also provide functionality to push changes onto specific machines, a process usually called orchestration.
- Vagrant: Creates and configures lightweight, reproducible, and portable development environments. It provides a reproducible way to generate fully virtualized machines using either Oracle's VirtualBox or VMware technology as providers. Vagrant can hand off to configuration management software to continue the installation where the operating system's installer finishes; this is known as provisioning.
- Docker: An open source project to pack, ship, and run any application as a lightweight container. Its functionality somewhat overlaps with that of Vagrant, in that it also provides the means to define operating system installations, but the two differ greatly in the technology used for this purpose. Docker uses Linux containers, which are not virtual machines per se, but isolated processes running in isolated filesystems. Docker can also use a configuration management system to provision the containers.
- OpenStack: Open source software for building private and public clouds. While it is true that OpenStack can be deployed on a single machine, such a deployment is purely a proof of concept, and probably not very functional due to resource constraints. The primary target for OpenStack installations is bare-metal, multi-node environments, where the different components can run on dedicated hardware to achieve better results. A key piece of OpenStack functionality is its support for many virtualization technologies, from fully virtualized (VirtualBox, VMware) to paravirtualized (KVM/Qemu), as well as containers (LXC) and even User Mode Linux (UML).
I've tried to present these products as components of a specific architecture. From my point of view, it makes sense to first define your needs with regard to the environment you require (Chef, Puppet, Ansible, …), then be able to deploy it in a controlled fashion (Vagrant, Docker, …), and finally scale it to global size if need be.
How much of all this functionality you need should be defined in the scope of your project.
Also note that I've oversimplified almost all of the technical explanations. Please use the referenced links for detailed information.