How to Choose the Right DevOps Tools

Karl Schwirz
Slalom Technology
8 min read · Jan 11, 2017


by Karl Schwirz and Bruce Cutler

The DevOps tools market is flooded with options — and choosing the right one for your organization can be daunting. If you Google “DevOps tools,” you’ll see endless lists of tools — everything from agile collaboration platforms to frameworks that provide you with continuous delivery capability. What you won’t find, however, is guidance for picking one.

The fact is, there’s no single DevOps solution that caters to every organization’s unique needs. If you adopt specific technologies simply because others have done so, it could end up doing more harm than good. Here, we’ll walk through the best process for identifying the right tools for your organization. We’ll cover the gotchas and pitfalls, and how to add valuable pieces to your workflow so you can make your implementation a success.

As an essential first step, take time to assess the current state of your delivery pipeline. This will enable you to identify inefficient processes or areas that can benefit from the adoption of DevOps tools. Extended testing times or slow provisioning of new hardware, for example, may signal bottlenecks within your system that hurt productivity and increase feature cycle time.

Bottlenecks within software delivery can appear in many different forms, including:

  • Time-consuming and error-prone manual processes (e.g., code builds and code deployments)
  • Manual or non-existent testing strategies
  • Manual creation and configuration of any environment
  • Failure to properly understand and test the deployment process, resulting in extended deployment times to production
  • Time spent waiting for shared resources to become available

Who better to ask about bottlenecks than the team that interacts with the software delivery processes on a daily basis? They’ll be able to provide valuable insight.

We recommend backing up any critical items you discover with data, such as logs and customer feedback. This will help you understand the entire workflow: the customer impact, which tasks are performed most often, how long each takes, and how often each fails. Armed with this information, you can plan and prioritize which pain points to tackle based on what would provide the most benefit.
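The prioritization described above can be sketched as a simple scoring exercise. This is an illustrative model only; the task names, run counts, and weighting formula are hypothetical assumptions, not measured data from the article.

```python
# Sketch: rank delivery-pipeline pain points by estimated time lost per week.
# Task names and numbers below are hypothetical examples.

def weekly_cost_minutes(runs_per_week, minutes_per_run, failure_rate, retry_minutes):
    """Estimated minutes lost per week: normal runs plus rework on failures."""
    return runs_per_week * (minutes_per_run + failure_rate * retry_minutes)

tasks = {
    "manual code deployment": weekly_cost_minutes(10, 45, 0.20, 90),
    "manual regression testing": weekly_cost_minutes(2, 240, 0.10, 240),
    "environment provisioning": weekly_cost_minutes(1, 480, 0.05, 480),
}

# Tackle the most expensive bottleneck first.
for task, cost in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task}: ~{cost:.0f} min/week")
```

Even a rough model like this makes the "largest benefit" conversation concrete when deciding which pain point to automate first.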

As the concept of DevOps has grown from buzzword to necessity in the last few years, the number of tools available to automate software delivery processes has grown exponentially. These tools can be divided into five major categories: version control, configuration management, build systems, deployment, and monitoring. Organizations that have successfully integrated DevOps principles within their software delivery pipeline automate tasks using tools from each of these categories.

A common myth is that version control is designed to hold source code only. We would like to dispel this myth: your version control system should store everything that encompasses a releasable version of your software. In short, application code, infrastructure code, configurations, build mechanisms, and databases should all be maintained using a consistent version control strategy.

The initial time investment required to script all aspects of your software will be far outweighed by the long-term benefit of being able to view your entire system as a single, releasable unit. If you’re doing it right, any authorized team member should be able to re-create any version of the software system at any point in time.
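One way to picture "a single, releasable unit" is a release manifest that pins every versioned component together. This is a minimal sketch of the idea; the component names, commit hashes, and version numbers are hypothetical.

```python
# Sketch: model a release as one unit that pins every versioned component.
# Component names, hashes, and versions are hypothetical.

RELEASES = {
    "2.4.0": {
        "app_code": "git:a1b2c3d",
        "infra_code": "git:d4e5f6a",
        "config": "git:0f9e8d7",
        "db_schema": "migration-0042",
    },
    "2.4.1": {
        "app_code": "git:b2c3d4e",
        "infra_code": "git:d4e5f6a",  # unchanged between releases
        "config": "git:1a2b3c4",
        "db_schema": "migration-0043",
    },
}

def recreate(version):
    """Return the pinned components needed to rebuild a given release."""
    try:
        return RELEASES[version]
    except KeyError:
        raise ValueError(f"unknown release: {version}")

# Any past version of the whole system can be looked up and rebuilt.
print(recreate("2.4.0")["db_schema"])
```

Because every component is pinned, rebuilding version 2.4.0 means checking out exactly these revisions, not whatever happens to be current.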

You likely already have a version control system in place. If you have the opportunity to start from scratch, however, consider tools like Git/GitHub, Subversion, Bitbucket, and Microsoft Team Foundation Server. These are not the only version control tools that work well within a delivery pipeline implementing DevOps principles; we have yet to find version control software that doesn’t integrate well with other technologies. We highlight the tools above because a number of organizations, including Slalom, have found them to be flexible and reliable.

Some things to consider as you decide on a VC tool are:

  • Centralized vs distributed model
  • Team size
  • Open source/proprietary
  • How well it integrates with other parts of the DevOps toolchain

Puppet Labs aptly defines configuration management (CM) as “the process of standardizing resource configurations and enforcing their state across IT infrastructure in an automated yet agile manner.” Expanding on this definition: you write code that describes desired configuration states, and your chosen CM tool does the heavy lifting to ensure that this configuration is applied to the desired targets in a consistent manner. Whether you’re provisioning infrastructure, deploying your application, enforcing server configurations, or updating security policies, configuration management tools automate tasks that were previously performed using slow, manual steps.
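The declare-a-state, converge-to-it loop at the heart of CM tools can be shown in miniature. This sketch is tool-agnostic; the resource names and state values are illustrative assumptions, not any real tool’s schema.

```python
# Sketch: the core loop of a configuration management tool, in miniature.
# You declare a desired state; the tool computes and applies the difference.
# Resource names and states below are illustrative only.

desired = {"nginx": "installed", "firewall": "enabled", "ntp": "running"}

def converge(actual, desired):
    """Return the changes needed to bring a node to the desired state."""
    changes = {}
    for resource, state in desired.items():
        if actual.get(resource) != state:
            changes[resource] = state
            actual[resource] = state  # apply; re-running then yields no changes
    return changes

node = {"nginx": "installed", "firewall": "disabled"}
print(converge(node, desired))  # first run corrects the drift
print(converge(node, desired))  # second run is a no-op (idempotent)
```

The idempotency shown here, where running convergence twice produces no further changes, is what lets real CM tools apply the same desired state safely across hundreds of nodes.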

Only a few years ago, these tasks would take days or even weeks to complete, with a high possibility of configuration error given the number of manual steps. As more and more organizations shift toward using the scalability of cloud-hosted infrastructure, configuration management tools’ ability to seamlessly apply desired configuration states to hundreds or thousands of nodes at once is extremely beneficial.

There are many open source configuration management tools available. Some popular ones are Chef, Puppet, Ansible, SaltStack, and CFEngine. When choosing a specific CM tool to adopt, consider:

  • Does the tool require the DevOps team to learn a new language?
  • How does the tool integrate with other parts of the DevOps Stack?
  • How complex is the tool to learn, in terms of setup and getting started?
  • Push vs. pull: How are updates to nodes triggered?
  • Is it straightforward to scale the number of managed instances both up and down?
  • How good is the available documentation?
  • Is there active community support?

Each of the aforementioned CM tools has advantages and disadvantages, so we recommend taking the time to do your research. Consider the questions we’ve raised, along with the needs and requirements of your organization, before choosing one to use.

Build system software could arguably be the heart of your software delivery pipeline. From compiling code to orchestrating various levels of testing suites, your build system will have a hand in some very important tasks.

Cooperating directly with your chosen version control software, the build system can be configured to validate the integrity of code checked in by developers and report any build errors and unit test failures. By doing this, the build system acts as a virtual safety net: if build errors are reported or testing suites fail, the proposed changes never make it into the deployment package. The value of this is immense, giving organizations an additional degree of confidence when deploying code to production servers tens or even hundreds of times per day.

The decision on which build tool to integrate within your solution will be based on a number of factors:

  • Does it interact well with other members of the toolchain, particularly version control?
  • What’s the level of support for third-party software via plugin libraries, etc.?
  • Written configuration or web interface: How are jobs created and scheduled?
  • What is the quality of available documentation?
  • User preferences and prior experience with specific technologies

With a build system in place, it’s possible to further streamline this process using an artifact repository tool. When we develop an application, we commonly use supporting development libraries from a variety of different sources. These libraries often get stored in the darkest depths of your source control system and become difficult to manage as projects scale in size.

This issue often materializes when multiple teams require access to different versions of a library with an ambiguous owner. Thankfully, situations like this can be avoided through the use of an artifact repository tool, which provides a central repository for commonly employed dependencies. This greatly simplifies the distribution of artifacts among various project teams and has the added benefit of versioning these files.

Along with maintaining source code using version control software, it’s important to also store successful software builds, so you can deploy any version of your software at any point in time. Maybe you’re deploying the latest build, or perhaps three versions ago as part of a rollback. Storing packages in a repository, like NuGet or Artifactory, will provide you with the flexibility to fully control both the what and when of software deployment.
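The deploy-any-version flexibility described above can be sketched with a tiny in-memory artifact store. The class name, version numbers, and artifact filenames are hypothetical; real repositories like NuGet or Artifactory offer far richer APIs.

```python
# Sketch: an artifact repository keeps every successful build, so you can
# deploy the latest version or roll back to an earlier one. The version
# numbers and filenames below are illustrative.

class ArtifactRepo:
    def __init__(self):
        self._builds = []  # ordered list of (version, artifact) pairs

    def publish(self, version, artifact):
        self._builds.append((version, artifact))

    def latest(self):
        return self._builds[-1]

    def rollback_target(self, steps_back):
        """Fetch the build from `steps_back` releases ago (0 = latest)."""
        return self._builds[-1 - steps_back]

repo = ArtifactRepo()
for v in ("1.0.0", "1.1.0", "1.2.0", "1.3.0"):
    repo.publish(v, f"app-{v}.zip")

print(repo.latest())            # newest build, for a normal deployment
print(repo.rollback_target(3))  # three versions back, for a rollback
```

Because every successful build is retained and addressable, the "what" of a deployment becomes a lookup rather than a rebuild.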

Along with version control and build system software, deployment tools make up an important part of a software delivery pipeline because they automate the deployment of code to specific server instances. A number of popular build system tools (Jenkins, Travis CI, etc.) also conveniently offer a deployment component. Using a combined build system/deployment tool will allow you to consolidate some of your delivery pipeline processes, but it may lack the flexibility and scripting capabilities of dedicated deployment tools like Capistrano or Octopus Deploy.

When choosing a tool for application deployment, consider:

  • What steps are required to deploy your application (straightforward vs complex)?
  • Do you require a tool that offers extensive scripting capabilities?
  • Does a tool require the DevOps team to learn a new language?
  • Ease of use and documentation
  • User preferences and prior experience with specific technologies
  • Release management. Does the tool offer code promotion between environments? (Dev >> Test >> Production)
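The last bullet, code promotion between environments, can be sketched as an ordered pipeline. The stage names follow the article’s Dev >> Test >> Production flow; the function itself is an illustrative model, not any real tool’s release API.

```python
# Sketch: release management as code promotion between environments.
# A build moves through each stage in order; skipping stages is not allowed.

STAGES = ["dev", "test", "production"]

def promote(current_stage):
    """Return the next environment a build may be promoted to."""
    idx = STAGES.index(current_stage)
    if idx == len(STAGES) - 1:
        raise ValueError("already in production; nothing to promote to")
    return STAGES[idx + 1]

print(promote("dev"))   # a passing build in dev moves to test
print(promote("test"))  # and from test to production
```

Tools with built-in release management enforce exactly this ordering, so a build can only reach production after passing through each earlier environment.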

Organizations that fully integrate DevOps principles within their software delivery processes place an immense amount of importance on monitoring. In a previous blog post, we shared how we were able to achieve a 95 percent increase in velocity by implementing DevOps principles with a client. A big piece of that came from the monitoring and alerting set up for the solution, such as notifying the team when performance was degrading on a critical data job for the application. With that in place, the team had a much better understanding of where to start addressing problems, rather than finding out from a user and starting at square one.

When we talk about monitoring, it’s usually in one of two areas: application or system monitoring. At the application level, metrics like requests per second, transactions per second, and response times are collected to gauge web-level performance. At the system level, metrics relating to the underlying hardware, such as CPU utilization and memory usage, are gathered. With cloud systems, we can also view the state of your resources: bandwidth utilization across web servers, table performance on a database, or custom monitors that give even further depth into your application’s execution.
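The metric-plus-alert pattern described above can be sketched as a threshold check. The metric names and threshold values are illustrative assumptions; real monitoring tools would feed live samples into logic like this and route alerts to channels such as Slack.

```python
# Sketch: threshold-based alerting on application and system metrics.
# Metric names and threshold values are illustrative only.

THRESHOLDS = {
    "response_time_ms": 500,   # application-level metric
    "cpu_utilization": 0.90,   # system-level metric
    "memory_usage": 0.85,      # system-level metric
}

def check_metrics(samples):
    """Return alert messages for any metric exceeding its threshold."""
    alerts = []
    for metric, value in samples.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {metric}={value} exceeds {limit}")
    return alerts

# A healthy sample raises nothing; a degraded one should page the team.
print(check_metrics({"response_time_ms": 120, "cpu_utilization": 0.40}))
print(check_metrics({"response_time_ms": 900, "memory_usage": 0.95}))
```

Alerting on these triggers is what lets a team start at the failing component instead of at square one.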

When examining available monitoring options to choose from, consider:

  • Is setup straightforward, and is the presented information intuitive?
  • Is analysis provided on the gathered metrics?
  • Is the software open source and does it offer an API for custom metric creation?
  • Are there notifications based on metric triggers? If so, are there third party integration points for collaboration mediums such as JIRA or Slack?
  • Is it straightforward to scale the number of managed instances both up and down?

Fully integrating DevOps principles within an organization’s software delivery pipeline can be challenging. For many team members who have used traditional software delivery techniques for years, the idea of deploying and releasing code to production servers multiple times per day may seem extremely far-fetched.

Considering this, it’s important to pick the right DevOps tools to integrate within the software delivery pipeline. Selecting the right tools will enable you to demonstrate the benefits of DevOps and alleviate the fear of change within an organization. Instead of rushing to introduce four or five new tools at once, you should begin by introducing tools that will bring the largest benefit to most people, as identified by your investigation into the system bottlenecks.

In addition to demonstrating the enormous benefits that DevOps tools can bring, it’s important to spend time educating team members about how the tools apply to the software delivery pipeline. If a new tool is simply thrown at a team with no instruction on how to use it, it’s highly likely that they will reject the tool and regress to their previous method of working. Instead, DevOps team members should schedule time to help others learn about new tools and answer any questions they may have. Doing so will provide enormous benefits, greatly increasing the chances that the adoption of new DevOps tools goes smoothly.

To learn more about DevOps best practices, check out our list of five things you can start doing right now that will get you on your way to DevOps and our post on why DevOps can have a huge impact on the efficiency of your SDLC.

Originally published at www.slalom.com


Boston based Cloud and Software Architect for @Slalom. Co-founder and editor of Slalom Technology. Father. Husband. And savior of countless digital planets.