Why DevOps Best Practices Won’t Work With “Old” Infrastructure

Kamesh Pemmaraju
Published in Cloudel · 6 min read · Mar 13, 2017

I don’t mean “old infrastructure” in terms of how many years it has been running, but in terms of how much it is holding you back from delivering software faster.

“Old Infrastructure” doesn’t meet the needs of your developers

Imagine this scenario: you are a company building modern software that is a key driver of your business. To support this, you have grown your infrastructure organically over time, adding servers, storage, networks, security, virtualization software, monitoring, alerting, and analytics tools, plus a plethora of other middleware, databases, messaging systems, and operating systems your developers need to build, test, and deliver their applications.

As a result of this unplanned organic growth, there is no consistency in the environment. You manage, maintain, and operate much of this environment manually. Your various specialist teams (storage, networking, security) form fiefdoms around their respective functional areas. Change becomes extremely hard. Automating across silos turns into a lot of glue and duct tape, which in turn becomes a maintenance nightmare. You over-purchase to meet demand and end up underutilizing resources. Your costs spiral out of control. You now find it a hopeless challenge to meet the needs of your developers, which was the primary purpose of doing all of this in the first place.

What a mess!

No wonder this “old infrastructure” cannot meet the agility and on-demand scale needs of your developers.

What you need is a modern platform that addresses the requirements of both your development and IT/operations teams.

Such a platform should give software development teams an easy way to consume on-demand compute, storage, and networking, as well as easy access to CI/CD tooling and software services that help them increase development throughput and shorten product delivery.

In addition, the platform should empower operations teams to manage, maintain, and operate the entire infrastructure environment with very few people, using smart software that drives automation and intelligence into the entire stack.

In short, it should break the silos within your IT environment and the silos between your development, QA, and operations teams. This in turn improves the velocity of software development, improves infrastructure utilization, increases overall operational efficiency, and reduces costs.

What you need is a solution that enables DevOps best practices.

Let us look at the key essentials of such a solution.

Self-service API-driven Infrastructure: A key attribute of DevOps best practice

Self-service, API-driven infrastructure is a fundamental requirement for Infrastructure as Code, a key attribute of DevOps best practice. It allows developers to write code (in their favorite programming language) that makes RESTful API calls to the underlying programmable infrastructure, both to handle initial deployments and configuration and to manage ongoing automated provisioning, autoscaling, monitoring, and alerting.
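
To make this concrete, here is a minimal sketch in Python of what such a call might look like. The endpoint, payload fields, and bearer token are hypothetical placeholders standing in for whatever API your platform exposes, not any particular vendor’s interface.

```python
# A minimal sketch of Infrastructure as Code against a hypothetical REST API.
# The endpoint, payload fields, and token are illustrative assumptions.
import time
import requests

API = "https://cloud.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def provision_vm(name, cpus=2, memory_gb=4):
    """Request a new VM and wait until the platform reports it as running."""
    resp = requests.post(f"{API}/instances", headers=HEADERS, json={
        "name": name,
        "cpus": cpus,
        "memory_gb": memory_gb,
        "image": "ubuntu-20.04",
    })
    resp.raise_for_status()
    vm_id = resp.json()["id"]

    # Poll until the instance is up; production code would add a timeout
    # and exponential backoff instead of a fixed sleep.
    while True:
        state = requests.get(f"{API}/instances/{vm_id}", headers=HEADERS).json()
        if state["status"] == "running":
            return vm_id
        time.sleep(5)

print(provision_vm("ci-runner-01"))
```

Because the whole workflow is just code, it can be version-controlled, reviewed, and rerun identically across development, staging, and production.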

All this automation removes confusing, error-prone manual steps from the entire application delivery process, including development, testing, staging, and production deployments. This in turn accelerates software delivery and increases quality.

However, building such a self-service API-driven infrastructure yourself is not easy. What pushes up the cost is the operational complexity and the experts needed to build and operate it.

To lower cost, look for a solution that is run not entirely by people but by smart software that drives automation and intelligence into the entire stack.

Very few solutions in the market handle the full lifecycle of infrastructure (deployment, upgrades, monitoring, alerting, and ongoing resource management) automatically while also allowing seamless migration of workloads between private environments and public clouds.

Additionally, if self-service is available as a SaaS-based delivery model, it is easy to add new features and workflows very quickly without IT having to perform a major upgrade. If your solution comes only as an on-premises install, ask how much work IT has to do to maintain that environment and how often new features will get added.

Hyper-converged Infrastructure (HCI): Eliminating silos in the infrastructure

As mentioned earlier, silos impede velocity because they lead to operational complexity, inconsistency in the environment, and a lack of automation. A hyper-converged cloud design with a software-centric architecture tightly integrates compute, storage, networking, and virtualization resources, along with other technologies, from scratch in a commodity hardware box supported by a single vendor. This approach eliminates silos and lowers cost and complexity. It also makes it easy to start small and grow on demand while staying tightly right-sized on capacity and cost.

Built-in browsable categorized application store

DevOps engineers need readily accessible Continuous Integration and Continuous Deployment (CI/CD) tools such as Jenkins, Git, Maven, and JUnit to automate development and test pipelines. Additionally, tools like Ansible, Puppet, and Chef help with automated configuration and lifecycle management of workloads. Application developers need to quickly integrate with and deploy middleware services like RabbitMQ and Redis, or storage backends like MySQL, Postgres, Cassandra, and MongoDB. Single-click deployment of such services, and of other complex multi-tier networked tools and applications, greatly automates, accelerates, and simplifies the development process.
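
As a rough illustration, “single-click” means a single call against the store. The /catalog endpoints, app names, and plan below are hypothetical assumptions sketching how such a store could be consumed programmatically.

```python
# A minimal sketch of deploying a catalog application with one API call.
# The /catalog endpoints, app names, and plan are illustrative assumptions.
import requests

API = "https://cloud.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}

# Browse one category of the store, then deploy MongoDB into a project.
apps = requests.get(f"{API}/catalog", headers=HEADERS,
                    params={"category": "databases"}).json()
print([app["name"] for app in apps])

deploy = requests.post(f"{API}/catalog/mongodb/deploy", headers=HEADERS, json={
    "project": "dev-team-a",
    "plan": "3-node-replica-set",  # multi-tier topology handled by the platform
})
deploy.raise_for_status()
print("Deployment started:", deploy.json()["deployment_id"])
```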

Seamless migration between private and public cloud

Developers and testers often need far more capacity than is available on-premises for scale testing, and they should be able to move their applications to a public cloud for such purposes. Alternatively, they may already have applications running on a public cloud that they want to bring back to their private cloud for cost savings or better performance. For these use cases, a private cloud solution should offer seamless bi-directional migration between the public cloud and the private cloud.
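
From a developer’s point of view, such a migration should be one request in either direction. The /migrate endpoint and target names below are hypothetical, sketching what that could look like under the assumptions above.

```python
# A minimal sketch of bi-directional workload migration through a
# hypothetical API; the endpoint and target names are illustrative.
import requests

API = "https://cloud.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}

def migrate(vm_id, target):
    """Ask the platform to migrate a VM to, e.g., 'aws' or back to 'private'."""
    resp = requests.post(f"{API}/instances/{vm_id}/migrate",
                         headers=HEADERS, json={"target": target})
    resp.raise_for_status()
    return resp.json()["task_id"]

# Burst a scale-test rig out to public cloud, then bring it home later.
task = migrate("vm-1234", target="aws")
print("Migration task:", task)
```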

Dashboards & visibility for troubleshooting

One of the key advantages of a private cloud is that you can get complete visibility across the infrastructure and applications. In a public cloud, you can only get VM-level stats and have no control or visibility below that. Make sure your private cloud solution can provide this visibility to developers directly. Many vendors only provide access to IT, thereby controlling what developers can do and making IT a bottleneck in the application development and deployment process.
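
Direct developer access might look like the sketch below: pulling both VM-level and host-level stats without filing a ticket. The /metrics endpoint and field names are assumptions for illustration.

```python
# A minimal sketch of a developer pulling metrics directly, without going
# through IT. The /metrics endpoint and field names are assumptions.
import requests

API = "https://cloud.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}

metrics = requests.get(f"{API}/instances/vm-1234/metrics",
                       headers=HEADERS, params={"window": "1h"}).json()

# VM-level stats plus the host-level view a public cloud would hide from you.
print(metrics["cpu_percent"], metrics["disk_iops"],
      metrics["host"]["cpu_percent"])
```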

Better insights to improve efficiency and capacity management

A cloud requires capacity planning, utilization monitoring, right-sizing of workloads, and detection of zombie VMs and unused resources. Look for a solution that comes with built-in analytics, management tools, and insights. Better still, look for a solution where you don’t have to install the management tool on-premises and it can deliver its value as a service.
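
The logic behind such analytics can be quite simple. Here is a rough sketch of zombie-VM detection; the thresholds, endpoints, and metric fields are illustrative assumptions, not a real product’s heuristics.

```python
# A minimal sketch of flagging likely "zombie" VMs from utilization data.
# Thresholds, endpoints, and metric field names are illustrative assumptions.
import requests

API = "https://cloud.example.com/api/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}

for vm in requests.get(f"{API}/instances", headers=HEADERS).json():
    stats = requests.get(f"{API}/instances/{vm['id']}/metrics",
                         headers=HEADERS, params={"window": "30d"}).json()
    # A month of near-zero CPU and network activity suggests a zombie.
    if stats["cpu_percent_avg"] < 2 and stats["network_kbps_avg"] < 1:
        print(f"Possible zombie: {vm['name']}, consider right-sizing or deleting")
```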

To meet the needs of application developers and development frameworks, and to create an efficient DevOps platform, companies should look at consistent converged architectures managed by smart software. This empowers application developers and operations teams to consume infrastructure the way they want, while keeping IT in control and relevant to your business success.

Integrated intelligence and insights delivered as a service ensure that you never have to use spreadsheets to decide what to purchase and when. Cloud solutions with smart software ensure that you never have to make a Level 1 support call again; your management software can do that for you.

This article first appeared on ZeroStack.com https://www.zerostack.com/why-devops-best-practices-wont-work-with-old-infrastructure/
