The enterprise challenge to OpenStack adoption and how to address it

OpenStack software can rapidly give you access to the latest cloud technology and its benefits.

Developers love OpenStack for its open source benefits, its programmability, and its rapid innovation. Enterprises love OpenStack for its ability to accelerate the organizational mission, and for its flexibility in catering to a broad range of enterprise workloads and use cases. We love OpenStack for all of these great open source capabilities.

The need for flexibility, speed and agility

But before developers (the Service Consumers within the enterprise) can consume OpenStack, and before an enterprise can reap its bountiful benefits, Operators in the enterprise must first deploy it.

The challenge: for an enterprise, the barrier to OpenStack is not cash flow but expertise and time: the effort needed to deploy the first system easily and smoothly.

The solution: a flexible, agile and fast deployer. For an enterprise, the journey from zero to a fully operational, production-grade OpenStack cloud should ideally take no more than “2 cups of coffee”.

In the earlier days of OpenStack, much effort and fanfare went into building deployment tools. But many of those tools later either suffered from lack of maintenance or became Trojan horses for consulting services. An OpenStack cloud implementation should not be a multi-week or multi-month, consulting-services-laden affair!

How about building an OpenStack Deployer with all the configuration centralized and available in an easy-to-work-with format, with HA configuration included and Ceph deployment integrated? These capabilities would allow enterprises to rapidly and easily benefit from a full in-house OpenStack platform, while ensuring a smooth and flexible cloud system, with great cost efficiencies.
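To make the idea of a centralized, easy-to-work-with deployment configuration concrete, here is a minimal sketch of what such a descriptor and its pre-flight validation might look like. Every key name, host name, and validation rule below is an illustrative assumption, not an actual Sardina Systems format.

```python
# Hypothetical centralized deployment descriptor: one place describing
# the control plane, compute fleet, HA, and integrated Ceph storage.
DEPLOYMENT = {
    "controllers": ["ctl1", "ctl2", "ctl3"],  # HA control plane nodes
    "computes": ["cmp1", "cmp2"],
    "ha": {"enabled": True},
    "ceph": {
        "enabled": True,
        "replicas": 3,
        "osd_hosts": ["ctl3", "cmp1", "cmp2"],
    },
}

def validate(config):
    """Basic sanity checks a deployer could run before touching any host."""
    errors = []
    controllers = config["controllers"]
    if config["ha"]["enabled"] and len(controllers) < 3:
        errors.append("HA needs at least 3 controllers for quorum")
    if len(controllers) % 2 == 0:
        errors.append("use an odd controller count to avoid split-brain")
    ceph = config["ceph"]
    if ceph["enabled"] and len(ceph["osd_hosts"]) < ceph["replicas"]:
        errors.append("fewer OSD hosts than Ceph replicas")
    return errors

print(validate(DEPLOYMENT))  # → []  (descriptor is internally consistent)
```

The point of centralizing everything in one descriptor is that checks like these can run in seconds, before any installation work starts, which is what makes a “2 cups of coffee” deployment plausible.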

The resource management system

When speaking about a resource management system for OpenStack, we imagine a system that is designed to ensure high resource utilization while meeting business objectives.

The challenge: to achieve high resource utilization efficiency without compromising resource availability for business objectives, two situations need to be handled:

· given a new set of workloads to be initiated, where should they be placed, considering the live resource utilization of all existing workloads across the entire facility?

· given a set of workloads already running within the facility, how can they be packed onto the smallest number of servers that still satisfies their resource requirements, without impacting performance?

The solution: A system designed to analyze actual live resource utilization, taking into account:

· all hypervisors in the system, considering their capacity and

· all running workloads, considering their actual resource utilization

Using this approach, we can minimize the number of hypervisors used to host the workloads without constraining the workloads themselves. Hosting the workloads on an optimal number of servers lowers both OpEx and CapEx, including license and electricity costs, without compromising resource availability for business objectives.
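The packing side of this approach can be sketched with a classic first-fit-decreasing heuristic driven by measured utilization rather than reservations. The workload names, the 16-core capacity, and the 80% headroom threshold are all illustrative assumptions, not the actual placement algorithm.

```python
# Hedged sketch: first-fit-decreasing packing of workloads onto hypervisors
# based on *measured* core usage, not user-declared reservations.
def pack(workloads, hypervisor_capacity, headroom=0.8):
    """Return {host_index: [workload names]} using as few hosts as possible."""
    limit = hypervisor_capacity * headroom  # keep spare capacity for spikes
    hosts = []                              # each entry: [cores_used, [names]]
    # Place the biggest consumers first: classic first-fit-decreasing.
    for name, used in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host[0] + used <= limit:     # first host with enough room wins
                host[0] += used
                host[1].append(name)
                break
        else:
            hosts.append([used, [name]])    # no host fits: open a new one
    return {i: names for i, (used, names) in enumerate(hosts)}

# Measured core usage per VM on a fleet of 16-core hypervisors:
usage = {"db": 6.0, "web1": 2.5, "web2": 2.5, "batch": 9.0, "cache": 1.5}
print(pack(usage, hypervisor_capacity=16))
# → {0: ['batch', 'web1'], 1: ['db', 'web2', 'cache']}
```

Five workloads land on two hypervisors instead of five: because placement is driven by actual usage, consolidation still leaves headroom on each host.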

It would not make sense to rely solely on “reservation” data from the specifications a user (Service Consumer) provides: with that approach, a 4-core laptop could only ever run 4 applications! Nor would it make sense to blindly over-allocate: that approach merely lets the same laptop run a fixed multiple of 4 applications, regardless of what those applications actually consume.

The upgrade phase: zero downtime, maybe?

Think about the impact of having access to zero-downtime OpenStack upgrades. Any system that cannot be upgraded without downtime can be devastating for an enterprise, interrupting business activities.

The challenge: while OpenStack ships a new release every 6 months, delivering new innovations, features and capabilities, each release also introduces operational upgrade challenges for Operators.

The solution: keep up the momentum and upgrade with each OpenStack release to benefit from better performance and management capabilities, as well as improved security.

Get an open source solution capable of delivering zero-downtime OpenStack upgrades, maintaining high uptime and reliability throughout the upgrade process. By embracing this capability, an enterprise can address the operational upgrade challenges its Operators face. It should not be yet another consulting-services-laden effort!
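The general shape of a zero-downtime upgrade is a rolling loop: take one node out of rotation, upgrade it, verify the rest of the cluster is still serving traffic, then move on. The sketch below is an illustrative skeleton of that pattern; `drain`, `upgrade`, and `healthy` are hypothetical stand-ins for real orchestration steps (disabling a node in the load balancer, running the upgrade playbook, probing API endpoints), not any vendor's actual tooling.

```python
# Hedged sketch of a rolling upgrade loop over HA controller nodes.
def rolling_upgrade(nodes, drain, upgrade, healthy):
    """Upgrade nodes one by one; abort immediately if cluster health degrades."""
    done = []
    for node in nodes:
        drain(node)    # stop routing new requests to this node
        upgrade(node)  # apply the new release on this node only
        # The remaining nodes must keep serving traffic the whole time;
        # re-enabling the upgraded node is omitted here for brevity.
        if not healthy():
            raise RuntimeError(f"cluster unhealthy after upgrading {node}")
        done.append(node)
    return done

# Example wiring with no-op stand-ins that just record the call order:
log = []
result = rolling_upgrade(
    ["ctl1", "ctl2", "ctl3"],
    drain=lambda n: log.append(("drain", n)),
    upgrade=lambda n: log.append(("upgrade", n)),
    healthy=lambda: True,
)
print(result)  # → ['ctl1', 'ctl2', 'ctl3']
```

The design point is that the health check gates every step: the loop never has more than one node out of service, which is what keeps the control plane available throughout.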

This is our big goal for OpenStack at Sardina Systems: to give every enterprise access to OpenStack technology while building a flexible, reliable, scalable and innovative cloud.


Written by Mihaela Constantinescu
Woman in Tech, marketer, cat lover

Sardina Systems

A cloud platform vendor, building on OpenStack, Kubernetes, Ceph. Founded in 2014, Sardina Systems makes infrastructure invisible, elevating IT to focus on enterprise applications and services.
