The Quest for a Truly Software Defined Datacenter

More than a single vendor's buzzword-filled vision, the Software Defined Data Center (SDDC) is the culmination of several significant trends that have disrupted the IT industry status quo, delivering order-of-magnitude improvements in datacenter efficiency, flexibility, and business agility.

Virtualization Transforms Datacenter Silos

Compute, Storage, and Networking have all undergone significant industry and architectural transformation with the advent of virtualization. Each underlying physical hardware stack that comprises the traditional datacenter has experienced dramatic disruption and subsequent standardization, followed by commoditization. The rise of differentiated software has allowed abstraction, consolidation, and optimization of the underlying hardware, fundamentally transforming the operational models and capital expenditure of compute, storage, and networking.

With the mainstream adoption of software defined compute (server virtualization), software defined storage (both hyper-converged form factors and software-only abstraction layers), and software defined networking (with network virtualization overlays and fabric-based SDN underlays), the stage has been set for a new chapter in datacenter orchestration, automation, and policy-based provisioning.

The Journey to the SDDC Starts with Server Virtualization

In 2001 VMware launched a game-changing platform for server virtualization: ESX, the first x86 bare-metal hypervisor, running directly on server hardware with no host operating system. The magic of ESX was its ability to run multiple isolated workloads on a single physical machine. At the time, the standard deployment model in datacenters around the world was one application per server, a practice adopted to avoid the conflicts and performance issues that arose when two workloads shared a single physical server. With Moore's Law powering increasingly capable, lower-cost servers, IT teams began to realize that each server running a single application was operating at only 10–15% utilization. In other words, datacenters around the world were powering, cooling, and managing servers that were 85% underutilized!

Server Consolidation — The “Killer Use Case” of Server Virtualization

Once IT organizations started down the path of server virtualization with ESX, they quickly realized the benefits of server consolidation: the ability to take an application, load it into a software-based virtual machine (VM) abstracted from the hardware, and stack multiple VMs on each physical server to maximize use of its CPU capacity. This effort began in test and development environments, but quickly spread to production workloads. Because each VM is isolated and encapsulated from every other VM, workloads could share a server with far less of the contention and conflict that had forced the one-application-per-server model. Consolidation ratios became a badge of honor for server teams going through the virtualization transformation: 10 virtual servers on one physical box, then 20:1, even 50:1 in desktop virtualization use cases. As server architecture evolved to multi-core and ever more powerful physical boxes, the virtualization layer harnessed otherwise trapped server potential while reducing energy consumption and heating and cooling challenges. The CapEx savings funded virtualization licenses, which in turn drove further consolidation and savings: a virtuous cycle that transformed VMware into a dominant new IT vendor.
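To make the consolidation arithmetic concrete, here is a back-of-the-envelope sketch in Python. It is purely illustrative, not drawn from any vendor tool; the utilization figures are the rough ones cited above:

```python
import math

# Illustrative numbers only: one app per server, pre-virtualization.
physical_servers = 100
avg_utilization = 0.12        # 10-15% CPU utilization was typical
target_utilization = 0.70     # a conservative post-consolidation target

# The same aggregate workload, repacked onto fewer, busier hosts.
hosts_needed = math.ceil(physical_servers * avg_utilization / target_utilization)
ratio = physical_servers / hosts_needed

print(f"{physical_servers} hosts -> {hosts_needed} hosts (~{ratio:.0f}:1 consolidation)")
# 100 hosts -> 18 hosts (~6:1 consolidation)
```

Even with a conservative utilization target, the host count collapses, which is exactly the CapEx arithmetic behind the virtuous cycle described above.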

vMotion and the End Of Planned Downtime

When VMware launched vCenter, a central management console, in 2003, enabling centralized management of VMs across all servers in a single management domain, another benefit of abstracting virtual machines from physical servers became clear: a running VM could be moved across the network from one physical server to another through a new technique called vMotion. With the ability to provision VMs centrally and move workloads across different physical infrastructure from a single console, the implications for datacenter operations were unmistakable. Planned downtime for server maintenance became obsolete almost overnight. Server admins gained the ability to vMotion all workloads off a physical server, power it down, perform maintenance or upgrades, then vMotion the VMs back onto the asset. vMotion opened a new era of agility and flexibility for the compute silo of the datacenter, while vCenter enabled straightforward programmability and centralized VM provisioning: two critical enabling steps on the journey to the SDDC.
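As an illustration of that workflow, here is a minimal sketch of triggering a vMotion with pyvmomi, VMware's open-source Python SDK for the vSphere API. The vCenter address, credentials, and VM and host names are hypothetical placeholders, and error handling is omitted:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab convenience only; validate certificates in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "web-01")
target_host = find_by_name(vim.HostSystem, "esxi-02.example.com")

# Live-migrate the running VM: compute moves, shared storage stays put.
task = vm.MigrateVM_Task(pool=None, host=target_host,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority,
                         state=None)
WaitForTask(task)
Disconnect(si)
```

Looping the same call over every VM on a host is the essence of the evacuate, maintain, and return pattern described above.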

Proliferation of Hypervisor Choices

While VMware was the first to commercialize server hypervisors, the IT industry responded with multiple alternatives, given the strategic position hypervisors occupy in the IT stack. Microsoft in particular, a vendor accustomed to controlling strategic elements of the IT stack, jumped into the server virtualization market with the launch of Hyper-V in 2008. KVM, a Linux-based hypervisor, rounded out the mainstream approaches; Red Hat acquired Qumranet, the company behind KVM, in 2008 and subsequently made KVM part of its Linux distribution. Each of these hypervisors powers a different IT ecosystem and serves as the cornerstone of a competing software defined data center vision: VMware vCloud, Microsoft Azure, and OpenStack, respectively.

Disruption Comes To Storage

While dramatic changes and architectural disruption were transforming how IT managed compute resources from 2000–2010, the storage silo of the datacenter was evolving at a much slower rate. Industry heavyweight EMC, which acquired VMware in 2004, was and remains the dominant vendor in the space. In this giant market, worth in excess of $20 billion per year, the Storage Area Network (SAN) was the predominant technology and spinning disk the standard medium for storing data. The industry incumbents, including EMC, NetApp, HP, IBM, Dell, HDS, and Fujitsu, competed primarily on incremental hardware innovation (increasing the speed of rotating disks from 7,200 RPM to 10,000 RPM to 15,000 RPM) and software innovation in data services such as de-duplication, snapshots, clones, high availability, and disaster recovery. The storage wars were fought by industry giants as trench warfare, with an ebb and flow of small market share gains year in, year out.

More Than a Flash in The Pan

The traditional battle lines of the storage industry began to give way with a new wave of venture-backed startups who had watched disruption transform compute and correctly identified flash storage, with its order-of-magnitude improvements in IOPS performance and its plummeting price curve, as an industry-disrupting trend. Initially, with flash priced at a premium, promising early start-ups bet on hybrid architectures: a small amount of flash in a performance tier backed by slower, cheaper spinning disk for data at rest. Nimble Storage was the poster child for this approach and leapt out to an early lead in hybrid storage before incumbents launched their own hybrid solutions or acquired competing startups to fill out their product lines. Several startups made an even bolder bet on all-flash arrays; Pure Storage is now synonymous with this approach. The key takeaway is that an entire next-generation architecture has emerged with order-of-magnitude improvements in performance, while simultaneously driving down the cost per GB of capacity.

Per-VM Policy Comes to Storage

Despite the torrid pace of investment, innovation, and IOPS performance gains in storage over the past 10 years, a fundamental mismatch persists between the storage world and the server virtualization world: LUNs vs. VMs. The traditional storage unit of management is the LUN (logical unit number), while in the server virtualization world the unit of management is the individual VM. On traditional arrays it is standard practice to provision multiple VMs per LUN, which makes storage operations like backup, recovery, and migration complex for virtualized workloads. VMware has traditionally been a closed platform with a limited set of published storage APIs that all traditional and start-up storage players have been forced to write to. Among more than a dozen next-generation storage start-ups, Tintri and Maxta are two notable exceptions that built their storage management software around a per-VM approach. In 2015 VMware released the vVols API, which gives traditional storage arrays a way to plug into the VMware platform and gain per-VM management, and storage vendors are now developing a wide variety of systems to take advantage of vVols capabilities. The ability to manage the storage silo at a per-VM level is another critical milestone on the journey toward the SDDC.

The Storage Wars Spill Over: Convergence Leads to Hyper-convergence

The widespread adoption of server virtualization in datacenters all over the globe has caused ripple effects in adjacent silos of the data center, specifically storage and networking. One of the early reactions to this wave of change was the move toward converged systems. Among the most popular were the Vblock systems from VCE, a joint venture of Cisco, EMC, and VMware that combined Cisco UCS compute and networking with EMC storage and VMware virtualization in a single package. The concept proved so popular that the joint venture vaulted to a $1 billion annual run rate and adoption by 75% of the Fortune 500 in less than 5 years.

Start-ups like Nutanix, SimpliVity, and Atlantis Computing, along with VMware's own VSAN, aimed to build on the success of the converged systems approach, with one important omission: the SAN. Converged compute, storage, and virtualization, packaged on scale-out, lower-cost servers with local storage, has created a new billion-dollar-plus annual market in hyper-convergence.

Innovation Comes to Networking

Transformations in both the compute and storage silos of the datacenter have achieved critical mass. Change has come more slowly to the traditionally conservative and risk-averse networking silo. This is ironic given that the Software-Defined Networking (SDN) movement grew out of research at Stanford University in the late 2000s, research that led to the creation of early SDN pioneers Nicira and Big Switch Networks.

Traditional networking has been deployed and managed one switch at a time, in support of the application, VM, and storage infrastructure. Just as the storage industry is led by EMC, the networking industry is dominated by industry titan Cisco Systems. For the past 20 years, the traditional chassis architecture, with a supervisor and a backplane connected to individual line cards, has dominated networking.

But just as dominant players in compute and storage have been disrupted, emerging approaches to datacenter networking are beginning to impact the larger networking market. Software Defined Networking (SDN) aims to separate the control plane from the data plane: to pull the management of individual switches into centralized software and abstract that software from the physical switch hardware. Hyperscale companies like Facebook, Google, and Amazon realized that traditional networking architectures were too expensive and difficult to manage at the unique scale of their datacenters. These companies pioneered network abstraction, centralized software management, and dramatically lower-cost switch hardware to realize order-of-magnitude improvements in network operations, simplification, and agility.
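To make the control plane/data plane split tangible, here is a minimal sketch using the open-source Ryu OpenFlow controller (an illustrative choice; neither Ryu nor OpenFlow 1.3 is named in this article). The switch keeps forwarding packets, while this centrally running Python app decides which flow rules the switch carries:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalController(app_manager.RyuApp):
    """Centralized control plane: programs every switch that connects to it."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath                # a handle to one switch's data plane
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss rule: any packet the switch cannot match is punted to
        # the controller, where centralized software decides its fate.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run with `ryu-manager` against any OpenFlow 1.3 switch (Open vSwitch in a Mininet lab works well): the forwarding hardware stays simple and cheap, while the intelligence lives in replaceable software, which is precisely the economics the hyperscalers were chasing.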

The Final Datacenter Silo Becomes Software Defined

Just as compute and storage have attracted venture capital to fund innovative start-ups, networking has drawn significant interest and investment since 2010.

Dominant vendors surveyed the market and began to make strategic bets on macro IT trends, including the acceleration of public cloud outsourcing and the anticipated need for tighter integration in proprietary private cloud stacks. In 2012 VMware, EMC's majority-owned subsidiary, paid approximately $1.26 billion for Nicira, a software-defined networking startup: a traditional storage company with the leading virtualization platform was making a strategic move into networking for the first time. Cisco countered with an ill-fated acquisition of WhipTail, an all-flash storage start-up, and then announced its own hyper-converged storage offering in March 2016. EMC and Cisco drifted from partnership toward open competition, a shift exacerbated by Dell's $67 billion acquisition of EMC and VMware. A race has begun to optimize entire datacenter stacks: compute, storage, networking, virtualization, and orchestration.

The SDN category is rightly recognized by the investment community as one of the key areas of the datacenter in need of transformation. It has evolved into two camps: network virtualization overlay providers, including VMware NSX (the product of the Nicira acquisition), PLUMgrid, Nuage Networks, and Juniper Contrail, and SDN underlay vendors, including Cisco ACI and Big Switch Networks.

The Last Datacenter Silo Transformation Required for the SDDC: Networking

In order to achieve the vision of the SDDC, with policy-based automated provisioning, each silo in the datacenter (compute, storage, networking) requires an intelligent abstraction layer to separate hardware from software. With VMware vCenter, Microsoft SCVMM, and OpenStack, mature orchestration layers now exist to help enterprises achieve increased infrastructure utilization, policy-based provisioning, and simplified management. Traditional networking approaches have been slow to evolve toward a true SDDC infrastructure, but with providers like VMware NSX, Cisco ACI, and Big Switch Networks, networking innovation is finally bringing automation to this last silo of the data center.

The Arrival of Containers: Younger, Smaller, Faster

Virtualization is fundamentally transforming each silo of the data center, and virtual machines have been at the center of that transformation. However, we may be living in the golden age of VMs, as a young but transformational technology gains early adoption in forward-thinking datacenters: containers. This approach, spearheaded by Docker, Kubernetes, and Mesos, allows workloads to be provisioned and deployed in seconds, with far lower overhead than VMs. For large-scale datacenters with many dynamic workloads (for example, a streaming service like Netflix delivering on-demand content, with each subscriber session served by its own container), containers are a much more agile and flexible approach to content delivery than traditional virtualized environments. The container revolution is in its infancy, but it is one of the most hyped technologies to arrive on the scene since the VM.
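To show just how lightweight that provisioning is, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes a local Docker daemon and is purely illustrative, not how any particular streaming service is built:

```python
import time
import docker  # pip install docker

client = docker.from_env()

start = time.monotonic()
# No guest OS to boot: the container shares the host kernel.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"],
                               remove=True)
elapsed = time.monotonic() - start

print(output.decode().strip())                    # hello from a container
print(f"started, ran, and removed in {elapsed:.2f}s")
```

Compare that round trip of a second or two (once the image is pulled) with the minutes it takes to boot a full VM, and the appeal for short-lived, per-session workloads becomes obvious.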

Conclusion

The SDDC is increasingly accessible to enterprises deploying private cloud infrastructure. Modern scale-out architectures, software abstraction, and intelligent orchestration are bringing the vision of the Software Defined Data Center to life, delivering on the promise of agile IT at the pace of modern business.
