2019 Emerging Technology Preview: Part I — On-Premise Infrastructure
Recently, I saw an old ad outlining the benefits of Betamax, and I was reminded of how terrible we are at predicting the future. The ad positioned Beta as superior to its then-competition, VHS, and in many respects it was. So why did it lose, and why were the pundits wrong? Because even when discrete data seems to reveal a clear path forward, it's hard to predict human behavior — in this case, adoption.
I've been in the IT industry for over 25 years. In that time, I've seen a lot of technology trends come and go. Some were obvious and foreshadowed, but several of the most impactful things that have shaped IT and our world we didn't see coming. Take the Internet, for example. Even though academia and early pioneers had a vision for what the ARPANET would become, the scale of what they were building could not have been foreseen. Large innovations surprise us while we're busy believing we know exactly where this is all going.
Near the end of each year, I enjoy reading people's best guesses for the year ahead. I say 'best guesses' because these sorts of predictions held much more water when the pace of technological change was slower and more localized. Some guesses are wishful marketing from companies that stand to benefit if those predictions come true. Others are formed out of fear of a changing of the guard that could lessen the relevance of a company, group or individual. The ones I really enjoy come from thoughtful people who recognize that predicting the future is mostly about understanding the somewhat unpredictable nature of human behavior.
Does Intel engineer more performant chips because it's the next logical thing for them to do? No. They do it because the human appetite to visualize and process data has grown to levels where existing CPUs are the limiting factor. We move technology forward for humans, because of humans. There's a purpose.
Does this mean that we should stop trying to predict what will happen? That we should stop listening to predictors? Absolutely not. These yearly efforts to spot trends stir a collective curiosity and wonder in the industry about what is possible. More importantly, the conjecture provides data points for people to consider. I take a slightly different approach, pulling the lens closer to my field of vision to see not what we will encounter a mile down the road but what the next few steps look like. Everything happens in the present moment, the step we're actively taking. We don't live in the future.
Certainly, we need to understand what may be out in the distance and acknowledge that the destination is worth traveling to, but we cannot let it distract us from the imminence of the next step we need to take. Companies that live in the future often become paralyzed by indecision, overwhelmed by the number of paths to follow. The wisest companies blend what they know of today and tomorrow into a very calculable next step. I help companies make decisions using data that considers the current, near-term and future of what we know today.
So — as the year comes to an end and the predictions for 2019 start — I thought I would share, through a multi-part blog, some active data points to consider from different areas of technology. These are observations based on what I’ve experienced, seen and read about in our industry. They are not predictions and certainly not an exhaustive list. They are things to consider when taking the next step.
Part I — Modernizing On-Premise Infrastructure
Wait what?! You’re starting off with boring old infrastructure after that lead-in?
Yes. Yes, I am.
I think the hype of our industry has distracted us from the exciting transformations happening inside the workhorse of IT today. It's not sexy and doesn't fit neatly into a headline, but let's be honest: everything we have accomplished in technology is made possible by processing, storing and moving data. #Respect
On-premise infrastructure is in an "adapt or die" transformation, driven by environments like public cloud that are changing how technology is consumed. Self-provisioned, highly elastic and easily consumable services require infrastructure components to be open and programmable. And while we've grown used to the unique feature sets of servers, storage and networking components, we need a level of unity among them to deliver a foundational set of capabilities at parity with public cloud IaaS services. The challenge for vendors is to agree on a common set of functions that enable openness and interoperability while continuing to expose the unique, valuable features that drive differentiation.
As we explore the fundamental building blocks of modern data systems — compute, storage and network — we will see a repeating pattern of opening, integrating and unifying these devices, with a healthy dose of innovation to provide unique value.
Compute and Processing
In the area of compute and processing, the general-purpose x86 CPU is no longer the sole operator. Purpose-built processors have spawned from the necessity of changing workloads and use cases. Mobile devices taught us that an entirely new ecosystem could drive the need for a different processing architecture. In that case, the need for a reduced instruction set, a smaller physical footprint and lower power consumption paved the way for ARM processors. And while this has been ARM's initial strength, we now see momentum in powering a whole new line of connected IoT and edge devices.
The growth of connected devices has in turn produced more data and more analysis, exposing weaknesses in the x86 platform. And while general-purpose CPUs could perform this work, we've seen the rise of specialty processors with parallel architectures, such as GPUs (graphics processing units), which excel at processing large quantities of data. These architectures are driving new capabilities in areas like machine learning and AI. Other specialty chips, including FPGAs, ASICs and TPUs, have also been maturing in the market, offering architectural choices that best suit the work to be done.
Compute and Processing Trends:
- Look for strong traction with specialty processing (GPU, FPGA, ASIC, TPU) in workloads where advanced analytics and machine learning require large-scale data processing. Additionally, edge computing, IoT and real-time analytics are adopting a mixture of processing options with the main goal of "right tool, right job".
- Intel and AMD will continue to carry large responsibilities for traditional x86 enhancements in cores, clock speeds, power consumption and caching. They will join new players such as Nvidia, Google and others in the quest to offer choice in specialty processing. ARM processors will continue to expand beyond their initial use cases and niche areas as IoT and edge use cases grow.
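To make the "right tool, right job" idea concrete, here is a minimal Python sketch of the data-parallel pattern that GPUs and other specialty processors exploit: independent chunks of data handed to independent workers. The thread pool is purely illustrative (real gains require genuinely parallel hardware), and all names are my own.

```python
# Illustrative only: mimic the data-parallel pattern of specialty processors
# by splitting work across a pool of workers. A GPU applies the same idea
# with thousands of hardware lanes instead of a handful of threads.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker processes an independent slice; no coordination is needed.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(range(100)))  # 328350
```

The workload only benefits from this shape because the chunks are independent; that independence is exactly what makes a problem a good fit for parallel silicon.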
Storage
The interconnectedness of infrastructure means that if you adjust one area, you likely impact another. Changes in processing and data volume place heavier demands on storage infrastructure, which is itself transitioning away from the magnetic mainstay of the past few decades. Although overall usage is declining, magnetic HDDs aren't dead. In fact, we continue to see incremental upgrades from vendors packing a whopping 16–18TB into a single hard drive. Look for upgrades to capacities, buffers and caching as IT organizations and cloud providers position this media for cold storage and archive.
If you've been paying attention over the past several years, the rise of flash storage and its dominance in primary storage environments is no surprise. With capacities upwards of 100TB in a single drive and better wear-leveling technology, SSDs have become the standard for server and shared storage systems in the data center.
The flash revolution has placed a spotlight on the age of existing storage protocols (SAS and SATA), which struggle to provide the bandwidth needed to fully utilize flash. NVMe is positioned as the logical successor in both servers and shared storage arrays, offering the plumbing needed to fully realize flash's IO capabilities. And while this is a great step, applications and workloads are eager for more performance and aim to push NVMe across fabrics, using technologies like NVMe over Fabrics (NVMe-oF) or PCIe to extend this capability.
Even with these advancements, application and data needs are accelerating at a pace that requires more processing, more storage and faster recall of information. One solution that has been discussed for some time is persistent memory. Memory-based storage technology has been steadily closing in on the speed of DRAM while providing the data protection of persistent, non-volatile storage. This innovation opens the door to in-memory computing use cases where entire applications and databases can run inside memory without fear of data loss.
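As a rough analogy for what persistent memory offers, the sketch below uses a memory-mapped file: writes happen through plain memory operations yet survive the process. True persistent memory is byte-addressable without the file-system detour, so treat this purely as an illustration; the path and function names are my own.

```python
# Rough analogy only: a memory-mapped file gives memory-style access to
# bytes that persist. Real persistent memory removes the file-system layer.
import mmap
import os

def persist(path, offset, data, size=4096):
    """Write bytes through a memory mapping and flush them to stable storage."""
    if not os.path.exists(path) or os.path.getsize(path) < size:
        with open(path, "wb") as f:
            f.write(b"\x00" * size)            # pre-size the backing file
    with open(path, "r+b") as f:
        mem = mmap.mmap(f.fileno(), size)
        mem[offset:offset + len(data)] = data  # a plain memory write
        mem.flush()                            # force bytes to stable media
        mem.close()

def recall(path, offset, length):
    """Read the bytes back through a fresh read-only mapping."""
    with open(path, "rb") as f:
        mem = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        data = bytes(mem[offset:offset + length])
        mem.close()
        return data
```

A value written by `persist` can be read back later, even by a separate process, via `recall` — the survive-a-restart property that in-memory databases need.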
Combine the aforementioned technologies with innovations in distributed file systems and it's clear why software-defined and hyper-converged storage systems have moved from early to mainstream adoption, positioning themselves as one option for a solid storage foundation. Both tout the benefits of performant hardware pooled together and managed as a single system. These platforms can play a critical role in establishing an open, highly flexible and programmatic approach to storage and compute while remaining open to integration with other areas of the infrastructure stack. All of these innovations give IT organizations the choice to place the right mixture of storage types to meet performance, availability, capacity and cost requirements.
Storage Trends:
- Flash storage will further its dominance of market share in both server-side and shared storage arrays. Magnetic disk will continue to be used for cold storage but will see overall decline due to the availability and cost efficiency of flash.
- NVMe usage will become commonplace in highly dense flash environments, and we will see the development of standards to extend the protocol over existing network infrastructure.
- Memory-based storage will begin to open up server-side acceleration of high-performance workloads and introduce new application architectures as a result. Look for startups to attack in-memory database technologies specifically. Traditional storage array vendors will expand their use of memory-based storage inside shared arrays, offering a refreshed approach to caching and tiering.
- Finally, the shift to software-controlled storage will take center stage as hyper-converged and traditional shared storage vendors seek rapid feature-release schedules, transforming themselves into more software-centric companies.
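The "right mixture of storage types" decision above can be sketched as a simple placement rule: pick the cheapest tier that still meets the workload's latency requirement. The tier names and numbers below are illustrative assumptions, not vendor figures.

```python
# Illustrative tier catalog: (name, approx access latency in µs, relative $/GB).
# The numbers are rough assumptions for the sake of the example.
TIERS = [
    ("persistent-memory", 1,      10.0),
    ("nvme-flash",        100,    1.0),
    ("sata-flash",        500,    0.5),
    ("magnetic-hdd",      10_000, 0.05),
]

def place_workload(max_latency_us, budget_per_gb):
    """Return the cheapest tier meeting both latency and cost requirements."""
    fits = [t for t in TIERS
            if t[1] <= max_latency_us and t[2] <= budget_per_gb]
    return min(fits, key=lambda t: t[2])[0] if fits else None

print(place_workload(200, 2.0))        # nvme-flash
print(place_workload(1_000_000, 0.1))  # magnetic-hdd
```

A real placement engine would weigh availability and capacity alongside latency and cost, but the core trade-off looks much like this.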
Networking
With innovation optimizing processing and storage, the hunger naturally turns to moving data in and out of infrastructure more efficiently. Networking, the nervous system of infrastructure, will no doubt rise to the occasion with the release of 400G, 800G and eventually 1.6T networking — but raw speed isn't the most exciting part of this story. Provisioning and operations are going through overdue transformations as the network opens up to software programmability and automation, leaving behind closed, CLI-only systems. Common automation tools can now reach deep into device command sets to instantiate application services.
Automation alone doesn't lessen the underlying complexity of networking, however, which is why another hot area of technology is starting to take shape. Intent-based (or intent-driven) networking refers to the concept of declaring a desired end state and letting software and algorithms, which understand the active running state, make it so. While this sounds like advanced automation, it differs in its use of a much simpler language to describe the future state of the network, abstracting away much of the complexity of the command sets. The software determines which commands are needed to move the network from its current state to the desired one, removing the burden of mastering the complex languages that network devices require to instantiate a service.
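The current-to-desired translation at the heart of intent-based networking can be sketched as a simple state diff. The VLAN model and command strings below are invented for illustration; real intent systems model far richer state.

```python
# Toy intent engine: diff desired state against running state and emit the
# (hypothetical) commands that close the gap. State here is just VLAN -> name.
def plan(current, desired):
    commands = []
    for vlan, name in desired.items():
        if vlan not in current:
            commands.append(f"create vlan {vlan} name {name}")
        elif current[vlan] != name:
            commands.append(f"rename vlan {vlan} name {name}")
    for vlan in current:
        if vlan not in desired:
            commands.append(f"delete vlan {vlan}")
    return commands

running = {10: "users", 20: "voice"}
intent  = {10: "users", 30: "iot"}
print(plan(running, intent))  # ['create vlan 30 name iot', 'delete vlan 20']
```

The operator only states the end goal (`intent`); the device-specific commands fall out of the diff, which is what removes the burden of knowing each command set.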
In addition to advancements in operations, application architectures such as microservices are continuing the shift of traffic from north/south patterns to east/west. Distributed services place more reliance on the network to interconnect the parts of an application, requiring tools to optimize and secure the traffic between its processes. Virtual networking has matured as the first-line worker to handle these requests. Integration with existing hardware switching fabrics brings both functionality and performance into reach, placing each traffic-direction decision in the hands of the best tool.
Networking Trends:
- 2019 will bring 400G switching products from all data center networking vendors. We anticipate 800G previews by the end of the year, with 1.6T switching on the horizon over the next few years.
- Device programmability has matured and opened up over the past few years, with a fairly firm foundation of API and programmatic access. Refresh cycles will equip the data center network with autonomous or fabric-based control. Large heterogeneous networks will benefit from automation platforms that use data modeling to normalize differences in commands and capabilities into a common definition.
- While automation will certainly help network operations, the complexity of the network still exists underneath. Intent-driven networking will begin to take shape by abstracting the complexity of command sets into a language that is much easier to provision and consume. Startups and incumbent network vendors will attack this space to up-level the operation of their devices.
- Virtual and hardware networking will integrate more tightly, nearly erasing the line that divides them and providing organizations with better end-to-end definition and operations.
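The data-modeling normalization called out in the trends above can be sketched as a mapping from vendor-specific keys to one common definition. The vendor names and key names here are hypothetical.

```python
# Hypothetical per-vendor key maps: common field -> vendor-native field.
VENDOR_KEYS = {
    "vendor_a": {"hostname": "sysName",   "mtu": "linkMtu"},
    "vendor_b": {"hostname": "host-name", "mtu": "interface-mtu"},
}

def normalize(vendor, raw):
    """Translate a vendor-specific record into the common model."""
    mapping = VENDOR_KEYS[vendor]
    return {common: raw[native] for common, native in mapping.items()}

# Two different vendors, one common shape the automation layer can consume.
a = normalize("vendor_a", {"sysName": "core1", "linkMtu": 9000})
b = normalize("vendor_b", {"host-name": "core2", "interface-mtu": 9000})
print(a)  # {'hostname': 'core1', 'mtu': 9000}
```

Once every device speaks the common model, one playbook can drive a heterogeneous network instead of one playbook per vendor.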
2019 Goal of Modernized Infrastructure: The 2019 goal for infrastructure should be to unify the components into a highly scalable, open and flexible fabric of capability, where the uniqueness of each layer is exposed as a service upward in the IT stack. Infrastructure devices should be open, extensible, standards-adherent and easily integrated with the rest of the resources. To achieve the nirvana of hybrid cloud, we need to elevate our on-premise environments to parity with public cloud IaaS. Businesses that achieve this parity can focus on servicing the needs of applications and giving the business the ability to adjust levers for cost, performance, security and capability in the placement of workloads.
Next Up: 2019 Trends: Part II — Expansion of Hybrid and Multi-Cloud Environments
Equipped with an understanding of how to modernize on-premise infrastructure, my next blog segment will focus on hybrid and multi-cloud environments and why more of the industry believes this is the long-term strategy. Combining capabilities across multiple clouds offers the business flexibility and choice while optimizing cost, security and user experience. With so many choices, however, the path forward is less prescriptive and can be difficult to navigate. The journey can be further inhibited if organizations jump on the hybrid and multi-cloud train without addressing the fundamental requirements of the underlying infrastructure.
Join me in our next segment where I’ll outline the current state of Hybrid and Multi-Cloud and provide some details about what trends we see as we approach 2019.