The Infrastructure Processing Unit for the Next Generation Cloud Services

Author: Brad Burres, Intel Fellow, Ethernet Products Group Chief Architect


--

Introduction

General-purpose processors like Intel Xeon are the lifeblood of the data center; they are where most value-added applications execute. But it’s clear that more targeted, domain-specific processors are being adopted in the cloud, enterprise, and edge wherever their advantages can be shown to outweigh the costs.

To justify the investment in new hardware and software, a domain-specific processor must clear a high bar. I believe the Infrastructure Processing Unit (IPU) clears that bar and is emerging as the most useful domain-specific processor. Let’s look at Intel’s first ASIC-IPU, Mount Evans, to understand why.

Performance

The single biggest argument for any domain-specific processor is performance, and that holds true for the IPU as well. General-purpose CPU core counts are scaling up dramatically, driving huge changes in infrastructure needs. Today’s CPUs spend significant effort handling infrastructure processing instead of value-adding application processing. As network speeds scale from 10G to 25G to 100G to 200G and storage volumes grow alongside them, the way we handle data processing needs to evolve and scale. A processing unit needs to meet these higher performance demands while understanding the type of data and how it is processed.

Intel’s Mount Evans pushes performance by pairing purpose-built hardware offloads with right-sized Arm N1 cores for the subset of workloads that make sense on the IPU. Our chip has full vSwitch offload and is capable of supporting 200M packets per second. It can fully saturate a 200G network with remote NVMe storage operations. Mount Evans also adds crypto and compression capabilities to make sure every packet sent across the network is secure and to reduce storage media demands.
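To put the 200M packets-per-second figure in context, here is a rough back-of-the-envelope sketch (illustrative arithmetic only, not an Intel measurement) of the packet rates a 200G link demands at different frame sizes:

```python
# Rough line-rate math for a 200 Gb/s link (illustrative only).
# Every Ethernet frame also occupies 20 bytes of wire overhead:
# 7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.

LINK_BPS = 200e9          # 200 Gb/s link
WIRE_OVERHEAD_BYTES = 20  # preamble + SFD + inter-frame gap

def packets_per_second(frame_bytes: int, link_bps: float = LINK_BPS) -> float:
    """Maximum packet rate the link can sustain at a given frame size."""
    bits_on_wire = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return link_bps / bits_on_wire

for size in (64, 128, 256, 512, 1500):
    print(f"{size:>5}B frames: {packets_per_second(size) / 1e6:7.1f} Mpps")

# At 64B frames the link tops out near ~298 Mpps, so forwarding 200M
# packets per second through an offloaded vSwitch means operating close
# to small-packet line rate without touching host CPU cores.
```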

While any one of these things is possible on the CPU, combining them all while also doing interesting application work isn’t something the CPU can do well. Intel’s IPU does all of this and more in a performant way, freeing the Xeon to run the value-added applications. Sometimes the promised performance advantages of specialized processors don’t materialize because they were designed around very specific benchmarks. Mount Evans doesn’t suffer from this issue: Intel’s IPU was designed in partnership with a top cloud provider to deliver top performance under real-world workloads and conditions.

Wide Applicability

I debated whether to call this section “wide applicability” or “TCO”. But the two are related, and both apply. Adopting a domain-specific processor like Mount Evans only makes sense when the investment can be applied broadly and when the Total Cost of Ownership (TCO) is materially better than the baseline.

In the cloud, infrastructure workloads exist everywhere. Previously published data from Facebook and Google shows this “infrastructure tax” ranging anywhere from 20% to 80% of a workload. As the enterprise and edge become more cloudified, the same tax applies as their workloads get partitioned similarly.
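To make the “infrastructure tax” concrete, here is a simple illustrative calculation (the 20% to 80% range comes from the figures cited above; the 64-core host is an assumed example, not a specific product):

```python
# Illustrative sketch of host cores consumed by the infrastructure tax.
# The 20-80% range is the published range cited above; the core count is
# an assumed example for illustration.

HOST_CORES = 64  # assumed host CPU core count

def cores_spent_on_infrastructure(infra_tax: float, cores: int = HOST_CORES) -> float:
    """Host cores spent on infrastructure work that an IPU could absorb."""
    return infra_tax * cores

for tax in (0.20, 0.50, 0.80):
    freed = cores_spent_on_infrastructure(tax)
    print(f"infrastructure tax {tax:.0%}: ~{freed:.0f} of {HOST_CORES} cores "
          f"could be returned to tenant workloads")
```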

In Mount Evans, we’ve accelerated these infrastructure workloads with very flexible hardware that can be adapted to work across cloud, enterprise, and edge. And we’ve utilized an array of Arm cores to provide power-efficient performance across control planes and other infrastructure applications. Together, the Mount Evans design can tackle the infrastructure needs of many data centers and should provide a meaningful TCO advantage.

Emerging Use Cases

While performance, wide applicability, and TCO are clear motivators for using an IPU, the adoption of a domain-specific processor often happens when there is a new emerging use case. With Mount Evans and the IPU, we see several broad emerging use cases that help drive the adoption.

First, the cloudification of compute, with distinct tenants and infrastructure providers, drives an increasing need to separate tenant workloads from infrastructure workloads. By moving to an IPU architecture, the infrastructure workloads can be fully isolated from the tenant, creating greater security for both, reducing or eliminating noisy-neighbor effects, and simplifying life cycle management across both processors. Mount Evans takes this a step further with increased virtualization and Quality of Service capabilities, enabling each tenant and the associated infrastructure to operate in isolation from each other.

Second, the need to support bare metal tenants on the same infrastructure as virtualized tenants drives this a step further, and Mount Evans enables the infrastructure provider to do exactly that. By providing full hardware virtualization, native NVMe interfaces on the IPU, and device emulation, Mount Evans gives a service provider the hooks to use the same service models for bare metal hosting as it does for VMs and containers.
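The value of that common service model is easiest to see in code. The sketch below is purely hypothetical (it is not the Mount Evans API; the class and function names are invented for illustration) and models an IPU handing every tenant type the same emulated NVMe device, so one provisioning path covers bare metal, VMs, and containers:

```python
# Hypothetical model, not the Mount Evans API: the IPU presents the same
# emulated NVMe device regardless of tenant type, so the provider's
# provisioning path is identical for bare metal, VMs, and containers.

from dataclasses import dataclass

@dataclass
class EmulatedNvmeDevice:
    """An NVMe PCIe device the IPU exposes to the host, backed over the network."""
    pcie_address: str
    capacity_gb: int
    backend_target: str  # network storage target chosen by the provider

def provision_tenant(tenant_kind: str, capacity_gb: int) -> EmulatedNvmeDevice:
    # The tenant kind does not change the device the host sees; only the
    # provider's backend placement policy might. Names below are placeholders.
    backend = f"nvme-of://storage-pool/{tenant_kind}"
    return EmulatedNvmeDevice("0000:3b:00.0", capacity_gb, backend)

for kind in ("bare-metal", "vm", "container"):
    device = provision_tenant(kind, 512)
    print(f"{kind:>10}: {device.capacity_gb} GB at {device.pcie_address} "
          f"-> {device.backend_target}")
```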

Third, Mount Evans enables the infrastructure provider to move to a completely diskless architecture at the compute node. It does this by presenting a native NVMe PCIe device model to the CPU while letting the infrastructure provider implement the appropriate network storage backend on the Mount Evans compute complex. This ends up being another huge operational and TCO advantage.

In modern data centers, a typical SSD attached locally to a CPU might use only 35% of its storage capacity and 25% of its available IOPS. By moving storage off the compute node and across the network, the infrastructure provider can make much better use of both capacity and IOPS. And Mount Evans, with its offloads and RDMA transports, can do this in a highly performant, low-latency way.
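Here is an illustrative calculation of why that matters, using the utilization figures above; the drive specs and node count are assumed round numbers, not measurements of any particular deployment:

```python
# Illustrative arithmetic: locally attached SSDs stranded at 35% capacity
# and 25% IOPS utilization versus the same demand served from a shared
# network pool. Drive specs and node count are assumed round numbers.

DRIVE_CAPACITY_TB = 4     # assumed per-drive capacity
DRIVE_IOPS = 500_000      # assumed per-drive IOPS
NODES = 100               # assumed compute nodes, one local drive each

LOCAL_CAPACITY_UTIL = 0.35
LOCAL_IOPS_UTIL = 0.25

used_capacity_tb = NODES * DRIVE_CAPACITY_TB * LOCAL_CAPACITY_UTIL
used_iops = NODES * DRIVE_IOPS * LOCAL_IOPS_UTIL

# Served from a shared pool, the provider only needs enough drives to cover
# the aggregate demand (headroom and replication ignored for simplicity).
drives_needed = max(used_capacity_tb / DRIVE_CAPACITY_TB,
                    used_iops / DRIVE_IOPS)

print(f"local model : {NODES} drives, mostly idle")
print(f"pooled model: ~{drives_needed:.0f} drives serve the same demand")
```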

Conclusion

The IPU in general, and Mount Evans in particular, provide compelling performance advantages over running infrastructure workloads on the CPU. These workloads apply widely across many, if not all, data center use cases, and offloading them to Mount Evans should deliver a meaningful TCO advantage thanks to highly flexible acceleration hardware for functions like the vSwitch and smaller, more efficient cores for infrastructure applications. Combined with the emerging use cases it enables across the data center, that makes Mount Evans the right choice as your IPU for infrastructure processing.

Notices & Disclaimers:

Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.

No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software, or service activation.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
