High Performance Networking with Kubernetes

Open Source Voices
May 16, 2019


Abdul Halim, Cloud Software Engineer at Intel

In the era of 5G and edge deployments, high-performance networking is a requirement. Containers and cloud native technologies are well suited to address such needs, yet container orchestration solutions like Kubernetes have lacked critical features needed to run these workloads. Because Kubernetes has primarily focused on microservices and web applications, Intel's collaboration with the cloud native community has been key to opening the door to Kubernetes for telco and comms service providers. Abdul Halim, Cloud Software Engineer at Intel, talks about his experience persevering through adversity to respond to the needs of these users.

Read on for more about Abdul’s story, and check out a video replay of his session, High Performance Networking with KubeVirt, presented with Doug Smith of Red Hat!

Tell us a little bit about yourself and what you do at Intel.
From the early days of my career, I worked in network operations and technical support, and I’ve always had a great interest in network connectivity and security. When I studied Computer Systems at the University of Limerick, I picked up a strong interest in software development. Networking practices have evolved enormously in recent years. As a cloud software engineer on the Cloud Native Orchestration team at Intel, I have the unique opportunity to combine my networking knowledge with my interest in software engineering to help adapt networks for network function virtualization (NFV). I’m currently focused on enabling the deployment of network performance-sensitive workloads on cloud platforms, bringing to Kubernetes the capabilities that virtual network function (VNF) applications require. I contribute to Multus and serve as a maintainer of the SR-IOV network device plugin and the SR-IOV CNI plugin.

What challenges have you observed in relation to networking in Kubernetes?
The trend in the networking community has been to virtualize many network functions onto general-purpose systems, known as virtual network functions (VNFs). Telco and comms service providers often implement these as VM-based solutions such as vCMTS, vBNG, or vEPC. While the simple networking model adopted by Kubernetes is well suited to web applications and microservices, it does not suffice for high-performance VNF applications with stricter service-level objectives (SLOs). There are some gaps in Kubernetes that we’ve found really challenging.

The fundamental obstacle we observed was the lack of native support for more than one network interface in a Kubernetes pod. VNF applications require separation of the data plane from the control plane, which means you need at least two interfaces; Kubernetes didn’t support that. Decomposing VNF apps into microservices isn’t always feasible, yet telco and comms service providers want the agility that cloud native technologies provide by managing and orchestrating these apps with Kubernetes. Multi-networking is a prerequisite for migrating legacy applications, as well as for workloads that require deterministic performance, even with noisy neighbors in a multi-tenant environment, and hardware acceleration for compute, I/O, and networking.

How did you address this challenge?
Our journey in Kubernetes networking started with Multus. We introduced Multus to enable more than one interface in Kubernetes pods. We demonstrated some of this work at KubeCon 2017, where it drew a lot of interest from the community. We also engaged with the Network SIG, as well as Comcast and some of the other telcos actively involved in that community. The community showed great interest in taking it further, so we formed the Network Plumbing Working Group within the Network SIG, where we defined a working de facto standard specification for multiple network attachments. From there, we collaborated closely with Red Hat to integrate this work into OpenShift, and Red Hat announced general availability of Multus in OpenShift 4.0, which releases in June.
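
To make this concrete, here’s a minimal sketch of how a pod requests an extra interface under the working group’s specification, which Multus implements. The pod and network names are hypothetical; the annotation key is the one defined by the spec.

```yaml
# Hypothetical pod: eth0 stays on the default cluster network for
# control-plane traffic, and Multus adds a second interface from a
# network attachment named "dataplane-net" for data-plane traffic.
apiVersion: v1
kind: Pod
metadata:
  name: vnf-example
  annotations:
    k8s.v1.cni.cncf.io/networks: dataplane-net
spec:
  containers:
  - name: vnf
    image: registry.example.com/vnf-app:latest
```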

Tell us about this experience of bringing multi-networking to Kubernetes. What challenges did you face along the way, and how did you address them?
When we first started working on enabling multiple network interfaces in Kubernetes, we received pushback from the community. The community didn’t want to bring the complexity that comes with multi-networking, such as service discovery, service endpoints, and DNS, into the core. We realized we needed to build something outside of the Kubernetes core that could still be closely integrated with Kubernetes. The main goal of the Network Plumbing Working Group was to devise the standards and specifications to implement multiple networks in Kubernetes that way, using custom resources rather than changes to the core APIs.
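
In practice, the specification models each additional network as a custom resource. A sketch of such a definition, for the "dataplane-net" attachment referenced in the earlier pod example, might look like the following; the macvlan configuration is just one illustrative choice of CNI backend.

```yaml
# Hypothetical NetworkAttachmentDefinition: a custom resource (per the
# Network Plumbing Working Group spec) describing a secondary network,
# here backed by the macvlan CNI plugin with host-local IPAM.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: dataplane-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "host-local", "subnet": "10.10.0.0/24" }
  }'
```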

What were you proudest of in tackling this challenge?
I was proud that, when we faced opposition and resistance, we worked together as a true community to find a way through the adversity and emerge with a solution that responded to the needs of users. We were able to listen to the needs of telco and comms service providers, like Comcast, and to use those needs as a compass to guide us. Finding the winning approach required us to challenge assumptions and think differently. Ultimately, we showed the community that we could implement multi-networking without introducing complexity in the core of Kubernetes.

What other networking challenges have you observed in Kubernetes?
The next challenge we see is that there’s no native support for guaranteed, uninterrupted CPU time through isolation and affinity. High-performance packet-processing applications require core pinning and resource locality to achieve maximum throughput and low latency. To address these performance determinism challenges, Intel has open sourced CPU Manager for Kubernetes, which allows CPU pinning for workloads. Intel has also collaborated with the community on features that bring enhanced platform awareness and better resource allocation, including enhancements to the Kubernetes Topology Manager and delivery of a Node Feature Discovery capability, along with a number of device plugins that help offload CPU-intensive work.
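
As an illustration of the pinning model, here’s a sketch of a pod that a kubelet running the static CPU manager policy would grant exclusive cores: integer CPU requests equal to the limits place it in the Guaranteed QoS class, which is what makes it eligible for pinning. The names are hypothetical, and this shows the upstream kubelet mechanism rather than the CPU Manager for Kubernetes tool itself.

```yaml
# Hypothetical packet-processing pod: requests equal to limits, with
# integer CPU counts, put it in the Guaranteed QoS class, so the
# kubelet's static CPU manager policy pins it to dedicated cores.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-worker
spec:
  containers:
  - name: worker
    image: registry.example.com/dpdk-app:latest
    resources:
      requests:
        cpu: "4"
        memory: 2Gi
      limits:
        cpu: "4"
        memory: 2Gi
```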

Are there any other networking capabilities you’ve enabled in Kubernetes?
Once we implemented multi-networking through Multus, we could focus on enabling accelerated networking through Single-Root I/O Virtualization (SR-IOV). We’ve worked closely with the community on a solution to discover, provision, and attach SR-IOV network interfaces to workloads running in Kubernetes, using an SR-IOV network device plugin and an SR-IOV CNI plugin. We’ve seen the SR-IOV device plugin adopted in KubeVirt, a VM management add-on for Kubernetes that allows you to run VM-based workloads in Kubernetes. As a result, you can now deploy your legacy VM-based apps with high network performance. The SR-IOV network device plugin and SR-IOV CNI will be available as a tech preview in OpenShift 4.0 in June.
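
Consuming a virtual function from a pod combines both pieces: the device plugin advertises VFs as an extended resource, and the CNI plugin wires an allocated VF into the pod via a network annotation. A minimal sketch follows; the resource and network names are hypothetical and depend entirely on how the plugins are configured.

```yaml
# Hypothetical pod consuming one SR-IOV virtual function: the network
# annotation selects an SR-IOV-backed attachment, and the extended
# resource request asks the device plugin to allocate a VF for it.
apiVersion: v1
kind: Pod
metadata:
  name: sriov-vnf
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: vnf
    image: registry.example.com/vnf-app:latest
    resources:
      requests:
        intel.com/intel_sriov_netdevice: "1"
      limits:
        intel.com/intel_sriov_netdevice: "1"
```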

Recently, we’ve enhanced the SR-IOV network device plugin with a selector-based deployment feature, which makes the overall orchestration of SR-IOV networking resources seamless and more cloud-friendly. The targeted release for this feature is mid-June.
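
To give a flavor of the selector-based configuration, here’s a sketch of the device plugin’s config delivered as a ConfigMap: each entry advertises a named pool of virtual functions matched by vendor, device ID, and driver selectors. The IDs shown are illustrative (Intel X710 VFs), and exact field names may vary between plugin releases.

```yaml
# Hypothetical ConfigMap for the SR-IOV network device plugin: each
# resourceList entry defines a resource pool built from VFs that match
# the vendor, device-ID, and driver selectors.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intel_sriov_netdevice",
          "selectors": {
            "vendors": ["8086"],
            "devices": ["154c"],
            "drivers": ["i40evf"]
          }
        }
      ]
    }
```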

Looking forward, what are some of the challenges yet to be addressed? What are you setting your sights on?
While we’ve made significant progress, we still have a lot of work to do to enable service discovery across multiple interfaces. We have a few ideas and are still brainstorming about how to do it well, in a way that can be adopted easily into the Kubernetes-native environment.

What inspires you about your work and its value to others? In your work, what makes you most proud?
It’s exhilarating to see telcos and comms service providers embrace the benefits of Kubernetes on their path to 5G network transformation as a result of our work. Service providers like Comcast have adopted our technologies in the vCMTS they rolled out last year, showcasing how they can use Kubernetes to virtualize their workloads and deliver High-Speed Data (HSD) services to their customers. This showcase helped raise visibility across the ecosystem, and we’re now receiving widespread interest from other comms service providers. Today, we’re working with a breadth of customers to deploy applications including vBNG, vCPE, vEPC, NGCO, edge computing, and FlexRAN/5G.

It’s also gratifying to work with the open source community. It can be challenging at times, yet it’s extremely rewarding when the community acknowledges your contributions. I’ve learned so much by interacting with people who bring a variety of perspectives and ideas. Being part of a community gives you a greater sense of inclusion and the feeling that you’re doing something incredible.

by Nicole Huesman
Community & Developer Advocate


Open Source Voices

A series that explores the broader impact of open source development through voices across the community