
“Solving problems for customers is really what drives me.”

6 min read · May 22, 2019
Cloud Native Orchestration team in Shannon, Ireland. On left: Conor Nolan, Engineer; Zvonimir Bogdonovic, Architect. On right: Top, left to right: Kate Mulhall, Software Engineering Manager; Fran Cahill, Program Manager; Gary Loughnane, Engineer; Killian Muldoon, Engineer. Bottom, left to right: Louise Daly, Engineer; Swati Sehgal, Engineer; Abdul Halim, Engineer; Dave Cremins, Architect; Valerie Brosnan, Intern.

Cloud native technologies deliver the speed, agility, scalability and optimum resource utilization that users need as they face an ever-increasing number of data-centric, compute-intensive workloads. To help a greater number of users embrace the benefits of cloud native technologies, like Kubernetes, Kate Mulhall was tasked with building the Cloud Native Orchestration team for the Data Center Solution Group at Intel. In this discussion, Kate shares her experiences in building this team and collaborating with the cloud native community to enhance Kubernetes, driven by her passion to understand and respond to the needs of customers and develop a team around this.

Tell us a little bit about yourself. What inspires you about your work and its value to others?

Kate Mulhall, Software Engineering Manager at Intel

I earned a degree in Engineering at Liverpool University, and then won a fellowship to Harvard University where I earned a Masters in Engineering, which was amazing! Then I returned to Ireland, first working as a developer and then moving into engineering management. I came to Intel four years ago, and started building the Cloud Native Orchestration team in Shannon, Ireland about three years ago.

Solving problems for customers — and developing and motivating the team around this — is really what drives me. Many of the customers our team works with are in the telco space, which means they’re typically running different types of virtual network function (VNF) workloads. But when our team first formed three years ago, we heard from customers that Kubernetes, which was very cloud oriented, didn’t support these workloads. So, we strived to solve this issue by enabling multiple data paths through a feature known as Multus, which allows these customers to run containerized VNF workloads in Kubernetes.
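As a rough illustration of how this works (the names, CNI plugin, and addresses below are hypothetical, not taken from the interview), Multus lets a pod attach a secondary network interface by referencing a NetworkAttachmentDefinition through an annotation, alongside the default cluster network:

```yaml
# Illustrative sketch only: plugin choice, interface, and subnet are assumptions.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-data
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
  }'
---
# The pod requests the extra interface via the Multus networks annotation.
apiVersion: v1
kind: Pod
metadata:
  name: vnf-example
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-data
spec:
  containers:
  - name: vnf
    image: example/vnf:latest
```

The result is a pod with its normal cluster-network interface plus a dedicated data-plane interface, which is the separation that VNF workloads typically need.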

The networking and resource management capabilities we develop in response to customer needs are available for Kubernetes, and demos of these capabilities are added to the Intel experience kits, which include a complete test setup so that customers can run proofs of concept in their labs. In addition to continually enhancing these experience kits, we're also planning to open source a script to make installation and deployment of these capabilities easier for them.

Tell us about the work of the Cloud Native Orchestration team. What are its current focuses?

Our earliest work focused on enabling networking capabilities in Kubernetes, such as multiple network interfaces with Multus, and on data plane acceleration with technologies like Single Root I/O Virtualization (SR-IOV). Since then, we've worked on enhancing resource management through activities like Node Feature Discovery (NFD). Most recently, we've started working on telemetry in Kubernetes.
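To make the resource management piece concrete: NFD publishes hardware features as node labels that a workload can select on. The sketch below is illustrative; the exact label depends on the NFD version and the hardware present, and the pod name and image are assumptions:

```yaml
# Hypothetical pod that lands only on nodes where NFD has detected AVX-512.
apiVersion: v1
kind: Pod
metadata:
  name: avx-workload
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
  containers:
  - name: app
    image: example/compute-app:latest
```

This lets the scheduler steer compute-intensive workloads to nodes with the right capabilities without operators labeling nodes by hand.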

Talk more about how you’re working with customers.
Whenever we’re talking to customers and responding to their needs, we always think of how what we do will suit the larger community. Our work needs to scale across multiple, different customers, in a way that can be adopted by the community. Typically, we identify a common unmet need or gap, work with the customer to shape the design proposal, develop a demo that can illustrate the concept, and get the proposal and demo out to the community for feedback.

We’re really at a turning point. We’ve been working with Comcast as well as customers in the 5G area, and at the start of 2018, Red Hat came to us and asked to work directly with us to incorporate the work we had done on Multus and SR-IOV networking into a commercially deployable product.

Last week, Intel received a Community Impact Contributor award from Red Hat, with the work we had done with them on Kubernetes networking as a key input. This work, with Multus at general availability (GA) and SR-IOV as a tech preview, will be available in OpenShift 4.1, which releases in June. Until now, the industry has had to take the Multus and SR-IOV capabilities predominantly from our Intel GitHub repo; now they’ll be able to get them from OpenShift 4.1.

The journey of enabling multiple network interfaces — known as Multus — in Kubernetes

We’re also focusing more on Edge computing workloads. Last year, we worked with two customers on 5G deployments, and this year the Non-Uniform Memory Access (NUMA) alignment capability enabled through the Topology Manager planned for Kubernetes is helping with use cases we’re seeing at the Edge, including deployment of Edge data centers.
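For illustration, here is a hedged sketch of how an operator might one day enable NUMA alignment through the Topology Manager in the kubelet configuration. The feature was still in the proposal stage at the time of this interview, so treat the fields and values as assumptions rather than a finished API:

```yaml
# Sketch: ask the kubelet to align CPU and device allocations
# to a single NUMA node for latency-sensitive workloads.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
topologyManagerPolicy: single-numa-node
```

The idea is that when a pod requests exclusive CPUs and devices such as SR-IOV virtual functions, the kubelet places them on the same NUMA node, avoiding cross-socket traffic that hurts packet-processing performance at the Edge.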

How is the team collaborating across the cloud native ecosystem and community?
In addition to working with Red Hat on Multus, the team has also been working with the ecosystem through the Kubernetes SIG Network, SIG Scheduling and SIG Node groups.

Tell us more about your experiences in building the Cloud Native Orchestration team.
I work as an Engineering Manager because I like seeing the individuals on my team grow. As a team, we’ve grown organically over time. Our approach has been to refrain from over-complicating things, and to fail fast. For example, if you have something running and it doesn’t quite work, well then, you’ve learned something, so just go back and try again.

We also have a philosophy around getting something up and running that is rough and ready. It doesn’t need to be fully baked; it’s more about showing a concept to customers to get their input quite early on, maturing the concept, and then going to the community with it.

Innovation is also a big part of who we are. You can’t expect that people are going to come up with ideas for you; you need to be able to think about options to a problem and which solution will scale and work best for everyone. We’re a small team that collaborates with many other teams across Intel, and with many colleagues across the larger cloud native community. What motivates us is seeing our work deployed by different customers and in different use cases.

Looking forward, what is the team focused on next?
I like looking at the future and what we’re going to do. We work in Shannon, an R&D center, where innovation is a big thing, so we’re always thinking through possibilities and bringing innovative ideas to customers. In addition to the experience kits that showcase the value of the capabilities we’ve developed, we have something very exciting coming up: we’re putting together an installation script that will help customers install all of the work we’re doing and make it even easier for them to deploy these capabilities in their labs. In addition, we’re looking at how we enable orchestration of features and capabilities in upcoming releases of the Intel® Xeon® Scalable platform and adjacencies through Kubernetes.

What are you looking forward to at KubeCon Barcelona this week?
It’s a wonderful time for innovation. To go and see — almost in real time — what the industry wants to talk about is a great opportunity, and I think this event will spark a lot of innovation. I’m also looking forward to meeting our customers and chatting with them in person. I think this event also gives us all a great chance to learn by catching up with all the SIGs — those we participate in, as well as those we may be less familiar with.


Written by Open Source Voices

A series that explores the broader impact of open source development through voices across the community
