Project Pacific Technical Overview for New Users

[EDIT 01/05/2021: Note that terminology in this article is outdated and may no longer match official documentation. One such change:

Guest Kubernetes cluster/guest cluster is now Tanzu Kubernetes Cluster (TKC)]

VMware’s Project Pacific is a re-architecture of vSphere that integrates Kubernetes as its control plane. This higher level of abstraction dramatically simplifies how we build, deploy, and manage modern applications, and streamlines IT operations and development in today’s cloud-native and hybrid-cloud world.

This guide explains the key high-level concepts and terminology of Project Pacific and is written for beginners to the technology. I hope you find it helpful and enjoy reading!

Before we begin: this article assumes a basic understanding of Kubernetes, which you can pick up in this quick guide to Kubernetes Basics for New Users.

The Modern Application

Traditional applications built before the cloud-native model typically consisted of an app deployed and operated on a single VM or a small number of VMs. With the prominence of cloud-native and hybrid-cloud approaches to app development, and the accelerating scale of modern tech, this model is long outdated and too simplistic to describe applications today.

Modern Applications

Above is an example of a modern application, a combination of cloud-native and traditional components: Kubernetes clusters, traditional apps on VMs, serverless functions, and distributed databases, with many more resources, tools, and elements not pictured.

For developers building this application, how would they deploy and operate such an app? What tools would they be able to use with such a diverse app?

For operations teams, how would they manage policy on availability, security, quality of service, and storage? How would they control the cost of infrastructure supporting such an app?

These are the problems that Project Pacific tackles.

What is Project Pacific?

Although Kubernetes is traditionally seen as a container orchestration system, its control plane is highly extensible and can even be used to manage other Kubernetes clusters, nesting Kubernetes inside Kubernetes.

Project Pacific concept

Project Pacific leverages the Kubernetes control plane to orchestrate modern applications through custom resource definitions (CRDs). This means that any kind of resource or application can be deployed and controlled in a Kubernetes-native way.
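To make the CRD idea concrete, here is a minimal sketch of how a CRD teaches the Kubernetes API about a new resource type. The group and kind names below are illustrative, not Project Pacific’s actual definitions:

```yaml
# Hypothetical CRD that adds a "VirtualMachine" resource type to the
# Kubernetes API. Names and schema are illustrative only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.example.vmware.com
spec:
  group: example.vmware.com
  scope: Namespaced
  names:
    kind: VirtualMachine
    plural: virtualmachines
    singular: virtualmachine
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cpus:
                  type: integer
                memoryGiB:
                  type: integer
```

Once a definition like this is registered (and a controller watches for it), users can create and manage VMs with the same `kubectl` workflow they use for pods.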

A Kubernetes Native Approach to Modern Apps

Defining the workload

With modern applications built in such diverse configurations, defining a deployment becomes very complex, with a different interface for each component of the app. Project Pacific addresses this issue by allowing all applications, cloud-native and traditional alike, to be defined and deployed in a Kubernetes-native way. As shown in the diagram above, users can define Kubernetes clusters, traditional apps, serverless functions, and databases all in a YAML configuration, creating a unified deployment definition. Apps defined this way can also be operated as one unit thanks to Project Pacific’s integration of Kubernetes with vSphere. This dramatically simplifies the process of working with modern applications!
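As a hedged sketch of what such a unified YAML definition could look like, the multi-document file below mixes a standard Kubernetes Deployment with a hypothetical CRD-backed VM resource standing in for a traditional-app component (all names and values are illustrative):

```yaml
# A containerized front end, defined with the standard Deployment API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: example/web-frontend:1.0
---
# A traditional database running in a VM, defined through a hypothetical
# CRD-backed resource type rather than Project Pacific's actual API.
apiVersion: example.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: legacy-db
spec:
  cpus: 4
  memoryGiB: 16
```

A single `kubectl apply -f` over a file like this would then create both the containerized and VM-based components of the app as one deployment.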

Workload-level Management for Developers

Workload-level management for developers

With deployment configurations simplified, developers can now deploy and operate applications and define infrastructure resources at the workload level rather than piece by piece. This is a simple but powerful advantage Project Pacific offers developers. Additionally, because developers can define and create any kind of resource in their deployment themselves, they no longer have to go through the lengthy and troublesome process of filing service requests with operations.

Namespace-level Management for Operations

To understand the operations advantages of Project Pacific, you need to first understand the…

Kubernetes Namespace: A namespace is a virtual resource boundary within a Kubernetes Cluster that allows for a single Kubernetes cluster and its resources to be divided among multiple users.

Think of a farmer who divides their field (cluster + cluster resources) into fenced-off smaller fields (namespaces) for different herds of animals. The cows in one fenced field, horses in another, sheep in another, etc. The farmer would be like operations defining these namespaces, and the animals would be like developer teams, allowed to do whatever they do within the boundaries they are allocated.

Namespace-level management for Operations

While developers can self-service deploy and manage their cloud-native applications at the workload level, Project Pacific gives operations the power to define a user namespace and assign policy, including quality of service, security, availability, and access controls, at the namespace level. Once a namespace is handed off to a developer or developer team, operations doesn’t need to worry about provisioning resources for deployments within it, and developers can create, deploy, and manage apps and resources however they want inside their operations-defined sandbox.
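In plain Kubernetes terms, the fence the farmer builds might look like a namespace paired with a resource quota: operations sets the boundaries, and the developer team is free to do anything within them. This is a sketch with illustrative names and values, not Project Pacific’s exact configuration:

```yaml
# A namespace for one developer team.
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha
---
# A quota that caps what the team can consume inside its namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-alpha-quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    persistentvolumeclaims: "10"
```

Within those limits, deployments succeed without any further involvement from operations; requests that would exceed the quota are simply rejected by the API server.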

Note that vSphere Namespaces are built securely with multi-tenancy in mind at the compute, networking, and storage layers, such that users don’t get access to the API master in the Kubernetes control plane. To install custom software, Helm charts, CNIs, etc., users should spin up a guest Kubernetes cluster in their Namespace (a concept discussed in the architecture section below).

Project Pacific Basic Architecture

But first, a brief overview of vSphere.

vSphere and the SDDC

vSphere is VMware’s cloud computing virtualization platform and is made up of two working components.

vCenter: this is the centralized management control plane for VMs, ESXi hosts and clusters, and accompanying resources for Networking and Storage. Operations can create, manage, and monitor virtual machines and Software-Defined Datacenter (SDDC) resources.

ESXi Clusters: ESXi is a bare-metal hypervisor for deploying and hosting VMs, and an ESXi cluster is a working group of ESXi hosts defined and managed by vCenter. Each ESXi host in the cluster runs hostd, which communicates between the host and vCenter.

These two components make up vSphere. Together, with vCenter managing the networking and storage layers alongside the hypervisor, they form the SDDC layer.

What happens when you enable Project Pacific on an ESXi cluster?

Enable Project Pacific on this cluster!

When you enable Project Pacific on an ESXi cluster in vSphere, the following key components, also shown in the diagram above, transform it into a Supervisor Kubernetes Cluster:

Kubernetes Control Plane VMs: multiple VMs deployed across the ESXi cluster that work together to form a Kubernetes control plane for the cluster. This Kubernetes control plane, along with the Spherelet, allows for developers to interface with the ESXi cluster turned Supervisor Kubernetes Cluster using the Kubernetes API.

Spherelet: a Kubelet for ESXi. Just as the Kubelet connects its host node to the Kubernetes control plane, the Spherelet interfaces with the Kubernetes control plane VMs to turn its ESXi host, functionally, into a Kubernetes cluster worker node.

ESXi Native Pods: these enable users to run container workloads on ESXi and interact with them through a Kubernetes-native interface. The ESXi Native Pod is enabled by the container runtime executable (CRX), a VM executable with an optimized Linux runtime image: it registers a VM on the ESXi host, then strips it down to a thin layer of Linux kernel, just enough to run container workloads. Because each Native Pod runs in its own VM, it gains some cool advantages:

  • The Native Pod has the same security isolation as a VM.
  • Because the CRX runs as a guest in the hypervisor, the Native Pod also has its own resource allocation and isolation from other pods. Any impact in performance to one pod won’t leak to other workloads.
  • Internal testing at VMware has demonstrated that, thanks to CPU isolation, containers running in Native Pods have the potential to run 30% faster than on traditional vSphere VMs, and even 8% faster than on bare metal!
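From the developer’s point of view, none of this machinery is visible: a Native Pod is requested like any ordinary Kubernetes pod, and the Supervisor Cluster takes care of running it in its own CRX-backed VM. A minimal sketch, with an illustrative image name:

```yaml
# An ordinary pod spec; on a Supervisor Cluster this would be scheduled
# onto an ESXi host as a Native Pod. Image name is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: hello-native-pod
  namespace: team-alpha
spec:
  containers:
    - name: hello
      image: example/hello-world:1.0
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
```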

This re-architecture of vSphere with a Kubernetes control plane hammers home the concept of high extensibility: anything can run in Kubernetes!

The Supervisor Kubernetes Cluster abstraction level

Above is the Supervisor Cluster abstraction layer, which hosts the VM, Cluster API (Kubernetes), and Guest Kubernetes Cluster controllers (whose functions are detailed here) in the service namespace, a namespace instantiated to host these and additional services. Operations can create user namespaces for developer teams, and developers can deploy applications in their assigned namespaces. Let’s take a look at how it all works.

Operations: Creating a Namespace on the Supervisor Cluster

Operations creates namespaces and manages the SDDC and policy per namespace

Once Project Pacific is enabled on an ESXi Cluster, transforming it into a Supervisor Kubernetes Cluster, Operations can manage the Supervisor Cluster through vCenter. Operations can create User Namespaces within the Supervisor Cluster, managing SDDC resources and assigning policy on each Namespace. From vCenter, Operations can monitor and manage these Namespaces without worrying about the deployments and activity in each one.

Developers: Deploying and Managing Apps

Developers self-service deploy and operate applications and resources

Once a namespace is created and assigned to a developer or a team of developers, those developers can self-service deploy and operate applications, manage infrastructure resources in their namespace, and create supporting services, all through the Kubernetes API. It’s that simple! One very cool feature of Project Pacific is the…

Guest Kubernetes Cluster: The Supervisor Kubernetes Cluster is a specific implementation of Kubernetes for vSphere that is not fully conformant with upstream Kubernetes. That’s why Project Pacific provides the Guest Kubernetes Cluster, a fully conformant upstream Kubernetes cluster service that developers can create and deploy entirely through the Kubernetes CLI. You can read more detail about Guest Kubernetes Clusters here.
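Requesting a guest cluster is itself just another YAML resource applied in the developer’s namespace. The sketch below is based on the v1alpha1 TanzuKubernetesCluster API (the current name for guest clusters, per the edit note at the top); exact field names may vary by release, and the class, storage class, and version values are environment-specific placeholders:

```yaml
# Sketch of a guest cluster request, based on the v1alpha1
# TanzuKubernetesCluster API; values are environment-specific placeholders.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: team-alpha
spec:
  distribution:
    version: v1.16
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default
```

Applying a spec like this would have the Supervisor Cluster’s controllers stand up a fully conformant upstream Kubernetes cluster inside the namespace, which the developer can then target with their usual tooling.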

Building App Services

ISVs build services on the Supervisor Cluster that developer apps can interface with

By collaborating with independent software vendors (ISVs) and developers, VMware enables app services built on Kubernetes operators to run on vSphere and Project Pacific. Users will be able to install and run these services directly in a vSphere environment and create apps that consume them.

Monitoring the Supervisor Cluster

Project Pacific allows Operations to manage through vCenter and Developers to self-service through k8s API

From vCenter, operations can monitor the activity of VMs, Kubernetes Clusters, and ESXi Native Pods, and create and manage Namespaces and the SDDC while developers are free to self-service deploy and operate as they please. Extensibility, multi-tenancy, self-service, and high-level management: this is the power of Project Pacific!

VMware wants your feedback!

VMworld Design Studio is a series of small-group sessions with customers on numerous products and topics. The design team tests early design concepts and runs engaging research activities to gather valuable user feedback from VMworld participants. You can sign up for VMware Design Studio sessions to watch demos and give your feedback on Project Pacific!

Thank you for Reading! ❤

Check out more on Project Pacific:



Ellen Mei

I care about justice, equity, & labor power in tech + everywhere | Product Manager @ VMware