Runtime Fabric Manager on Elastic Kubernetes Service Architecture and Components — Part 1

Jitendra Bafna
Another Integration Blog
7 min read · Aug 15, 2022

Introduction

As we are aware, MuleSoft provides various deployment options, which include Customer-Hosted Mule Runtime, CloudHub, Runtime Fabric Manager on Bare Metal/Virtual Machines, and Runtime Fabric Manager on Self-Managed Kubernetes (EKS/AKS/GKE). It is very important to select the right deployment option for your enterprise, and this is only possible if we have concrete requirements and are aware of all the deployment options and their capabilities.

For choosing the right deployment option, we need to consider various factors like High Availability, Fault Tolerance, Scaling, Disaster Recovery, Resource Allocation and Requirements, Security, Reliability, Operations and Maintenance of the platform, the shared responsibility model, and cost. It is also very important to consider organization strategy, policies, and the current landscape.

In this blog, we will explore the capabilities of Runtime Fabric Manager on Elastic Kubernetes Service (EKS) and how it is different from Runtime Fabric Manager on Bare Metal Servers/Virtual Machines.

EKS Architecture

Before we get into RTF, let's first understand the EKS architecture and its capabilities. It is very important to design and set up a robust, secure EKS cluster with High Availability, Fault Tolerance, Durability, and possibly Disaster Recovery in case the customer is looking for 100% business continuity.

EKS is a fully managed AWS service that lets you run Kubernetes on AWS without requiring you to maintain your own Kubernetes control plane. It is an AWS service to run, manage, scale, and deploy containerized applications on Kubernetes.

In the EKS architecture, the Kubernetes control plane is managed by EKS and runs in an EKS-managed VPC. EKS automatically manages the scalability and durability of the Kubernetes control plane nodes and automatically replaces unhealthy nodes.

The Kubernetes control plane communicates with the worker nodes via EKS Managed ENIs, which are generally provisioned across multiple Availability Zones by EKS. Below is the physical architecture of Elastic Kubernetes Service (EKS); it explains which components are required for EKS, how the Kubernetes control plane communicates with the EKS worker nodes, and how clients can communicate with applications deployed on the worker nodes.

  • In the above physical view, we have one regional Virtual Private Cloud (VPC) with three subnets (Private, Firewall and Public) spun up across multiple Availability Zones for High Availability and Fault Tolerance.
  • The EKS cluster's worker nodes are set up in the private subnet as EC2 instances. The EC2 instance size can be selected based on your requirements.
  • The Public Subnet is associated with the Internet Gateway to allow ingress and egress traffic to and from the internet. The NAT Gateway is set up in the Public Subnet; it allows the private subnet to send traffic out (egress) while blocking ingress traffic to the private subnet.
  • The Firewall Subnet is optional, but it is required if you want to restrict egress traffic from the private subnet. By default, if you associate a NAT Gateway with your private subnet, the private subnet can send traffic to any internet destination. If you want to limit outbound traffic to a small set of internet applications or services, a firewall subnet comes in handy.
  • In the firewall subnet, you can define firewall policies with stateful or stateless rules and domain lists that control which traffic the Private Subnet can send out and validate what traffic can come in.
  • A route table is associated with each subnet to ensure that network traffic from your subnet or gateway is directed correctly. There are separate route tables for the Public, Firewall and Private Subnets.
  • The Kubernetes control plane is spun up in the AWS-managed Virtual Private Cloud, and EKS sets up EKS Managed ENIs in the EKS cluster to communicate with the worker nodes.
  • Clients can connect to applications deployed on the worker nodes via an Ingress Load Balancer, as shown in the above image.
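As a sketch of how such a cluster could be provisioned, the following eksctl configuration places managed worker nodes in private subnets across multiple Availability Zones, with a NAT Gateway for egress. The cluster name, region, and instance sizes are illustrative placeholders, not values prescribed by this architecture:

```yaml
# Hypothetical eksctl config -- all names, region, and sizes are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: rtf-cluster
  region: us-east-1
vpc:
  nat:
    gateway: Single          # NAT Gateway for egress from private subnets
  clusterEndpoints:
    privateAccess: true      # control plane reachable from within the VPC
managedNodeGroups:
  - name: rtf-workers
    instanceType: m5.xlarge  # choose based on your workload requirements
    desiredCapacity: 3       # spread across Availability Zones for HA
    privateNetworking: true  # worker nodes live in private subnets only
```

Running `eksctl create cluster -f cluster.yaml` against a file like this would create both the VPC layout and the node group; an existing VPC (for example one with the firewall subnet described above) can instead be referenced under the `vpc` section.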

Runtime Fabric Manager Architecture on Elastic Kubernetes Service

Anypoint Runtime Fabric on Self-Managed Kubernetes allows you to deploy MuleSoft applications and API proxies on a Kubernetes cluster.

  • In the above Runtime Fabric Manager architecture on Elastic Kubernetes Service, there are multiple components, each with its own responsibility. The Persistence Gateway persists object store data so that, even if an application restarts, the data survives and can be shared across application replicas. Currently, Persistence Gateway supports only PostgreSQL databases.
  • There are two kinds of clustering: infrastructure-level clustering, for High Availability of the nodes in the cluster, and application-level clustering, which lets you share data across multiple replicas, enabling High Availability at the application level and improving application performance.
  • The RTF Agent pod is responsible for communicating with the RTF control plane using the AMQP protocol, and the log forwarder is responsible for forwarding logs to external logging systems like Splunk, AWS CloudWatch, or Anypoint Monitoring.
  • Applications deployed on the worker nodes can be paired with Anypoint API Manager via API Auto-Discovery or an API Proxy.
  • Applications are deployed in pods within the worker nodes. A pod contains the Application Container (Mule application + Java Virtual Machine + Operating System) and a Monitoring Container that collects metrics. One pod contains one replica of the application.
  • High Availability of an application can be achieved by deploying multiple replicas of the application.
  • Fault Tolerance — by default, RTF has the capability to recover applications automatically from failures or crashes. For example, if any replica becomes unhealthy or crashes, RTF automatically creates a healthy replica on one of the cluster nodes to replace the unhealthy one.
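For illustration only: the objects RTF manages on your behalf behave much like a standard Kubernetes Deployment with multiple replicas and health probes. You never author this manifest yourself when using RTF (Runtime Manager generates the equivalent resources), and the application name and image below are hypothetical:

```yaml
# Illustration of the HA/fault-tolerance behavior described above;
# not an actual RTF-generated manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mule-app            # hypothetical application name
spec:
  replicas: 2                  # multiple replicas -> High Availability
  selector:
    matchLabels:
      app: my-mule-app
  template:
    metadata:
      labels:
        app: my-mule-app
    spec:
      containers:
        - name: app
          image: my-mule-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8081    # Mule apps on RTF listen on 8081
          livenessProbe:             # failed probes trigger pod replacement,
            tcpSocket:               # mirroring RTF's automatic recovery
              port: 8081
```

With `replicas: 2`, the scheduler spreads pods across worker nodes, and a crashed replica is recreated on a healthy node, which is the same recovery behavior the Fault Tolerance bullet describes.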

Anypoint Runtime Fabric on Self-Managed Kubernetes (EKS) as a Shared Responsibility

Ingress Controller

Anypoint Runtime Fabric Manager allows you to specify a custom ingress configuration using an ingress resource template. An Ingress provides capabilities like SSL Termination (Offloading), SSL Tunneling, Load Balancing, and Routing, and it exposes HTTP or HTTPS routes from outside the cluster to services within the cluster.
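As a sketch of what such an ingress resource could look like, the following standard Kubernetes Ingress terminates TLS and routes external traffic to a Mule application's service. The host name, TLS secret, service name, and the NGINX ingress class are assumptions, not values mandated by RTF:

```yaml
# Hypothetical Ingress resource; names and host are placeholders,
# and the NGINX ingress controller is assumed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-mule-app
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"  # force HTTPS
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls          # certificate for SSL termination
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-mule-app
                port:
                  number: 8081        # Mule apps on RTF listen on 8081
```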

For details on configuring the Ingress Controller, refer to the dedicated blog post.

Persistence Gateway

Persistence Gateway allows you to store application object store or VM data so that it can be shared across applications. With Persistence Gateway, this data is persisted and not lost when an application restarts. Persistence Gateway supports only PostgreSQL databases.
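As an assumption-heavy sketch (the exact secret name, namespace, and key must be taken from MuleSoft's Persistence Gateway documentation), the PostgreSQL connection details are typically supplied to the cluster as a Kubernetes Secret along these lines:

```yaml
# Sketch only: assumed names/keys; consult the official RTF
# Persistence Gateway docs for the exact resource definitions.
apiVersion: v1
kind: Secret
metadata:
  name: persistence-gateway-creds   # assumed secret name
  namespace: rtf                    # assumed RTF namespace
type: Opaque
stringData:
  # postgres:// connection string to the database backing the gateway
  persistence-gateway-creds: postgres://rtf_user:CHANGE_ME@db.internal:5432/rtf_store
```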

Last Mile Security

Last Mile Security enables HTTPS traffic between the Ingress and the applications deployed on the worker nodes. Runtime Fabric Manager on Self-Managed Kubernetes does not include an Ingress in the product scope; Last Mile Security is part of the ingress configuration, and it may vary depending on which ingress is used. Applications deployed to the EKS cluster always listen on port 8081.
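Because Last Mile Security re-encrypts traffic between the ingress and the application, the ingress must be told to speak HTTPS toward the backend on port 8081. The snippet below is a sketch assuming the NGINX ingress controller; the annotation, host, and service names are assumptions:

```yaml
# Sketch: backend re-encryption for Last Mile Security,
# assuming the NGINX ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-mule-app-lms
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # HTTPS to the pod
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-mule-app
                port:
                  number: 8081   # apps always listen on 8081
```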

Runtime Fabric Manager on Bare Metal or VMs vs. Runtime Fabric Manager on Self-Managed Kubernetes

In the above architecture, we have used a VPN connection to reach corporate data center resources and services. The VPC is connected to the corporate data center via a Transit Gateway, and the Transit Gateway connects to the VPN connection. In your architecture, you can use any other connectivity option provided by your cloud provider, such as Direct Connect, to connect to corporate data centers.

Note — the Transit Gateway is just one example of a connectivity option shown here; your organization may use a different option, depending entirely on where and how you want to connect.

Conclusion

This blog explains how to design and architect EKS and RTF to ensure High Availability, Fault Tolerance, and Durability, and which components are required for setting up Runtime Fabric Manager on Elastic Kubernetes Service. It also provides insight into how Runtime Fabric Manager on Virtual Machines and Bare Metal differs from Runtime Fabric Manager on Self-Managed Kubernetes. In Part 2 of this blog, we will walk through various concepts related to Runtime Fabric Manager on Elastic Kubernetes Service, like CPU Bursting, Networking, TLS, and Ingress.



I am Jitendra Bafna, working as a Senior Solution Architect at EPAM Systems and currently leading APIN Competency Center.