Runtime Fabric Manager on Elastic Kubernetes Service Architecture and Components — Part 2

Jitendra Bafna
Another Integration Blog
7 min read · Aug 17, 2022

In part 1 of this blog, we saw how to design and architect EKS and RTF to ensure High Availability, Fault Tolerance, and Durability, and which components are required for setting up Runtime Fabric Manager on Elastic Kubernetes Service.

In this blog, we will look in detail at various concepts supported by Runtime Fabric Manager on Self-Managed Kubernetes, such as CPU Bursting, Last Mile Security, TLS, Persistence Gateway, and Networking, as well as the benefits provided by Runtime Fabric Manager on Elastic Kubernetes Service.

What is CPU Bursting?

Runtime Fabric Manager CPU bursting builds on Kubernetes-based CPU bursting. With CPU Bursting, applications running in a pod can consume spare CPU beyond what was allocated to them. This happens when an application is overloaded by a higher volume of requests and needs more CPU than was actually allocated. You need to ensure that CPU consumption does not exceed what you have paid for, even in the case of CPU bursting.

When you deploy an application to Runtime Fabric Manager, you need to set a Reserved CPU and a CPU Limit. Your application is guaranteed to get the Reserved CPU, and if it requires more CPU, it can go up to the defined CPU Limit, provided that much CPU is free on the node.

If the Reserved CPU and CPU Limit values are the same, CPU Bursting is disabled. One of the advantages of CPU Bursting is that whenever an application needs extra CPU, it can automatically use unallocated or spare CPU.
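
Under the hood, Reserved CPU and CPU Limit correspond to the CPU requests and limits Kubernetes sets on the application pod. The sketch below is only an illustration with made-up names and values; Runtime Fabric generates the actual pod specification for you.

# Illustrative sketch only: Runtime Fabric generates the real pod spec.
# Reserved CPU -> resources.requests.cpu, CPU Limit -> resources.limits.cpu
apiVersion: v1
kind: Pod
metadata:
  name: example-mule-app              # hypothetical application name
spec:
  containers:
    - name: app
      image: example-mule-app:1.0.0   # hypothetical image
      resources:
        requests:
          cpu: 500m                   # Reserved CPU: always guaranteed to the app
          memory: 1200Mi
        limits:
          cpu: 1000m                  # CPU Limit: bursting ceiling, used only when spare CPU is free
          memory: 1200Mi

Setting requests.cpu equal to limits.cpu is the Kubernetes-level equivalent of disabling CPU bursting, which matches the Reserved CPU = CPU Limit case described above.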

Can we set the Reserved CPU and CPU Limit to the same value?

Yes, you can set the same value for Reserved CPU and CPU Limit. This will mean CPU Bursting is disabled.

How do you persist data when an application is deployed on Runtime Fabric Manager in clustered mode?

You can use Persistence Gateway to persist data for applications deployed in clustered mode. Your data will not be lost if the application is restarted or crashes.
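
At a high level, Persistence Gateway is backed by a PostgreSQL database whose connection string you make available to the Runtime Fabric cluster, typically through a Kubernetes secret. The sketch below only illustrates that idea; the secret name, key, and exact resource Runtime Fabric expects are assumptions here, so follow the official Persistence Gateway documentation for the precise steps.

# Illustrative sketch only: the secret name and key are assumptions,
# not the exact resource Runtime Fabric requires.
apiVersion: v1
kind: Secret
metadata:
  name: persistence-gateway-creds     # assumed name
  namespace: rtf
type: Opaque
stringData:
  # PostgreSQL connection string for the Persistence Gateway database
  persistence-gateway-creds: postgres://rtf_user:rtf_password@postgres.example.com:5432/rtf_store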

Can we deploy applications to Runtime Fabric Manager running on Self-Managed Kubernetes using a CI/CD pipeline?

Yes, we can deploy applications to Runtime Fabric Manager running on Self-Managed Kubernetes using a CI/CD pipeline. For that, you can use the Mule Maven Plugin.

Does Runtime Fabric Manager provide any component that can be used to hide or mask sensitive data?

Yes, there is a Tokenization Service that can be set up on Runtime Fabric Manager. Once the Tokenization Service has been set up, you can use the Tokenization and Detokenization policies in API Manager to hide or mask sensitive data and to restore it.

How can we apply Policies to APIs deployed on Runtime Fabric Manager?

To apply policies to the APIs deployed on Runtime Fabric Manager, you need to create an API proxy or use API Autodiscovery to pair API Manager with the APIs deployed on Runtime Fabric Manager. You can then apply any out-of-the-box or custom policies to your APIs.

What databases are supported by Persistence Gateway? Does it support all JDBC databases and NoSQL databases?

Persistence Gateway doesn't support all JDBC databases; it supports only PostgreSQL. It doesn't support any NoSQL databases. The Runtime Fabric documentation lists the PostgreSQL versions supported by Persistence Gateway.

When to enable Last Mile Security?

Last Mile Security can be enabled when there is a requirement to forward traffic from the Kubernetes ingress load balancer to an application deployed on a worker node over HTTPS.

Last Mile Security means using HTTPS instead of HTTP between the edge (ingress) and the application. The important point is that the application also listens for HTTPS on port 8081.

How does TLS work with Runtime Fabric Manager on Self-Managed Kubernetes?

To enable TLS, you need the certificate's private and public keys to be stored in a Kubernetes secret, using the command below.

kubectl create secret tls rtf-nginx-secret --namespace rtf --key private_key.pem --cert public_key.pem

Add the secret name in the Ingress template and apply it again on Kubernetes. This name must match the name of the TLS secret.

Apply the Ingress template to Kubernetes using kubectl.

kubectl apply -f ingress-tls.yaml

Can we get a complete Ingress controller template?

The annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" enables Last Mile Security.
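
A complete template depends on your ingress controller, hosts, and namespaces, so treat the following as a minimal sketch for the NGINX Ingress Controller rather than the official Runtime Fabric template (the resource name, host, and backend service below are assumptions; see the documentation links later in this post for the supported templates).

# Minimal sketch only; adapt names, hosts, and namespaces to your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress                                         # assumed name
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # enables Last Mile Security
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.mycompany.com                             # assumed host
      secretName: rtf-nginx-secret                          # TLS secret created earlier
  rules:
    - host: example.mycompany.com                           # assumed host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-mule-app                      # assumed application service
                port:
                  number: 8081                              # app listens for HTTPS on 8081

The backend port is 8081 because, with Last Mile Security enabled, the application itself terminates HTTPS on that port.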

To apply the Ingress template, use the command below.

kubectl apply -f ingress-tls.yaml

What is the “Enforce deploying replicas across nodes” property when deploying an application?

If enforceDeployingReplicasAcrossNodes is enabled, the maximum number of replicas you can configure is equal to the number of nodes.

With this property, you can ensure that the replicas of your application are created across multiple nodes. If this property is disabled, multiple replicas of the same application can be created on the same node, and there is no guarantee that replicas of the same application land on different nodes. This property is only relevant when you deploy multiple replicas of the same application.
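
Conceptually, this option behaves like a Kubernetes pod anti-affinity rule that prevents two replicas of the same application from landing on the same node. The snippet below is a generic Kubernetes illustration of that idea, not the exact specification Runtime Fabric generates (the label is assumed):

# Generic illustration; Runtime Fabric manages scheduling for you.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: example-mule-app             # assumed application label
        topologyKey: kubernetes.io/hostname   # at most one replica per node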

What is updateStrategy?

updateStrategy in Runtime Fabric Manager can be recreate or rolling.

  • rolling maintains availability by updating replicas incrementally and requires one additional replica's worth of resources to succeed. If enforceDeployingReplicasAcrossNodes is true, the maximum number of replicas you can configure is one less than the total number of nodes (a Kubernetes-level sketch follows this list).
  • recreate terminates the replicas before redeployment. Redeployment is quicker than rolling and doesn't require any additional resources. If enforceDeployingReplicasAcrossNodes is true, the maximum number of replicas you can configure is equal to the total number of nodes.
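
These options map onto the standard Kubernetes Deployment update strategies. The sketch below is a generic Kubernetes illustration; the exact surge and unavailability values Runtime Fabric uses are assumptions here.

# rolling: update replicas incrementally, keeping the app available
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # needs capacity for one additional replica
    maxUnavailable: 0    # existing replicas keep serving during the update

# recreate: terminate all replicas first, then start the new version
# strategy:
#   type: Recreate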

How can a pod communicate with services outside the EKS/RTF cluster that require IP whitelisting?

As we know, each pod gets an IP address from the VPC range, and this IP can change when the application is redeployed, restarted, or updated.

In such a case, we cannot whitelist the IP address of the pod on an external system, because the IP can change dynamically.

In the Kubernetes world, a pod communicates with other pods and with the outside world through the Service abstraction.

In Kubernetes, outbound traffic from applications running in a pod is SNATed to the public IP of the node, so pods communicate with the outside world using the node's public IP address. You then only need to whitelist the IP addresses of all the worker nodes in the external system.

In this case, the worker node is running in a public subnet with a public IP allocated to it, which is why the node can communicate with the external service once its public IP address is whitelisted.

In a real-world scenario, EKS nodes will be in a private subnet and will not get a public IP address to communicate with external services. To enable communication with external services from a private subnet, we use a NAT Gateway. The NAT Gateway's public IP addresses can be whitelisted in external systems to enable communication with the external service. This process is also known as External SNAT.

Which Self-Managed Kubernetes services are supported by Runtime Fabric Manager?

It supports EKS, AKS and GKE.

Can we get some documentation on setting up the Ingress Controller and Last Mile Security?

Here are some links that can help you understand how to set up Ingress Controller and Last Mile Security.

Ingress Controller

Last Mile Security

Does Runtime Fabric Manager on Self-Managed Kubernetes support zero-downtime updates or deployments?

Yes, it supports Zero-downtime updates and deployments.

What are the benefits of Runtime Fabric Manager on Self-Managed Kubernetes?

  • Reduced infrastructure cost: the amount of infrastructure required to set up and manage Runtime Fabric Manager is smaller, reducing the overall footprint.
  • More visibility for operations teams into the managed Kubernetes environment; Self-Managed Kubernetes providers like AWS and Azure offer auto scaling, automatic Kubernetes upgrades, and monitoring.
  • The Kubernetes control plane is managed by the Kubernetes service providers such as AWS and Azure.
  • Flexibility to choose your own ingress load balancer and preferred Linux-based operating system with Self-Managed Kubernetes.
  • Isolation between applications by running a separate Mule Runtime per application.
  • Ability to run multiple Mule Runtimes on the same set of resources.
  • Cheaper to run Mule applications in the long term compared to CloudHub, as apps can be deployed with as little as 0.02 cores (CloudHub requires a minimum of 0.1 vCore).
  • Applications deployed on Runtime Fabric Manager can be managed by Anypoint Runtime Manager.
  • Scaling applications across multiple replicas with zero downtime.
  • Intelligent Healing and Automated application failover.

Conclusion

This blog explained the various Runtime Fabric Manager components and the benefits that can be achieved by running Runtime Fabric Manager on a Self-Managed Kubernetes service. Overall, it is very important to design a robust Elastic Kubernetes Service architecture and choose the right Runtime Fabric Manager components to set up on Elastic Kubernetes Service; this can vary depending on your organization's needs and requirements.



I am Jitendra Bafna, working as a Senior Solution Architect at EPAM Systems and currently leading APIN Competency Center.