Controlling Traffic in MuleSoft Using a Hybrid Deployment Model

Anandasankar Joardar
Another Integration Blog
8 min read · Nov 11, 2022

Making API-led connectivity scalable and able to span cloud and on-premises MuleSoft Runtimes.

Introduction:

Many organizations run MuleSoft applications on both CloudHub and on-premises data centers. This is usually referred to as a hybrid deployment model. Experience, process, and system APIs (the three layers of MuleSoft-recommended API-led connectivity) span the MuleSoft CloudHub VPC and the on-premises data center. There are many reasons for organizations to adopt this architecture. A few major ones are:

  1. Keeping consumer-facing APIs closer to the end customer while process and system APIs sit with core applications in the on-premises data center.
  2. Enterprise application integration needs to connect with both cloud-native (SaaS) and on-premises applications.
  3. Running real-time workloads in the cloud and batch workloads on the on-premises runtime.
  4. Safeguarding core on-premises applications behind on-premises system APIs (as an additional security layer) while business logic in the process and experience APIs executes in the cloud (closer to the end customer).

However, this type of architecture is complex and often requires additional consideration to control traffic between the cloud VPC and the on-premises data center. This blog highlights some key points for managing and securing traffic flow within this type of architecture.

MuleSoft Hybrid Architecture

MuleSoft Hybrid Architecture Example

As depicted in the diagram above, the key components of a hybrid architecture are as follows:

  1. It is recommended to have separate VPCs for production and non-production.
  2. Non-production VPCs usually host several environments, such as DEV, QA, and UAT.
  3. The CloudHub VPC is connected to the on-premises data center through an IPsec VPN tunnel. MuleSoft also supports other options, such as VPC peering and AWS Direct Connect, but for this blog we assume the CloudHub VPC and the on-premises data center are connected through an IPsec VPN tunnel. It is recommended to set up two VPN tunnels for each VPC to achieve more reliable packet transfer.
  4. Separate dedicated load balancers (DLBs) are recommended for external-facing APIs (often experience APIs) and internal APIs (process and system APIs). Separate DLBs for experience APIs and process/system APIs also make API-led connectivity more scalable.
  5. Inside the data center, the MuleSoft Runtime and other applications reside in separate network subnets.

Now, let’s delve into the CloudHub VPC and the on-premises data center to understand how to control traffic flow and establish a proper hybrid MuleSoft architecture.

Controlling Traffic inside CloudHub VPC

Controlling traffic within CloudHub VPC and connection with corporate network

The diagram above depicts a CloudHub VPC and dedicated load balancer setup to manage the ingress and egress traffic with respect to the CloudHub VPC.

It is not wise to use the shared load balancer (SLB) when deploying MuleSoft applications at enterprise scale on CloudHub. A dedicated load balancer (DLB) has many benefits over the SLB. The key reasons to use a DLB over the SLB are as follows:

  1. Mapping MuleSoft application endpoints to a company vanity domain
  2. SSL configuration and the ability to apply your own security certificates
  3. High availability and scalability (a DLB is provisioned with two instances by default)
  4. Support for higher throughput

To prevent exposing MuleSoft applications/APIs through the SLB, even accidentally, many organizations block ports 8081 (HTTP) and 8082 (HTTPS) on the CloudHub VPC firewall.
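
For illustration, the VPC firewall rules might then look like the sketch below. CloudHub workers receive SLB traffic on ports 8081/8082 and DLB (intra-VPC) traffic on ports 8091/8092, and the CloudHub firewall is allowlist-based, so "blocking" the SLB ports means removing the default rules that allow them from anywhere (the VPC CIDR shown is illustrative):

  Type    Source          Port    Action
  http    0.0.0.0/0       8081    remove default rule (blocks SLB HTTP)
  https   0.0.0.0/0       8082    remove default rule (blocks SLB HTTPS)
  http    192.2.0.0/24    8091    keep (DLB-to-worker HTTP)
  https   192.2.0.0/24    8092    keep (DLB-to-worker HTTPS)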

Mask the DLB A record behind a company vanity domain by creating a CNAME record on the company DNS server, so that MuleSoft API (application) endpoints are exposed under the company domain name. For example, an API endpoint can be exposed with https://api.mycompanydomain.com (where api is the subdomain) as the HTTP host header instead of the MuleSoft DLB A record <lb-name>.lb.anypointdns.net. This ensures that MuleSoft APIs and applications deployed on CloudHub and on-premises use the same company vanity domain.
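
For example, a BIND-style record on the company DNS server might look like the following (the vanity hostname is illustrative; <lb-name> stands for your DLB name):

  ; Vanity hostname pointing at the DLB A record
  api.mycompanydomain.com.   IN   CNAME   <lb-name>.lb.anypointdns.net.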

DLB mapping rules allow traffic to be distributed among multiple environments, domains, or SSL endpoints, and they route traffic to the intended MuleSoft API or application within the VPC. Since mapping rules support pattern matching, adopting a standard nomenclature for MuleSoft APIs (and applications) helps build mapping rules that route traffic across APIs, environments, and domains.

It is strongly recommended that the MuleSoft C4E team publish guidelines for API names, application names, and endpoint URI patterns that support the mapping rule setup at the DLB, and communicate them to all MuleSoft development teams across the organization as part of the development standard. For example, if multiple environments (DEV, QA, etc.) are set up on the non-prod VPC, and the same API is supposed to run actively in several environments in parallel, then the mapping rules, API name, and URI pattern (output path) together route ingress traffic to the correct API in the correct environment. An example pattern-based URI mapping is as follows:

Example: pattern based URI mapping on DLB
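
A sketch of such a mapping rule, using CloudHub DLB pattern syntax (the names in curly braces are pattern variables; the rule shape matches the example discussed next):

  Input path:   /{environment}/{layer}/{app}/{version}/
  Target app:   {environment}-{layer}-{app}-{version}
  Output path:  /api/
  Protocol:     http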

So, if an experience API is named “myapi” and it is deployed to the DEV environment, then the deployed application on CloudHub should be named dev-exp-myapi-v1. If there is a resource or endpoint “myuri” to be exposed through the API, then that endpoint should be /api/myuri.

The external consumer will invoke the endpoint (myuri) on the API (myapi) as https://api.mycompanydomain.com/dev/exp/myapi/v1/myuri (the /dev/exp/myapi/v1/ portion matches the input path from the URI mapping table above). Any traffic on this URL will route to the /api/myuri endpoint of the myapi application (version 1) deployed in the DEV environment within the CloudHub VPC. Also, since the protocol is defined as HTTP, SSL is offloaded at the DLB layer and ingress traffic moves over HTTP (not HTTPS) within the VPC. See the MuleSoft DLB documentation for more information.

It is often necessary to segregate traffic coming from the public internet into the cloud VPC from traffic within the VPC and on-premises, to achieve better control and security. This means limiting the public internet to external-facing MuleSoft APIs (experience APIs) only, while internal APIs (process and system APIs) can be invoked only by applications/APIs deployed within the VPC or on-premises. To achieve that, deploy and configure two DLBs. One should be external-facing, with the IP range 0.0.0.0/0 allowlisted on its firewall (i.e., traffic from any public address is allowed). Expose only the experience APIs (external-facing APIs) through this DLB by setting up its mapping rules accordingly. For example:

Example: DLB Mapping rule for external facing DLB
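
A sketch of this rule, with the layer segment hardcoded to exp while the other segments remain pattern variables:

  Input path:   /{environment}/exp/{app}/{version}/
  Target app:   {environment}-exp-{app}-{version}
  Output path:  /api/
  Protocol:     http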

Here, the input path does not use a pattern for the layer segment; it hardcodes the value “exp” to ensure that this DLB routes traffic only to experience APIs. Developers need to include exp in every experience API name, or the CI/CD pipeline can be built so that exp is added to every API name during deployment (see the deployment sketch below). A subdomain named “extapi” can be created for external APIs, and an SSL endpoint such as extapi.mycompanydomain.com can be mapped to this DLB.
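
As a sketch, a CI/CD step using Anypoint CLI could compose the application name at deploy time; the command below follows the anypoint-cli 3.x runtime-mgr syntax, and the environment, application, and artifact names are illustrative:

  anypoint-cli --environment=DEV runtime-mgr cloudhub-application deploy dev-exp-myapi-v1 target/myapi-1.0.0-mule-application.jar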

Another DLB should be created for the internal APIs, with the CloudHub VPC CIDR range (for example, 192.2.0.0/24) and the on-prem subnet CIDR range (for example, 124.0.0.0/16) allowlisted on its firewall. This ensures that ingress traffic to the DLB is allowed only from on-prem and internal VPCs; with this allowlist, the DLB is not reachable from the public internet. All process and system APIs on CloudHub should be exposed through this internal-facing DLB. To enforce this with DLB mapping rules, you can reference the example below:

Example: DLB mapping rule for internal facing DLB
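
A sketch of the internal-facing rules, assuming process and system APIs carry the layer identifiers proc and sys in their names (these identifiers are illustrative, not a MuleSoft standard):

  Rule 1 (process APIs)
  Input path:   /{environment}/proc/{app}/{version}/
  Target app:   {environment}-proc-{app}-{version}
  Output path:  /api/
  Protocol:     http

  Rule 2 (system APIs)
  Input path:   /{environment}/sys/{app}/{version}/
  Target app:   {environment}-sys-{app}-{version}
  Output path:  /api/
  Protocol:     http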

With this DLB, a subdomain named “intapi” can be created for the internal APIs, and an SSL endpoint such as intapi.mycompanydomain.com can be mapped. This setup also provides a higher degree of scalability for API-led connectivity. For more details on DLB setup from Anypoint Runtime Manager, refer to the MuleSoft documentation.

Controlling Traffic from CloudHub VPC to On-Premises MuleSoft Runtime

Controlling MuleSoft traffic within the on-premises Runtime

Now, let’s focus on the on-premises traffic flow for MuleSoft applications deployed on the on-premises runtime. The key points with respect to controlling traffic are as follows:

  1. As mentioned earlier, the most common way to move traffic from CloudHub to on-premises is through IPsec VPN tunneling. For better reliability, it is recommended to set up at least two VPN tunnels from every VPC.
  2. On-premises data center traffic is controlled through the NAT (Network Address Translation) gateway, which prevents ingress traffic from the public network into the data center. Ingress traffic from private VPCs (including the CloudHub VPC) is allowed, and egress (outbound) traffic is allowed from the on-premises data center to public networks.
  3. Use an internet gateway instead if ingress traffic from the public internet must be able to reach the on-premises data center.
  4. On-premises MuleSoft Runtimes are scaled through a customer-managed load balancer (such as an F5 load balancer). iRules can be configured to further control traffic movement across the various APIs. An example of a URI redirect using an iRule is as follows:
when HTTP_REQUEST {
    # Redirect bare root requests to the /api/ base path
    if { [HTTP::uri] equals "/" } {
        HTTP::redirect "https://[HTTP::host]/api/"
        return
    }
}

5. For ease of maintenance, APIs and real-time applications (with HTTP/S endpoints) can be deployed on server groups, since server groups sit behind the customer-managed load balancer.

6. However, for batch or listener-based integrations (e.g., file polling, database table polling), it is recommended to use a cluster, where polling sources can be pinned to the single primary node (see the configuration sketch after this list). This removes the risk of duplicate reading and processing of files or database table rows.

7. It is also recommended to create the MuleSoft Runtime on a separate subnet from the backend applications.

8. For a greater degree of security, some organizations allow only traffic from on-premises MuleSoft APIs to reach back-end core business applications. For that level of control, only the MuleSoft on-premises CIDR ranges are allowlisted on the back-end application subnet firewall.
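
A minimal sketch of point 6: in a Mule 4 application deployed to a cluster, a polling source can be pinned to the primary node with the primaryNodeOnly attribute, so that only one node reads each file (the directory, frequency, and config names are illustrative):

  <!-- Runs only on the cluster's primary node, avoiding duplicate reads -->
  <file:listener doc:name="Poll input directory"
                 config-ref="File_Config"
                 directory="/data/input"
                 primaryNodeOnly="true">
    <scheduling-strategy>
      <fixed-frequency frequency="30" timeUnit="SECONDS"/>
    </scheduling-strategy>
  </file:listener>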

Conclusions

This blog demonstrated how to control traffic flow end-to-end in a complex hybrid MuleSoft Runtime deployment model. Many more sophisticated, network-architecture-level considerations are possible, so it is strongly recommended to involve the enterprise network and security teams at the time of MuleSoft platform setup to avoid loopholes in controlling ingress and egress traffic down the line. I hope this blog provides some ideas on controlling traffic across CloudHub and on-premises MuleSoft Runtimes.


Anandasankar Joardar
Another Integration Blog

MuleSoft Ambassador and Delivery Champion, YouTuber, blogger and speaker, and an Integration Architect