Photo by Sigmund on Unsplash

Continuing the Mastering OCI DRG v2 Networking Series: A Quick Recap…

In our previous blog posts, we explored two scenarios: Scenario-1 involved configuring DRG v2 with hub and spoke VCN route-tables, enabling OCI Network firewall to inspect traffic in all directions. Scenario-2 was built on top of Scenario-1 and focused on network isolation in a multi-hub and spoke topology, providing isolation for workloads across different hub and spoke environments while maintaining unified management.

Now, in Scenario-3, we will continue building on the concepts from Scenario-2 and delve into extending network isolation requirements across regions while also implementing a firewall bypass channel.

Firewall bypass is often misunderstood as weak security, but from my experience with multiple high-volume, large-scale implementations, I cannot stress enough how crucial it is to understand when to inspect traffic and when to bypass Next-Gen firewall inspection. Let’s take a common example: customers trying to inspect synchronization traffic between Oracle Exadata Cloud Service (ExaCS) systems deployed across regions for disaster recovery. ExaCS uses an encrypted channel for synchronization, making firewall inspection impractical and a potential network bottleneck. In such cases, bypassing firewall inspection improves both security and workload performance.

Reference Architecture:

Architecture of Prod & Non-Prod workloads across regions.

The above architecture meets the following requirements:

  1. Prod and Non-Prod workloads are segregated in a hub and spoke architecture, meaning both Prod and Non-Prod environments have their dedicated hubs.
  2. The segregation spans across regions with dedicated remote peering connectivity (RPC) links.
  3. On-Prem-A Datacenter is connected to Region 1 over IPsec, and On-Prem-B Datacenter is connected to Region 2 over IPsec, both allowing access to Prod and Non-Prod workloads in both regions.
  4. Connectivity from Onprem, RPC, and Internet is considered “untrusted” and routed to the WAN/Untrusted interface of the respective Prod and Non-Prod firewalls in both regions.
  5. Workload VCNs and Database VCNs are considered “Trusted” networks, and they get routed to their respective LAN/Trusted interface of Prod and Non-Prod firewalls in their respective regions.
  6. Traffic between DB VCNs across regions bypasses the firewall, while access to DB instances from non-DB networks, as well as traffic between non-DB networks, undergoes inspection by the firewalls.
  7. There is no connectivity between Prod and Non-Prod workloads, whether inter or intra-region.

OCI Network Firewall is a one-armed setup, but some customers prefer routing network segments to different firewall interfaces for selective inspection. In such scenarios, deploying a marketplace firewall cluster becomes necessary. For this post, a pfSense (open-source) firewall image has been deployed on a compute instance. To build a high-availability (HA) pfSense cluster, you can refer to this article written by James George.

Understanding Network Flow:

Watch the video below to grasp the North-South (internet-bound) network flow of the Production environment.

In both Production and Non-Production environments, the Ingress and Egress internet access flows are alike, operating through their respective hubs. Additionally, RPC and on-prem traffic are routed to the untrusted interface of the Next-gen firewall, which is deployed in the DMZ VCN. Simultaneously, the trust interface acts as a gateway for spoke VCNs and belongs to the Hub VCN. Consequently, the deployment of Marketplace firewalls spans across two VCNs.

The following video illustrates the East-West (On-Prem to Prod / Prod-DR) network flow.

Important Points to note:

  1. The described flow remains consistent for the Prod-DB-Spoke VCN too.
  2. Non-Prod spokes (both Non-Prod-DB-Spoke & Non-Prod-VCN-Spoke) VCNs will be accessed through the Non-Prod firewall.
  3. Similarly, Onprem networks access the Prod and Non-prod workloads in the DR (Secondary region) via their respective RPC links, facilitated by their respective firewalls.
  4. The same applies to the OnPrem-B network connected to Region-2 (DR), as it can also access both Non-Prod and Prod workloads in the DR region. Workloads in Region-1 can be accessed via the RPC links, and all of this traffic is inspected by the respective firewalls.

Inter-region traffic between DB VCNs requires a firewall bypass, while all other traffic must go through the firewalls for inspection, preserving network segregation between Prod and Non-Prod workloads. Let’s watch the video below to gain a better understanding of how these two types of flows appear in this architecture.

Important Points to note:

  1. Firewall inspection is bypassed only when both the source and destination are Prod DB VCNs, or both are Non-Prod DB VCNs, belonging to different regions.
  2. Any other traffic is inspected by the firewalls in both regions, e.g., traffic between two web servers, or between a web server and a DB server, across regions.
  3. No routing happens between Prod & Non-Prod workloads.
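As a mental model, the three rules above can be sketched as a small Python function. This is purely illustrative: the environment and region labels are hypothetical, and in reality the decision is enforced by DRG route tables and import distribution policies, not by code.

```python
# Illustrative sketch of the traffic-handling rules above; not an OCI API.
def traffic_policy(src_env, src_is_db, src_region,
                   dst_env, dst_is_db, dst_region):
    """Return how this architecture treats a flow: 'blocked', 'bypass', or 'inspect'."""
    if src_env != dst_env:                      # Rule 3: no Prod <-> Non-Prod routing
        return "blocked"
    if src_is_db and dst_is_db and src_region != dst_region:
        return "bypass"                         # Rule 1: DB-to-DB across regions skips the firewall
    return "inspect"                            # Rule 2: everything else is inspected

# Cross-region DB sync within the same environment bypasses the firewall...
assert traffic_policy("prod", True, "r1", "prod", True, "r2") == "bypass"
# ...web-to-DB traffic is inspected, and Prod <-> Non-Prod is never routed.
assert traffic_policy("prod", False, "r1", "prod", True, "r2") == "inspect"
assert traffic_policy("prod", True, "r1", "nonprod", True, "r2") == "blocked"
```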

Configuration Overview:

Assumption: I assume you are familiar with provisioning VCNs, subnets, highly available IPsec VPN, deploying the pfSense network firewall, OCI security lists, and network security groups. The configuration screenshots below focus specifically on the DRG and the route tables of the hub and spoke VCNs needed to build the architecture diagram above in the OCI console.

  1. VCNs (Virtual Cloud Networks) list

Above is the list of VCNs; both the Prod and Non-Prod environments each have 4 VCNs:

  • VCN (with “DMZ” in its name): hosts the pfSense Untrust / WAN virtual network interface (VNIC). It serves as the gateway for on-prem- and RPC-facing workloads and routes traffic to and from the internet.
  • VCN (with “Hub” in its name): hosts the pfSense Trust / LAN virtual network interface (VNIC). It acts as the gateway for spoke VCNs.
  • VCN (with “DB” in its name): as depicted in the architecture diagram, hosts the DB instance.
  • VCN (with “Spoke” in its name): as indicated in the architecture diagram, hosts the webserver.

Prod VCNs Routing:

1.1 Prod DMZ VCN — Public Subnet routing table.

The provided route entries ensure that the Untrust VNIC (pfSense firewall), assigned from the public subnet, routes internet-bound traffic to the Internet Gateway (IGW). Meanwhile, other local traffic destined for RPC or IPsec is routed to the Dynamic Routing Gateway (DRG). For demonstration purposes, private ranges in both Class A and B have been added. In a production environment, these can be replaced with the actual RPC networks and on-premises network ranges during implementation.

DMZ VCN Public Subnet

Please take note that the route table is linked to the public subnet of the VCN that contains the pfSense firewall’s Untrust / WAN VNIC.
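Conceptually, this route table performs a longest-prefix match: private destinations go to the DRG and everything else to the internet gateway. The sketch below, using only Python’s standard ipaddress module, mimics that lookup; the CIDRs and target names are hypothetical placeholders, not the values from the screenshot.

```python
# Sketch of how the DMZ public-subnet route table resolves a destination.
# CIDRs and targets are illustrative, not taken from the demo environment.
import ipaddress

ROUTE_TABLE = [
    ("10.0.0.0/8",    "DRG"),  # demo Class A private range -> on-prem / RPC via DRG
    ("172.16.0.0/12", "DRG"),  # demo Class B private range -> on-prem / RPC via DRG
    ("0.0.0.0/0",     "IGW"),  # internet-bound traffic -> Internet Gateway
]

def next_hop(dest_ip: str) -> str:
    """Longest-prefix match, as VCN routing does."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in ROUTE_TABLE
               if ip in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert next_hop("10.1.2.3") == "DRG"   # private destination: on-prem / RPC
assert next_hop("8.8.8.8") == "IGW"    # public destination: internet
```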

1.2 Prod DMZ VCN — Transit routing table.

Transit routing enables the VCN to advertise routes to other DRG attachments. Based on the provided route entries, the Prod-DMZ-VCN advertises a single CIDR that encompasses the Prod DMZ, Hub, Web Spoke, and DB Spoke address ranges.

DRG Attachment transit routing

As you may observe, the Transit route table is connected to the DRG attachment. Scenario-1 has thoroughly covered the steps to associate the route table with a DRG attachment, so those details will not be repeated here.
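If you want to sanity-check that one summary CIDR really covers all four VCN ranges before advertising it, Python’s ipaddress module can verify it in a few lines. The CIDRs below are hypothetical placeholders, not the ones used in this demo.

```python
# Sketch: verify an advertised summary CIDR covers all four Prod VCN ranges.
# All CIDRs here are illustrative; substitute your actual VCN CIDRs.
import ipaddress

summary = ipaddress.ip_network("10.10.0.0/16")   # hypothetical advertised summary
vcn_cidrs = {
    "Prod-DMZ":      "10.10.0.0/24",
    "Prod-Hub":      "10.10.1.0/24",
    "Prod-Spoke":    "10.10.2.0/24",
    "Prod-DB-Spoke": "10.10.3.0/24",
}

for name, cidr in vcn_cidrs.items():
    # subnet_of() raises no surprises: strictly checks CIDR containment.
    assert ipaddress.ip_network(cidr).subnet_of(summary), name
print("summary CIDR covers all Prod VCNs")
```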

1.3 Prod HUB VCN — Private routing table

The specified route entries ensure that the Trust VNIC (pfSense Firewall), assigned from the private subnet, directs Oracle Services network traffic to the Services Gateway (SGW). Conversely, other local traffic destined for the Spoke VCN is routed to the Dynamic Routing Gateway (DRG). For demonstration purposes, a Class A CIDR encompassing all prod workload ranges has been included. In a production environment, these can be replaced with the actual Spoke VCN network CIDRs.

Please take note that the route table is linked to the Private subnet of the VCN containing pfSense firewall’s Trust VNIC.

1.4 Prod HUB VCN — DRG Transit routing table.

The transit routes above direct “any” incoming traffic to its firewall trust interface. It’s worth noting that the DMZ VCN route advertisement will only be learned by the RPC and IPsec connections. On the other hand, all Prod Spoke workloads will learn the Transit route advertised by this HUB VCN. As a result, Spoke VCNs utilize the HUB VCN as the default gateway to access the internet, on-premises, or RPC resources — essentially all networks.

Note that the DRG-Transit-Rt route table is connected to the DRG attachment.

1.5 Spoke VCN route-table

Both spoke VCNs, hosting the Web server and DB server respectively, have only a single default route that points all traffic to the DRG. Refer to the screenshots below.

Non Prod VCNs Routing:

Similarly, in the Non Prod VCN routing, the DMZ VCN advertises only the CIDR that encompasses all Non-Prod networks. The HUB VCN will advertise a default route for the spoke VCNs. The DMZ routes are learned by the RPC & IPsec networks. Given the length of the blog post, I won’t be sharing the screenshots.

Please note that the architecture diagram includes private subnets in both Prod & Non-Prod DMZ-VCNs, as well as a High Availability (HA) subnet in the HUB VCN for both Prod & Non-Prod. However, in this demo, these spare subnets are not utilized, and therefore, no routing configuration is shown for them.

In production scenarios, you have the option to utilize these subnets for either Management purposes or as a firewall High Availability (HA) subnet, depending on your marketplace firewall vendor. In such cases, you will need to configure routing for these subnets accordingly.

2. Redundant IPsec tunnel.

The screenshot below displays the redundant IPsec tunnels that were established with the on-prem datacenter A.

Redundant IPsec tunnels — Primary Region

We have established redundant IPsec tunnels using Libreswan (OpenSource-VPN) as the Customer Premise Equipment (CPE). To learn how to configure this setup, please refer to this blog. Additionally, you can consult this document to gain insights into IPsec/Fastconnect redundancy best practices.

3. Compute Instances.

Here is a list of instances deployed in the primary region:

  • Two instances with “NFW” in their names are the network firewalls (built with pfSense Image) — one for Prod and one for Non-Prod.
  • Two instances with “Web-Server” in their names are server workloads running a dummy HTTP page — one for Prod and one for Non-Prod.
  • Two instances with “DB” in their names are dummy DB instances — one for Prod and one for Non-Prod.

3.1 Compute — Network Firewall — Attached VNICs.

3.1.1 Below are the VNICs attached to Prod Firewall.

Note: WAN interface is assigned from the public subnet of Prod-VCN-DMZ.

Note: LAN interface is assigned from the private subnet of Prod-VCN-Hub.

3.1.2 Below are the VNICs attached to Non-Prod Firewall.

Note: WAN interface is assigned from the public subnet of Non-Prod-VCN-DMZ.

Note: LAN interface is assigned from the private subnet of Non-Prod-VCN-Hub.

In both Prod and Non-Prod environments, the rest of the Web-servers and DB-servers have only one VNIC associated with them. You can find the IP addresses and VCN details for these instances in the compute instance list screenshot.

3.2 Compute — Network Firewall — Routing.

To ensure proper firewall routing, spoke-workload traffic must be routed via the LAN / Trust interface, while internet, RPC, and IPsec traffic is routed via the WAN / Untrust interface.
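The firewall’s own routing decision can be pictured the same way: a static route for the spoke summary range points at the Trust interface, and the default route points at the Untrust interface. A minimal sketch under that assumption (the interface names and CIDRs are illustrative, not an actual pfSense configuration):

```python
# Sketch of the firewall interface-selection logic described above.
# Interface labels and CIDRs are illustrative placeholders.
import ipaddress

STATIC_ROUTES = [
    ("10.10.0.0/16", "LAN"),  # spoke workload summary -> Trust interface
    ("0.0.0.0/0",    "WAN"),  # internet, RPC, IPsec   -> Untrust interface
]

def egress_interface(dest_ip: str) -> str:
    """Pick the egress interface by longest-prefix match."""
    ip = ipaddress.ip_address(dest_ip)
    hits = [(ipaddress.ip_network(cidr), iface)
            for cidr, iface in STATIC_ROUTES
            if ip in ipaddress.ip_network(cidr)]
    return max(hits, key=lambda h: h[0].prefixlen)[1]

assert egress_interface("10.10.2.15") == "LAN"  # spoke workload traffic
assert egress_interface("1.1.1.1") == "WAN"     # internet-bound traffic
```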

Here is a configuration snippet of the Prod firewall.

And here is a configuration snippet of the Non-Prod firewall.

Note: pfSense firewall policy, NAT, and cluster-related configuration are not covered in this blog.

4. Dynamic Routing Gateway (DRG) v2 configuration.

As we are aware, DRG v2 employs destination-based routing, directing traffic ingress from any attachment to the DRG based on the destination CIDR in its associated route table. In the following configuration screenshots, we will examine the DRG route tables of all attachments, including VCN, IPsec, and RPC.
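A toy model of that behavior: each attachment is associated with its own DRG route table, and an ingress packet is matched (longest prefix wins) against the table of the attachment it arrived on. The attachment names and CIDRs below are hypothetical, purely to illustrate the mechanism.

```python
# Toy model of DRG v2 destination-based routing: the route table consulted
# depends on the ingress attachment. Names and CIDRs are illustrative.
import ipaddress

DRG_ROUTE_TABLES = {
    # Spoke attachment: a single static default route towards the Hub (firewall path)
    "Prod-Spoke-Attachment": [("0.0.0.0/0", "Prod-Hub-Attachment")],
    # Hub attachment: routes imported from the spoke attachments
    "Prod-Hub-Attachment": [
        ("10.10.2.0/24", "Prod-Spoke-Attachment"),
        ("10.10.3.0/24", "Prod-DB-Spoke-Attachment"),
    ],
}

def drg_forward(ingress_attachment: str, dest_ip: str) -> str:
    """Look up the next-hop attachment in the ingress attachment's table."""
    ip = ipaddress.ip_address(dest_ip)
    candidates = [(ipaddress.ip_network(cidr), next_hop)
                  for cidr, next_hop in DRG_ROUTE_TABLES[ingress_attachment]
                  if ip in ipaddress.ip_network(cidr)]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

# Traffic entering from the spoke is always sent to the Hub (the firewall)...
assert drg_forward("Prod-Spoke-Attachment", "8.8.8.8") == "Prod-Hub-Attachment"
# ...while traffic returning via the Hub is routed to the matching spoke.
assert drg_forward("Prod-Hub-Attachment", "10.10.3.5") == "Prod-DB-Spoke-Attachment"
```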

4.1 VCN Attachments (Virtual Cloud Networks and their associated DRG route tables)

VCN Attachments

The above screenshot displays all VCN attachments and their corresponding route tables.

Now, let’s examine the DRG Route table of the VCN attachments in both Prod and Non-Prod environments:

  • Hub
  • DMZ
  • Spoke Workload (containing the webservers)
  • DB Spoke (housing the DB server)

Prod DRG Route tables

Below are screenshots of the route tables associated with the following attachments.

  • Prod-VCN-Hub-Attachment
  • prod-vcn-dmz-attached
  • Prod-VCN-Spoke-Attachment
  • Prod-VCN-DB-Spoke-Attachment

4.1.1 Prod Hub Route-table

Prod_Hub_Rt

The Prod Hub route table uses the “Prod-Hub” import route distribution policy.

Note that the Hub route table only imports routes from the Prod VCN spoke attachments (Web-server / DB).

4.1.2 Prod DMZ Route-table

The Prod DMZ route table uses the “Prod-DMZ” import route distribution policy.

The DMZ route table exclusively imports routes from the Prod RPC attachment and IPsec Attachments. Notably, it does not import routes from the Prod-DB-RPC attachment, as this connection serves as a firewall bypass channel.

4.1.3 Prod Spoke Route-table

It has only one static (default) route, routing all traffic from the attachment to the Prod-VCN-Hub attachment.

4.1.4 Prod DB Spoke Route-table

Similar to the previous route table, this one also contains a single static (default) route, directing all traffic from the attachment to the Prod-VCN-Hub attachment. However, it additionally imports routes using the Prod-DB-RPC import route distribution policy.

Note that it imports the routes learned via the RPC connection (Prod-DB-Link).
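An import route distribution policy behaves like a filter on dynamically learned routes: a route is accepted into the table only if its source attachment matches the policy’s criteria. A minimal sketch of that filtering, with hypothetical attachment names and CIDRs:

```python
# Sketch of an import route distribution policy: a route table accepts a
# dynamic route only if its source attachment matches the policy's MATCH
# criteria. Attachment names and CIDRs are illustrative placeholders.

ADVERTISED_ROUTES = [
    # (source attachment, advertised CIDR)
    ("Prod-DB-Link-RPC", "10.20.3.0/24"),    # remote region's Prod DB VCN
    ("Prod-Link-RPC",    "10.20.0.0/16"),    # remote region summary (firewalled path)
    ("IPsec-Attachment", "192.168.0.0/16"),  # on-prem ranges
]

def imported_routes(policy_allowed_sources):
    """Import only routes whose source attachment the policy matches."""
    return [(src, cidr) for src, cidr in ADVERTISED_ROUTES
            if src in policy_allowed_sources]

# Prod-DB-RPC policy: the DB spoke's table imports only the DB bypass link,
# so DB-to-DB traffic takes the bypass while everything else defaults to the Hub.
assert imported_routes({"Prod-DB-Link-RPC"}) == [("Prod-DB-Link-RPC", "10.20.3.0/24")]
```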

Non-Prod DRG Route tables

Below are screenshots of the route tables associated with the following attachments.

  • Non-Prod-VCN-Hub-Attachment
  • Non-Prod-VCN-DMZ-Attachment
  • Non-Prod-VCN-Spoke
  • Non-Prod-DB-Spoke-Attachment

4.1.5 Non Prod Hub Route-table

Just like the Prod Hub route table, the Non-Prod Hub route table imports only routes from the Non-Prod spokes that contain the Web-server and DB workloads, using the import route distribution policy below.

4.1.6 Non Prod DMZ Route-table

The DMZ route table selectively imports routes from the Non-Prod RPC attachment and IPsec attachments, just like the Prod-DMZ route table. However, it does not import routes from the Non-Prod-DB-RPC attachment, as this connection serves as a firewall bypass channel. Please check the import route distribution policy configuration below.

4.1.7 Non Prod Spoke Route-table

There is only one static (default) route, directing all traffic from the attachment to the Non-Prod-VCN-Hub-Attachment.

4.1.8 Non Prod DB Spoke Route-table

Much like the previous route table, this one also features a single static (default) route, routing all traffic from the attachment to the Non-Prod-VCN-Hub attachment. However, in addition, it imports routes using the Non-Prod-DB-RPC import route distribution policy.

The mentioned import route distribution policy enables the DB spoke to send all traffic to the Non-Prod Hub, except for the routes it learns via the RPC Non-Prod-DB-Link.

4.2 IPsec Tunnel Attachments

The above screenshot displays all IPsec tunnel attachments and their corresponding route table.

Below is the route table configuration of IPsec_RT; it imports routes using a route distribution policy called “IPsec_Import”.

IPsec_RT
IPsec_Import

Note: both IPsec tunnel attachments use the same IPsec route table (IPsec_RT). As per the configuration above, the on-prem network can access both Prod and Non-Prod workloads in both regions, so it imports routes from the Non-Prod and Prod DMZ VCN attachments, as well as the Non-Prod and Prod RPC attachments.

4.3 Remote Peering Connection (RPC) Attachments

Below are the RPC connections created with the DR region.

RPC Attachments

Please be aware that all four attachments have different routing tables associated with them.

  • Prod-Link will carry traffic exclusively from the IPsec tunnel and Prod-DMZ VCNs from both regions.
  • Non-Prod-Link will carry traffic from the IPsec tunnel and Non-Prod-DMZ VCNs from both regions.
  • Prod-DB-Link will carry only Prod-DB VCN traffic from both regions.
  • Non-Prod-DB-Link will carry only Non-Prod-DB VCN traffic from both regions.

Let’s review the routing table configuration.

Prod_RPC_Rt

This route table imports routes based on the import route distribution policy “Prod_RPC”.

Prod_RPC_Rt
Prod_RPC

According to the above configuration, Prod_RPC is permitted to learn routes exclusively from the Prod-VCN-DMZ and IPsec tunnel attachments. This is analogous to Non-Prod-Rt, which learns routes from the Non-Prod-VCN-DMZ and IPsec tunnel attachments. Refer to the following Non-Prod-Rt configuration.

Non_Prod_RPC_Rt

Non_Prod_RPC_Rt
Non_Prod_RPC

The route tables for both the Prod and Non-Prod DB remote peering connections are configured to learn routes solely from their respective DB VCNs. The route table configurations are as follows.

Prod_DB_RPC_Rt

With the configuration provided below, the route table for this attachment exclusively includes the Prod-DB-Spoke CIDR. Any traffic entering through this attachment is routed to the DB-Spoke VCN only if its destination falls within that CIDR.

Non_Prod_DB_RPC_Rt

Similar to the Prod_DB route table, this attachment’s route table exclusively includes the Non-Prod-DB-Spoke CIDR. Any traffic entering through this attachment is routed to the Non-Prod-DB-Spoke VCN only if its destination falls within that CIDR.

We’ve addressed the crucial configurations and key aspects needed to establish the topology and network flow outlined in the reference architecture. However, please be aware that these details pertain only to Region-1. The Region-2 configuration mirrors this setup, with the only difference being the CIDRs. Therefore, I won’t be providing additional configuration screenshots or explanations for Region-2.

You can refer to the video below for an overview of the VCN routing, pfSense firewall routing, and DRG v2 routing configuration of both regions.

To keep the video concise, I’ve omitted the IPsec configuration, compute instances, and VNIC configuration details. The purpose of this blog is to demonstrate the configuration of DRG v2 routing tables and VCN routing to meet some complex networking requirements. I hope you find this useful!

Thanks to James George for his valuable peer review.

Karthik Mani
Oracle Developers
