Apigee-X Network : Part 1 — Fundamentals

Hassene BELGACEM
Google Cloud - Community
5 min read · Oct 27, 2023

When discussing leading API management products, Apigee invariably makes its way to the forefront of the conversation. However, as cloud engineers, our appraisal of a product doesn’t end at its features or its standing in the market. The real challenge starts when it’s time to integrate such a product into a client’s landing zone. In this article, we’ll dive into the fundamentals of Apigee X’s network design, providing insights for ensuring a seamless integration. But let’s start with the basics: Apigee-X’s internal network design.

Apigee-X internal network design

To fully grasp the intricacies of integrating Apigee X, it’s crucial to first understand its internal network design. When you set up an Apigee X organization, three primary projects are at play behind the scenes (based on this video published by Miguel Mendoza):

1. Customer Project: This is the foundational layer where your Apigee organization resides. Here, users must provide an Apigee-X VPC, which, for the sake of clarity, we will refer to as ‘apigee-vpc’ throughout this article.

2. Service Networking Host Project: This project hosts a Google-managed shared VPC named ‘servicenetworking’. This VPC establishes a peered connection to the customer’s ‘apigee-vpc’, ensuring fluid data exchange and robust integration. No Apigee-X resources are installed here.

3. Apigee Tenant Project: Acting as the heart of operations, this is where the Apigee Runtime resides. Furthermore, it’s equipped with a network load balancer responsible for exposing all proxies deployed within Apigee X. All of these components are anchored to a subnetwork that’s shared from the servicenetworking VPC.
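As a rough sketch of this provisioning flow, the following gcloud commands create the customer-side ‘apigee-vpc’ and provision an Apigee organization against it; the service-networking and tenant projects are then created by Google behind the scenes. Project and network names such as `my-customer-project` are placeholders, and flags may vary across gcloud releases:

```shell
# Create the Apigee-X peering VPC in the customer project
# (project and network names are illustrative).
gcloud compute networks create apigee-vpc \
    --project=my-customer-project \
    --subnet-mode=custom

# Provision the Apigee organization against that network; this triggers
# creation of the Google-managed service-networking and tenant projects.
gcloud alpha apigee organizations provision \
    --authorized-network=apigee-vpc \
    --project=my-customer-project
```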

Apigee-X internal network design

Apigee-X integration network design

When architecting the flow of northbound traffic — essentially, client-to-Apigee X connections — it’s imperative to consider where these clients originate. Whether they’re located within the GCP network, on-premises, in partner networks, or on the internet, choosing the optimal connectivity path is crucial. Many options exist to address these scenarios, but I will describe only those I have used in production environments:

Option 1 — Direct Access Over VPC Peering (PSA)

Private Service Access (PSA) is a private connectivity option, based on VPC peering, that supports certain Google services and third-party managed VPCs. When used, a peering connection is created between the customer VPC and the ‘servicenetworking’ VPC (see the diagram below). But, given the non-transitive nature of peering, northbound and southbound traffic is limited to the peered networks. This approach is best suited for clients housed within the customer VPC. It can also serve on-premises access if there’s a direct link through Interconnect or VPN to the customer VPC. However, it’s important to note that this option doesn’t provide customer-managed SSL offloading at the L4 layer. It’s also important to note that with peering, two CIDR ranges — a /22 and a /28 — are allocated for each Apigee X instance.
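The PSA setup can be sketched with gcloud: reserve the /22 and /28 ranges in the customer VPC, then create the peering with the `servicenetworking.googleapis.com` service. Range names and the project ID below are illustrative:

```shell
# Reserve the /22 range Apigee X requires for its runtime peering.
gcloud compute addresses create apigee-range \
    --global --prefix-length=22 \
    --purpose=VPC_PEERING \
    --network=apigee-vpc \
    --project=my-customer-project

# Reserve the additional /28 support range.
gcloud compute addresses create apigee-support-range \
    --global --prefix-length=28 \
    --purpose=VPC_PEERING \
    --network=apigee-vpc \
    --project=my-customer-project

# Establish the PSA peering between apigee-vpc and servicenetworking.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --network=apigee-vpc \
    --ranges=apigee-range,apigee-support-range \
    --project=my-customer-project
```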

Apigee-X Networking: Direct Access using PSA

Option 2 — Direct Access Over VPC Peering (PSA) and Network Bridge

Utilizing a Managed Instance Group (MIG) as the backbone, an Internal Load Balancer (ILB) channels traffic toward the Apigee instance, sidestepping transitivity issues for northbound traffic. Southbound traffic, however, is still limited to the peered networks. I will not go into the details of this option, as it can be replaced with a PSC endpoint, and clients bear both the cost and the management of the MIG instances.

Option 3 — Direct Access via Customer Shared VPC

In this scenario, rather than creating a new VPC for Apigee X, we will utilize the customer’s Shared-VPC. This will be set up to peer with `service-networking`, as depicted in the diagram below.

For both northbound and southbound traffic, this approach has no transitivity issues, as there is only one peering connection. It can serve clients housed within the customer Shared VPC, as well as on-premises access if there’s a direct link through Interconnect or VPN to the peered VPC.

As with Option 1, this option lacks support for customer-managed SSL offloading at the L4 layer, and two CIDR ranges — a /22 and a /28 — are allocated for each Apigee X instance. SSL offloading at Layer 7 can be achieved by introducing an additional load balancer within the ‘customer-vpc’. This load balancer directs traffic to the Apigee X instances through a PSC NEG, which we will detail further in the subsequent sections.
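A minimal sketch of that L7 path, assuming a regional internal HTTPS load balancer and an Apigee service attachment URI; all names, the region, and the attachment path below are illustrative placeholders:

```shell
# PSC NEG in the customer Shared VPC, targeting the service attachment
# published by the Apigee instance (attachment path is a placeholder).
gcloud compute network-endpoint-groups create apigee-psc-neg \
    --region=europe-west1 \
    --network-endpoint-type=private-service-connect \
    --psc-target-service=projects/TENANT_PROJECT/regions/europe-west1/serviceAttachments/apigee-sa \
    --network=customer-vpc \
    --subnet=customer-subnet

# Regional internal backend service that will terminate TLS at L7.
gcloud compute backend-services create apigee-backend \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTPS \
    --region=europe-west1

# Attach the PSC NEG as the backend; the URL map, target HTTPS proxy
# (with your certificate), and forwarding rule complete the chain.
gcloud compute backend-services add-backend apigee-backend \
    --network-endpoint-group=apigee-psc-neg \
    --network-endpoint-group-region=europe-west1 \
    --region=europe-west1
```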

Apigee-X Networking: Direct Access via Customer Shared VPC

Option 4 — Access Through PSC Endpoints

This method is centered on the Private Service Connect (PSC) feature, and the Apigee-X network remains entirely separate from the client’s network (at the time of writing, this feature is in beta). Northbound traffic flows through a PSC endpoint established within the customer’s network, and an additional load balancer can be placed in front of it if L7 SSL offloading is needed. For southbound traffic, every backend must expose a service attachment, which is then referenced when setting up an endpoint for it within the Apigee-VPC.
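The northbound side of this option can be sketched as a PSC endpoint, i.e. an internal IP plus a forwarding rule targeting the Apigee instance’s service attachment; names, region, and the attachment path below are placeholders. The southbound side is configured from the Apigee organization through its endpoint attachments, which reference the backends’ own service attachments:

```shell
# Reserve an internal IP in the customer network for the PSC endpoint.
gcloud compute addresses create apigee-psc-ip \
    --region=europe-west1 \
    --subnet=customer-subnet \
    --project=my-customer-project

# Northbound: PSC endpoint (forwarding rule) in the customer VPC,
# targeting the Apigee instance's service attachment (placeholder path).
gcloud compute forwarding-rules create apigee-psc-endpoint \
    --region=europe-west1 \
    --network=customer-vpc \
    --address=apigee-psc-ip \
    --target-service-attachment=projects/TENANT_PROJECT/regions/europe-west1/serviceAttachments/apigee-sa \
    --project=my-customer-project
```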

Apigee-X Networking: Access Through PSC Endpoint

Conclusion

In wrapping up, it’s evident that navigating the network configuration of Apigee-X requires a clear understanding of its internal network design, the available integration options, and their implications. By weighing the distinct advantages and limitations of each option, I hope you can make informed decisions that ensure security and optimize performance.


Cloud Architect | Trainer. Here, I share my thoughts and experience on topics like cloud computing and cybersecurity. https://www.linkedin.com/in/hassene-belgacem