Configuring mTLS for Apigee X Northbound Traffic using Envoy

Payal Jindal
Google Cloud - Community
6 min read · Jan 30, 2023

This blog discusses how mTLS (mutual Transport Layer Security) can be configured for the northbound traffic flow of Apigee X to add an extra layer of security. Northbound traffic refers to the traffic between external clients and the Apigee Runtime Instance(s). When mTLS is enabled on this flow, both the client and the server (here, the Apigee Runtime Instance(s)) are verified, ensuring that all traffic between the two is encrypted and authenticated.

Northbound mTLS can be implemented by using a pass-through TCP Load Balancer and terminating mTLS on VMs (running Envoy, managed through a MIG) that act as the backend for this Load Balancer.

This blog will mainly focus on how to set up northbound mTLS in Apigee using Envoy on VMs. Apigee is a popular API Management platform that allows one to build, deploy, publish, manage, secure, and analyze APIs. Envoy is a popular open-source proxy that can be used to add features like load balancing, rate limiting, and mTLS to the services. Together, Apigee and Envoy provide a powerful solution for securing northbound traffic to the APIs configured in Apigee.

The basic flow of how a request will travel from a client to the Apigee Runtime Instance is outlined in Fig. 1.

Fig. 1. Northbound mTLS in Apigee X

As shown in the setup above, the client uses mTLS to access the APIs hosted in Apigee X.

When a client sends a request, it first reaches the External TCP Load Balancer. This acts as a pass-through Load Balancer and routes traffic from the internet to a regional Managed Instance Group or a regional GKE cluster running Envoy, configured in the Authorized Network (a VPC network that shares Service Networking Peering with the Apigee Organization). Apigee X deployed in a multi-region setup provides active/active availability, i.e. it runs in multiple geographic regions and has failover capabilities in case of a regional outage; in that case, the Load Balancer routes traffic to the active region, ensuring API services remain available to the client. Each mTLS handshake terminates on a dedicated Envoy proxy running on one of the VMs (managed under the MIG) or Kubernetes Pods. This blog covers how to deploy Envoy, configured to carry out northbound mTLS checks, on VMs.

The L4 XLB mTLS Terraform Module has been used for the reference setup. This Terraform module has an Envoy configuration file which it pulls into the MIG Instance Template (used to create Virtual Machines). The Startup Script of the VMs uses this configuration file to create a Docker container running Envoy, which refers to this configuration file to proxy the traffic.
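As a rough sketch of what that startup script does (the actual script ships with the Terraform module; the paths, image tag, and provisioning mechanism shown here are assumptions for illustration), the VM could pull the configuration and certificates into place and launch Envoy in Docker along these lines:

```shell
#!/bin/bash
# Illustrative VM startup-script sketch; the real one is part of the
# L4 XLB mTLS Terraform module. Paths and the image tag are assumptions.

# Directory the Envoy configuration refers to for the mTLS certificates.
mkdir -p /opt/apigee/certs
# (cacert.pem, servercert.pem, serverkey.pem and envoy.yaml are assumed to
# have been copied here already, e.g. from instance metadata or GCS.)

# Run Envoy in Docker, mounting the config and certificates read-only and
# exposing listener port 10000 from the configuration shown below.
docker run -d --restart=always --name envoy \
  -p 10000:10000 \
  -v /opt/apigee/envoy.yaml:/etc/envoy/envoy.yaml:ro \
  -v /opt/apigee/certs:/opt/apigee/certs:ro \
  envoyproxy/envoy:v1.24-latest \
  -c /etc/envoy/envoy.yaml
```

With `--restart=always`, Docker brings the proxy back up if the container crashes or the VM reboots, which matters for backends behind a pass-through Load Balancer.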

Configuring Envoy for mTLS

This Terraform setup deploys Envoy on VMs to enable mTLS checks. Shown below is the configuration file used for Envoy.

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          set_current_client_cert_details:
            subject: true
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: apigee_instance_1
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          require_client_certificate: true
          common_tls_context:
            validation_context:
              trusted_ca:
                filename: /opt/apigee/certs/cacert.pem
            tls_certificates:
            - certificate_chain:
                filename: /opt/apigee/certs/servercert.pem
              private_key:
                filename: /opt/apigee/certs/serverkey.pem

  clusters:
  - name: apigee_instance_1
    connect_timeout: 30s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: apigee_instance_1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "#ENDPOINT_IP#" # IP of the regional Apigee Instance.
                port_value: 443
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
Let us go through the configuration file.

This configuration file sets up Envoy on the VMs to handle incoming HTTP requests on port 10000. When a request comes in, it passes through a series of filters, each responsible for a different aspect of processing the request and sending the response. One of the filters used here is envoy.filters.network.http_connection_manager, which manages the connection between the client making the request and Envoy. stat_prefix is set to "ingress_http" and is used to track the statistics for this connection. set_current_client_cert_details has subject: true, which tells the connection manager to include the client certificate's subject among the client certificate details it records for the request, which can be helpful when debugging and troubleshooting issues.
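One detail worth noting: in Envoy, set_current_client_cert_details controls which fields appear in the x-forwarded-client-cert (XFCC) header, and it takes effect in combination with the forward_client_cert_details setting on the same filter. If you want the upstream (Apigee) to actually receive the client certificate subject, the connection manager could be extended along these lines (a sketch; the Terraform module may handle this differently):

```yaml
# Inside the HttpConnectionManager typed_config (sketch):
forward_client_cert_details: SANITIZE_SET   # reset XFCC to this proxy's own values
set_current_client_cert_details:
  subject: true                             # include the client cert subject in XFCC
```

With SANITIZE_SET, Envoy replaces any XFCC header a client may have sent with one it builds itself, so the upstream can trust the subject it sees.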

This configuration also uses a filter named envoy.filters.http.router, which routes incoming requests to the appropriate cluster; a cluster, defined in the "clusters" section, is a backend service that Envoy can route requests to. The router uses the routing configuration defined under "route_config", which has a virtual host named "local_service" that matches all domains ("*") and routes all incoming requests with the prefix "/" to the cluster named "apigee_instance_1".

The envoy.transport_sockets.tls transport socket secures the communication between the client and Envoy, as well as between Envoy and the backend service, which is why it appears in two places in this configuration file: once in the listener filter chain and once in the cluster. In the listener filter chain, the transport socket requires a client certificate by setting "require_client_certificate: true"; it uses the server certificate chain and private key from "/opt/apigee/certs/servercert.pem" and "/opt/apigee/certs/serverkey.pem" respectively, and validates the client certificate against the trusted CA in "/opt/apigee/certs/cacert.pem". Note that these *.pem files need to be made available on all the VMs at the right paths for Envoy to read them.
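For a local trial run before wiring in real certificates, a throwaway CA and server certificate can be generated with openssl. The file names below match the ones in the Envoy configuration; the CN values and validity period are assumptions for testing only:

```shell
# Generate a throwaway CA and server certificate for local testing.
# File names match the Envoy config; CN values are illustrative.

# 1. Self-signed CA certificate and key.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout cakey.pem -out cacert.pem -subj "/CN=Example Test CA"

# 2. Server key and certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout serverkey.pem -out server.csr -subj "/CN=nb.apigee.example.com"

# 3. Sign the server certificate with the CA.
openssl x509 -req -in server.csr -CA cacert.pem -CAkey cakey.pem \
  -CAcreateserial -days 365 -out servercert.pem
```

The resulting cacert.pem, servercert.pem, and serverkey.pem would then be placed under /opt/apigee/certs/ on the VMs; client certificates for testing can be issued from the same CA with the same req/x509 steps.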

In the example shown above, a single cluster named "apigee_instance_1" is defined with a connect timeout of 30 seconds, which is the amount of time Envoy waits for a connection to the backend service to be established before timing out. The cluster "type" is set to "LOGICAL_DNS", meaning the cluster uses DNS to resolve the address of the backend service, and "dns_lookup_family" is set to "V4_ONLY", so Envoy only looks up IPv4 addresses. The load assignment, which specifies the endpoints to balance traffic across, references the cluster "apigee_instance_1" and points traffic to a single endpoint: the IP address of the Apigee Runtime Instance (represented as "#ENDPOINT_IP#"), on port 443.
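As written, the cluster's UpstreamTlsContext carries no validation context, so Envoy encrypts the hop to the Apigee instance but does not verify the certificate it presents. Since the runtime is reached by IP, a hardened variant could pin the SNI to the Environment Group hostname and validate against a CA bundle (a sketch; the hostname and CA path are assumptions, not part of the reference module):

```yaml
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    sni: nb.apigee.example.com   # hostname from the Apigee Environment Group (assumption)
    common_tls_context:
      validation_context:
        trusted_ca:
          filename: /etc/ssl/certs/ca-certificates.crt   # system CA bundle (assumption)
```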

Overall, this Envoy configuration file sets up a listener on port 10000, uses filters to handle incoming requests, routes requests to the appropriate cluster, and uses Transport Layer Security (TLS) to secure communication.

Testing

After running the Terraform code, you will need to deploy a proxy in Apigee X. Once the proxy is deployed, it can be accessed by making a request to the hostname exposed in the Apigee Environment Group which maps to the Apigee Environment in which that API Proxy is deployed. Since mTLS is enabled in the northbound traffic flow, you will need to include the client certificate and key, and the server CA in the request.

The following curl command can be used to call the API:

curl https://nb.apigee.example.com/testproxy --cert $CLIENT_CERT \
--key $CLIENT_KEY --cacert $SERVER_CA

The paths to the client certificate, client key, and server CA file should be assigned to the environment variables CLIENT_CERT, CLIENT_KEY, and SERVER_CA, respectively.

NOTE: Creating a separate directory in a standard location for certificates is a best practice, as it helps to ensure that the certificates are easy to find and that unauthorized modifications are prevented. Additionally, it is also a best practice to use a non-user-writable directory, so that only authorized personnel can access and modify the certificates.

If the server CA is not passed via --cacert, curl fails with "SSL certificate problem: unable to get local issuer certificate", meaning the client cannot verify the identity of the server. If the client certificate and key are omitted instead, the TLS handshake fails because Envoy is configured to require a client certificate.

Conclusion

Following the steps mentioned above, northbound mTLS for Apigee X can be configured using VMs (configured under MIG) running Envoy. I hope this article was helpful to you!

Interested in learning more? Check out how to configure northbound mTLS for Apigee X by fronting it by a Kubernetes (GKE) cluster.

Happy Learning!!

Acknowledgements

A big thank you to Anmol Krishan Sachdeva for the review and guidance that made this blog post a success.
