Edge Routing with Envoy and Lua

Jean-Marie Joly
SafetyCulture Engineering
Mar 26, 2019

Let’s have a look at how SafetyCulture handles edge routing with Envoy, and specifically how edge traffic can be routed on application-level criteria thanks to the Lua filter.

Requirements

This project grew out of building a modern edge network to serve traffic to our new platform APIs. These APIs leverage the full potential of gRPC and Protobuf to improve the performance and experience of our mobile applications.

At the edge, the main requirements are:

  • HTTP/2 support at the frontend and backend
  • gRPC and REST support (detection handled by the upstream APIs)
  • end-to-end TLS encryption
  • region routing

At SafetyCulture, region routing means routing requests on a per-user basis, so it was clear from the beginning that it would have to be implemented through some kind of middleware API.

Solutions

Unfortunately, very few solutions support the aforementioned requirements. These are some of the main players:

  • Nginx: no introduction needed
  • OpenResty: Nginx on steroids, natively supports middleware development
  • Traefik: powerful and easy-to-configure edge router
  • Envoy: fully fledged, high-performance reverse proxy

None of the above solutions fills all the requirements except Envoy. Nginx, OpenResty and Traefik have supported HTTP/2 and gRPC since versions 1.13.10, 1.15.12 and 1.4.0 respectively. However, to this day Nginx and OpenResty lack proper DNS management of upstream clusters. Workarounds exist, such as the one below, but it returns a configuration error when used with grpc_pass instead of proxy_pass.

location / {
    resolver 1.1.1.1;
    set $backend "my-upstream-cluster";
    proxy_pass http://$backend;
}
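
For comparison, Envoy resolves and periodically re-resolves upstream hosts itself when a cluster uses a DNS-based discovery type. Here is a minimal sketch of such a cluster; the cluster name and hostname are made up for illustration, and http2_protocol_options is only needed when the upstream speaks HTTP/2 or gRPC:

clusters:
- name: my-upstream-cluster
  connect_timeout: 1s
  type: STRICT_DNS            # Envoy periodically re-resolves the hostname below
  lb_policy: ROUND_ROBIN
  http2_protocol_options: {}  # HTTP/2 towards the upstream, required for gRPC
  hosts:
  - socket_address:
      address: my-upstream-cluster.internal.example.com
      port_value: 443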

Traefik ticks almost all the boxes, except that it currently provides no middleware API, which is one of the key requirements for region routing.

In the same way that Nginx and OpenResty provide a middleware API through lua_nginx_module, Envoy includes a Lua filter out of the box. Even though Envoy’s Lua filter exposes a much smaller set of Lua primitives than lua_nginx_module (which is huge; have a look at its documentation), it is powerful enough to implement advanced edge routing logic, as described in the Lua filter documentation.

Edge Routing

As a business requirement at SafetyCulture, end-users’ requests must be routed to specific geographical datacenters, which are not necessarily the closest ones. Region routing is based on user characteristics, in our case the user’s region.

Basically, a JWT can carry relevant information about the user, so the edge router is able to extract that information from the request JWT and make routing decisions based on it. Let’s have a look under the hood.

The JWT is generated and issued to the end user as part of the authentication process. Bear in mind that the JWT cannot be forged since it includes a specific signature that the edge router can verify before region routing (not detailed here for simplicity).
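
For illustration, once decoded by the edge router, the JWT payload might look something like the following Lua table. The claim names are hypothetical and not our actual token format:

-- Hypothetical result of jwt.decode(token): a plain Lua table of claims.
local decoded = {
  sub = "user-1234",     -- the user identifier
  region = "region-2",   -- the user's home region, used for routing
  exp = 1553558400       -- expiry timestamp
}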

So how does Envoy route the requests to the right region? The trick relies on route metadata and cluster header. First, let’s define a route:

- match:
    prefix: "/hello"
  route:
    cluster_header: region
  metadata:
    filter_metadata:
      envoy.lua:
        api: hello

Note that there is no predefined upstream cluster. Instead of using the cluster statement to point directly at the target cluster, we use cluster_header so that the upstream cluster name is read from a specific HTTP header, which is set at a later stage, namely during the Lua filter phase.

With the previous definition, the Lua filter can read the generic name of the upstream service (the key named api) from the route metadata, extract the user’s region from the JWT, and dynamically route the request by setting the region header to the name of the regional upstream cluster. Routing to the selected cluster happens within Envoy’s router filter, which must therefore be invoked after the Lua filter.

- name: envoy.lua
  config:
    inline_code: |
      function envoy_on_request(request_handle)
        local edge = require "edge"
        local config = {
          region = "region-1",
          prefix = "edge-router"
        }
        edge.route(request_handle, config)
      end
- name: envoy.router
  config: {}

In our case, edge routers are aware of their respective regions. That is, if the user’s region matches the router’s region, the request is forwarded to the corresponding upstream cluster. When the user’s region differs from the router’s, the request is passed sideways to the edge router responsible for the user’s region, where the same workflow happens again.

local jwt = require "jwt"

local M = {}

function M.route(request_handle, config)
  local metadata = request_handle:metadata()
  local token = request_handle:headers():get("authorization")
  local decoded = jwt.decode(token)
  if decoded.region == config.region then
    -- Local user: route to the regional upstream cluster, e.g. "hello-region-1".
    request_handle:headers():replace(
      "region",
      metadata:get("api") .. "-" .. decoded.region
    )
  else
    -- Remote user: pass the request on to the edge router of the user's region,
    -- e.g. "edge-router-region-2".
    request_handle:headers():replace(
      "region",
      config.prefix .. "-" .. decoded.region
    )
  end
end

return M

This algorithm requires strict cluster naming, since the value written into the cluster header must match the name of an upstream cluster. To reflect the previous configuration and algorithm, you may write the following cluster definitions:

clusters:
- name: hello-region-1
  ...
- name: edge-router-region-2
  ...
- name: edge-router-region-3
  ...

Once metadata and the cluster header are set up, you can achieve virtually any kind of routing; it is just a matter of changing the Lua algorithm. For instance, you can easily adapt the previous configuration and algorithm to ease GDPR compliance, as in the sketch below: GDPR users are routed to your datacenters in Europe, whereas other users are simply routed to the closest ones to benefit from low network latency.
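
Here is a minimal sketch of such a variant, assuming a hypothetical gdpr claim in the JWT, an eu-1 region and a config.closest_region setting, none of which are part of the configuration shown earlier:

local jwt = require "jwt"

local M = {}

-- Route GDPR users to a European region, everyone else to the closest region.
function M.route(request_handle, config)
  local metadata = request_handle:metadata()
  local token = request_handle:headers():get("authorization")
  local decoded = jwt.decode(token)

  -- Pick the target region: Europe for GDPR users, otherwise the closest one.
  local target_region
  if decoded.gdpr then
    target_region = "eu-1"
  else
    target_region = config.closest_region
  end

  if target_region == config.region then
    -- Served locally by this router's regional upstream cluster.
    request_handle:headers():replace("region", metadata:get("api") .. "-" .. target_region)
  else
    -- Handed over to the edge router of the target region.
    request_handle:headers():replace("region", config.prefix .. "-" .. target_region)
  end
end

return M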

Architecture

The following diagram represents the outer/inner rim architecture, on top of which region routing is implemented. A request is routed around the edge network before entering a particular application datacenter in the application network.

Outer/inner rim architecture

The outer/inner rim architecture introduces a complete decoupling of the edge network (POPs) and the application network (datacenters), so both networks can follow independent expansion paths. Besides security enforcement, this architecture improves overall network performance by terminating connections (TCP and TLS) as close as possible to the end user, which is affordable since the cost of a POP is significantly lower than the cost of a datacenter. For strategic POP and datacenter placement, have a look at the comprehensive article by Dropbox on the topic.

Gotchas

Envoy’s Lua filter is incredibly powerful despite offering only a few primitives. However, it requires extra care before receiving any production traffic. Let’s go over a few gotchas.

Exported Symbols

To fully support Lua, Envoy must be compiled with exported symbols. This is highlighted at the beginning of the filter’s documentation. For example, during the compilation process, you may execute a command similar to the following:

bazel --bazelrc=/dev/null build -c opt //source/exe:envoy-static.stripped --define exported_symbols=enabled

Without exported symbols, Envoy may only partially support Lua scripts, resulting in unexpected behaviors.

Lua Memory Leak

As stated in the documentation, Lua scripts are executed via a C++ coroutine. That is, the execution context is suspended and resumed as requests go through the filter. Therefore any variable declaration that is not local will result in a memory leak, even when the variable already exists as part of the Lua environment (e.g., LUA_PATH and LUA_CPATH). This can be very tricky to detect and identify.
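
A contrived sketch of the difference:

function envoy_on_request(request_handle)
  -- BAD: no "local" keyword, so "token" is a global declaration,
  -- which is exactly the kind of leak described above.
  token = request_handle:headers():get("authorization")

  -- GOOD: "local" keeps the variable scoped to the coroutine.
  local safe_token = request_handle:headers():get("authorization")
end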

The fun part is that if you do have a leak in your code, Envoy is very likely to fail after around 920K requests (check this GitHub issue), which roughly corresponds to 1 GB of memory usage for the coroutine. So the leak might go undetected unless you perform some serious load tests during the development phase.

I/O Blocking

Since the filter sits on the request path and Lua scripts are executed synchronously, needless to say, you should never perform blocking I/O operations in them; you would severely degrade Envoy’s performance. If an external call is really necessary, go through the API Envoy exposes to the script, such as httpCall on the stream handle, which suspends the coroutine instead of blocking the worker thread.
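
Here is a minimal sketch of an outbound call done the non-blocking way, assuming an auth-service cluster is defined in the Envoy configuration:

function envoy_on_request(request_handle)
  -- httpCall() targets a configured cluster; the coroutine is suspended until
  -- the response arrives, without blocking the worker thread.
  local headers, body = request_handle:httpCall(
    "auth-service",            -- upstream cluster name (assumed)
    {
      [":method"] = "GET",
      [":path"] = "/v1/verify",
      [":authority"] = "auth-service"
    },
    nil,                       -- no request body
    500                        -- timeout in milliseconds
  )
  request_handle:logInfo("verification status: " .. headers[":status"])
end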

File Descriptors

One last gotcha, which is not specific to the Lua filter, is that the file descriptor limit should be raised significantly for the Envoy process. Because Envoy creates a file descriptor for each network connection, it may run out of file descriptors when the server is under heavy load. For example, under systemd, you may add the following line to your service definition:

LimitNOFILE=262144

or the following on Docker:

--ulimit nofile=262144:262144

Ideally, the number of open connections should be closely monitored and alert when it approaches the file descriptor limit. Any horizontal auto-scaling policy should also take that metric into account, in addition to classic resources such as CPU or memory usage.
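
One way to keep an eye on this is Envoy’s admin interface, which exposes active connection counters; the admin port below is an assumption and depends on your admin configuration:

curl -s http://localhost:9901/stats | grep downstream_cx_active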

For general performance tuning on webservers, follow this very detailed blog post by Dropbox.

Future directions

We’ve seen throughout the article that Envoy’s Lua filter is particularly powerful.

Nevertheless, in terms of architecture, the filter configuration presented earlier somewhat blurs the line between the control plane and the data plane. Since Envoy aims to be a data-plane proxy, the Lua filter configuration should really be delivered through the xDS APIs in that context.

Overall, the Lua filter provides a quick and relatively easy way to control Envoy. Just remember that for high-performance use cases and better reusability, one should take advantage of the native C++ filter API.
