How we built the Tinder API Gateway

Tinder Tech Blog · Oct 24, 2022

Introduction

Tinder API Gateway (TAG) is one of the critical frameworks at Tinder that exposes our public APIs and enforces strict authorization and security rules. It is engineered for Tinder's custom needs: it fits into our current cloud infrastructure, scales as required, and can be maintained without any external support. It also implements Route As Configuration (RAC), which helps developers ship their modules to production faster. Several features make TAG a unique solution, but before we dive into them, let's look at why Tinder needs a custom gateway.

We have more than 500 microservices at Tinder, which talk to each other for different data needs using a service mesh under the hood. All external-facing APIs are hosted on TAG. We needed a gateway solution that could centralize all these services and give us more control over everything from maintenance to deployment. A custom gateway also helps ensure that services go through a security review before being exposed to the outside world.

Services like the Recommendations APIs also receive frequent feature updates, both on the backend and the client side. This is just one example; we have several other critical services, like the Match APIs and Revenue APIs, that need a streamlined process to ship to production faster. So we needed a custom gateway solution that could help us configure external routes with minimal effort and expedite the release process.

Looking at the API Gateway from a security perspective, Tinder is used in 190 countries and receives all kinds of traffic from all over the world: traffic from real users as well as traffic from bad actors. It is critical to scan this traffic and block attempts to exploit vulnerabilities in any of these services. Attackers look for cracks to get into corporate systems and steal valuable information, and one entry point for them is the gateway. We needed a custom gateway solution that could help us identify such traffic and avoid possible vulnerabilities.

Challenges Before TAG

Before TAG existed, each application team used a different third-party API Gateway solution. Since each of those gateways was built on a different tech stack, managing them became cumbersome. More to the point, there were compatibility issues in sharing reusable components across different gateways, which often delayed shipping code to production. Each gateway also carried its own maintenance overhead.

We also saw inconsistent use of Session Management across APIs because the API Gateways were not centralized, as shown in Figure 1 below.

Figure 1 — Session Management across APIs at Tinder before TAG

We were trying to address some major concerns by looking for:

  • A solution to bring all external-facing services under one umbrella
  • An artifact that any application team could use to spin up their own API Gateway and scale their application independently
  • A framework that could let applications run as Kubernetes microservices alongside other Kubernetes services
  • A design that could support configuration-driven API Gateway development for increased development velocity
  • A generic component that could be extended for Tinder's custom needs, such as:
    • Request/Response transformations
    • Custom middleware logic for features like Bot Detection, Schema Registry, and more

We also wanted to own framework-level development and support so that we could build the gateway the way we wanted. All of these requirements were the motivation behind designing TAG.

Existing API Gateway Solutions

There are many open-source and commercial gateway solutions available in the public domain. Some of them are heavyweight and focused on B2B integrations, and some are complex to deploy and maintain. Existing solutions, including Amazon AWS Gateway, Apigee, Tyk.io, Kong, Express API Gateway, and KrakenD, were not optimal for the following reasons:

  • Some of these solutions do not integrate well with our existing Envoy mesh
  • A few of them are configuration-heavy and rely on built-in plugins to support features like spike arrest, service callouts, etc.; their adoption has a steep learning curve and doesn't fit well with our current application/network stack
  • Some solutions have limited support for the languages we work with heavily
  • Finally, we needed the flexibility to build our own plugins and filters quickly when needed

Note: All these observations were made based on the documentation available on the official sites of these products. Links to that documentation are included in the reference section of this blog.

Let’s Explore TAG

TAG is a JVM-based framework built on top of Spring Cloud Gateway. Application teams can use TAG to create their own instance of an API Gateway just by writing configurations. It centralizes all external-facing APIs and enforces strict authorization and security rules at Tinder. TAG extends Spring Cloud Gateway components such as gateway filters and global filters to provide generic, pre-built filters.

These filters can be used by application teams for various needs:

  • Weighted routing
  • Request/Response transformations
  • HTTP to gRPC conversion, and more

From the developers’ point of view, TAG was created keeping their experience at the center of the design, and for that reason, TAG supports configuration-driven development.

By design, TAG improves developer velocity, makes it easy to set up routes and services using environment-specific YAML or JSON configurations without writing any code, and lets developers reuse components by sharing filters across application routes. It leverages all major components of Spring Cloud Gateway to build custom framework-level support for developers at Tinder to use.
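
To make the configuration-driven model concrete, here is a minimal sketch of the kind of route-plus-filter wiring that TAG expresses as YAML/JSON, written with the plain Java DSL of Spring Cloud Gateway (which TAG builds on). The route ID, path, and backend host below are hypothetical; TAG's actual configuration schema is covered in the next blog.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ExampleRoutes {

    @Bean
    public RouteLocator exampleRouteLocator(RouteLocatorBuilder builder) {
        return builder.routes()
                // Match a public path, rewrite it, and forward to an internal service.
                .route("recs-route", r -> r
                        .path("/v1/recs/**")
                        .filters(f -> f.rewritePath("/v1/recs/(?<rest>.*)", "/${rest}")
                                       .addRequestHeader("X-Gateway", "TAG"))
                        .uri("http://recs-service.internal:8080"))
                .build();
    }
}
```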

Here are some additional reasons why we developed TAG:

  • Complete control to develop custom components, and to share and use them as configurations
  • Request and Response scanning
  • A Schema Registry to auto-generate API documentation
  • Detection of abusive traffic through capabilities like Bot Detection and Real-Time Traffic Detection
  • Dynamic Routing: we’re building a pipeline on TAG that will help in dynamically updating routes and their related configurations without the need of deploying the application cluster
  • TAG will enable future initiatives like API Standardization and Auditing Process
  • It enforces consistent and uniform experience of Session Management across different applications as it’s developed once and shared across all API Gateways (created using TAG)

A Deeper Look Inside TAG

Figure 2 — High-Level Design of TAG

The high-level design, as shown in Figure 2, consists of the following components:

  • Routes — Developers can expose their endpoints using Route As Configuration (RAC); we'll see later how routes are set up in TAG
  • Service Discovery — TAG uses Service Mesh to discover backend services for each route
  • Pre-Built Filters — We’ve added built-in filters in TAG for application teams at Tinder to use;
    example: setPath, setMethod, etc.
  • Custom Filters — We've added support for custom filters so that application teams can write their own custom logic if needed and apply it to a route using configurations. Custom filters are applied at the route level (i.e. per route); for example, custom logic to validate the request before calling the backend service (a minimal sketch of such a filter follows this list).
  • Global Filters — Global filters are just like custom filters, but they’re global in nature, i.e. they are applied to all the routes automatically if configured at the service level.
    Example: Auth filter or metrics filter applied to all routes specific to an application.
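
As an illustration of the custom-filter extension point, below is a minimal sketch of a route-level filter written as a Spring Cloud Gateway `AbstractGatewayFilterFactory`. The factory name, the `strict` flag, and the header check are hypothetical; they only show the shape such a filter could take, not TAG's actual validation logic.

```java
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;

@Component
public class ValidateRequestGatewayFilterFactory
        extends AbstractGatewayFilterFactory<ValidateRequestGatewayFilterFactory.Config> {

    public static class Config {
        // Hypothetical toggle that a route's configuration could set.
        private boolean strict = true;
        public boolean isStrict() { return strict; }
        public void setStrict(boolean strict) { this.strict = strict; }
    }

    public ValidateRequestGatewayFilterFactory() {
        super(Config.class);
    }

    @Override
    public GatewayFilter apply(Config config) {
        // Runs only on routes that reference this filter in their configuration.
        return (exchange, chain) -> {
            boolean hasAuth = exchange.getRequest().getHeaders().containsKey("Authorization");
            if (config.isStrict() && !hasAuth) {
                // Reject the request before it ever reaches the backend service.
                exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
                return exchange.getResponse().setComplete();
            }
            return chain.filter(exchange);
        };
    }
}
```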

Below is the step-by-step flow of how TAG builds all the routes at application startup:

Figure 3 — TAG processing flow at application startup

Step 1: TAG triggers the Gateway Watcher that calls the Gateway Config Parser to load the YAML file

Step 2: The Gateway Config Parser validates and parses the environment-specific YAML configuration file

Step 3: The Gateway Manager looks up pre-filters, custom filters, and global filters and creates a map of the route ID and those filters

Step 4: The Gateway Route Locator loads predicate and its related filters from the map for each route into Spring Cloud Gateway

Step 5: The Gateway Manager then builds all the routes and prepares the gateway to receive traffic

Spring Cloud Gateway enables TAG to pre-configure all routes and filters and seamlessly execute them at runtime. Because of this design, TAG does NOT add any configuration-processing latency at runtime, which helps it scale up and handle high traffic with ease.
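
Here is a rough sketch of what the startup flow above amounts to, assuming the parsed YAML yields a list of simple route entries. The `RouteConfig` record and class name below are hypothetical stand-ins for TAG's Gateway Config Parser and Gateway Manager, not its actual internals; the point is that every entry is turned into a predicate plus backend URI once, before any traffic arrives.

```java
import java.util.List;

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;

public class GatewayRouteAssembler {

    // Hypothetical shape of one parsed entry from the environment-specific YAML.
    public record RouteConfig(String id, String path, String backendUri) {}

    public RouteLocator assemble(RouteLocatorBuilder builder, List<RouteConfig> configs) {
        RouteLocatorBuilder.Builder routes = builder.routes();
        for (RouteConfig cfg : configs) {
            // Each entry becomes a predicate (path match) pointing at its backend,
            // built once at startup so no config parsing happens per request.
            routes.route(cfg.id(), r -> r.path(cfg.path()).uri(cfg.backendUri()));
        }
        return routes.build();
    }
}
```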

Real World Usage of TAG at Tinder

Figure 4 — Request processing by TAG

Executing a request in the above TAG configuration (as shown in figure 4) results in the following steps:

Step 1: Reverse Geo IP Lookup (RGIL)

RGIL is implemented as a global filter in TAG. The client request's IP is mapped to a three-letter alpha country code using the RGIL filter. We use RGIL for rate limiting, request banning, and other purposes.
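
Below is a minimal sketch of how an RGIL-style global filter could work. The `GeoIpResolver` interface and the attribute name are hypothetical placeholders, not Tinder's actual lookup; the idea is that the resolved country code is attached to the exchange so downstream filters (rate limiting, banning) can read it.

```java
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class GeoIpGlobalFilter implements GlobalFilter {

    // Hypothetical lookup: client IP -> three-letter country code.
    public interface GeoIpResolver {
        String countryCode(String ip);
    }

    private final GeoIpResolver resolver;

    public GeoIpGlobalFilter(GeoIpResolver resolver) {
        this.resolver = resolver;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String ip = exchange.getRequest().getRemoteAddress() != null
                ? exchange.getRequest().getRemoteAddress().getAddress().getHostAddress()
                : "unknown";
        // Downstream filters can read this attribute for rate limiting or banning.
        exchange.getAttributes().put("countryCode", resolver.countryCode(ip));
        return chain.filter(exchange);
    }
}
```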

Step 2: Request/Response Scanning

An async event is published to capture the request semantics. The Request/Response Scanning global filter captures just the schema of the request, not the data attributes. Amazon MSK is used to securely stream this data, which downstream applications can consume for a variety of use cases like automatic schema generation, bot detection, etc.
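
A rough sketch of this fire-and-forget pattern follows, with a hypothetical `SchemaEventPublisher` standing in for the producer that writes to Amazon MSK: only the request's shape (method, path, header names) is captured, and the publish happens off the request's hot path.

```java
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@Component
public class RequestScanningGlobalFilter implements GlobalFilter {

    // Hypothetical publisher; in practice this would wrap a Kafka producer
    // writing to an Amazon MSK topic.
    public interface SchemaEventPublisher {
        void publish(String method, String path, Iterable<String> headerNames);
    }

    private final SchemaEventPublisher publisher;

    public RequestScanningGlobalFilter(SchemaEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Capture only the schema of the request: method, path, and header names.
        Mono.fromRunnable(() -> publisher.publish(
                        exchange.getRequest().getMethod().name(),
                        exchange.getRequest().getPath().value(),
                        exchange.getRequest().getHeaders().keySet()))
                .subscribeOn(Schedulers.boundedElastic())
                .subscribe();
        return chain.filter(exchange);
    }
}
```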

Step 3: Session Management as a Filter

A centralized global filter in TAG validates, updates, and controls Session Management.

Step 4: Predicate Matching

The path of an incoming request is matched against one of the deployed routes using predicate matching.

Step 5: Service Discovery

The service discovery module in TAG uses Envoy to look up egress mapping for the matched endpoint.

Step 6: Pre-Filters

Once the route is identified, the request goes through the chain of pre-filters configured for that route. Pre-filters are filters that execute before the request is forwarded to the backend service; once they have all run, the request is forwarded. Weighted routing per route and HTTP to gRPC conversion are some of the pre-built filters available in TAG. One can also write custom pre-filters, like trimming request headers (see the sketch below).
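
TAG packages weighted routing as a pre-built capability; in plain Spring Cloud Gateway the closest equivalent is the Weight route predicate, sketched below with hypothetical group names, paths, and backend hosts (for example, a canary rollout). This is an illustration of the idea, not TAG's implementation.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WeightedRoutes {

    @Bean
    public RouteLocator weightedRouteLocator(RouteLocatorBuilder builder) {
        return builder.routes()
                // Roughly 90% of matching traffic goes to the stable backend...
                .route("recs-stable", r -> r.path("/v1/recs/**")
                        .and().weight("recs-group", 90)
                        .uri("http://recs-stable.internal:8080"))
                // ...and roughly 10% goes to the canary backend.
                .route("recs-canary", r -> r.path("/v1/recs/**")
                        .and().weight("recs-group", 10)
                        .uri("http://recs-canary.internal:8080"))
                .build();
    }
}
```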

Step 7: Post-Filters

After the response is received from the backend service, it goes through the chain of post-filters configured for that route. Post-filters are filters that execute after the response is received from the backend service. Error logging is one example of a post-filter (see the sketch below).
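
A minimal sketch of the post-filter pattern: a global filter whose work runs after the backend has responded, here logging server errors. The logger message and the 5xx check are illustrative, not TAG's actual error-logging filter.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class ErrorLoggingGlobalFilter implements GlobalFilter {

    private static final Logger log = LoggerFactory.getLogger(ErrorLoggingGlobalFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Work chained after chain.filter(...) runs once the backend has responded.
        return chain.filter(exchange)
                .then(Mono.fromRunnable(() -> {
                    var status = exchange.getResponse().getStatusCode();
                    if (status != null && status.is5xxServerError()) {
                        log.warn("Backend error {} for {}", status,
                                exchange.getRequest().getPath().value());
                    }
                }));
    }
}
```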

Step 8: Return Response

After completing the list of post-filters, the final response is returned to the client.

Note:

  • Pre-filters/post-filters can contain custom logic or any type of request/response transformation
  • One can configure the sequence in which pre-filters/post-filters run (a short sketch of filter ordering follows)
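
As a short sketch of ordering, a global filter in Spring Cloud Gateway can implement `Ordered` so the gateway places it relative to the other filters in the chain; the order value below is hypothetical.

```java
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class EarlyGlobalFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Pre-filter work would go here, before handing off to the next filter.
        return chain.filter(exchange);
    }

    @Override
    public int getOrder() {
        // A low order value places this filter near the front of the chain.
        return -100;
    }
}
```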

API Gateway at Tinder Today

Application teams at Tinder use TAG as the standard framework for building their own instances of an API Gateway just by writing their application-specific configurations, and those instances can scale individually as needed. TAG is also used by other Match Group brands like Hinge, OkCupid, PlentyOfFish, Ship, etc. TAG thus serves both B2C and B2B traffic for Tinder. Below is a general depiction of how TAG is used at Tinder today.

Figure 5 — API Gateways powered by TAG at Tinder

In this blog, we looked at the state of things before TAG existed, why we created TAG, and how TAG is helping Tinder serve traffic at scale. We hope you enjoyed reading about it! In the next blog, we'll take a deeper look at how configurations are written to set up a route in TAG.

References:
