How Orbee Routes Traffic to Services

Orbee is transforming our technology stack to match growing business needs. We have reached a point in our growth where the flexibility and ease of development afforded by a microservices architecture are essential to our continued progress. As we enter this new paradigm, we need to address the challenges our infrastructure faces due to the growing complexity of our business model.

To accommodate these growing demands, we are implementing a solution of reasonable scope to support our communication infrastructure. After much iteration and deliberation, the solution we chose satisfies both current and future concerns as we mature into a complete microservices architecture. This post is an overview of how we have achieved our current level of service communication, both internally between services and externally to UIs and clients.

Communication Across the Spectrum

The intersection of current Orbee products and Orbee’s evolving infrastructure requires several distinct communication use cases. These fall into three principal avenues of communication for our services: public, authenticated, and private. Because our services run within AWS, many need to expose some portion of their functionality publicly in order to function in an automated fashion. Requests to the public aspects of Orbee services are routed through an AWS Application Load Balancer, which resolves each request to a target based on its path. We use a pattern that maps the first segment of the path to the service’s name, ensuring there are no overlapping routing rules.
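The first-segment convention can be sketched as follows. This is an illustrative simulation, not our actual load balancer configuration; the function name and example paths are assumptions made for this post.

```python
from typing import Optional

def resolve_service(path: str) -> Optional[str]:
    """Map a request path to a service name using its first segment,
    mirroring the ALB listener-rule pattern described above."""
    segments = [s for s in path.split("/") if s]
    if not segments:
        return None  # no path segment, no service to route to
    return segments[0]
```

Because each service owns exactly one first segment, two routing rules can never match the same request, which is what keeps the rules non-overlapping.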

We also provide routes for requests that need to be authenticated using credentials, tokens, or API keys. This is achieved by a separate Application Load Balancer that forwards requests to an API gateway, implemented using Kong. The API gateway first applies authentication checks, returning a 401 or 403 response if authentication fails. It then routes the request to a service using ECS’s SRV-based Service Discovery. The request keeps its original Host header so that the service can handle requests from both public and authenticated domains within the same framework. An additional benefit of this approach is centralized authentication: it simplifies the validations services must perform and enables rapid modifications to our authentication and authorization scheme independent of individual services. In the future, we hope to bring authentication and authorization to the service level and use the API gateway to perform global authentication against an identity provider.
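The two steps the gateway performs can be sketched in miniature: authenticate, then pick a target from SRV-style records (priority, weight, host, port). This is a simplified simulation, not Kong’s implementation; the credential store and record values are invented for illustration.

```python
import random
from typing import Dict, List, Optional, Tuple

VALID_API_KEYS = {"demo-key"}  # stand-in for a real credential store

def authenticate(headers: Dict[str, str]) -> int:
    """Return the HTTP status an auth check would produce."""
    key = headers.get("apikey")
    if key is None:
        return 401  # no credentials presented
    if key not in VALID_API_KEYS:
        return 403  # credentials presented but not authorized
    return 200

SrvRecord = Tuple[int, int, str, int]  # (priority, weight, host, port)

def pick_target(records: List[SrvRecord]) -> Tuple[str, int]:
    """Choose a target the way an SRV-aware client does: restrict to the
    lowest priority value, then weighted-random among those records."""
    lowest = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == lowest]
    weights = [r[1] for r in candidates]
    _, _, host, port = random.choices(candidates, weights=weights)[0]
    return host, port
```

A request that passes `authenticate` would then be proxied to the `(host, port)` returned by `pick_target`, with its original Host header left untouched.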

Request Routing Flow

Communication Between Services

So far we’ve covered how we route external requests, but not service-to-service communication. To further accommodate the rapid development of services, it is essential to provide a channel for services to communicate with each other without the complications of existing methods. We decided to use a private Application Load Balancer, which only accepts requests from within the subnet’s internal network. Requests are sent to an API gateway, which leverages the same SRV record scheme to route them to services. This gives us a relatively secure channel of communication between services that the services themselves can easily manage. Future developments should include additional security barriers, including authentication and authorization between communicating services.
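The private load balancer’s admission rule amounts to a source-address check: only callers inside the internal network are accepted. A minimal sketch, using an example CIDR block that is not Orbee’s actual network layout:

```python
import ipaddress

# Example VPC subnet; the real CIDR block would come from infrastructure config.
INTERNAL_SUBNET = ipaddress.ip_network("10.0.0.0/16")

def is_internal(source_ip: str) -> bool:
    """Return True if the caller's address falls inside the private subnet."""
    return ipaddress.ip_address(source_ip) in INTERNAL_SUBNET
```

In practice AWS enforces this through the load balancer’s internal scheme and security groups rather than application code, but the effect is the same: requests from outside the subnet never reach the gateway.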

This service communication architecture is a major step toward making our development faster and simpler as we move to a microservices-based architecture. In our next iteration, we plan to support additional authentication and authorization schemes at the service level to improve security. This will solidify the current concepts of public, private, and authenticated communication into authorization levels enforced throughout all APIs and services.

We hope that this post illustrates part of the process we took to migrate to microservices, and how there can, and should, be steps from one architecture to the next.