API Gateways “a la Mano”

Bastien LACOMBE · ManoMano Tech team · Feb 18, 2021 · 9 min read

ManoMano started migrating its old legacy infrastructure to a microservice architecture more than a year ago… and it is not finished yet! In this post, I’ll describe the journey we took to implement a very useful pattern: the API Gateway.

Photo by freestocks on Unsplash

TL;DR

A little less than 3 years ago, ManoMano started moving towards a microservice architecture for its backend. We implemented an API Gateway pattern for two applications:

  • The Toolbox, our sellers’ back office.
  • Our own internal back office.

In both cases, we decided to code the gateway ourselves. Even though it had some benefits, our experience strongly suggests using a market solution such as Kong instead.

From a homemade application…

A bit of context

In December 2017, I joined ManoMano in the team responsible for what we call the Toolbox — an application used by our sellers to manage their catalog, their exchanges with marketplace customers, their orders and so on. The company had started to migrate from a homemade framework to Symfony 3 (see also: our IT odyssey for an overall view). The Marketplace application migration was just over and our turn had come. We began working on a new stack for the Toolbox: a Symfony backend called by a React frontend. A few months later, a decision was made to move to a microservice architecture across the whole company. As our application was not yet used in production, we were asked to implement a new pattern: the API Gateway.

The gateway

Gateway of India, Mumbai. Photo by Siddharth K Rao on Unsplash

In a microservice architecture, a gateway is the single entry point to the backend APIs. It is a reverse proxy for external requests. As a result, client requests are completely decoupled from the upstream calls: a request can be proxied to the appropriate microservice, spread across several of them, or even translated into a different protocol. All the backend logic is hidden from the client. The gateway can also transform the data to fit a client’s needs.

Among all the benefits, we can mention:

  • Decoupling: if clients you have no control over communicate directly with many separate services, renaming or moving those services can be challenging, as the clients are coupled to the underlying architecture and organization.
  • APIs are exposed in a way that is closer to clients’ needs.
  • The complexity of the backend — and its evolution — is no longer an issue for the frontend.

But it has some drawbacks:

  • Network response time can slightly increase, as all requests pass through the gateway.
  • It adds complexity, development and maintenance costs. Yet, this is largely compensated by the work saved in the services behind it.
  • The gateway is a single point of failure: if it crashes, your microservices are not reachable anymore and your whole application might be unusable.

The subject is beyond the scope of this post. You can have a look at the definition by Chris Richardson or this article by Bibek Shah.

Going back to the Toolbox, the pattern was interesting for several reasons:

  • Some APIs we needed for our client were not yet available in a microservice, so their location was sure to evolve. Using a gateway would spare our frontend client from having to know where to find them.
  • The company, especially IT, was growing very fast. We were still defining standards, and APIs — when they existed! — did not have fixed contracts. We wanted to expose frontend endpoints with contracts that would not change over time, whatever our backend was.
  • Backend APIs were consumed by all kinds of clients: the Marketplace, a back office, the Toolbox... Thus, they were not designed to fit our application’s needs: they could, for example, expose data that was useless for the Toolbox. We needed a layer that would transform them into something our frontend application could consume.

So we took our backend application and transformed the code into a REST API gateway. We had two kinds of endpoints:

  • Proxy: the request is proxied to the appropriate microservice without any other action.
  • Others that perform some transformation, or aggregation when the data is spread across several microservices.

All components were coded by hand. As you can see below, exposing a new endpoint that requires data transformation without aggregation (the most common use case) would require:

  • A controller that deserializes the request into a Data Transfer Object (DTO) in order to validate it. At that time we wanted this step to be mandatory.
  • An HTTP client to call the backend microservice and deserialize the response into another DTO. This component could later be shared with other endpoints.
  • A handler or service(s) to carry the business logic, like creating the body of the backend microservice call or transforming its response into a valid response for the front.

Let’s face it, it’s a lot of code for an endpoint — imagine what it would be when we had to perform data aggregation!
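To make that cost more concrete, here is a condensed, hypothetical sketch of such an endpoint in Symfony. None of these names (OrderController, CreateOrderRequest, CreateOrderHandler, or the microservice HTTP client the handler wraps) come from our actual codebase, and in reality each component lived in its own file with its own tests:

```php
<?php
// Condensed, hypothetical sketch of one gateway endpoint.
// CreateOrderRequest (request DTO) and CreateOrderHandler live in their
// own files; the handler wraps the microservice HTTP client and the
// transformation logic described above.

namespace App\Controller;

use App\Dto\CreateOrderRequest;
use App\Handler\CreateOrderHandler;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Serializer\SerializerInterface;
use Symfony\Component\Validator\Validator\ValidatorInterface;

class OrderController
{
    public function __construct(
        private SerializerInterface $serializer,
        private ValidatorInterface $validator,
        private CreateOrderHandler $handler,
    ) {
    }

    public function create(Request $request): JsonResponse
    {
        // 1. Deserialize the frontend request into a DTO...
        $dto = $this->serializer->deserialize(
            $request->getContent(),
            CreateOrderRequest::class,
            'json'
        );

        // 2. ...and validate it (a mandatory step at the time).
        if (\count($this->validator->validate($dto)) > 0) {
            return new JsonResponse(null, 400);
        }

        // 3. The handler builds the backend call through the dedicated
        //    HTTP client, then reshapes the microservice response into
        //    what the frontend expects.
        return new JsonResponse($this->handler->handle($dto));
    }
}
```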

Fig 1. Gateway data flow. While the microservice client could be shared, all other components were theoretically mandatory.

Even if this application was not a real gateway — we kept some business logic in the endpoints — we could see the benefits of the pattern. Our frontend was protected from our backend disorder, and our exposed APIs better fit its needs. Decoupling between frontend and backend was strong. Coding our own components gave us a lot of freedom to transform data: client requests could be completely different from those proxied to the backend. However, there were significant drawbacks…

  • The developer experience (DX) was not very good. Coding everything added complexity for very basic functionalities. As we saw, adding even our most common endpoints was not straightforward, as it required at least 7 components, plus all the tests (functional at least), and then all the CI/CD processes. Not to mention the huge amount of boilerplate code we had…
  • Maintaining the application and its dependencies had a cost.
  • Finally, we suffered performance issues, especially on endpoints that called multiple microservices. One could argue that PHP is not the most appropriate language for a gateway and that we should have used another backend language, or even a frontend one; or that REST is not appropriate for this pattern. Unfortunately, those options were not realistic given the team we had back then.

The main objective of this experiment was to test the architecture before using it in other projects. Unfortunately, we realized that coding everything prevented us from industrializing the gateway we had created. In addition, a long-term objective was to have its responsibility shared between teams. From the moment the refactoring project started to one year later, more than 30 developers joined the company, and we were no longer the only ones coding on the project. Thus, keeping our coding standards, code consistency, and the amount of boilerplate under control was becoming more and more difficult. We had to find another way.

Through an internal Symfony bundle…

A few months after we had started working on the new Toolbox application, another team began implementing a new internal Back Office (BO). Its architecture was the same as the Toolbox’s, and it was decided, again, to code the gateway. However, this time it integrated a homemade bundle that would ease the creation of proxy endpoints. We hoped to have something we could industrialize. This bundle provided the controller and HTTP client required to handle frontend requests and proxy them to the appropriate microservice. Devs just had to declare the endpoint in a YAML file — as is done for Symfony routes — and the microservice HTTP client in service.yaml. For example, for the customer screen of the application, several endpoints are declared in a file customer.yaml:
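The bundle was internal, so its exact schema is not reproduced here; a declaration could plausibly have looked like this, in the spirit of Symfony routing (endpoint names, paths and keys are hypothetical):

```yaml
# customer.yaml - hypothetical endpoint declarations; the real bundle's
# schema may have differed
get_customer:
    path: /api/customers/{customerId}
    methods: [GET]
    # bundle-specific keys: which microservice client handles the call
    # and which upstream path the request is proxied to
    client: ms_customer.client
    target: /v1/customers/{customerId}

update_customer:
    path: /api/customers/{customerId}
    methods: [PUT]
    client: ms_customer.client
    target: /v1/customers/{customerId}
```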

And the HTTP client in Symfony’s service.yaml:
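Again, a hypothetical sketch; the service id, class and base URI below are made up:

```yaml
# service.yaml - hypothetical declaration of the microservice HTTP client
services:
    ms_customer.client:
        class: App\Gateway\MicroserviceHttpClient
        arguments:
            $baseUri: 'http://ms-customer.internal'
```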

More complex endpoints were still coded manually, as for the Toolbox. This bundle definitely solved some issues we had with the previous gateway: boilerplate code was reduced and DX was improved. In theory it was supposed to have many benefits, yet the results were below expectations. As the BO gateway was used by a lot of teams, its ownership was not very clear. Developers did not pay attention to keeping it clean. When they needed something outside the bundle’s scope, they coded new endpoints by hand instead of improving it. Besides, as time went by, knowledge of its content vanished and it became a burden of legacy code. Thus, the number of endpoints declared through it was very low compared to what it should have been.

To, finally, Kong

Photo by Ryan Quintal on Unsplash

And then came the second half of 2019. The APIzation of ManoMano was officially launched. The objective was to accelerate the migration that had started about a year before and to be ready to be consumed by our new customer apps. We wanted to migrate the Marketplace application — a Symfony monolith with its embedded JavaScript and Twig templates — to APIs provided by microservices. We needed a gateway. This was much more ambitious than the Toolbox and the BO, as those applications receive far less traffic than the Marketplace. We had learned several lessons from our previous experiments, especially that coding the project from scratch again was not an option. We wanted the best DX possible, so we wanted our tool to be code agnostic and user friendly. We wanted both frontend and backend developers to easily add and update the exposed endpoints. We wanted them to be focused on the APIs, not the gateway. In addition, we would need some features that can be found in market solutions — rate limiting, CORS. We did not want to have yet another application to maintain. And above all, we needed something scalable.

After a benchmark, we decided to use Kong. This solution has two versions, enterprise and community. In addition to an API Gateway (which embeds a lot of plugins to enrich your requests and responses), it offers several tools for API management (though more in the enterprise edition). It took us about a quarter to have an infrastructure compliant with our standards, a dev workflow, and our first APIs behind it. Contrary to what we did in our homemade gateways, we use it only for routing. Data or request transformations are handled by other applications. We also use plugins such as CORS, Datadog or rate-limiting. Needless to say, this tool has strongly eased our daily life. It is easy to use, and deploying or changing exposed endpoints is quick. We have good monitoring, and the performance is much better than what we have with the other projects.
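To give a rough idea of what “routing only, plus plugins” looks like, here is a minimal sketch of a declarative Kong configuration. The service name, upstream URL, origins and limits below are hypothetical, not our actual setup:

```yaml
# kong.yml - minimal declarative configuration sketch (hypothetical values)
_format_version: "2.1"

services:
  # each backend microservice is declared once...
  - name: ms-product
    url: http://ms-product.internal:8000
    routes:
      # ...and exposed through one or more routes
      - name: products
        paths:
          - /api/v1/products

plugins:
  # global plugins applied to every request
  - name: cors
    config:
      origins:
        - https://www.manomano.fr
  - name: rate-limiting
    config:
      minute: 600
      policy: local
```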

Last words

You might be thinking: “Why the hell didn’t they decide right from the start to use Kong — or another solution?”. Well, ManoMano was my first dev experience: as a junior developer I didn’t take part in the architecture decisions at that time, and those who did have since left the company. All I can do is make assumptions.

My first guess is the legacy code we had in the project. Remember, before being a gateway, our application was a backend consumed by the front. We had some APIs with business logic that could not be moved to any other application. I think the objective was to progressively migrate them to microservices when those would be ready, and little by little turn our application into one with 100% proxy endpoints. However, maybe it would have been easier to put a market solution — like Kong — between the backend application and the front.

Another guess is that, at the time, no one involved in the Toolbox or the Back Office knew about market solutions such as Kong. Maybe someone in the company did, but we were not aware of it. This underlines how important communication among teams is. Before starting a project like we did, make sure you don’t miss someone who can give you valuable help.

I’m sure you want to discover how Kong was implemented in our infrastructure… but that will be for another post!
