Leveraging OpenAPI's strengths in a micro-service environment

Nicolas Jellab
Unibuddy Technology Blog
9 min read · Apr 21, 2021

How using OpenAPI can solve several pain points in a transition from a monolithic architecture to a micro-service architecture while providing a better developer experience.

Context

Unibuddy has been growing at a very fast pace for the last few years, and its engineering team has grown exponentially across multiple offices in different time zones. While a monolithic architecture was a great technical choice at first, the codebase has become very complex, with more than 30 engineers working on it at the same time.

As the team keeps growing, Unibuddy has adopted the Spotify squad model, where each squad owns a product vertically, from design to deployment.

The monolithic approach can be limiting, as each team typically wants to deploy new components and features without having to worry about the potential impact on other teams. That’s why we are transitioning to a micro-service oriented architecture with the following goals and constraints:

Goals

  • Easily maintain high-quality code and deployment pipelines
  • Empower each squad to be independent

Constraints

  • Each squad wants to have their own repositories and deployment pipelines
  • Maintain a good level of communication across the Engineering team
  • Ensure we retain a good release cadence

As each team works on separate but connected parts, we need some kind of framework to empower each team and give it autonomy. OpenAPI is one of the tools we can use to work better together.

What is OpenAPI?

Before we dive into how we use OpenAPI in a micro-service architecture, let’s start with a quick introduction to what OpenAPI is and what an OpenAPI Specification (OAS) represents.

OpenAPI is an initiative from a group of industry experts to create a standard for how RESTful APIs are described. If you are familiar with Swagger, the OpenAPI world will feel like home.

An OpenAPI Specification defines a standard, language-agnostic way of describing RESTful APIs that is readable and understandable by both humans and machines.

An OpenAPI specification written in YAML

By reading a well-defined OpenAPI specification, you can easily understand and consume a service without having to dive into the source code.
A specification is formatted as either a YAML or a JSON file describing endpoints, how to query them, what responses to expect in success and failure situations, data structures, and more. You can do more advanced things too, such as specifying API gateways and describing authentication mechanisms.
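For instance, a minimal specification might look like this (a hypothetical service, for illustration only):

```yaml
openapi: 3.0.0
info:
  title: Example Service        # hypothetical service name
  version: 1.0.0
paths:
  /v1/examples:
    get:
      summary: List examples
      responses:
        "200":
          description: Successful response
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
```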

You might wonder why writing documentation in YAML would benefit and empower our developers, and it is a legitimate question. OpenAPI is a well-established standard, with roots going back to Swagger’s release in 2011, which means you can find a lot of documentation on how to use it properly, and more importantly a lot of tooling around it to make your life easier.

Let’s see how we use OpenAPI and the tooling around it to empower our teams.

How can OpenAPI be helpful in a micro-service architecture?

After this brief introduction, we can break down concrete use cases from our everyday usage of OpenAPI.

Online documentation

When you write documentation, your end goal is to produce a document easy enough to read that third parties (in our case, other squads) can consume it in their own services. Reading an OpenAPI specification directly from a YAML or JSON file does not tick the easy-to-read box, but there are tools that will handle that part.

If you consume public APIs often, the ReDoc interface will seem familiar.

ReDoc web interface

There are other tools doing the job, but ReDoc stood out to me for two main reasons:

  1. It is familiar to a lot of developers, and when introducing new tools, it makes the adoption smoother if developers already have some sense of familiarity with it.
  2. Deploying ReDoc documentation is really easy, as we will see in the Deployment section.

Contract-Driven development

Using OpenAPI gives you the opportunity to rethink your software creation workflow and potentially transition to Contract-Driven development. The idea behind Contract-Driven development is to start by specifying the behaviour and the boundaries of a service and validating them before moving on to the implementation. The output of this specification work is called the contract. In our case, it is an OpenAPI specification, but it could take any format depending on what you build. For instance, if you build GraphQL APIs, your contract would be the GraphQL schema.

In a perfect world, no line of code is written until the API has been designed properly. In practice, once you have written your contract and have validated it, you can start the implementation. OpenAPI is pretty flexible and fits well with agile organisations. You can decide later on to either update your contract or release a new version of your API.

OpenAPI is a great tool for Contract-Driven development since it provides a framework for teams to reflect on and polish the specification until it is finished. Rather than developing an API and specifying it at the same time, which leads to rewriting the code multiple times, you iterate on the contract and implement it once. The OAS file is then stored directly in the repository, keeping the contract close to the implementation and easy to find.

If you decide to use OpenAPI generation tools, you can generate API models and use them directly in the code, so that any change to the specification breaks the code, ensuring the code stays aligned with the contract at all times.

Contract testing

Contract testing is a technique for testing an integration point by checking each application in isolation to ensure the messages it sends or receives conform to a shared understanding of what is documented in the contract.

In practice, a common way of implementing contract tests is to check that all the calls to your mock service return the same results as a call to the real application would.

Contract testing is really useful when you are working with multiple components like we are in a micro-service architecture. It is also a solid alternative to end-to-end testing that tends to become very costly and hard to maintain as your system gets bigger and more complex.

Let’s look at how we can do contract testing.

I have taken a simplified example of one of our OpenAPI specifications.

Our endpoint definition
Our data model definition
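In OpenAPI 3 YAML, the endpoint and the data model look roughly like this (reconstructed here for illustration):

```yaml
paths:
  /v1/blogs/topics:
    get:
      summary: List blog topics
      parameters:
        - name: show_inactive
          in: query
          schema:
            type: boolean
          description: Also include inactive topics in the response
      responses:
        "200":
          description: An array of blog topics
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/BlogTopic"

components:
  schemas:
    BlogTopic:
      type: object
      properties:
        id:
          type: string
        name:
          type: string
        created:
          type: string
          format: date-time
        authorId:
          type: string
        isActive:
          type: boolean
```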

According to our specification, if I query the blogs/topics?show_inactive=true endpoint, I should expect a 200 response and a body containing an array of BlogTopics filtered to show both active and inactive topics.

If I query the endpoint:

curl -X GET "https://unibuddy.com/v1/blogs/topics?show_inactive=true" -H "accept: application/json"

I get a 200 response status code and an array in the body of the response containing:

[
  {
    "id": "599f0713987b48000bd88263",
    "name": "Student Life",
    "created": "2017-08-24T17:04:19.906+0000",
    "authorId": "599ebda709a8080004b7499b",
    "isActive": true
  },
  {
    "id": "59db67d770492d000be5b518",
    "name": "Finance",
    "created": "2017-08-25T16:27:37.465+0000",
    "authorId": "599cccd3784b4d000430eaf9",
    "isActive": false
  },
  {
    "id": "5a57946b9afbd30012fef1ae",
    "name": "Sports",
    "created": "2018-11-13T14:28:17.333+0000",
    "authorId": "5a5793ee9afbd30012fef1ad",
    "isActive": true
  }
]

The result is as expected, given the specification.

If the behaviour of this endpoint changed, or if it was removed, your tests would notify you, making sure you stay up to date with this service.

You can repeat this for all the endpoints in the documentation, testing edge cases and ensuring the API behaviour is consistent with the contract.
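As a minimal sketch of the idea in Python, here is a hand-rolled shape check (a stand-in for a real validation tool), run against a canned copy of the response above instead of a live HTTP call:

```python
# Expected shape of a BlogTopic, derived by hand from the OpenAPI schema.
# A real setup would validate responses against the OAS file itself.
BLOG_TOPIC_CONTRACT = {
    "id": str,
    "name": str,
    "created": str,
    "authorId": str,
    "isActive": bool,
}

def check_contract(items, contract):
    """Return True if every item has exactly the contracted fields and types."""
    for item in items:
        if set(item) != set(contract):
            return False
        if any(not isinstance(item[field], expected)
               for field, expected in contract.items()):
            return False
    return True

# Canned response, standing in for
# GET /v1/blogs/topics?show_inactive=true
response_body = [
    {"id": "599f0713987b48000bd88263", "name": "Student Life",
     "created": "2017-08-24T17:04:19.906+0000",
     "authorId": "599ebda709a8080004b7499b", "isActive": True},
    {"id": "59db67d770492d000be5b518", "name": "Finance",
     "created": "2017-08-25T16:27:37.465+0000",
     "authorId": "599cccd3784b4d000430eaf9", "isActive": False},
]

assert check_contract(response_body, BLOG_TOPIC_CONTRACT)
# A response that drifts from the contract fails the check:
assert not check_contract([{"id": 42}], BLOG_TOPIC_CONTRACT)
```

In practice, you would point this kind of check at a mock or staging deployment of the service, with one test per endpoint.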

OpenAPI can help you build a contract-testing framework, especially with Swagger Mock Validator. Other tools, like Pact, can do the job as standalone solutions.

Since contract testing only tests what is in the contract, it doesn’t replace unit, functional, or integration testing. It ensures a service responds as advertised by its contract.

API client SDK

Using an OpenAPI specification, you can use tools like OpenAPI Generator to generate source files complying with the specification. It can generate models, markdown documentation, or even a complete API scaffold, following templates built by the community and the owners of the repository.
If you want to contribute to the generator or the templates, you can find more information on their GitHub.

I would not advise using the generated API code; rather, only generate the models so you can architect your API following your own standards while having the OpenAPI data model enforced.
In our case, what interests us in this scenario is the ability to generate an API client tailored to our defined endpoints and data structures, available in most popular languages, through a simple CLI.

You can try using the generator and see how it fares.
If you have an OpenAPI specification file named schema.yaml, you can use the generator directly:

npm install @openapitools/openapi-generator-cli
openapi-generator-cli generate -i schema.yaml -g python --package-name=api_client -o /tmp/api_client

The generated API client can then be turned into a package, so calling the API from another service becomes really simple: you call a method from a package rather than hand-crafting an HTTP call with the correct parameters and headers.
It also provides a standard way of calling services across multiple micro-services.
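To illustrate the ergonomics, here is a stub standing in for the generated package (the real class and method names depend on your specification and the generator templates):

```python
# Stub that mimics a generated client; the real generated code would
# build the URL, set the headers, perform the HTTP request, and
# deserialise the JSON response into model objects for you.
class BlogTopicsApi:
    def __init__(self, host):
        self.host = host

    def list_blog_topics(self, show_inactive=False):
        # Stand-in for: GET {host}/v1/blogs/topics?show_inactive=...
        return [{"name": "Student Life", "isActive": True}]

# Consuming the service becomes a plain method call: no hand-built
# URLs, query strings, or headers.
api = BlogTopicsApi("https://unibuddy.com")
topics = api.list_blog_topics(show_inactive=True)
```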

This is great, but it involves generating this API client package and then publishing it. Isn’t that longer than just using an HTTP client?

Well, how about we automate this process during our deployment process?

Deployment

The main reason we transitioned to a micro-service architecture is to empower squads and give them the ability to be independent in how they work and how they deploy.

Moving to a pipeline per service is something we did, and it has saved us long build times while allowing squads to deploy software more often.

We have a (simplified) deployment configuration that looks like this:

deploy:
  when: << pipeline.parameters.deploy >>
  jobs:
    - deploy_service:
        target: << pipeline.parameters.deployment_target >>
        version: << pipeline.parameters.deployment_version >>

    - deploy_open_api_documentation:
        target: << pipeline.parameters.deployment_target >>
        version: << pipeline.parameters.deployment_version >>
        requires:
          - deploy_service

    - publish_open_api_sdk:
        target: << pipeline.parameters.deployment_target >>
        version: << pipeline.parameters.deployment_version >>
        requires:
          - deploy_open_api_documentation
Following the deployment of a service defined with an OpenAPI specification, we deploy the corresponding documentation and an API client SDK.

Documentation

We talked about online documentation earlier and mentioned ReDoc as a tool to generate an HTML file from our OpenAPI specification.
We want to automate the process of generating and publishing the HTML documentation so every team can consult up-to-date documentation.

To give a high-level overview of how we publish the documentation: we first deploy an AWS S3 bucket and upload the OpenAPI specification file to it.

Then, we retrieve the specification file and generate the ReDoc HTML documentation using redoc-cli. This HTML is uploaded to a new S3 bucket.

Finally, we deploy the documentation webpage from the S3 bucket containing the HTML documentation.
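Assuming hypothetical bucket names, those steps might boil down to something like:

```shell
# Hypothetical bucket names; redoc-cli is installed from npm.
aws s3 cp schema.yaml s3://my-service-specs/schema.yaml    # publish the spec
npx redoc-cli bundle schema.yaml -o index.html             # render the ReDoc HTML
aws s3 cp index.html s3://my-service-docs/index.html       # publish the docs page
```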

So now, when team A works on a new micro-service, they can write the OpenAPI specification and set up a basic CI/CD process that deploys the online documentation, ready to be used by other teams, without having to wait for the micro-service to be completed.

Publication of the API client SDK

Our goal here is to automate the API client generation, turn it into an SDK for consuming our micro-service, and publish it as a private package during our deployment process.

With that workflow, whenever we create a new service, we also publish the SDK to call the API, making our API callable from other services right away.

How does it look in practice?

First, we use openapi-generator-cli to generate the library corresponding to our OpenAPI specification, in the language of our choice.
For this example, I chose Python.


openapi-generator-cli generate -i schema.yaml -g python --package-name=api_client -o /tmp/api_client --additional-properties=packageVersion="${version}"

I specified the version of the package I am creating so I can automatically manage the publication.
This version follows SemVer principles, and its creation is automated using the build number.
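As a sketch of that versioning step (assuming CircleCI, where the build number is exposed as CIRCLE_BUILD_NUM; the "1.4" major.minor pair is made up for illustration):

```shell
# Major.minor are managed by hand; the patch segment is automated
# from the CI build number (defaults to 0 outside CI).
version="1.4.${CIRCLE_BUILD_NUM:-0}"
echo "${version}"
```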

The generated code is then turned into a Python-compliant package:


python3 setup.py sdist bdist_wheel

I don’t want to go into too much detail about how Python packaging works, but this step generates a `dist` folder containing the distributable archives of the library.

Finally, we upload the package:


twine upload dist/*

You may want to upload these packages to a private package repository to restrict access.

Other teams can now install the SDK and consume it as they would any other package.

Conclusion

OpenAPI provides a great framework for ensuring interoperability between multiple squads while guaranteeing high-quality deliverables. The tooling around OpenAPI saves us time in our development process, with the ability to do Contract-Driven development, generate API clients, and deploy easy-to-read online documentation.
If you are working in an Event-Driven architecture, very similar tools exist, like AsyncAPI, with the same kind of objectives and formalism.

As you start using OpenAPI and releasing multiple versions of an API specification, managing these versions can be challenging. Stay tuned to learn more about API and OpenAPI specifications versioning.
