Embracing the API-First Approach: write a plan before building the wall

Mathias Deremer Accettone
Published in ADEO Tech Blog

It is a fact: the use of APIs (Application Programming Interfaces) is increasingly widespread in distributed architectures. Beyond the communication between our frontends and backends, the vast majority of inter-application exchanges happen through APIs.

Since APIs are communication interfaces between different systems, every API also implies a contract.

But what is a contract? In a way, it is an “agreement” between those who define an API and the teams who use it: which actions are allowed, what information must be passed during a call, and so on.

Today there are different approaches to building APIs, and the one developers most often choose is to write the API code first and then generate the documentation (the contract) from it.

Here we invite you to discover a different approach, which some would describe as more pragmatic: Contract First.

After all, you don’t build a house before having drawn the plans ;)

Code First / API First: what are we talking about?

Before presenting in more detail the advantages of API First and its implementation, it seems essential to define what the two approaches consist of.

The code-first approach involves writing the API code first, then documenting it after the fact.

With this approach, we create the API’s models and data access classes first, then generate the documentation from the code using tools like springdoc-openapi (for REST APIs written with the Spring framework).

By prioritizing code over documentation, we reduce time to market. However, this also brings a significant set of constraints, chief among them the difficulty of understanding how the API is used: while waiting for documentation to become available, consumers will sometimes have to dig into the API code to understand how it works. The other problem is that if the API does not meet the consumer’s needs, the team responsible for its development will have to go through one (or even several) more iterations on the source code to adapt it. In the end, we waste time.

Conversely, the API First approach suggests writing the API specification, its documentation, before writing a single line of code. It is this specification that will drive discussions with the various stakeholders in order to reach a consensus on the format of the objects passed to the API, its parameters, etc. Once validated, it will serve as a “plan” for building the API, but it will also allow development to be parallelized: consumers, knowing in advance the format of the parameters to pass to the future API, can begin preparatory work on their side.

To summarize with a metaphor: the code-first approach is in a way equivalent to building a house before having drawn up the plans and validated them with the client, whereas the API First approach is more pragmatic: we design the plans for the future construction, we validate them with the client, then we start the construction site.

Advantages of API First

The API First approach therefore avoids back-and-forth between teams and the numerous code modifications that result from it. But that’s not its only advantage. Beyond smoother exchanges between teams, establishing the API contract first allows the producer AND the consumer to work in parallel. Indeed, consumers, knowing in advance the shape of the API they will have to consume, can begin their development even before the API is available.

This mode of operation is made easier by a number of tools that allow you both to test the contract in advance and to create mocks of future endpoints on the fly (Postman, for example).

Generating code from the API contract is also possible: controller skeletons, DTOs, and so on are all classes that developers will not have to write by hand because they are created automatically from the API’s documentation. Developers can therefore focus solely on the business logic. This also facilitates evolution and ensures that the documentation and the implementation always stay consistent.

Providing an API is not just a matter of writing its source code: it is generally also necessary to register it within an API gateway. API Management solutions usually offer simplified integration based on the documentation. Writing this documentation upstream therefore makes it possible to pre-configure the future publication of the API.

Tools used for implementation

REST API use case

Reminder about REST APIs

It is clear that today almost every project involves APIs. Historically, the SOAP protocol was widely used to implement them, but it has largely been supplanted by REST over the last ten years.

REST, for “REpresentational State Transfer”, allows you to implement services useful for manipulating web resources (in our case, a resource can be a product, an order, the representation of a customer, etc.).

The big advantage of REST is that it identifies resources by their URIs and generally uses HTTP methods to express the “intent” of an action (GET to retrieve information, POST to create data, DELETE to delete it, etc.).
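
For illustration, here is a minimal sketch of what that mapping might look like for a hypothetical order resource (the URIs are purely illustrative):

    GET    /users/42/orders        -> list the orders of user 42
    POST   /users/42/orders        -> create a new order for user 42
    DELETE /users/42/orders/1337   -> delete order 1337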

Defining and using such APIs is therefore very simple, and relies on existing standards that are easy to implement.

However, the question remains: how do we convey the information needed to use this API? How do we communicate to future consumers the format of the parameters to pass when calling a given endpoint, for example to search for all of a user’s orders from the last 6 months?

Well, this is precisely what an OpenAPI specification documents, documentation that development teams often expose within an API Manager or simply in a Swagger UI type portal.
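
As a hedged sketch, an OpenAPI contract answering the question above might look like this (paths, parameter names and the Order schema are purely illustrative):

    openapi: 3.0.3
    info:
      title: Orders API
      version: 1.0.0
    paths:
      /users/{userId}/orders:
        get:
          summary: Search a user's orders
          parameters:
            - name: userId
              in: path
              required: true
              schema:
                type: string
            - name: from
              in: query
              required: false
              description: Only return orders placed after this date
              schema:
                type: string
                format: date
          responses:
            "200":
              description: The matching orders
              content:
                application/json:
                  schema:
                    type: array
                    items:
                      $ref: "#/components/schemas/Order"
    components:
      schemas:
        Order:
          type: object
          required: [id, createdAt]
          properties:
            id:
              type: string
            createdAt:
              type: string
              format: date-time

Everything the consumer needs (URI, parameters, whether they are mandatory, response format) sits in one place, before any code exists.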

What is often encountered, particularly in Java / Spring projects, is the generation of this documentation via annotations added to the code: in other words, a Code First approach, with all the organizational issues mentioned above.
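
For comparison, here is a minimal sketch of that annotation-driven style with springdoc-openapi (the controller and DTO are hypothetical): the contract is derived from the code, so it only exists once the code does.

    import io.swagger.v3.oas.annotations.Operation;
    import io.swagger.v3.oas.annotations.Parameter;
    import org.springframework.format.annotation.DateTimeFormat;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    import java.time.LocalDate;
    import java.util.List;

    // springdoc-openapi scans these annotations at runtime and builds
    // the OpenAPI document from them: documentation follows the code.
    @RestController
    public class OrderController {

        @Operation(summary = "Search a user's orders")
        @GetMapping("/users/{userId}/orders")
        public List<OrderDto> searchOrders(
                @PathVariable String userId,
                @Parameter(description = "Only orders placed after this date")
                @RequestParam(required = false)
                @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) LocalDate from) {
            return List.of(); // business logic goes here
        }

        record OrderDto(String id, LocalDate createdAt) {}
    }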

Contract First: with what tools?

As already mentioned, a more pragmatic approach is to write the specification, the API contract, BEFORE starting to code, and ideally to be able to generate on the fly the controller skeletons and DTOs that will be used.

And we have a fantastic tool available for this: the openapi-generator, which goes very well with Spring projects!

This generator is based on a YAML definition of the API: the URI of each endpoint, its query / path parameters, the structure of the objects passing through the body, the format of the response, whether certain parameters are optional or mandatory, and so on.

It is from this definition that the Java classes serving as the foundation of your API will be generated.
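
As a hedged example, a typical invocation of the openapi-generator CLI with its Spring generator might look like this (the file paths are illustrative; the generator also exists as a Maven or Gradle plugin):

    openapi-generator-cli generate \
      -i src/main/resources/orders-api.yaml \
      -g spring \
      -o target/generated-sources/openapi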

The next article will present in detail the use of this generator, within a small example project in Java / Spring.

Event Architecture use case

Reminder about event architectures and Kafka

Event-based architectures are increasingly common in today’s information systems, and Kafka is widely used within large companies (Orange, Meetic, LinkedIn, Netflix, etc.). It offers many advantages: application resilience (no data loss: a message left unconsumed during a crash will be consumed when the consumer restarts), the ability to consume published data in parallel to improve performance, loose coupling (a data producer is not “blocked” if a consumer does not consume the messages published on a given topic quickly enough), and more.

In short, these are aspects that are somewhat more complex to achieve with classic REST architectures. The aim of this article is not to detail all the possibilities of technologies like Kafka (or RabbitMQ, or ActiveMQ, or…) or their inner workings, but in essence Kafka works on a producer/consumer basis:

  • Producers publish data in the form of messages on “topics” (communication channels)
  • Consumers subscribe to one topic or another and consume messages as they arrive (a minimal producer sketch follows this list).
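
To make this concrete, here is a minimal, hedged producer-side sketch using the kafka-clients Java API (the topic name, broker address and payload are illustrative):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Map;

    public class OrderEventProducer {
        public static void main(String[] args) {
            // Broker address and serializers are illustrative defaults.
            Map<String, Object> config = Map.of(
                    "bootstrap.servers", "localhost:9092",
                    "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                    "value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
                // Publish one message on the (hypothetical) "orders.placed" topic.
                producer.send(new ProducerRecord<>("orders.placed", "order-1", "{\"orderId\":\"order-1\"}"));
            }
        }
    }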

People interested in this technology will find all the necessary information on the official website.

But back to our point: we know that in order to communicate, a topic must be set up between producers and consumers. As it stands, regardless of the format of the data pushed to this topic, it is up to consumers to know how to decode it: the producer will not be “upset” if the consumer fails to decode the messages sent… not a very reliable way of operating!

To make exchanges more reliable, it is possible to associate a schema with a topic, describing the format/structure of the messages published on it. If a producer then tries to publish a message that does not respect the schema, the message will be rejected.

Overall, defining a schema works much like defining API contracts in OpenAPI/Swagger (field types, optional or mandatory fields, nesting of structures, etc.), but the “language” used for the definition is different. The Kafka schema registry expects schemas defined in one of these three formats:

  • Avro
  • JSON Schema
  • Protocol Buffers
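
As a hedged illustration, an Avro schema for a hypothetical OrderPlaced event could look like this:

    {
      "type": "record",
      "name": "OrderPlaced",
      "namespace": "com.example.orders",
      "fields": [
        { "name": "orderId", "type": "string" },
        { "name": "userId", "type": "string" },
        { "name": "totalAmount", "type": "double" },
        { "name": "comment", "type": ["null", "string"], "default": null }
      ]
    }

The union type ["null", "string"] with a null default is Avro’s way of marking comment as optional, the counterpart of a non-required field in an OpenAPI contract.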

The problem

The approach often encountered on projects is as follows:

  • development teams write Java (or Kotlin) classes representing the objects passing through the topics, used, among other things, for the serialization/deserialization of messages.
  • developers then transcode the structure of these classes into Avro schemas and register them in the Kafka “schema registry” (so that they can be looked up and used by the producers/consumers of their projects).
  • developers from other teams then manually look up a given schema in order to rewrite the classes needed to decode the messages.

This is, again, a rather unpragmatic approach, in the sense that the work of writing the classes is carried out twice, by different teams (with all the potential risks already mentioned earlier in the article).

But as with REST APIs, there is a solution! It is entirely possible to write an Avro schema in advance, validate the format of the objects with future consumers, and rely on this schema to generate the Java classes that encode and decode the Kafka messages. One technology goes even further by providing a small documentation “portal” for Avro schemas… much like Swagger UI. That technology is AsyncAPI.
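
Before we get to AsyncAPI, the class-generation step itself is well covered by existing build tooling. A hedged sketch with the avro-maven-plugin (version and directories are illustrative), which turns .avsc files into Java classes at build time:

    <plugin>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro-maven-plugin</artifactId>
      <version>1.11.3</version>
      <executions>
        <execution>
          <phase>generate-sources</phase>
          <goals>
            <goal>schema</goal>
          </goals>
          <configuration>
            <sourceDirectory>${project.basedir}/src/main/avro</sourceDirectory>
            <outputDirectory>${project.build.directory}/generated-sources/avro</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>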

AsyncAPI

AsyncAPI is not a technology in itself but rather an initiative aimed at standardizing the description of asynchronous APIs. The idea is to provide an easily understandable format for defining our interfaces, regardless of the technology used for the implementation (Java, C#, JS, etc.) or the MOM used (Message Oriented Middleware… Kafka or RabbitMQ, for example).

An AsyncAPI specification file therefore describes the API, the structure of the messages, the names of the topics (the “channels” section) and the operations carried out on them (subscribe, publish, etc.).
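
As a hedged sketch (the channel and field names are hypothetical), an AsyncAPI 2.x specification might look like this:

    asyncapi: "2.6.0"
    info:
      title: Orders Events API
      version: 1.0.0
    channels:
      orders.placed:
        subscribe:
          summary: Receive an event each time an order is placed
          message:
            name: OrderPlaced
            contentType: application/json
            payload:
              type: object
              required: [orderId, userId]
              properties:
                orderId:
                  type: string
                userId:
                  type: string
                placedAt:
                  type: string
                  format: date-time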

AsyncAPI is therefore strongly inspired by OpenAPI, which makes getting up to speed much easier (if you already know OpenAPI ;)).

OpenAPI / AsyncAPI, official documentation

But the initiative is not limited to defining an API description standard, because it is entirely possible to specify the message format in something other than the AsyncAPI DSL.

Indeed, a “payload” section can be added to the specification files in order to include the structure of the messages in the format of our choice. We can therefore very well embed an Avro definition within an AsyncAPI specification! No need to re-translate the Avro files of an existing product into the AsyncAPI format.
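
Here is a hedged sketch of that combination, reusing the hypothetical OrderPlaced schema from earlier via AsyncAPI’s schemaFormat field and an external file reference:

    channels:
      orders.placed:
        subscribe:
          message:
            schemaFormat: "application/vnd.apache.avro;version=1.9.0"
            payload:
              $ref: "./schemas/OrderPlaced.avsc"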

And that’s not all, since a number of tools are also available to, among other things:

  • Generate code from specification files (Java, Kotlin, C#, etc.)
  • Generate documentation from code (although this is not the preferred approach)
  • Generate mocks from these specifications
  • View documentation in a Swagger UI-like portal
  • Recently, manipulate and design APIs via an online studio (available in beta version here).

Ultimately, although recent compared to OpenAPI, AsyncAPI is already very rich in functionality and is a tool of choice for the design and specification of your asynchronous APIs. A future article will present its implementation in detail.

While waiting for the next article…

This article is really just an introduction to the technologies involved in moving to Contract First, the first in a small series devoted to the subject. Others will follow and let us get our hands dirty by discovering in detail how to implement OpenAPI and AsyncAPI. Until then, interested readers will find something to tide them over:
