More and more enterprise software is implemented as distributed systems nowadays, in many cases based on microservices to some extent. Loose (or at least looser) coupling between those services is the main motivation for this architectural style. However, loosely coupled interfaces may also lead to less robustness. Communication between microservices is still necessary, unless their domains are strictly segregated, in which case I would not call it a distributed “system” anymore. Hence, robust communication between microservices is vital, which is why most of us will have heard of Jon Postel’s law, stating:
Be conservative in what you do, be liberal in what you accept from others!
Although first stated with reference to TCP/IP, the law has been applied in many other areas. Martin Fowler derived from it a service design pattern, commonly known as the tolerant reader. The pattern focuses on the collaboration of services and the contracts they share regarding their APIs. Evolving those APIs should never compromise existing API consumers, thus assuring backward compatibility. Achieving this requires not only that API providers take care not to break the contract, but also that API consumers read data tolerantly.
In this blog post I will examine different use cases of the tolerant reader pattern, in order to give you a more thorough understanding of how to deal with it in practice. For the sake of simplicity, I will assume data is exchanged using JSON.
Attributes added to existing JSON objects are by far the most popular example for tolerant readers. For instance, an API provider may have decided to migrate a `fullName` JSON attribute to individual `firstName` and `lastName` attributes, preserving backward compatibility by retaining the original attribute.
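Assuming illustrative attribute names and values, the migrated payload might look like this, with the deprecated attribute retained alongside the two new ones:

```json
{
  "fullName": "Jane Doe",
  "firstName": "Jane",
  "lastName": "Doe"
}
```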
It seems more than obvious that consumers of the previous API version should not struggle with the two additional attributes, giving them the chance to migrate whenever they want to. However, many developers prefer to use client code generators, such as Swagger Codegen, for consuming APIs. The generated clients are not necessarily guaranteed to tolerantly read over the two additional attributes. Often such generators deduce a fixed set of expected attributes during code generation, which breaks communication once new attributes are added, especially for client applications generated from the old schema.
Moreover, consuming APIs tolerantly implies that some developer needs to state explicitly which attributes are needed and which are not. In the above example it may well be that, at some point, all consumers have migrated to the new attributes, enabling the API provider to finally remove the original `fullName` attribute. Client-side code generators, however, will have remembered the attribute during code generation. Once it has been removed, the generated code starts breaking, although the attribute isn’t actually consumed at all.
A proper solution to such problems is to always maintain DTO classes manually and let frameworks such as Jackson map response payloads to them automatically. Such frameworks can be configured to map what they can and silently ignore content for which there are no attributes in the target DTO classes (in Jackson’s case, by disabling the `FAIL_ON_UNKNOWN_PROPERTIES` deserialization feature).
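A minimal sketch of this approach in Kotlin, assuming the hypothetical person payload from above and the `jackson-module-kotlin` library, could look like this:

```kotlin
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.databind.json.JsonMapper
import com.fasterxml.jackson.module.kotlin.kotlinModule
import com.fasterxml.jackson.module.kotlin.readValue

// Hand-maintained DTO: declares only the attributes this consumer actually needs.
data class PersonDto(val firstName: String?, val lastName: String?)

val mapper = JsonMapper.builder()
    .addModule(kotlinModule())
    // Tolerant reading: silently skip attributes the DTO does not declare.
    .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
    .build()

fun main() {
    val json = """{ "fullName": "Jane Doe", "firstName": "Jane", "lastName": "Doe" }"""
    val person: PersonDto = mapper.readValue(json)
    // The unknown fullName attribute is ignored instead of raising an error.
    println(person)
}
```

Because the DTO is maintained by hand, removing `fullName` on the provider side at a later point does not affect this consumer at all.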
Even if consumers are tolerant with respect to added JSON attributes, they may be tightly coupled to their respective values, especially when it comes to enumerations. A gender enumeration, for example, traditionally only included male and female, before the introduction of a third gender. This is not a problem for JSON as such, since enumeration values are typically serialized as regular strings:
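For illustration (attribute names assumed), such a payload simply carries the enumerated value as a string:

```json
{
  "name": "Jane Doe",
  "gender": "female"
}
```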
However, client-side code often assumes that such attributes are closed sets, hard-coding the expected values (from the client’s perspective). Once new values are introduced, as in the above example, consumers fail to tolerantly read such responses. This is even worse if the consumer doesn’t rely on the semantics of the attribute in question at all, e.g. simply displays it on the UI. Direct deserialization of these attributes to language-specific enumerations, e.g. Java enums, represents an overly tight coupling.
Tolerantly dealing with enumerated values in client-side code can be achieved by following these rules:
- Whenever possible, enumerated values should be preserved and dealt with as regular strings, with no further assumptions regarding the set of valid values whatsoever. This, of course, is only applicable when the consuming service does not need to interpret the value semantically.
- If enumerated values need to be interpreted in some way, it is a valid approach to map them to language-specific enum classes. However, a special unknown constant should be reserved right from the start to capture unknown future values. Code generators can rarely cope with such situations, so the deserialization and mapping of unknown values usually has to be implemented manually.
- Of course, it highly depends on the business logic of the consuming service whether dealing with unknown values makes sense, but explicitly having to handle a special unknown enumeration value, from a compiler perspective, at least avoids forgetting about such cases. In the end, a service given a list of JSON objects including enumerated values may at least handle those known to it, gracefully dropping or passing through unknown ones instead of failing completely.
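The second rule can be sketched in plain Kotlin (the concrete value set is an assumption for illustration); a manual factory function maps any unrecognized string to a reserved unknown constant:

```kotlin
// Illustrative enum; the concrete values are assumptions.
enum class Gender {
    MALE, FEMALE, DIVERSE, UNKNOWN;

    companion object {
        // Tolerant mapping: any unrecognized value becomes UNKNOWN
        // instead of failing deserialization.
        fun fromJson(value: String): Gender =
            values().firstOrNull { it.name.equals(value, ignoreCase = true) } ?: UNKNOWN
    }
}

fun main() {
    println(Gender.fromJson("female"))   // FEMALE
    println(Gender.fromJson("agender"))  // UNKNOWN
}
```

With Jackson, such a factory can be wired in via a `@JsonCreator`-annotated function, or, alternatively, via `@JsonEnumDefaultValue` together with the `READ_UNKNOWN_ENUM_VALUES_USING_DEFAULT_VALUE` deserialization feature.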
A special case of enumerated values are type discriminators. These denote the type of the containing JSON object and are often used with mixed-type arrays sent to the consumer. The following is a JSON example for stock quote listings, which contains type-specific attributes:
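The concrete types and attributes below are assumptions for illustration; the last entry stands for a type introduced after the client was built:

```json
[
  { "type": "share", "symbol": "ACME", "price": 123.45, "dividendYield": 2.1 },
  { "type": "fund",  "symbol": "GLBL", "price": 67.89,  "managementFee": 0.25 },
  { "type": "crypto", "symbol": "XYZ", "price": 0.12 }
]
```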
Again, as with enumerations, it is very important to keep in mind that new types may appear in such an array structure in the future. Accordingly, the type discriminator should never be considered a closed set. Ideally, a special `Unknown` type is reserved right from the start, which may then be filtered out, if necessary.
The Jackson library, for instance, provides appropriate annotations to deserialize (and serialize, if necessary) such JSON structures, using the `type` attribute as discriminator. When using Kotlin, such sub-types are ideally mapped to so-called sealed classes:
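A sketch of such a sealed class hierarchy, matching the illustrative stock listing payload from above (type and attribute names are assumptions):

```kotlin
import com.fasterxml.jackson.annotation.JsonIgnoreProperties
import com.fasterxml.jackson.annotation.JsonSubTypes
import com.fasterxml.jackson.annotation.JsonTypeInfo

@JsonTypeInfo(
    use = JsonTypeInfo.Id.NAME,
    include = JsonTypeInfo.As.PROPERTY,
    property = "type",
    defaultImpl = Stock.Unknown::class
)
@JsonSubTypes(
    JsonSubTypes.Type(value = Stock.Share::class, name = "share"),
    JsonSubTypes.Type(value = Stock.Fund::class, name = "fund")
)
sealed class Stock {
    @JsonIgnoreProperties(ignoreUnknown = true)
    data class Share(val symbol: String, val price: Double, val dividendYield: Double) : Stock()

    @JsonIgnoreProperties(ignoreUnknown = true)
    data class Fund(val symbol: String, val price: Double, val managementFee: Double) : Stock()

    // Default implementation, capturing every unknown discriminator value.
    @JsonIgnoreProperties(ignoreUnknown = true)
    class Unknown : Stock()
}
```

After deserializing the array to a `List<Stock>`, unknown entries can simply be dropped, e.g. via `stocks.filterNot { it is Stock.Unknown }`.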
As can be seen, the `Stock` sealed class defines the two known sub-types, which are explicitly mapped via `JsonSubTypes.Type` annotations, while the special `Unknown` class acts as the default implementation, tolerantly capturing all unknown types. These may then be filtered out before further processing, if necessary.
So far, I have covered the aspects of the tolerant reader pattern with respect to the payload data read by the consumer. While, from this point of view, it seems best to read as little data as necessary and ignore the rest, there is another kind of coupling, specifically related to REST APIs. Applications consuming REST APIs rarely use a single API end-point only, but rather interact with API providers in different ways. HTTP verbs help in distinguishing the general type of operation when accessing APIs. However, their context is intentionally very limited, i.e. focused on a single resource.
Whenever different resources or more complex operations need to be addressed, it is up to the consumer to know which other end-points to call. This knowledge, if built into the client application, causes incompatibilities when APIs need to change. In a way, the client application lacks the contextual information of resources and starts building up assumptions about their related APIs.
Hypermedia As The Engine Of Application State (HATEOAS) addresses this problem by relieving the client application of built-in fixed interfaces, in favor of providing more contextual information by means of dynamic links accompanying each resource sent by the API provider.
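Assuming a HAL-style representation with hypothetical link targets, a single stock quote accompanied by such links might look like:

```json
{
  "symbol": "ACME",
  "price": 123.45,
  "_links": {
    "self": { "href": "/stocks/ACME" },
    "buy":  { "href": "/stocks/ACME/orders" }
  }
}
```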
As can be seen in the example JSON, stock quote listings are accompanied by relation links:
- a `self` reference pointing to the resource itself
- a `buy` action link stating where orders can be placed
This reduces the client-side coupling to only knowing that stocks can be ordered if a `buy` link is embedded into the resource representation, and simply following that link, if necessary. This makes the client more tolerant to future API changes.
In this post, I walked you through the different aspects of the tolerant reader pattern, starting from simply ignoring additional JSON attributes, to dealing with enumerations and type discriminators, and finally dealing with a resource’s context using (often underestimated) hypermedia links. While in general the tolerant reader can be reduced to reading as little data as possible, it is also worth noting that contextual REST operations should be provided as meta information to enable API end-point changes in the future. Of course, the latter also involves the API provider, which has to supply the necessary information, e.g. using a library such as Spring Data REST.