Five Inconvenient Truths about REST: 4 — HATEOAS ⇒ YAGNI

Filip Van Laenen
Published in Compendium · May 28, 2018

Hypermedia as the engine of application state (HATEOAS) lets clients interact with the server entirely through the hypermedia provided by the server. In short: the logic about where to go next, and in particular the URLs of the resources to go to next, is implemented on the server side. Clients can then simply extract any URL they need from the responses they receive from the server. The advantage of this principle is that the server can relocate services to new URLs in a way that is completely transparent to the clients, because they will pick up the new URLs automatically. But this advantage comes at a cost, and that cost has to be weighed against the benefits.
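To make this concrete, here is a minimal sketch of what that looks like from the client’s point of view. The resource, the URLs and the `_links` structure (borrowed from the HAL convention) are purely illustrative, not taken from any particular API:

```python
import requests

# Hypothetical HATEOAS-style response for an order resource:
# {
#   "id": 1234,
#   "status": "open",
#   "_links": {
#     "self":   {"href": "https://api.example.com/orders/1234"},
#     "cancel": {"href": "https://api.example.com/orders/1234/cancel"},
#     "items":  {"href": "https://api.example.com/orders/1234/items"}
#   }
# }
response = requests.get("https://api.example.com/orders/1234")
order = response.json()

# The client never constructs the URL itself; it simply follows the link
# the server provided under the "cancel" relation.
cancel_url = order["_links"]["cancel"]["href"]
requests.post(cancel_url)
```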

HATEOAS involves two costs. First of all, all the URLs to be included in the messages have to be calculated by the server. Second, the hypermedia increases the size of the messages going over the wire, thus requiring more bandwidth than would otherwise be necessary.

The first cost is probably not so big, especially since calculating the URLs usually doesn’t involve much more than pretty straightforward string concatenations. But already here we see an aspect of YAGNI (“you aren’t gonna need it”) showing up: in most cases, the client will need only one of the many URLs included in the message. The calculation of all the other URLs was just a waste of CPU cycles.
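For illustration, the server-side “calculation” typically amounts to little more than the following sketch, with a hypothetical base URL and resource names:

```python
BASE_URL = "https://api.example.com"

def build_links(order_id: int) -> dict:
    # Little more than string concatenation: the same handful of links is
    # computed for every response, whether the client uses them or not.
    order_url = f"{BASE_URL}/orders/{order_id}"
    return {
        "self":   {"href": order_url},
        "cancel": {"href": f"{order_url}/cancel"},
        "items":  {"href": f"{order_url}/items"},
    }
```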

The bandwidth problem will be the more obvious issue though. Unless the resources in your system contain many fields, or at least some very long ones, the part of the message containing the hypermedia will quickly outgrow the part containing the actual resource. If you really want to include every possible step the client might decide to take next, the resource itself will drown in all the hypermedia included in the message. The question then becomes: should you include only the most important URLs as hypermedia links in your messages, leaving it to the client to figure out the URLs for the remaining, less common services? Or do you stick to strict HATEOAS, and therefore double or triple your bandwidth requirements?

In order to answer these questions, let’s revisit the advantages of HATEOAS. Basically, HATEOAS saves the clients from having to figure out and calculate the URLs for the next interaction with the server. But as we already mentioned, calculating the URLs usually doesn’t involve much more than pretty straightforward string concatenations. If the server can do that, surely the client can do it too.
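As a sketch of that counter-argument: a client that builds the URL itself needs nothing more complicated than this (again with hypothetical names), and it only builds the one URL it actually uses:

```python
import requests

BASE_URL = "https://api.example.com"

def cancel_order(order_id: int) -> None:
    # The same string concatenation the server would otherwise have done,
    # now performed on the client side for the one link that is needed.
    requests.post(f"{BASE_URL}/orders/{order_id}/cancel")
```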

Moreover, HATEOAS solves only part of the puzzle of figuring out how to call the next service. Knowing the correct URL and the method isn’t enough: the client also needs to know which parameters to use, and how, or what the payload of the service call should look like. That means that even if the client can automatically pick up the correct URL and method, there still needs to be a person who programs the client to construct the payload correctly. And if a person is involved anyway, how much does it really cost to add the calculation of the URLs too? Again: that calculation will in most cases be just a pretty straightforward string concatenation.
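A small sketch of the point: even when the server hands the client the URL, somebody still has to program what goes into the request. The field names and the URL below are hypothetical:

```python
import requests

# Even if order_url was picked up automatically from a hypermedia link,
# the shape of the payload still has to be known and coded by hand.
order_url = "https://api.example.com/orders"  # or extracted from _links
payload = {
    "customer_id": 42,
    "lines": [
        {"product_id": 7, "quantity": 3},
    ],
}
requests.post(order_url, json=payload)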

The next question is then: if a change is going to happen on the server side, which of the following will be the most likely: a change in the business logic, a change in the structure of the payload or the format of the parameters, or a relocation of the URLs? In my experience, a change in the URLs is the least likely thing to happen. And if it happens, it will not be the only thing that happens. Also, of these three types of changes, a change in the construction of the URLs will be the easiest to fix. The reason for this is not only that it’s usually just a pretty straightforward string concatenation, but also that it’s one of the easiest things to write unit tests for.
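That last claim is easy to illustrate: a unit test pinning down the URL construction is about as small as a unit test gets. The helper and expected URL below are hypothetical:

```python
import unittest

BASE_URL = "https://api.example.com"

def order_cancel_url(order_id: int) -> str:
    # The client-side URL construction under test.
    return f"{BASE_URL}/orders/{order_id}/cancel"

class OrderCancelUrlTest(unittest.TestCase):
    def test_builds_cancel_url_for_order(self):
        self.assertEqual("https://api.example.com/orders/1234/cancel",
                         order_cancel_url(1234))

if __name__ == "__main__":
    unittest.main()
```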

So this is what HATEOAS does for you: it saves you from implementing the easiest part of the business logic on the client side, which is the least likely to change, and the easiest part to update if it ever changes. Is that really worth all the extra CPU cycles and bandwidth every single time a message is exchanged?

Is there a use case for HATEOAS? I think there is: when there will be many clients implemented by different people or organizations. But even then, HATEOAS shouldn’t be overdone, and you shouldn’t expect all clients to actually use the hypermedia. Also, think twice before you change your URLs: it kind of breaks the “L” in “URL”. But if you’re implementing a REST client and the server offers HATEOAS, you should use it whenever possible, because you can be sure those URLs are going to be changed sooner or later.
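In that spirit, such a client might prefer the link from the response and only fall back to constructing the URL itself when the link is missing, roughly like this (hypothetical names once more):

```python
def cancel_link(order: dict) -> str:
    # Prefer the URL the server provided; fall back to building it ourselves
    # if the server left the link out of the response.
    links = order.get("_links", {})
    if "cancel" in links:
        return links["cancel"]["href"]
    return f"https://api.example.com/orders/{order['id']}/cancel"
```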
