
Walking the wire: Mastering the Four Decisions in Microservices Architecture

Srinath Perera
Mar 10, 2016

Microservices are the new architectural style for building systems using simple, lightweight, loosely coupled services that can be developed and released independently of each other.

If you are new to microservices, I recommend reading Martin Fowler’s post. If you would like to compare microservices with SOA, watch Don Ferguson’s talk. Also, Martin Fowler’s “trade-off of microservices” and “when it is worth doing microservices” will help you decide when microservices are useful.

Let’s say that you have heard about, read about, and been convinced by microservices. Followers of the microservices architecture face several practical challenges, and this post discusses how to handle some of them. I will use MSA in the rest of the discussion to mean Microservices Architecture.

Decision 1: Handling No Shared Database(s)

Each microservice should have its own database, and data MUST NOT be shared between services via a database. This rule removes a common cause of tight coupling between services. For example, if two services share the same database, the second service will break when the first service changes the database schema. The teams would then have to talk to each other before changing the database, leading to delays, which takes us backward.

I think this rule is a good one and should not be broken.

However, there is a problem. We often share a database when two services operate on the same data (e.g. bank account data, a shopping cart) and need to update that data transactionally, using database transactions to enforce consistency.

Any other solution is hard. Let’s explore a few.

Solution 1: If updates happen in only one microservice (e.g. a loan approval process checks the balance), you can use asynchronous messaging (a message queue) to share data.
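
To illustrate the idea, here is a minimal Python sketch that uses an in-process queue as a stand-in for a real message broker (e.g. Kafka or RabbitMQ); the service and field names are illustrative. The account service owns the data and publishes changes; the loan service keeps its own local read copy instead of reading the account service’s database.

```python
import queue

# Stand-in for a message broker topic; a real system would use Kafka, RabbitMQ, etc.
balance_updates = queue.Queue()

def account_service_update(account_id, new_balance):
    # Account service owns the data; it publishes changes instead of sharing its database.
    balance_updates.put({"account_id": account_id, "balance": new_balance})

# Loan service keeps its own read-only copy, updated asynchronously.
loan_service_view = {}

def loan_service_consume():
    # Loan service drains the queue and refreshes its local copy.
    while not balance_updates.empty():
        event = balance_updates.get()
        loan_service_view[event["account_id"]] = event["balance"]

account_service_update("acc-1", 5000)
loan_service_consume()
print(loan_service_view["acc-1"])  # 5000
```

Note that the loan service’s copy is only eventually consistent, which is exactly the trade-off this solution accepts.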

Solution 2: If updates happen in both services, either consider merging the two services or use transactions. The post “Microservices: It’s not (only) the size that matters, it’s (also) how you use them” describes the first option. The next section describes transactions in detail.

Handling Consistency of Updates

Often, we use transactions to solve this problem, but transactions are slow, heavy, and expensive when applied in a distributed setting. However, we can sometimes solve the problem without them.

There are several options.

Option 1: Put all updates to the same Microservice

When possible, avoid multiple updates crossing microservice boundaries. However, doing this sometimes leaves you with only a few services or, worse, one big monolith, which takes us backward. Hence, this is not always possible.

Option 2: Use Compensation and other lesser Guarantees

As the famous post “Starbucks Does Not Use Two-Phase Commit” describes, the normal world works without transactions. For example, a barista at Starbucks does not wait until your transaction is complete. Instead, they handle multiple customers at the same time and compensate for any errors explicitly. You can do the same if you are willing to do a bit more work.

A key idea is that if an action fails, you can compensate. For example, if you are selling a book, first deduct the money, then ship the book, and if the shipping fails, return the money.
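
The book-shipping example can be sketched in a few lines of Python; the ledger and the failure flag are illustrative stand-ins for real payment and shipping services.

```python
class ShippingError(Exception):
    pass

# Illustrative stand-in for a payment service's ledger.
ledger = {"customer": 100}

def deduct_money(amount):
    ledger["customer"] -= amount

def refund_money(amount):
    # Compensating action: explicitly undo the earlier step.
    ledger["customer"] += amount

def ship_book(fail=False):
    if fail:
        raise ShippingError("carrier unavailable")

def place_order(amount, shipping_fails=False):
    deduct_money(amount)                    # step 1: take the money
    try:
        ship_book(fail=shipping_fails)      # step 2: ship the book
        return "shipped"
    except ShippingError:
        refund_money(amount)                # compensate instead of a distributed transaction
        return "refunded"

place_order(30)                             # succeeds; balance drops to 70
place_order(30, shipping_fails=True)        # fails; money is returned, balance stays 70
```

There is no distributed transaction here: each step commits locally, and failures are handled by running the compensating action.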

Another simple idea is to give the user a button to forcefully refresh the page when they can tell it is outdated. Also, sometimes eventual consistency or timeouts are sufficient: you bite the bullet and settle for weaker consistency (Werner Vogels’ post on eventual consistency is a good starting point).

Finally, “Life Beyond Distributed Transactions: An Apostate’s Opinion” is a detailed discussion of all these tricks.

Having said that, some use cases need transactions for correct results, and those MUST use transactions. See “Microservices and transactions - an update”, which weighs the pros and cons. Choose wisely.

Decision 2: Handling Microservice Security

In MSA, we could treat the identity server as just another microservice that every service calls to authenticate each request, which, in my opinion, leads to a big, complicated dependency graph.

Instead, I like the token-based approach described in the book “Building Microservices” and depicted by the following figure.

The client talks to an identity or SSO server, authenticates itself, and receives a signed token that describes the user and their roles (using SAML or OpenID Connect). The client then sends the token to microservices with each request. Each microservice verifies the token and authorizes the call based on the user roles described in it. This model pushes authentication to the client and does access control at the microservices while simplifying dependencies. For example, with this model, for the same query, a user with the role “publisher” might see different results than a user with the role “admin” because they have different permissions.

It is worth noting that the client can obtain the token once and reuse it, so the authentication call happens only once per session. Therefore, the overhead of the additional call is small.
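
A minimal Python sketch of the token flow, using an HMAC-signed JSON blob as a stand-in for a real SAML or OpenID Connect token (the key, user names, and roles are illustrative):

```python
import base64
import hashlib
import hmac
import json

# Illustrative signing key; real systems use the identity server's key material.
SECRET = b"identity-server-signing-key"

def issue_token(user, roles):
    # Identity server: sign the user's identity and roles into a token.
    payload = json.dumps({"user": user, "roles": roles}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def verify_token(token):
    # Each microservice: check the signature locally, no call back to the server.
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid token")
    return json.loads(payload)

def list_reports(token):
    # Authorize based on the roles carried inside the token.
    claims = verify_token(token)
    if "admin" in claims["roles"]:
        return ["all reports"]
    return ["own reports"]

token = issue_token("alice", ["publisher"])
print(list_reports(token))  # ['own reports']
```

Because verification needs only the signing key, each microservice can authorize requests without adding an edge to the identity server in the dependency graph.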

How To Control User Identity Within Microservices? provides more information about this approach.

Decision 3: Handling Microservice Composition

Most compositions with SOA looked like the following: a central server runs the workflow.

SOA composition using a centralized server (e.g. an ESB or a workflow engine). MSA discourages the use of an ESB (e.g. “Top 5 Anti-ESB Arguments for DevOps Teams”). On the other hand, “Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?” provides counter-arguments.

I do not plan to get into the ESB fight in this post. However, I want to discuss whether we need a central server to do microservices composition. There are several ways to compose microservices.

Approach 1: Drive flow from Client

In this approach, the client calls each microservice directly and combines the results itself. It has several problems.

  1. Multiple calls now need to be triggered by the client. Therefore, if the client is behind a slow network, which is the most common case, execution is slow.
  2. It might add security concerns, as some logic runs in the browser (I could hack my app to give me a loan).
  3. The above example focuses on a website, but most complex compositions often come from other use cases, so the general applicability of client-side composition is yet to be demonstrated.
  4. Where do we keep the state? Can the client be trusted to keep the state of the workflow? Modeling state with REST is possible, but it is complicated.

Approach 2: Choreography

A typical implementation includes an eventing system, where each participant in the process listens to different events and carries out its part. Each action generates asynchronous events that trigger participants downstream. Environments like RxJava or Node.js use this as their programming model.

For example, let’s assume that a loan process includes a request, a credit check, a check for other outstanding loans, manager approval, and a decision notification. The following picture shows how to implement this using choreography: the request is placed in a queue and picked up by the next actor, who puts its results in the next queue, and the process continues until it has completed.
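
A simplified Python sketch of this choreography, using in-process queues in place of a real broker and collapsing the loan process to two actors (all names and the scoring rule are illustrative):

```python
import queue

# One queue per step; each actor reads its input queue and writes to the next.
credit_check_q = queue.Queue()
approval_q = queue.Queue()
notify_q = queue.Queue()

def request_loan(application):
    credit_check_q.put(application)      # the request enters the first queue

def credit_checker():
    # Actor 1: runs the credit check, passes the result downstream.
    app = credit_check_q.get()
    app["credit_ok"] = app["score"] > 600
    approval_q.put(app)

def manager_approval():
    # Actor 2: approves based on the credit check, hands off for notification.
    app = approval_q.get()
    app["approved"] = app["credit_ok"]
    notify_q.put(app)

request_loan({"id": 1, "score": 700})
credit_checker()
manager_approval()
decision = notify_q.get()
print(decision["approved"])  # True
```

Note that no component sees the whole flow, which is precisely why monitoring the end-to-end process becomes hard, as discussed next.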

Choreography is like a dance: both are complicated and need practice for correct execution. For example, the programmer does not know when the process has finished, whether an error has happened, or whether the process is stuck. Choreography needs extensive monitoring to track progress, to recover, and to notify about errors.

On the other hand, choreography creates systems that are much more loosely coupled, which is its main advantage. For example, you can add a new actor to the process without changing the other actors. You can find more information in “Scaling Microservices with an Event Stream”.

Approach 3: Centralized Server

SOA implements composition using two methods: an ESB or business processes. MSA proposes an API gateway instead (e.g. “Microservices: Decomposing Applications for Deployability and Scalability”). I would say the API gateway is more lightweight and uses technologies like REST/JSON. However, in a pure architectural sense, all of these use the orchestration style with a central server.

Another variation of the centralized server is “backends for frontends” (BFF), which builds a server-side API per client type (one for desktop, one for iOS, etc.). This model creates a different API for each client type, optimized for its use case. See the pattern “Backends For Frontends” for more information.

I recommend not going crazy with all the options here; start with the API gateway, as that is the most straightforward approach. You can switch to more complicated options as the need arises.
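
For contrast with the choreography sketch above, here is the same loan flow in the orchestration style, with the gateway holding the workflow; the two service functions are illustrative stubs standing in for real HTTP calls.

```python
# Stubs standing in for HTTP calls to the credit and loan microservices.
def credit_service(customer_id):
    return {"customer_id": customer_id, "score": 700}

def loan_service(customer_id, score):
    return {"customer_id": customer_id, "approved": score > 600}

def gateway_apply_for_loan(customer_id):
    # The gateway holds the workflow: it calls the credit service, then
    # passes the result to the loan service, so the services never call
    # each other directly.
    credit = credit_service(customer_id)
    return loan_service(customer_id, credit["score"])

result = gateway_apply_for_loan("cust-42")
print(result["approved"])  # True
```

The flow is easy to follow and monitor, at the cost of concentrating business logic in the gateway.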

Decision 4: Avoiding Dependency Hell

Let’s consider a microservice “A” with the API version “A1” that has been upgraded to API version “A2”. Now there are two cases.

  1. Microservice B might send messages intended for A1 to A2. Supporting this is backward compatibility.
  2. Microservice A might have to be reverted to A1, while microservice C continues to send messages intended for A2 to A1. Supporting this is forward compatibility.

If microservices are to be released independently, you must handle the above scenarios. If not, all your effort to build an MSA is wasted.

Often, handling these cases is a matter of adding optional parameters and never renaming or removing existing parameters. More complicated scenarios, however, are possible.
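
The optional-parameter rule can be shown with a small Python sketch; the message fields and defaults are illustrative.

```python
def get_account(msg):
    # A2 handler that still accepts A1 messages: existing fields are never
    # renamed or removed, and fields new in A2 are optional with defaults.
    account_id = msg["account_id"]           # present since A1
    currency = msg.get("currency", "USD")    # added in A2, optional
    return {"account_id": account_id, "currency": currency}

old_client = get_account({"account_id": "a-7"})                     # A1-style message
new_client = get_account({"account_id": "a-7", "currency": "EUR"})  # A2-style message
print(old_client["currency"], new_client["currency"])  # USD EUR
```

Because the new field is optional, old A1 clients keep working against A2, and A2-style messages degrade gracefully if the service is rolled back.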

The post “Taming Dependency Hell within Microservices with Michael Bryzek” discusses this in detail. “Ask HN: How do you version control your microservices?” is another good source.

Finally, backward and forward compatibility support should be time-bounded to avoid complications. For example, you can have a rule that no microservice should depend on APIs that are more than three months old. That lets microservice developers eventually drop the code paths supporting old versions.

Before closing, I would like to rant a bit about what your dependency graph should look like in a microservices architecture.

One option is to freely invoke other microservices whenever needed. That recreates the spaghetti architecture of the pre-ESB era. I am not a fan of that model.

The other extreme is saying that microservices should not call other microservices, and that all connections should go via the API gateway or a message bus. This leads to a one-level tree. For example, instead of microservice A calling B, we bring the result from microservice A to the gateway, which then calls B with that result. This is the orchestration model. Most of the business logic now lives in the gateway and, yes, this makes the gateway fat.

My recommendation is either to go with the orchestration model or to do the hard work of implementing choreography properly. Yes, I am asking you not to do the spaghetti.

Conclusion

When you design with microservices, you must keep your eye on the prize, which is loose coupling. There are quite a few challenges, and this post answered the following questions.

  1. How can I handle a scenario that needs to share data between two microservices?
  2. How can I evolve microservices API while keeping loose coupling?
  3. How to handle security?
  4. How to compose microservices?

Thanks! I would love to hear your thoughts.

If you enjoyed this post you might also find the following interesting.

Also, check out some of my most read posts and my talks (videos). Talk to me at @srinath_perera or find me.

Ripples in Middleware

Thoughts about system design, SOA, REST, microservices, scalability, messaging

Srinath Perera

Written by

A scientist, software architect, author, Apache member and distributed systems programmer for 15y. Designed Apache Axis2, WSO2 Stream Processor... Works @WSO2
