Managing Microservices

Lev Perlman
Published in Frontend Weekly · Oct 19, 2018

You have decided it is time to implement the hottest architecture on the market — microservices. You’ve researched the buzzwords and know what all the noise is about. Now what?

This article is going to cover the following subjects:

  • Microservices architecture — quick recap
  • Running microservices locally at the same time (via docker-compose)
  • Setting up an API gateway to manage microservices

Quick recap

Let’s take a step back and look at the key concepts of microservices:

  • Small, single responsibility services
  • Decoupled from the rest of the program logic
  • As self-contained as possible
  • Unaware of other services’ implementations, just of endpoints if necessary
  • Communicating with each other over the network (direct HTTP calls to REST endpoints, WebSocket or Bayeux messages, messages pushed onto a queue, etc.)

If we follow these basic guidelines, then a typical fintech payments system would look like this:

  • customers-service: stores customers’ private information and data
  • transactions-service: manages transactions
  • communication-service: manages notifications, SMS, e-mails, etc. to users
  • analysis-service: KYC and analysis management, manual and automatic processes
  • persistence-service: manages documents and files sent by users
  • contracts-service: manages contracts that the users sign

etc…

Each of these services can be developed in any language you desire, and in the end — they will be managed by some kind of orchestrator.

The orchestrator can either be developed by yourself, or be an off-the-shelf product like Kong, AWS API Gateway, WSO2 Gateway, etc.

Let’s say you’ve finished developing your posh microservices, and got to the stage of managing them via the orchestrator. All your services need to be running at the same time for the orchestrator to orchestrate them, right?

You could run each of the services separately via the CLI, but that wouldn’t be very efficient, would it? It’s time-consuming and becomes genuinely painful once your service count grows above five.

The best way to win this, in my opinion, is to dockerise each of the services, and then use docker-compose to run all of them at the same time, with specified dependencies, environment-variables, etc.

I do not want to repeat the instructions on how to dockerise an app — it has been done many times and in many variations. A quick search and some configuration work will get you to a state where all your apps are dockerised.

I am assuming that this is exactly where you are: all your services are dockerised. Now it is time to run all of them efficiently, so please welcome our next guest — docker-compose!

docker-compose

A docker-compose file specifies a desired state:

“I want apps 1, 2 and 3 running at the same time, with THIS configuration, THESE dependencies and in THAT order.”

You can specify which service depends on what, which environment variables each one of the services should use, how exactly each service should run, etc.

In addition, you can specify which backing services should run alongside them — e.g. a PostgreSQL database, a Redis instance, a Kong gateway, etc.

You can add your own containers hosted locally, you can add in-house developed containers hosted on your own Docker repo, or even add publicly available containers hosted on public repos.

docker-compose will also create a default network, which allows every service to reach every other service by using the service name as the hostname, following the pattern http://SERVICE_NAME:PORT/. For example, if we have a service named analytics-service and another named users-service, the analytics-service can reach users-service at this URL: http://users-service:3000/api/someRequest (plain HTTP by default; HTTPS only works if you configure TLS yourself). Awesome, isn’t it? docker-compose takes away the pain of wiring up network interfaces and sockets by hand. (You can still do all of that explicitly, but in simple scenarios like this one you don’t need to.)
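To make the naming pattern concrete, here is a tiny sketch in plain JavaScript. The `serviceUrl` helper and the service names are illustrative, not part of any library; it simply builds the URL one container would use to reach another on the default compose network.

```javascript
// On the default docker-compose network, the service name from the
// compose file doubles as the hostname for other containers.
function serviceUrl(serviceName, port, path = '/') {
  return `http://${serviceName}:${port}${path}`;
}

// From inside analytics-service, reach users-service by its compose name:
const url = serviceUrl('users-service', 3000, '/api/someRequest');
// url === 'http://users-service:3000/api/someRequest'
// and could then be used as, e.g., fetch(url)
```

Note that this only works from inside containers on the same compose network; from the host machine you would still use localhost and the published port.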

The best way to get your head around this whole shebang would be to look at a live example:

Specification:

  • A db service, which pulls a MongoDB image from a public Docker repo
  • A users-service, which depends on the db being up first, exposes port 3000 and receives this string as the ENV variable DB_CONNECTION=mongodb://db:27017/users
  • An analytics-service, built from the analytics-service directory, which depends on both the users-service and the database, and exposes port 3002
  • Last but not least, a transactions-service, which exposes port 3003 and does not depend on anything
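A compose file matching this specification might look like the sketch below. The image name, build paths and version field are assumptions based on the list above, not the author’s original file:

```yaml
version: "3"
services:
  db:
    image: mongo                    # public MongoDB image from Docker Hub

  users-service:
    build: ./users-service          # assumed local build context
    depends_on:
      - db
    ports:
      - "3000:3000"
    environment:
      - DB_CONNECTION=mongodb://db:27017/users

  analytics-service:
    build: ./analytics-service
    depends_on:
      - db
      - users-service
    ports:
      - "3002:3002"

  transactions-service:
    build: ./transactions-service   # assumed build context
    ports:
      - "3003:3003"
```

Note that db publishes no ports here: the other services reach it over the internal compose network, so only the APIs meant for the outside world are exposed.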

Once this file specification is complete, we can run one command that will set this whole thing up:

docker-compose up

And that’s it. All your services will start in the order and with the dependencies you specified. (Running up will also build any images that haven’t been built yet.)

The orchestrator

Now that all of our services are running, it’s time to manage them. Why? Because our front-end shouldn’t be familiar with all of these services.

The client side should be familiar with a flow. For example, ‘register a user’ includes creating the user’s entry, saving the user’s documents, setting up the user’s bank details, and sending a welcome push notification. As you can see, this flow requires several microservices to perform their magic. It might happen in parallel, it might be synchronous; it doesn’t matter right now.

In order to expose a flow, we need an orchestrator that will actually make these requests AND will contain an API layer that exposes ‘flow’-ish endpoints to the front end.
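As a sketch of what a hand-rolled orchestrator flow might look like, here is the ‘register a user’ flow from above in plain JavaScript. `registerUserFlow` and the injected service clients are hypothetical names, not a real API; the injection just keeps the flow testable without live services:

```javascript
// One 'flow' endpoint body: the orchestrator calls several
// microservices in sequence and returns a single result to the client.
async function registerUserFlow(user, services) {
  const created = await services.users.create(user);                    // users-service
  await services.persistence.saveDocuments(created.id, user.documents); // persistence-service
  await services.transactions.setupBankDetails(created.id, user.bank);  // transactions-service
  await services.communication.sendWelcomePush(created.id);             // communication-service
  return created;
}
```

An HTTP route such as a hypothetical POST /api/flows/register would then call `registerUserFlow` and return its result, so the front end only ever sees that one flow endpoint instead of four services.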

You can achieve that by either adding an API gateway solution (like WSO2 Gateway, AWS API Gateway, Kong, etc) OR by developing your own orchestrator.

Why use ready products?

  • Save time and resources on developing and maintaining your own product.
  • Trust the industry, which relies on these products, instead of reinventing the wheel.
  • Products usually support many features and plugins (which ties back to the first point).

Why develop in-house?

  • Get exactly what you want from the beginning: most likely, at some point you will hit a use case that is not supported by the gateway you chose. Then you can either try to fork it (if it’s open-source) or start rushing your own in-house workarounds and development, which might come too little, too late.
  • Most products on the market might be too heavyweight for your needs. A good architecture is clean and simple, even for complex products; drowning it in unused features and bloatware is always bad.
  • After development is complete, hosting costs and maintenance might be cheaper than an enterprise license for a shelf product.

I will write a separate article comparing industry-leading products in the API gateway & management realm, but for now — I suggest you do your research, play with Kong, design an in-house solution, and compare them.

Cheers, and see you next time!


Tech Lead | Co-founder @ STATEWIZE | Host @ Smart Cookies | TechNation Exceptional Talent | https://statewize.com