Latency in Microservices

Bruno Pedro
Feb 24, 2016 · 1 min read

Generally speaking, you can define latency as the time delay between a cause and its observable effect in the system being observed. In point-to-point communication, latency is simply the time it takes to get a response from the system. When a single workflow involves many services called in sequence, the latency becomes the sum of all their response times.

Latency is the sum of all response times
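As a sketch of that idea, here is a small simulation of a workflow that calls three services one after another. The service names and response times are made up for illustration; the point is that the end-to-end latency of a sequential workflow is the sum of the individual response times.

```python
import time

# Hypothetical per-service response times in milliseconds (illustrative values).
response_times_ms = {"auth": 40, "inventory": 120, "billing": 80}

def call_service(name: str) -> float:
    """Simulate a synchronous service call; the sleep stands in for
    network transfer plus processing time."""
    latency = response_times_ms[name]
    time.sleep(latency / 1000)
    return latency

# In a sequential workflow, total latency is the sum of all response times.
start = time.perf_counter()
total = sum(call_service(s) for s in response_times_ms)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"sum of response times: {total} ms")  # 240 ms
print(f"observed end-to-end latency: {elapsed_ms:.0f} ms")
```

The observed latency is never less than the 240 ms sum, and in practice it would be higher still once real network overhead is added.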

How can you reduce latency in microservices? Learn more with this presentation about Asynchronous Microservices.
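One way to reduce that latency, hinted at by the asynchronous approach above, is to call independent services concurrently instead of sequentially. Here is a minimal sketch using Python's asyncio, with the same made-up services and response times as before: when the calls run in parallel, the end-to-end latency tracks the slowest call rather than the sum of all calls.

```python
import asyncio
import time

# Hypothetical per-service response times in milliseconds (illustrative values).
response_times_ms = {"auth": 40, "inventory": 120, "billing": 80}

async def call_service(name: str) -> float:
    """Simulate an asynchronous service call; the await stands in for
    network transfer plus processing time."""
    await asyncio.sleep(response_times_ms[name] / 1000)
    return response_times_ms[name]

async def workflow() -> float:
    # Fire all calls concurrently: end-to-end latency is roughly the
    # slowest call (~120 ms here) instead of the sum (240 ms).
    start = time.perf_counter()
    await asyncio.gather(*(call_service(s) for s in response_times_ms))
    return (time.perf_counter() - start) * 1000

elapsed_ms = asyncio.run(workflow())
print(f"end-to-end latency: {elapsed_ms:.0f} ms")
```

This only helps when the services are independent of each other; calls that depend on a previous service's output still have to wait, which is why asynchronous, message-driven designs restructure the workflow itself rather than just parallelizing it.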
