Latency in Microservices

Generally speaking, you can define latency as the time delay between the cause and the effect of some change in the system being observed. In point-to-point communication, you can measure latency as the time it takes to get a response from the system. When a single workflow involves many services called in sequence, the latency is the sum of all their response times.

Latency is the sum of all response times
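To see how response times add up, here is a minimal sketch that simulates a workflow calling three services sequentially. The service names and response times are made up for illustration; `time.sleep` stands in for real network calls.

```python
import time

def call_service(name, response_time):
    # Simulate a blocking call to a downstream service.
    time.sleep(response_time)
    return f"{name} done"

def workflow(services):
    # Call each service in sequence and measure the total elapsed time.
    start = time.monotonic()
    for name, response_time in services:
        call_service(name, response_time)
    return time.monotonic() - start

# Hypothetical response times (in seconds) for three services in one workflow.
services = [("auth", 0.05), ("inventory", 0.10), ("billing", 0.08)]
elapsed = workflow(services)
print(f"total latency: {elapsed:.2f}s")  # roughly 0.05 + 0.10 + 0.08 = 0.23s
```

Because each call blocks until the previous one completes, the total latency is never less than the sum of the individual response times.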

How can you reduce latency in microservices? Learn more in this presentation about Asynchronous Microservices.
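One way asynchronous communication helps, assuming the services in the workflow are independent of each other, is by issuing the calls concurrently so the workflow latency approaches the slowest single response time instead of the sum. A minimal sketch with `asyncio`, using the same made-up service names and response times:

```python
import asyncio
import time

async def call_service(name, response_time):
    # Simulate a non-blocking call to a downstream service.
    await asyncio.sleep(response_time)
    return f"{name} done"

async def workflow(services):
    # Fan out all calls at once; they wait concurrently, not one after another.
    await asyncio.gather(
        *(call_service(name, response_time) for name, response_time in services)
    )

# Hypothetical response times (in seconds) for three independent services.
services = [("auth", 0.05), ("inventory", 0.10), ("billing", 0.08)]
start = time.monotonic()
asyncio.run(workflow(services))
elapsed = time.monotonic() - start
print(f"total latency: {elapsed:.2f}s")  # close to max(0.05, 0.10, 0.08) = 0.10s
```

This only works when the calls do not depend on each other's results; a chain of dependent calls still pays the full sum of response times.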
