What is Network Latency?

Latency is, at its core, about how fast data moves across the internet. Content providers focused on web performance are at the forefront of low network latency innovation. Network latency is commonly defined as:

“Latency is an expression of how much time it takes for a packet of data to get from one designated point to another.” — TechTarget

In practice, however, network latency is a serious issue, and for professionals looking to optimize performance it often has the largest impact. High network latency degrades both application and web performance, which in turn leads to poor customer experiences and lower conversion rates. On Wall Street, it can mean millions of dollars.

Taking the definition further into real-world application, the goal is always the same: low network latency.
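
To make the definition concrete, latency can be measured directly from application code. The sketch below times a TCP handshake, which requires a packet to reach the server and an acknowledgment to come back, so it approximates one round trip. It is a minimal illustration, not a production tool; the host name is a placeholder and the helper function is hypothetical.

```python
import socket
import time

def tcp_connect_latency(host: str, port: int = 443) -> float:
    """Time a TCP handshake to `host`, in milliseconds.

    The handshake sends a SYN and waits for the SYN-ACK, so the elapsed
    time approximates one network round trip (RTT), plus a small amount
    of connection-setup overhead.
    """
    start = time.perf_counter()
    conn = socket.create_connection((host, port), timeout=5)
    elapsed = (time.perf_counter() - start) * 1000
    conn.close()
    return elapsed

# 'example.com' is just a placeholder endpoint.
print(f"RTT ~ {tcp_connect_latency('example.com'):.1f} ms")
```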

Low Latency

Achieving low network latency essentially means decreasing the amount of time it takes for data to move from one location to another. In that discussion, geographical distance and the speed of light become the hard limits.

The difference between low and high network latency is, in effect, the difference between good and bad performance: low latency means responsive web and application experiences, while high latency means sluggish ones.

What is the Speed Benchmark?

Before discussing what low network latency looks like, we need a benchmark. Data cannot move faster than the speed of light, so the speed of light sets the limit. A common reference point: “20ms [Round Trip Time] RTT is equivalent to ~3000km, or an 1860-mile radius for light traveling in vacuum”.

Taking 20ms as the baseline that cannot be beaten over that distance, we can begin a meaningful discussion. Google, for example, currently sees around 100ms of network latency; the goal is to close in on, and keep improving, that number.
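
That baseline can be checked with simple arithmetic: the minimum possible RTT over a distance is twice that distance divided by the speed of light. A quick sketch (the fiber figure is an assumption based on its refractive index of roughly 1.5):

```python
SPEED_OF_LIGHT_KM_S = 299_792  # in vacuum; in optical fiber, roughly 2/3 of this

def min_rtt_ms(distance_km: float, medium_factor: float = 1.0) -> float:
    """Lower bound on round-trip time over a straight-line path, in ms."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * medium_factor)
    return 2 * one_way_s * 1000

# ~3,000 km in vacuum: matches the ~20 ms figure quoted above.
print(f"{min_rtt_ms(3000):.1f} ms")        # ~20.0 ms
# The same distance through fiber is already ~50% slower.
print(f"{min_rtt_ms(3000, 2/3):.1f} ms")   # ~30.0 ms
```

Real paths are longer than straight lines and add routing and queuing delays, which is why observed latency sits well above this floor.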

The Bottleneck is Network Latency

A major concern for application performance across a distributed team, or on a cloud platform, is network latency. Many factors affect application performance, but once a system runs distributed, network latency becomes the bottleneck.

Why is Network Latency the Bottleneck?

In short, network latency is harder to optimize than bandwidth, the more commonly advertised performance metric. Bandwidth can be added; the distance data must travel, and the time light needs to cover it, cannot be engineered away. That is why network latency ends up as the bottleneck, and the more difficult conversation to have.
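
A rough model makes the asymmetry visible: the time to fetch a resource is approximately one round trip plus the payload size divided by bandwidth. With the illustrative numbers below (assumptions, not measurements), adding bandwidth barely moves the total for a typical web-sized object, while cutting latency nearly halves it:

```python
def fetch_time_ms(payload_kb: float, rtt_ms: float, bandwidth_mbps: float) -> float:
    """Naive fetch-time model: one round trip plus payload serialization time."""
    transfer_ms = (payload_kb * 8) / bandwidth_mbps  # kilobits / (megabits/s) -> ms
    return rtt_ms + transfer_ms

# A 100 KB object at 100 ms RTT and 100 Mbps:
print(fetch_time_ms(100, rtt_ms=100, bandwidth_mbps=100))  # 108.0 ms
# Doubling bandwidth saves only 4 ms...
print(fetch_time_ms(100, rtt_ms=100, bandwidth_mbps=200))  # 104.0 ms
# ...while halving latency saves 50 ms.
print(fetch_time_ms(100, rtt_ms=50, bandwidth_mbps=100))   # 58.0 ms
```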

The Importance

Network latency matters to a wide range of industries, from FinTech to gaming and many in between. It matters because it directly affects the deliverability of content, and deliverability in turn affects web performance, application performance, and data transfer capabilities.

When building an application and designing for performance, network latency is difficult to predict, and this is where the problem arises. There is an expectation of how an application should run, but without predictable (and ideally low) network latency, the result can be congestion and packet loss. That becomes the DevOps team's problem.
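
One practical response is to measure latency's distribution rather than a single average, since the tail percentiles are what reveal the jitter behind congestion and packet loss. A minimal sketch, assuming a list of RTT samples collected with something like the hypothetical handshake timer shown earlier:

```python
import statistics

def latency_profile(samples_ms: list[float]) -> dict[str, float]:
    """Summarize RTT samples so the tail, not just the median, is visible."""
    ordered = sorted(samples_ms)
    p95_index = int(0.95 * (len(ordered) - 1))  # nearest-rank approximation
    return {
        "p50": statistics.median(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

# e.g. samples = [tcp_connect_latency("example.com") for _ in range(50)]
```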

Network latency is more than a term to define. It is a problem worth solving and a metric worth optimizing: doing so improves web and application performance and gives developers a more consistent network to build on.

Article originally published on the Datapath.io Blog.
