What Is Latency?

Zak Cole

In relation to network performance, latency generally refers to the amount of time it takes to send a packet of data from one location to another. Users aren’t affected by latency itself so much as by its effect on applications. If a webpage arrived 100ms after you requested it, that would be wonderful; in practice, 100ms of latency causes the underlying protocols to throttle their transmission speed, so what the end user sees is seconds of lag, not 100ms of physical-layer delay. Whether you’re playing a game online, sending an email, or browsing the web, the way applications respond to latency is critical to successfully accomplishing any task.
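To see how protocol behavior turns milliseconds of latency into seconds of lag, consider TCP’s flow control: a sender can keep at most one receive window of unacknowledged data in flight per round trip, so throughput is capped at roughly window size divided by round-trip time. A minimal sketch with illustrative numbers (both values below are assumptions):

```python
# Back-of-the-envelope: TCP throughput is capped at window / RTT,
# since only one window of data can be in flight per round trip.

window_bytes = 64 * 1024   # classic 64 KiB receive window (no window scaling)
rtt_seconds = 0.100        # 100 ms round-trip latency

max_throughput_mbps = (window_bytes * 8) / rtt_seconds / 1e6
print(f"throughput ceiling: {max_throughput_mbps:.1f} Mbps")  # ~5.2 Mbps
```

At a ceiling of roughly 5 Mbps, a multi-megabyte page takes seconds to transfer regardless of how much raw bandwidth the link offers.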

At the enterprise level, milliseconds can make a world of difference and application responsiveness can make or break a company, so it’s vital to properly understand the concept of latency. No matter the network, three primary factors contribute significantly to latency: propagation delay, routing and switching, and queuing and buffering.

The term delay is often used interchangeably with latency, but there is a subtle difference between the two. Propagation delay refers to the amount of time it takes for the first bit to travel over a link between sender and receiver, whereas latency refers to the total amount of time it takes to send an entire message.

Propagation delay is computed as distance divided by wave propagation speed (d / s). That speed is some fraction of the speed of light, but that doesn’t mean you’ll be able to share cat memes with your friend on the other side of the world in the blink of an eye. Light travels through fiber-optic cable at approximately 1ms per 200km, which is about a third slower than the speed of light in a vacuum (299,792,458 meters per second).
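As a minimal sketch of the d / s calculation in code, assuming the roughly 200km-per-millisecond fiber speed quoted above:

```python
# Propagation delay = distance / propagation speed (d / s).
# Light in fiber covers roughly 200 km per millisecond, about
# two-thirds of its speed in a vacuum.

FIBER_KM_PER_MS = 200.0  # approximate figure from the article

def propagation_delay_ms(distance_km: float,
                         speed_km_per_ms: float = FIBER_KM_PER_MS) -> float:
    """One-way propagation delay in milliseconds over a given distance."""
    return distance_km / speed_km_per_ms
```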

If you’re sending a .JPG from Los Angeles to a friend in New York, you might expect ~22.45ms of latency to move that data over 4,488.9km of fiber. But end-to-end latency is usually measured as a round-trip time, not one-way, and since the path between any two points is rarely direct, cross-country latency usually sits around 100–120ms rather than 22ms.
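Plugging the Los Angeles-to-New York numbers in shows how the round trip grows even before indirect routing enters the picture:

```python
# LA -> NY example: 4,488.9 km of fiber at ~200 km/ms.
distance_km = 4488.9
one_way_ms = distance_km / 200.0   # ~22.4 ms
round_trip_ms = 2 * one_way_ms     # ~44.9 ms, even on a perfectly direct path

print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
# Real routes are rarely direct, and other delays stack on top, which is
# how cross-country round trips end up in the 100-120 ms range.
```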

Propagation speed also varies based on the physical medium of the link in use. Over copper cable, propagation speed can drop to as low as roughly 60% of the speed of light. This slowdown is expressed as the cable’s velocity factor (VF). But whether you’re using fiber, copper, or coax, the primary contributor to latency is propagation delay, and the primary contributor to propagation delay is distance, so distance should be your first concern when dealing with network latency.
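The same d / s calculation works for any medium once its velocity factor is known. A short sketch with ballpark VF values (the exact figures vary by cable type, so treat these as assumptions):

```python
# Ballpark velocity factors (fraction of c); exact values vary by
# cable type, so these are illustrative assumptions.

C_KM_PER_MS = 299.792458  # speed of light in a vacuum, km per millisecond

VELOCITY_FACTOR = {
    "fiber": 0.67,         # single-mode fiber, ~200 km/ms
    "coax": 0.80,          # typical coaxial cable
    "twisted_pair": 0.60,  # lower end for copper twisted pair
}

def medium_delay_ms(distance_km: float, medium: str) -> float:
    """One-way propagation delay adjusted for the medium's velocity factor."""
    return distance_km / (C_KM_PER_MS * VELOCITY_FACTOR[medium])

print(f"{medium_delay_ms(1000, 'twisted_pair'):.2f} ms")  # ~5.56 ms per 1,000 km
```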

The time between transmitting the first and last bit of a packet is the transmission, or serialization, delay. It’s infinitesimal for a small packet on a high-bandwidth backbone link (10G or 100G), but it can add hundreds of milliseconds for a large packet on a low-bandwidth link.
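A quick sketch of that arithmetic, using an assumed 1,500-byte packet, shows why serialization vanishes on a 10G backbone but dominates on a slow link:

```python
# Serialization delay = packet size / link bandwidth: the time needed
# to clock every bit of the packet onto the wire.

def serialization_delay_ms(packet_bytes: int, bandwidth_bps: float) -> float:
    return packet_bytes * 8 / bandwidth_bps * 1000

print(f"{serialization_delay_ms(1500, 10e9):.4f} ms")  # on 10G: ~0.0012 ms
print(f"{serialization_delay_ms(1500, 64e3):.1f} ms")  # at 64 kbps: ~187.5 ms
```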

During its journey, data passes through various controllers, routers, and switches that help it reach its destination. Each of these gateway nodes is responsible for a different task in deciding what to do with the data. With the advent of software-defined wide area networking (SD-WAN), the routing of data can take a minimal amount of time; an SD-WAN controller can constantly monitor each available path and dynamically choose the least congested one to route data most efficiently. The routing and switching delay itself is negligible; the main delay packets experience in routers and switches is queuing delay.
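As a purely hypothetical sketch of that kind of controller logic (the path names and latency figures below are invented for illustration, not drawn from any real SD-WAN product):

```python
# Hypothetical SD-WAN-style path selection: given fresh latency
# measurements per path, steer traffic over the least-delayed one.

def pick_best_path(path_latency_ms: dict) -> str:
    """Return the name of the path with the lowest measured latency."""
    return min(path_latency_ms, key=path_latency_ms.get)

paths = {"mpls": 38.0, "broadband": 52.5, "lte": 74.0}
print(pick_best_path(paths))  # -> mpls
```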

No two networks are exactly alike. One may see little traffic while another serves a multitude of users. When a link becomes heavily saturated, routers and switches avoid dropping packets, and so preserve data integrity, by placing them in a queue for later transmission. This keeps packet loss within the network to a minimum.

The amount of time a packet sits in a queue is referred to as queuing delay, and the memory holding the packets waiting to be processed is referred to as a buffer. As the number of queued packets increases, so does the time each one waits, which consequently adds to network latency. There are different types of queues and buffers that handle packets using various algorithms, but the important thing is to understand how queuing can affect latency and the transmission of data.
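For a rough feel of the numbers: in a simple FIFO queue, a packet waits for the serialization time of every packet ahead of it. A back-of-the-envelope sketch with assumed values:

```python
# Back-of-the-envelope FIFO queuing delay: every packet ahead of yours
# must be fully serialized before yours goes out. Values are assumptions.

def queuing_delay_ms(packets_ahead: int, packet_bytes: int,
                     bandwidth_bps: float) -> float:
    return packets_ahead * packet_bytes * 8 / bandwidth_bps * 1000

# 100 full-size (1,500-byte) packets ahead of us on a 100 Mbps link:
print(f"{queuing_delay_ms(100, 1500, 100e6):.1f} ms")  # ~12.0 ms
```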

Of course, this is an oversimplification of complex networking events, and a multitude of other factors can contribute to latency. Either way, the takeaway is that latency significantly affects application responsiveness, and the primary contributing factor to latency is distance. Applications that appear highly responsive on a local network may perform terribly once deployed across a wide area network and distance is introduced. Latency can never be reduced to zero, but as long as applications are developed to handle it, we can keep a step ahead of it.

Originally published at www.apposite-tech.com on October 11, 2017.
