Under the hood of HTTP requests in node

Seth Hodgson
9 min read · Jan 26, 2019


My HomeAway colleague, Trevor Livingston, recently wrote about instrumenting outbound HTTP requests in Node, motivating me to share a brief walk down the stack for those who aren’t yet familiar with what happens ahead of an HTTP request and response message exchange with a target server.

Specifically, he focused on the socket event emitted by the http.ClientRequest object, making the excellent point that delays or errors while acquiring socket resources, establishing a connection at the TCP/IP protocol layer, and completing a TLS handshake need to be considered separately from higher-level HTTP protocol errors or client-initiated timeouts in order to build correct, robust systems.
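As a concrete starting point, here’s a minimal sketch (my own illustration, not Trevor’s instrumentation) of listening for the socket event and timing each connection phase; example.com is just a placeholder host:

```js
const https = require('https');

// Time each connection phase relative to when the request was created.
const start = process.hrtime.bigint();
const elapsedMs = () => Number(process.hrtime.bigint() - start) / 1e6;

const req = https.get('https://example.com/', (res) => {
  console.log(`response headers after ${elapsedMs()} ms (status ${res.statusCode})`);
  res.resume(); // drain the body so the socket can be released
});

req.on('socket', (socket) => {
  console.log(`socket assigned after ${elapsedMs()} ms`);
  socket.on('lookup', () => console.log(`DNS lookup done after ${elapsedMs()} ms`));
  socket.on('connect', () => console.log(`TCP connected after ${elapsedMs()} ms`));
  socket.on('secureConnect', () => console.log(`TLS handshake done after ${elapsedMs()} ms`));
});

req.on('error', (err) => console.error('request failed:', err.message));
```

Note that with a keep-alive agent a reused socket won’t emit lookup, connect, or secureConnect again, which is exactly why those phases need to be observed separately from the HTTP response itself.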

And for bonus points, there are a few fascinating implications of the TCP protocol’s window size, flow control, and congestion control algorithms to consider as well. Does a new socket come into this world transmitting data quickly or slowly? Are you familiar with TCP incast? Let’s dig in and find out more.

What’s a “socket” anyways?

This cute name comes to us courtesy of the University of California, Berkeley, where the BSD Sockets API was birthed. The Berkeley Software Distribution (BSD) was one of the early Unix operating system distributions, and faculty and students at Berkeley defined the Sockets API to provide a C programming language interface for the internet protocol suite as it was being defined.

They were inspired by physical cable connections between devices, where connectors at the ends of a cable plug into sockets (e.g. the electric socket in the wall that your charging cable may be plugged into right now). So the Sockets API defined the programming abstractions for the endpoints of these new virtual connections being implemented between computer hosts.

They did a great job; the API and its naming worked well and spread across the Unix landscape and into all of our networking APIs (Nodejs, Java, Go, etc.).

Where the Sockets API is situated

Socket Resource Limits

In the Unix Sockets API, the OS manages network sockets very similarly to the way files are managed. Each socket connection gets a corresponding descriptor that indexes into internal OS data structures used to manage the state of the socket. There is a finite set of descriptors available. In addition, the OS’s TCP/IP protocol stack maintains additional state for each active connection, including data buffers, and our systems have finite memory limits.

If your application is slow to connect new sockets, this is one place to look. The lsof command on Linux can be used to inspect the number of socket descriptors owned by a given application process, and ulimit can be used to inspect the OS descriptor limits applied to each application process (i.e. how many open files / sockets / etc. the process is allowed to allocate). Running up against a hard limit on the number of sockets allowed for your process, or leaving a large number of TCP/IP connections in a not-fully-closed and released state, can exhaust these OS resource pools, interfering with and delaying resource acquisition for new connections.
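If you want a rough in-process view of the same thing, here’s a Linux-only sketch that counts entries in /proc/self/fd, roughly what lsof -p <pid> would show for the process:

```js
const fs = require('fs');

// Linux-only: /proc/self/fd lists this process's open descriptors
// (files, sockets, pipes, ...).
function openDescriptorCount() {
  return fs.readdirSync('/proc/self/fd').length;
}

// Log the count periodically; unref() so this timer doesn't keep the process alive.
setInterval(() => {
  console.log(`open descriptors: ${openDescriptorCount()}`);
}, 5000).unref();
```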

TCP/IP Socket Connections

The two key networking protocols in the internet protocol suite that HTTP depends on are TCP and IP.

IP (Internet Protocol) provides the base foundation to transmit small datagrams between hosts on the internet, based on addresses in the datagram header that indicate the intended destination host and the source host (so that the destination can respond to the source if it desires). These datagrams pass through many hops (generally routers) on their way between hosts. They may be dropped by a router along the network path in the face of too much ingress traffic that fills up all available buffer space, and in rare circumstances datagrams within a single logical connection may even end up routed along entirely different paths in the network, resulting in different end-to-end delays and out-of-order delivery at the target host.

TCP (Transmission Control Protocol) sits on top of IP and supports sending a stream of bytes bi-directionally (i.e. duplex) between two hosts across a logical connection. It achieves this with a set of features that provide reliable (re)transmission, error detection, flow control, and congestion control. A TCP protocol stack splits its outbound byte stream for a given connection into a series of segments, each tagged with a sequence number, that are packed into IP datagrams and transmitted over the network. The sequence numbers allow lost or delayed segments to be detected at the receiving end and safely retransmitted by the sending end. The inbound and outbound byte streams are buffered to detect lost or delayed segments and wait for them to be retransmitted successfully before passing completed blocks of the byte stream received so far up to application code. Lots of other tricky work is done by the TCP protocol stacks at both ends of a logical connection to manage throughput, congestion, etc. I got a great taste for the complexities here during my time at Adobe working with a peer-to-peer protocol stack, RTMFP (Real-Time Media Flow Protocol), that ran over UDP/IP but implemented many of these same concerns in order to play nicely with TCP/IP connections across shared routers and middleboxes in the network.

Before byte streams can be sent across a logical TCP connection, the new connection is negotiated between the two hosts’ TCP protocol stacks. This is the proverbial TCP 3-way handshake, where 3 segments are exchanged in total to bootstrap the state necessary for both hosts to manage the new connection. Unlike a physical cable, a TCP “connection” is a virtual connection represented by programmatic state maintained by the TCP protocol stacks at either end. So when a connection drops or times out, what is actually happening is that a TCP protocol stack decides the absence of an expected TCP segment is a good enough indication that the remote host is no longer reachable or online, and considers the “connection” timed out and dead.
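In Node, that handshake completing is what a plain TCP socket’s connect event reports. Here’s a minimal sketch with a deliberately short app-level deadline; example.com:80 is just a placeholder endpoint:

```js
const net = require('net');

const socket = net.connect({ host: 'example.com', port: 80 });

// Our own inactivity deadline; OS-level TCP timeouts are far longer.
socket.setTimeout(5000);

socket.on('connect', () => {
  console.log('3-way handshake complete; byte streams can now flow');
  socket.end();
});
socket.on('timeout', () => {
  console.log('no connection within 5s; closing and releasing the socket');
  socket.destroy();
});
socket.on('error', (err) => console.error('connect failed:', err.message));
```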

Safety First

All communication over the public internet must be secured, and for TCP socket connections this is handled with public-key cryptography and the Transport Layer Security (TLS) protocol. A lot more back-and-forth goes on across the network between the client and the server in a TLS handshake, and a lot more data is exchanged just to get started. Think 7-way handshake rather than the 3-way TCP handshake for an insecure connection.

TLS 7-way handshake to get secure data flowing from client to server on a new socket

It’s at the point that all TCP and TLS handshaking completes that Nodejs’ tlsSocket emits its secureConnect event. Assuming you or your library sets a timeout on your socket, you can listen for a timeout event to detect when the full handshake doesn’t complete within this deadline. TCP protocol inactivity timeouts tend to be quite long by default, depending on the OS! That is much longer than most folks likely want to wait for a secure connection to initialize, much less send and receive an HTTP request and response, so apps or libraries often set their own app-level timeout and simply close and release the socket if a connect or secureConnect event doesn’t emit within the desired deadline. It’s important to realize that errors or timeouts during this phase have no impact on an HTTP transaction. The client hasn’t even begun transmitting its HTTP request message.
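Here’s a minimal sketch of that pattern using tls.connect directly, with an app-level deadline instead of relying on OS defaults; example.com:443 is just a placeholder endpoint:

```js
const tls = require('tls');

const socket = tls.connect({ host: 'example.com', port: 443, servername: 'example.com' });

// Our own handshake deadline, far shorter than OS-level TCP timeouts.
socket.setTimeout(3000);

socket.once('secureConnect', () => {
  console.log('TCP + TLS handshakes complete; safe to write the HTTP request');
  socket.end();
});
socket.once('timeout', () => {
  console.log('handshakes did not finish within 3s; closing and releasing the socket');
  socket.destroy();
});
socket.on('error', (err) => console.error('connection failed:', err.message));
```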

The specifics of the TLS handshake evolve as attacks and vulnerabilities are discovered and mitigated. But the gist is that the hosts at both ends of the new TCP socket connection exchange certificates and then do some computationally expensive cryptographic work in order to support encrypting/decrypting all of the upcoming in-flight data sent across the connection. Click the lock icon in your browser address bar and you can view details for the server certificate that was used to secure the TCP socket connection underlying the HTTP request and response that served you this web page.

All of this certificate metadata and keying information was sent to your host during the TLS handshake
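If you’d rather inspect the same certificate details from code, the peer certificate is available on the underlying TLS socket once the handshake completes; example.com is just a placeholder host:

```js
const https = require('https');

https.get('https://example.com/', (res) => {
  // The response's socket is a TLSSocket, so the negotiated peer certificate
  // is available directly from it.
  const cert = res.socket.getPeerCertificate();
  console.log('subject:    ', cert.subject);
  console.log('issuer:     ', cert.issuer);
  console.log('valid until:', cert.valid_to);
  res.resume();
});
```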

Sometimes, HTTP requests between backend systems are done without TLS in the mix. If all hosts involved are running within the same network trust boundary, that may be OK. When this isn’t the case, outbound HTTP requests from Nodejs or other backend systems to target endpoints should absolutely use HTTPS and certificate pinning to prevent man-in-the-middle attacks.
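One way to pin a certificate in Nodejs is to supply a checkServerIdentity function and compare the server certificate’s SHA-256 fingerprint against a value you trust; the fingerprint and the internal hostname below are hypothetical placeholders:

```js
const https = require('https');
const tls = require('tls');

// Hypothetical pinned fingerprint of the certificate we expect to see.
const PINNED_FINGERPRINT256 = 'AA:BB:CC:...';

const req = https.get('https://internal.example.com/health', {
  checkServerIdentity(hostname, cert) {
    // Keep the default hostname verification...
    const err = tls.checkServerIdentity(hostname, cert);
    if (err) return err;
    // ...then additionally require the pinned certificate fingerprint.
    if (cert.fingerprint256 !== PINNED_FINGERPRINT256) {
      return new Error('server certificate does not match the pinned fingerprint');
    }
  },
}, (res) => {
  console.log('status:', res.statusCode);
  res.resume();
});

req.on('error', (err) => console.error('request rejected:', err.message));
```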

Finally, HTTP!

At this point we finally have a TCP socket allocated by the OS, connected to a remote host’s TCP stack, with security (aka encryption/decryption of all in-flight data across the connection) in place. Now my client can actually send an HTTP request message over the secure connection and receive a response. The HTTP request message is often quite small compared to all the work that’s happened to set everything up. HTTP response messages, on the other hand, tend to be fairly large. The browser devtools network tab is a simple way to inspect the lower-level details of HTTP request / response messages (which are changing in some significant ways going from HTTP/1.1 to HTTP/2).

Not much data sent for HTTP GET requests; essentially these values, and common request headers
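A minimal request / response round trip that makes the asymmetry easy to see, counting the body bytes that come back; example.com is again a placeholder host:

```js
const https = require('https');

// The request is just a request line plus a few headers; most of the bytes
// on the wire are usually in the response body.
const req = https.get('https://example.com/', (res) => {
  let bodyBytes = 0;
  res.on('data', (chunk) => { bodyBytes += chunk.length; });
  res.on('end', () => console.log(`status ${res.statusCode}, ${bodyBytes} body bytes received`));
});

req.on('error', (err) => console.error(err.message));
```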

Connection Pooling

As we can see, a lot goes on to set up client-server connectivity before an HTTP request / response message exchange even takes place. In the dark ages, the HTTP protocol was defined to close an underlying TCP connection to signal the end of an HTTP transaction. It didn’t take long for folks to realize that was a poor decision.

So from HTTP/1.1 forward, the underlying TCP connection (along with its TLS handshake state) can be reused to perform further HTTP transactions against the same server host. Depending on the HTTP verb, and its idempotency, multiple requests may even be pipelined rather than running in a serialized request -> response -> request -> and so on lock-step fashion.

The Nodejs library for outbound HTTP requests provides support for pooled connections, via http.Agent, so close study of the API and thoughtful tuning is critical for good performance!
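A minimal sketch of a keep-alive agent; the numbers are illustrative and api.example.com is a placeholder host:

```js
const https = require('https');

const agent = new https.Agent({
  keepAlive: true,     // reuse sockets instead of closing them after each response
  maxSockets: 50,      // cap on concurrent sockets per origin
  maxFreeSockets: 10,  // idle sockets kept warm in the pool
});

https.get('https://api.example.com/v1/ping', { agent }, (res) => {
  res.resume();
  res.on('end', () => console.log('done; the socket goes back to the pool'));
});
```

With keepAlive enabled, the next request to the same origin can skip DNS lookup, the TCP handshake, and the TLS handshake entirely.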

TCP Connection Throughput: A Pooling Benefit

In addition to avoiding redo of all the setup work for a new TCP connection, connection pooling lets you better align your workload to the realities of the TCP protocol’s flow and congestion controls.

The rate that the TCP protocol can move data across a connection is based on a variety of complex factors, but is ultimately limited by the bandwidth-delay product. The TCP protocol uses window sizes, flow control, and congestion control to attempt to make best use of available resources across the entire network path between the client and server (including all routers, proxies, and so on, along the way). But the gist is that the TCP protocol passively probes to discover how much data it can transmit in the best case, by tip-toeing forward on its amount of in-flight data, and then aggressively falling backward when loss / delay is detected. And it starts off with a very pessimistic view, and a small window size for in-flight data. This algorithm is referred to as additive increase / multiplicative decrease.
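A toy illustration of that additive increase / multiplicative decrease behavior (not a faithful TCP implementation; units are segments per round trip, and the loss is staged):

```js
// Toy AIMD sketch: the window creeps up one segment per round trip,
// then roughly halves when a loss is detected.
let cwnd = 10; // a small initial congestion window
const history = [];

for (let rtt = 0; rtt < 20; rtt++) {
  history.push(cwnd);
  const lossDetected = rtt === 12; // pretend a drop happens on this round trip
  cwnd = lossDetected
    ? Math.max(1, Math.floor(cwnd / 2)) // multiplicative decrease
    : cwnd + 1;                         // additive increase
}

console.log(history.join(' -> '));
```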

Why does this matter? It matters because the effective throughput on a brand new TCP connection is terrible. The longer you can keep a connection around, the better the TCP protocol stacks at either end will understand true network conditions and how quickly they can shove data reliably across the connection!

So by not pooling connections, you’re forcing yourself into the network slow lane…

TCP Incast

I mentioned above that HTTP request messages tend to be small and HTTP response messages tend to be large. This also has interesting implications for making large numbers of outbound HTTP requests from a Nodejs server.

When the amount of data leaving a host on a large number of TCP connections is small relative to the amount of inbound response data, it’s possible to induce TCP network congestion on all of those inbound responses. HTTP transactions tend to exhibit this asymmetry in payload size, where small outbound requests are paired with much larger inbound responses. Watch this video for a great walk-through of this phenomenon.

The end result is that you might experience multiplicative decrease in TCP throughput for the competing responses that are trying to make their way back into your Nodejs server. This can be a bit harder to monitor for (you’re looking for packet drops on the return path into your server across all of those TCP connections, which is what triggers the multiplicative throughput decrease).

The More You Know

Hopefully you now know a bit more about what’s happening under the hood when your Nodejs app makes an outbound HTTP request, and knowing is half the battle.
