DC Metro Pro: Node.js and Socket.IO for realtime data

Super-charging realtime train arrival information

Mike Surowiec
5 min read · Jul 24, 2017


So What’s the Problem?

The Washington Metropolitan Area Transit Authority (WMATA) refreshes train arrival information in 20-second intervals. That means if you’re a Metro patron, the information you’re looking at is, on average, already 10 seconds old!

❌ U n a c c e p t a b l e.

If you’re not there in 2 minutes, you’ll be waiting 13 minutes!

I mean, this is 2017, right? Who has time for 20-second-old data! Next thing you know they’ll be telling us it’s better to tunnel cars underground than to fly them! Balderdash!

Alright, so let’s take a walk through how the DC Metro Pro app and website are set up to request the WMATA data as fast as possible.

Demo of the DC Metro Pro website. Click here to see it for yourself.

The app and website are backed by a small Node.js server that pushes realtime train information to the users. Below is a rough outline of how the system works.

If you go to https://doors-closing-server.devshack.io/ you should see an OK message! That means the server is running 😄. Also, Doors Closing was the old name of the app — an obscure homage to the automated train voice.

You can see in the diagram above that the Node.js server is essentially a caching layer between the clients and WMATA. (1) shows the HTTP request polling the WMATA API, (2) shows socket.io pushing the new data to the clients who want it.
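To make that concrete, here’s a rough sketch of the pattern (not the app’s actual code): it assumes axios and socket.io, a made-up 'trains' event name and port, and WMATA’s public GetPrediction endpoint for the realtime data.

// (1) Poll the WMATA realtime predictions endpoint, cache the result,
// and (2) broadcast it to every connected socket.io client.
const axios = require('axios');
const io = require('socket.io')(3000);

const cache = {}; // latest predictions, keyed by station code

async function pollPredictions() {
  const res = await axios.get(
    'https://api.wmata.com/StationPrediction.svc/json/GetPrediction/All',
    { headers: { api_key: process.env.WMATA_API_KEY } }
  );

  // Group the predictions by station code so clients can be served per station
  const byStation = {};
  for (const train of res.data.Trains) {
    (byStation[train.LocationCode] = byStation[train.LocationCode] || []).push(train);
  }
  Object.assign(cache, byStation);

  // Push the fresh data to listening clients (the real app can scope this per station)
  io.emit('trains', byStation);
}

How this polling function gets scheduled is covered in the setInterval vs setTimeout section below.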

Why do it this way?

The WMATA API is rate limited to 50,000 requests per day. With a single server making the requests, we can use that 50k allotment to keep every DC Metro Pro user as up to date as possible. With a little maths:

86,400 seconds in a day / 50,000 requests = 1.728 seconds per request

In practice, I added a little buffer, so the server makes one request every ~2.5 seconds (see the Multiple Endpoints section below for why). As a result, users of the DC Metro Pro app and website get the latest information 8x faster than on the WMATA website.

On top of that, using a WebSocket connection instead of plain HTTP has its own speed and network-overhead advantages. All around, it’s a pretty decent system!

But You Can’t Just Push New Data

If clients only received new data when the Node.js server got a response from the WMATA API, a new client coming online between polling requests would be stuck waiting 2–3 seconds for its first data.

❌ U n a c c e p t a b l e.

To make the clients feel instantaneous, the Node.js server also accepts one-off requests for data (which it pulls very quickly from the in-memory cache). This aspect works much more like a standard REST endpoint, where the client sends a request and gets a response, versus the listening approach of “let me know when you have new stuff”. You need both to have a complete system.

Above you can see that when a client sends a request for the realtime data for a set of stations (usually something like ['A32', 'B11']), the server grabs the data from the cache and sends it back via a separate emit for each station.
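In spirit, the handler looks something like this; the 'get-trains' and 'trains' event names are invented for the sketch, and cache is the same in-memory store from the earlier snippet.

// One-off request: a client asks for specific stations and gets an
// immediate answer from the in-memory cache, one emit per station.
io.on('connection', (socket) => {
  socket.on('get-trains', (stationCodes) => { // e.g. ['A32', 'B11']
    for (const code of stationCodes) {
      socket.emit('trains', { station: code, trains: cache[code] || [] });
    }
  });
});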

Multiple Endpoints to Synchronize

WMATA has more information than just the realtime train arrivals, so I set up a synchronization mechanism that requests the data from separate WMATA endpoints at different intervals. You can see below that the realtime train arrival data is the one updated most often, while the other two are on longer polling cycles.
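In sketch form, the schedule looks roughly like the config below. The ~2.5-second and 60-second intervals follow from the math underneath, but which two slower WMATA endpoints get polled is my guess (rail incidents and elevator/escalator outages), so treat those paths as illustrative.

// Each WMATA endpoint gets its own polling cadence.
const FEEDS = [
  { name: 'predictions', path: '/StationPrediction.svc/json/GetPrediction/All', intervalMs: 2500 },
  { name: 'incidents',   path: '/Incidents.svc/json/Incidents',                 intervalMs: 60000 },
  { name: 'elevators',   path: '/Incidents.svc/json/ElevatorIncidents',         intervalMs: 60000 },
];

FEEDS.forEach((feed) => loopWithInterval(feed)); // loopWithInterval is sketched below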

Just checking my math here, at most the server will make 37,440 requests in a day.

(86,400 / 2.5) + (86,400 / 60) + (86,400 / 60) = 37,440 requests per day

So that’s roughly 75% of the request allotment being utilized for the production server. The 25% buffer is for development and that warm feeling of being safely under the limit.

setInterval vs setTimeout

One thing you may notice is that the synchronization code doesn’t use setInterval, despite the function being referred to as loopWithInterval. This is because setInterval could cause requests to pile up and overlap; imagine if one request hung, and a following one returned before it. That gets into bad territory fast! In my case, the easiest solution was Good Enough™: simply wait for the response from WMATA, then send a new request after our predetermined interval. This can result in “float” as the request windows are shifted forward by the latency of the previous request. We could account for this pretty easily but ¯\_(ツ)_/¯.
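Here’s a minimal sketch of that pattern, reusing the FEEDS config from above; updateCacheAndNotify is a hypothetical helper standing in for the cache-and-emit logic shown earlier.

// Wait for the WMATA response, *then* schedule the next request.
// Unlike setInterval, a slow response can never overlap the next one.
async function loopWithInterval(feed) {
  try {
    const res = await axios.get(`https://api.wmata.com${feed.path}`, {
      headers: { api_key: process.env.WMATA_API_KEY },
    });
    updateCacheAndNotify(feed.name, res.data); // hypothetical: update cache + socket.io emit
  } catch (err) {
    console.error(`Polling ${feed.name} failed:`, err.message);
  }
  // The next window "floats" forward by the latency of the request above.
  setTimeout(() => loopWithInterval(feed), feed.intervalMs);
}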

For a more technical look at this problem (but in reverse) see how Figma handles rate-limiting: https://blog.figma.com/an-alternative-approach-to-rate-limiting-f8a06cf7c94c.

Graphs!

Since I started this project, WMATA has added some graphs and usage information, so I’ll share those below. This is usage over the last 90 days.

165GB of bandwidth 😎
Graphs are always fun!

My math from earlier seems to check out! We’re sitting at around 31,000 calls on a given day. If you’re so inclined, you can quantify how the request latency affects the number of calls we make! Neat-o.

I hope you enjoyed a little peek at the backend that powers the DC Metro Pro app and website. Thanks for reading!

✅ A c c e p t a b l e

— Mike Surowiec
