Why CDN capacity numbers don’t matter

I’ve been constantly fascinated over the last 10 or 15 years as CDNs have started to release “capacity numbers”, and more and more people have started to ask about CDN capacity: how many servers do you have, how many terabits of capacity do you have, how many PoPs do you have, and so on.

This is making its way into nearly every RFP process, and it’s treated by some blogs and media types as news: CDN X announces they have added 5tbps of new capacity, woo!

I understand the need for yardsticks, and perhaps these are the best we have, but citing them seems to imply the answer to this question is relevant to an individual customer. Everyone wants to be on a global CDN with ample capacity, but capacity to do what? Where? When? With whom?


What problem are you trying to solve?

The question people *should* be trying to answer when evaluating a CDN is:
For any given request for my content, by my clients, at the moment they request it, is that response delivered fast enough to meet my performance goals?

If you take a CDN with 25,000 servers, of which you are mapped to 200 (or even fewer), what value do those extra 24,800 servers provide to you?

Who else are you sharing those 200 servers with? What impact do they have on your cache hit ratio? What does it matter what the average cache hit ratio is on the other 24,800 servers?
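To make that last question concrete, here’s a toy Python sketch with entirely hypothetical numbers: even if the rest of the fleet has an excellent hit ratio, the server-weighted fleet average tells you almost nothing about the hit ratio on the 200 servers actually serving you.

```python
# Hypothetical numbers: hit ratios are invented for illustration only.
your_servers, other_servers = 200, 24_800
your_hit_ratio = 0.70    # hit ratio on the 200 servers you're mapped to
other_hit_ratio = 0.95   # hit ratio everywhere else in the fleet

# Server-weighted fleet average -- the kind of number a CDN might quote.
fleet_average = (
    your_servers * your_hit_ratio + other_servers * other_hit_ratio
) / (your_servers + other_servers)

print(f"fleet average hit ratio: {fleet_average:.1%}")  # ~94.8%
print(f"what you actually see:   {your_hit_ratio:.1%}")  # 70.0%
```

The fleet average is dominated by the 24,800 servers you never touch, which is exactly why it’s the wrong number to put in an RFP.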

What is the value of a CDN PoP that your traffic will never be served from?


$x tbps of capacity!

What good is 20tbps of capacity? Capacity to whom? Where? A lot of this capacity is ‘distressed’ much of the time. For example, say you have a CDN with 15tbps of capacity split across the east and west coasts, or across North America and Australia. That ‘capacity’ is useless while the users in the local market are asleep. You can never use all 15tbps at once (while still keeping the traffic on the same side of the country or ocean).

Even across a 24-hour period, Australia might account for only 3% of your usage compared to North America. Say you have 10tbps of capacity, of which 0.5tbps sits in Australia.

That means that in order to “use” that 0.5tbps of Australian capacity that day, you’d have to be pushing roughly 16tbps for your other 97% of users through your 9.5tbps of North American capacity!
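The arithmetic behind that claim is worth spelling out. A quick Python sketch, using the hypothetical 0.5tbps/9.5tbps split from the example above:

```python
# Hypothetical capacity split from the example above.
aus_capacity_tbps = 0.5
na_capacity_tbps = 9.5
aus_traffic_share = 0.03  # Australia's share of your daily traffic

# Total traffic required for Australia's 3% alone to fill its 0.5tbps.
total_needed = aus_capacity_tbps / aus_traffic_share        # ~16.7tbps
na_demand = total_needed * (1 - aus_traffic_share)          # ~16.2tbps

print(f"North American demand: {na_demand:.1f}tbps "
      f"vs {na_capacity_tbps}tbps of capacity")
```

The North American side would need to carry far more than its 9.5tbps before the Australian capacity was ever fully used, which is why that 0.5tbps is effectively stranded.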


More importantly, this ‘capacity’ isn’t guaranteed. The throughput between a sender and a receiver is limited by whatever the most constrained point in the network path is, and that has almost *never* been the CDN. It’s the transit provider’s peering, your peer’s total capacity to the peering point, the eyeball provider’s backhaul, and so on.

Going from 2x100gb circuits to your transit provider to 3x100gb looks like added capacity, but if that provider only had 8gbps of headroom at peak to a given destination, what value have you actually added to that network?
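The bottleneck logic is simple enough to write down. A minimal sketch, with made-up link names and numbers standing in for a real network path:

```python
# Hypothetical path segments and their available capacity in gbps.
path_gbps = {
    "CDN egress":               300,  # e.g. 3x100gb circuits to transit
    "transit peering headroom":   8,  # what's actually free at peak
    "eyeball backhaul":          40,
}

# End-to-end throughput is bounded by the narrowest link in the path,
# not by the CDN's headline capacity.
bottleneck = min(path_gbps, key=path_gbps.get)
print(f"effective capacity: {path_gbps[bottleneck]}gbps "
      f"(limited by {bottleneck})")
```

Tripling the 300gbps figure changes nothing here; the 8gbps of peering headroom still governs what your users actually get.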

I, personally, would be far more interested to hear news along the lines of “the transit provider we have 2tbps of capacity to just added 1tbps of additional capacity to Comcast or AT&T in New York, Ashburn and Miami” than a CDN announcing “we added another 5tbps of capacity to the same transit providers globally”.


What does this all mean? Not much; it’s mostly a rant. I don’t have a better yardstick to suggest: I’m suggesting you stop trying to use a yardstick. Choose a provider you believe is a great operator, one that will do everything it can to deliver your bits to your users, and, most of all, measure the performance impact to you.

Don’t worry too much about what infrastructure is used by people who aren’t you.