Why the Internet is broken

And how we can fix it

Each and every day we deliver customer data to end users through thousands of networks around the world. By the nature of our business, we sit in the middle of content distribution. Our goal is always to deliver data at full quality and speed. As hard as we try, however, that’s not always possible.

Ten years ago, the Internet was about “free” peering and improvements in quality and speed. Sure, transit prices were significantly higher, but everyone wanted to cooperate. Nowadays, thanks to the behaviour of the largest ISPs on the planet, the Internet is broken at peak times.

Why?

Normally, you’d buy interconnection to a TIER 1 network, one of those that can reach every other network on the Internet without purchasing IP transit or paying settlement fees. That should ensure you can connect to every network on the planet, right?

Wrong!

Large operators like Orange, Telefonica, Hinet, Deutsche Telekom or Liberty Global/UPC restrict access to their networks even from TIER 1 providers. The result is slow video downloads, higher latency, packet loss and traffic loss.

If we look at, for example, Orange France/Espana (AS3215) during peak times, we can see that the uplinks to their network are full.

As a consequence, data gets lost. In this particular case, Orange’s customers may experience packet loss of up to 30% on the uplink. The line graph above shows the quality of the connection between our probe and an end user inside Orange France: the green line means no losses, the blue line means packet loss.
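The measurement behind such a graph is simple arithmetic: each interval, count probes sent versus probes answered, and colour the interval by whether anything was dropped. A minimal sketch (the function names and sample numbers are illustrative, not our actual probe code):

```python
def packet_loss(sent: int, received: int) -> float:
    """Return packet loss as a percentage of probes sent."""
    if sent == 0:
        return 0.0
    return 100.0 * (sent - received) / sent

def classify(samples):
    """Colour each interval the way the graph does:
    'green' = no losses, 'blue' = packet loss observed."""
    return ["green" if packet_loss(s, r) == 0 else "blue"
            for s, r in samples]

# e.g. five 1-minute intervals of 100 probes each during peak time
samples = [(100, 100), (100, 98), (100, 70), (100, 100), (100, 65)]
print(classify(samples))
print(packet_loss(100, 70))  # the 30% peak-time loss mentioned above
```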

At this point, you may wonder why they do it. What do they want?

The only reason is money and market share. The prices these operators demand for direct access are ridiculously high, and for the most part it makes no sense to pay them.

The same thing happens with the interconnections between Cogent (a TIER 1) and Telecom Italia. As you can see, the interconnections are full for several hours a day.

Contracts with TIER 1 companies (the main Internet networks) sometimes include clauses along these lines:

Customer agrees to do its best effort to control and limit its use of XXX to reach AS 5511, 2956 and 3320.

Or

The Service provided pursuant of this Order shall not include connectivity to the IP Networks of Deutsche Telekom, France Telecom, AT&T and Comcast.

This means that even the largest Internet networks such as Level3, Telia, Cogent, GTT, NTT or TATA do not provide access to the whole Internet, as discussed at the beginning.

The exceptions are the large ISP networks in Europe, the US and Asia. As a small or medium network, you simply don’t have access to these networks during peak times, even with uplinks from TIER 1 providers.

Over the past few years, I’ve spent countless hours in meetings with ISPs, negotiating interconnection. The situation hasn’t changed.

What they don’t seem to realise is that it’s beneficial for both parties:

Content owners need interconnection to ISPs (i.e. users). ISPs need interconnection to content (Netflix, Facebook, Apple, Google, CDN providers, OVH, Leaseweb etc.).

One wouldn’t work without the other. The problem is, ISPs don’t really have this “pro-customer attitude”, thanks to their local monopolies. Since infrastructure is the only thing they have left to offer, they’re trying to squeeze the most out of it.

Content owners can switch to different providers (which doesn’t necessarily solve the problem), but end users at home cannot switch, or their options are very limited. They are stuck with a slow connection.

At this point, I believe we can agree that this is a problem. Until ISPs start to cooperate, we propose a different solution.

Users requesting the same data consume connectivity in vain and slow each other down. Since they request the same data, what if they could leverage what they’ve already downloaded?

They could form a network inside a hard-to-reach network by sending the data to each other. For the best possible delivery, each file would be delivered from multiple users at the same time.

If your initial thought was “we’ve been there, it didn’t work”, think again.

We can manage the content flow, i.e. we are in control of the peer-to-peer routing. We can keep traffic local, inside these hard-to-reach networks. If at some point peer-to-peer delivery stops working, a CDN is ready to lend a helping hand and back the connection.
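The routing decision described above can be sketched in a few lines: prefer peers inside the same ISP network (same AS number) so traffic stays local, and fall back to the CDN only when no peer holds the requested chunk. This is a hypothetical illustration under assumed names (`Peer`, `pick_sources`, `CDN_URL`), not CDN77’s actual implementation:

```python
from dataclasses import dataclass

CDN_URL = "https://cdn.example.com"  # hypothetical CDN fallback origin


@dataclass
class Peer:
    peer_id: str
    asn: int       # AS number of the peer's ISP
    chunks: set    # chunk IDs this peer already holds


def pick_sources(chunk_id, viewer_asn, peers, max_sources=3):
    """Return up to max_sources delivery sources for one chunk.

    Peers in the viewer's own AS sort first, so traffic stays inside
    the hard-to-reach network; the CDN backs the connection only when
    no peer can serve the chunk.
    """
    holders = [p for p in peers if chunk_id in p.chunks]
    holders.sort(key=lambda p: p.asn != viewer_asn)  # local peers first
    sources = [p.peer_id for p in holders[:max_sources]]
    return sources or [CDN_URL]  # CDN fallback when no peer has it


peers = [Peer("a", 3215, {"c1", "c2"}),   # inside Orange (AS3215)
         Peer("b", 3320, {"c1"}),         # inside DTAG (AS3320)
         Peer("c", 3215, {"c2"})]

print(pick_sources("c2", 3215, peers))   # local peers serve the chunk
print(pick_sources("c9", 3215, peers))   # nobody has it -> CDN steps in
```

A real system would also weigh peer upload capacity and churn, but the priority order — local peer, remote peer, CDN — is the essence of keeping traffic inside the network.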

First, connectivity inside an ISP’s network isn’t limited. Second, users wouldn’t slow each other’s connections down. On the contrary, they would speed them up as the load on the uplink thins out. Third, the cost for content providers would decrease.

To illustrate a real use case, imagine a live stream to Telefonica, DTAG or Orange. In the ideal scenario, the stream is delivered only a few times, while the rest is delivered peer to peer (P2P).
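Back-of-the-envelope arithmetic shows why this matters for the congested uplinks described earlier. All the numbers below are illustrative assumptions, not measured values:

```python
# Illustrative live-stream scenario: assumed figures, not measurements.
viewers = 10_000      # concurrent viewers inside one ISP network
bitrate_mbps = 5      # stream bitrate per viewer
seed_copies = 20      # copies pushed across the uplink into the network
                      # (the stream "delivered only a few times" above)

without_p2p = viewers * bitrate_mbps   # every viewer crosses the uplink
with_p2p = seed_copies * bitrate_mbps  # only the seed copies cross it

print(f"uplink load without P2P: {without_p2p / 1000:.1f} Gbps")
print(f"uplink load with P2P:    {with_p2p / 1000:.2f} Gbps")
```

Under these assumptions, 50 Gbps of uplink demand collapses to 0.1 Gbps, which is why the congested interconnections stop being the bottleneck.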

Result for end users?

The Internet will be as fast as technically possible again.

Because we know it works, we created a live demo. See it at webrtc2cdn.io. Try it, share it, comment on it.

Zdeněk Cendra, CDN77

My Twitter: twitter.com/zdendac