Blockchain and P2P Filesharing: Repeating History of Scalability Failures (Blockchain Society 3/?)

J. Macodiseas
Mar 16 · 12 min read

What “crypto” can(’t) learn from failures of others

In 2001, the same year BitTorrent was invented, another p2p network, already established and based on exactly the same principles, was failing. It had become the victim of its own success: it was reaching 150,000 users, and its community-operated directory servers could not keep up. Connect a new “server”, and three hours later it was at its bandwidth limit, dying.

What does this have to do with crypto? If you have tried to fully sync a crypto wallet lately (for example, a full Ethereum client), you have likely seen some of these issues, too. This “distributed” technology has scalability problems that, to somebody who started his “online career” unwittingly documenting the demise of the p2p network mentioned above, look startlingly familiar.

In this article I will try to describe the root of the problem, how it relates to crypto, and how it has been solved for p2p. And also why, in the crypto world, these solutions can be applied only partially, if at all.

The Granddaddy of distributed scalability problems

There are many parallels between the crypto world and the p2p filesharing world, and many of them have been excellently described in Simon Morris’ four-part series, “Bittorrent Lessons for Crypto”. I agree with most of what he writes, except for one thing: BitTorrent was not the first system that allowed easy and reliable file transfer over slow, unreliable links. Even ignoring Z-Modem from 1986, which already featured a significant number of resilience features, BT was a simplified version of something that already existed: a system that allowed multiple downloaders of one file to share fragments of that file among each other, speeding up each other’s downloads and, most importantly, reducing the strain on the uploader. A system that also allowed hyperlinks pointing to the shared files to be embedded on normal websites. At least one such system was eDonkey 2000, also called eDonkey, or ed2k for short. The operating principles of BitTorrent and eDonkey are so similar that it is hard not to consider BitTorrent a clone.

The main difference between BT and ed2k is that in ed2k, files were discoverable without a third-party website. You didn’t have to go to Pirate Bay, you could simply search the eDonkey directory server you were connected to for any files that others were sharing. You did this using a good old search box. What happened behind this search box was both the network’s boon, and its doom.

If you don’t want the technical nitty-gritty, just skip past the text box below.

In order to download a file from BitTorrent, you first have to find and download a torrent file for it from some web server, and feed it to your BT client. It will deal with the moderately complicated rest: connect to the tracker(s) for the actual file you want, get a list of everyone who you can download fragments of this file from, and begin the download.

Neither the tracker nor the peers can be found without the torrent file, which somebody has to host on a website. This means that if you want to publish a file, you have to:

- run a tracker for that file

- create a torrent file pointing to that tracker

- upload the torrent file to some website

- make the link to it known somehow

This introduces an additional layer of complexity, and additional middlemen. And of course, the sites those links are published on, like The Pirate Bay, are frequent targets for law enforcement.
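The torrent-file step above can be sketched in code. The sketch below builds a minimal single-file metainfo dictionary (all values are illustrative) and computes the SHA-1 “info-hash” of its bencoded `info` dict, which is what actually identifies the file to the tracker and to peers:

```python
import hashlib

def bencode(obj):
    """Minimal bencoding (BitTorrent's wire encoding): ints, bytes, lists, dicts."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # Dictionary keys must be byte strings, sorted.
        items = sorted(obj.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError("cannot bencode %r" % (obj,))

# A minimal single-file metainfo dict; names and sizes are made up.
info = {
    b"name": b"example.iso",
    b"length": 365 * 1024 * 1024,
    b"piece length": 262144,
    b"pieces": b"\x00" * 20,  # normally one 20-byte SHA-1 per piece
}
torrent = {b"announce": b"http://tracker.example.org/announce", b"info": info}

# The info-hash is the SHA-1 of the bencoded info dictionary; it is what
# the client sends to the tracker to ask "who else has this file?".
info_hash = hashlib.sha1(bencode(info)).hexdigest()
print(info_hash)
```

Note how the tracker URL lives outside the `info` dict: the file’s identity and its tracker are separate, which is exactly why a dead tracker orphans the file.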

On eDonkey, all you had to do to publish a file was to:

- start the eDonkey client

- put the file in the right directory on your computer

That’s it! Anybody connected to the same “eDonkey Server” as you can simply type some part of the file name into a search box and get offered your file for download. Click, done. Anyone downloading the same file automatically shares his file fragments with others, just like on BitTorrent. Thus, an ed2k server plays the roles of both the website and the tracker in one.

(Note that none of the three (the website, the tracker, or the ed2k server) has to handle the actual file data, so they are not in direct breach of copyright — but of course everyone who is uploading or downloading the actual file is.)

(As a further aside, in ed2k you could use hyperlinks on a website to point to the file as well, so if you preferred the convenience of a vetted website, it could still be done.)

Of course no-one expected just one server to be able to handle all the work that would result from hosting a search engine and the tracking information for all the files on the internet. This is why everybody who wanted to run an eDonkey server could do so. The servers connected with each other, forming a network of their own, just like the crypto clients do.

However, there was still one small problem. How do I find files that “my server” — the server I am connected to — does not track?

How did BitTorrent solve this problem? They didn’t — at least not at first. If your torrent file didn’t list an active tracker for a file, … tough luck. (This was also a feature, and saved BT from having the same type of problems, but I will leave this topic for another time*.)

eDonkey servers, on the other hand, sent you a list of all the servers they knew as a greeting, as soon as you connected. Which allowed you… to query them all for the file you needed. And this is why the eDonkey network died.

See, this last part meant it didn’t matter how many eDonkey servers there were on the internet, because every client out there would ask your server whether it knew about a file. Adding a new server didn’t reduce the load on any of the others; instead, it just increased the overall eDonkey search traffic on the internet. As an eDonkey server was gradually discovered by more and more clients, its internet connection (mostly a home DSL connection) became completely saturated, and the only way to continue operating it was to cycle your router to get a different IP from your provider.

Adding another client, however, hit every server equally (at least, every server the client could find; how that discovery happened is a tale for another time). It turned out the maximum number of clients on the entire network that the servers could survive without dying was about 150,000. That is close to nothing by today’s standards, but you have to remember that a normal DSL upload bandwidth back then was 256 kbit/s.
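The scaling failure is easy to put in numbers. A hedged toy model (the query rate and message size are invented for illustration): if every client broadcasts each search to every server it knows, then each server’s load tracks the total client population, not clients-per-server, and adding servers changes nothing for any single server:

```python
# Toy model: each client sends `searches_per_hour` queries, and every
# query is broadcast to EVERY known server, so each server sees them all.
def per_server_queries(clients, searches_per_hour=10):
    """Queries per second arriving at any one server."""
    return clients * searches_per_hour / 3600.0

# Adding servers does not appear in the formula at all:
for servers in (10, 50, 200):
    print(servers, round(per_server_queries(150_000), 1))  # ~416.7 every time

# At an assumed 500 bytes per query-plus-reply, the bandwidth needed is
rate = per_server_queries(150_000) * 500 * 8  # bits per second
print(round(rate / 1000), "kbit/s")  # ≈ 1667 kbit/s, vs. a 256 kbit/s uplink
```

With these (invented but plausible) numbers, a single home DSL uplink is saturated several times over at 150,000 clients, regardless of how many volunteers add servers.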

As part of the eDonkey community*, I was in the middle of this disaster, trying to find solutions, and unsuccessfully trying to convince people to avoid abusive tools that further increased traffic that was already getting out of hand.

And this was why, when I first heard of how Bitcoin was supposed to work, which was well before its official introduction, I laughed. In hindsight, I should have mined some instead: understanding the technical limitations of something obviously doesn’t help one bit with understanding greed.

The crypto scalability issue

If you know anything about distributed ledgers, you probably know that every full client of any of those networks has to hold a full copy of the ledger (at the very least, the current data of every account plus some history), which means receiving all updates on the network. Just like with eDonkey servers, adding another full client doesn’t take any work off the other full clients’ shoulders; instead, it produces more work for everyone, since someone has to send copies of the transactions to one more machine — and this machine, in turn, also generates transactions.

Adding a “light” client, or a “leech” client, doesn’t do the full clients any good either: every one of them still has to process the transactions of one more client, and at least one of them is additionally exposed to the light client’s requests every time it needs to query a balance.

Thus, the load on the network as a whole increases with every client, whether full or light, and it cannot be reduced by any means built into the tools. Does this remind you of something?…

If the network happens to have miners, the same problem eventually arises for the miners: they have to process and sign every single transaction. Now, normally those machines and their connections are better suited for the role of a server — they make money on the transactions after all — but even with those, a saturated connection is just a matter of time. Especially if they become the de-facto servers on the network, because all clients have turned into light clients, because the requirements for full clients are simply too high for non-commercial use.

The filesharer’s solution

Now if you are an avid technologist, you would shout “why didn’t they just use distributed hash tables!”

I can tell you why: all of this was happening before DHTs were invented.

DHTs, short for Distributed Hash Tables, work by spreading hash tables (in essence, simple databases) over a network of computers, so that each one of them becomes responsible for a part of the data. This means that, with some overhead, you can hold a lot more data in a DHT than can fit into the memory of any of the participating computers. Done right, it also means that every participant only gets hit by requests that query “his” part of the hash table, spreading the load over multiple nodes. Adding nodes to a DHT actually reduces the load on every other node instead of increasing it. This is why some form of DHT is used by literally everybody these days, starting with Google.
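The ownership idea can be shown in a few lines. The sketch below is a consistent-hashing toy, not a real Kademlia-style DHT (no routing, no replication); all node and key names are made up. Each node owns the arc of the hashed keyspace up to its own position on a ring, so lookups for a key hit exactly one node, and a new node takes over part of one neighbor’s arc:

```python
import hashlib
from bisect import bisect_right

class TinyDHT:
    """Toy consistent-hashing ring: each node owns the keys whose
    hashes fall just before its own position on the ring."""

    def __init__(self, nodes):
        # Place every node on the ring at the position of its hash.
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(value):
        # 64-bit position derived from SHA-1; any stable hash would do.
        return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

    def node_for(self, key):
        # First node clockwise from the key's position (wrapping around).
        positions = [h for h, _ in self.ring]
        i = bisect_right(positions, self._h(key)) % len(self.ring)
        return self.ring[i][1]

dht = TinyDHT(["node-a", "node-b", "node-c"])
print(dht.node_for("ubuntu.iso"))  # deterministic single owner for this key
```

The payoff is in what *doesn’t* happen: a lookup touches one node instead of all of them, and adding “node-d” re-maps only the keys on one arc, leaving everyone else’s load untouched or reduced.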

DHTs were, in fact, the solution for both ed2k — or rather its follow-up protocol called “Overnet” — and for BitTorrent’s tracker segregation issue: in Overnet, like in the BitTorrent DHT, every client becomes a tracker for a part of the network, allowing a much larger pool of sources to be found, and in Overnet’s case, removing the need for the eDonkey’s “server” network entirely.

Can “crypto” learn from failures of others?

Distributed Hash Tables seem like a silver bullet. They allow “sharding” virtually any information store, protecting the nodes from too many requests. Can this solution be used for crypto, then?

Short answer: no (see below). Long answer: maybe, but not in a way that satisfies all of the blockchain’s original promises.

The reliability of a “distributed ledger” comes in large part from the fact that a distributed ledger, very unlike a distributed hash table, is only distributed in the sense that a newspaper is distributed — everyone gets a copy. Sharding it will reduce the availability of the data, and therefore, the reliability.

Using another analogy with file sharing: you can only download a movie if there are enough sources online at the same time with you to cover the entire file. If there are 300 sources who have the beginning, 50 who have the end, and none who have the middle, you are not watching that movie today. This is sad, but not the end of the world. It might be a bit less great if what’s missing happens to be the contents of your wallet.

This, of course, can be counteracted by copying the ledger’s parts many times, to guarantee that enough clients are online at all times to cover the entire thing… But how many copies are enough? Are you OK with all of your money being “offline” with a chance of 1 in a thousand? 1 in a million? As long as all the nodes are us, the everyday people, you never know who and how many are going to be offline at what times. What if there is a major power outage in Sudan, and all the copies of your wallet happen to be there, even though you are in the US?

Of course, you could always run a full client yourself, and the sharding protocol could make sure that there is always a copy of your wallet and your transaction history on your machine. But look at it from the other side: let’s say you are selling something, and the buyer is the only person who has the proof of how much funds he has… are you really going to sell to him?

Then there is the little problem of securing the network: it is secured via miners. Obviously, in a scenario like this, a block cannot be signed by every miner; otherwise, we just move the “server-ness” of the server-less network onto the miners: every single one of them has to deal with every transaction on the network, making their connections the bottleneck.

This means that sharding will automatically reduce the security costs of the crypto network, in turn making it easier to attack. There are only two crypto networks in the world right now that can afford to reduce their security costs: Bitcoin and Ethereum. Pretty much every other crypto network has already been 51%-hacked. Which shows us at least one thing: the computational power some of the adversaries are able to wield is simply stupendous.

Which brings us to the last point. When you shard a crypto ledger, who can guarantee that the shard you are being served is actually a real part of the ledger? We are dealing with a network of unreliable, untrusted nodes, after all. What happens if all copies of part X of the ledger end up being “hosted” by an adversary?

At this point there are three arguments:

  1. Handwaving (no-one could take over all the nodes that host a shard)
  2. Statistics (it is extremely unlikely that all of the nodes that host a shard will end up in one network)
  3. Cryptography (there is some cryptographic security that prevents this from happening.)

The problem with all of these is that people tend to vastly underestimate the size and power of the adversary networks the crypto world is dealing with. For example, at the beginning of 2019 there was a 51% attack on Ethereum Classic, something that should have cost about $55 million to pull off (or at least half of that, given the decline of crypto prices). The adversary later returned $100k of the stolen tokens. It was, it seems, just a test, or a warning. It seems likely that the action didn’t cost him more than $1 million, since this is the amount the hacker seems to have kept.

If you can command a vast amount of computational power for a relatively low price (compared to the amount of money you are going to steal), there are all kinds of things you could do. Points 1 and 2 could be dealt with by DDoSing the “real” nodes of the shard off the net and taking their place. For number 3, I have to, once again, go back to history:

On eDonkey, the key for a file wasn’t its name. Everybody could have forged a file name, right? The files were addressed using their cryptographic hashes instead. You would follow a vetted link to, say, the newest blockbuster movie. The hash and the size would pan out.

So, some hours or days later, the download would be done; you would grab some microwaved popcorn, sit down with your significant other to watch, click… and turn it off rather quickly, seeing that it was the middle of some hardcore porn flick.

After a long argument with your hopefully-still-significant other, you would sit down and research what the heck happened. What happened was that the algorithm used for the hashing, and therefore for finding and addressing the file, was MD4. While you cannot decode a hash back into the data, due to information loss, you can, with some difficulty, find data that hashes into the same hash and still has the same size. Brute-forcing a hash is hard: it requires a lot of computational power, or a lot of time. But with MD4, brute-forcing wasn’t necessary: a few years earlier, somebody had found a weakness in the algorithm that allowed you to construct data matching a given hash at will.
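What content addressing buys, and what a broken hash takes away, can be demonstrated with a toy. Since producing a real MD4 collision is beyond a blog snippet, the sketch below uses a deliberately crippled stand-in (only the first byte of a SHA-256 digest) to show the failure mode: once colliding content is cheap to construct, the index cannot tell the real file from the fake, because the key is all it sees:

```python
import hashlib

def file_key(data: bytes) -> str:
    """Content addressing: the network indexes files by hash(content),
    not by name. (Real ed2k used MD4; SHA-256 stands in here.)"""
    return hashlib.sha256(data).hexdigest()

def weak_key(data: bytes) -> str:
    """A deliberately broken 8-bit 'hash' so collisions are trivial,
    mimicking the position a broken MD4 put the network in."""
    return hashlib.sha256(data).hexdigest()[:2]

movie = b"the blockbuster you wanted"
fake = None
for i in range(1_000_000):  # brute-force a collision against the weak hash
    candidate = b"something else entirely #%d" % i
    if candidate != movie and weak_key(candidate) == weak_key(movie):
        fake = candidate
        break

# Same key, different content: the index happily serves the fake.
print(weak_key(movie) == weak_key(fake))  # True
```

With a full-strength hash, that loop would run longer than the universe has; the whole security argument rests on that gap staying unaffordable, which is exactly what the MD4 break removed.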

The only thing stopping you from doing the same with more modern hashes is the amount of computational work required. But as we have already seen, the amount of computational power that the adversaries wield is quite mind-boggling, and seemingly cheaper than we tend to anticipate. It might take more than a cryptographic hash or two to keep things safe.

This is not to say that scaling a blockchain network is impossible. It clearly is possible, by sharding, but it comes with trade-offs, some of which we have mentioned above, and it is not going to be the happy, duplicated, everyone-gets-a-copy distributed ledger world. The other question is why do it at all: it seems much easier to solve all these problems by means of traditional databases or permissioned chains, without wasting enormous amounts of energy on Proof of Work.


This article is part of the Blockchain Society series. For now, it consists of the following parts:

  1. The Blockchain’s Broken Promises
  2. Trustless Blockchain Doesn’t Scale
  3. Blockchain and P2P Filesharing: Repeating History of Scalability Failures (this piece)
  4. Death By Fake Decentralization

*: I was one of a handful of people providing a service for the ed2k community that the BitTorrent network simply didn’t need: a server list. The .torrent files, while being an ugly intermediate step, included a way to find the trackers. ed2k:// links did not, and the dynamic IP addresses prevalent back then, as well as the puzzling lack of DNS support in the eDonkey protocol, made even getting onto the eDonkey network without a recent server list a rather frustrating proposition.
