IPFS: The Internet Democratised

Tony Willenberg
5 min read · Sep 12, 2016

Most people know of Sir Tim Berners-Lee, the CERN fellow who back in March 1989 proposed a hypertext-based, globally distributed and interconnected information system: what we today call the World Wide Web. But you would be hard pressed to find mention of Juan Benet. Juan is the inventor and lead developer of the InterPlanetary File System (IPFS), a technology likely to irreversibly disrupt today's Internet and bring an entirely new architecture into being.

Our current Internet, along with all the design, engineering, coordination, control, and myriad sub-systems it takes to keep it all working, is neither truly open nor truly efficient. Let me explain.

It is not open, in spite of the “Web 2.0” revolution, which went part-way towards putting content production in the hands of anyone with a browser. It is not open because access to content still runs through an infrastructure which, although globally distributed, remains very much centralised around a subset of computers (servers), providers, and network connections.

For example, this very post you are reading requires dozens of intermediating services and actors. The post was created on a computer connected, via a mobile broadband connection from a mobile phone company (an Internet service provider, or ISP), to a router, which let me locate the server running the software that manages Medium.com’s database of content, into which I placed the post. You, in turn, connected to your own ISP and typed the content’s address into a browser, which used the global domain name system (DNS) to turn that name into an IP address. Routers at your ISP used that address to locate the server holding the content, the server responded to your request by fetching the post from Medium.com’s database, and a copy was downloaded to your browser so you could read it. The only truly democratised things in this entire “transaction” are the content I wrote and your freedom to read it, and even that may not be true in some parts of the world.
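To make that chain of lookups concrete, here is a minimal Python sketch of a conventional, location-addressed fetch: resolve a hostname through DNS, then ask that one specific server for the page. The hostname and path are purely illustrative; the point is that every step depends on a particular machine, name registry and route being available.

```python
import socket
from urllib.request import urlopen

# Step 1: DNS — translate a human-readable name into the IP address
# of the one server (or server farm) that hosts the content.
host = "medium.com"                      # illustrative hostname
ip_address = socket.gethostbyname(host)  # query the DNS hierarchy
print(f"{host} resolves to {ip_address}")

# Step 2: HTTP — ask that specific location for the content.
# If the server, its ISP, or the DNS entry disappears, so does the post,
# no matter how many readers still hold copies of it.
with urlopen(f"https://{host}/") as response:
    page = response.read()
print(f"Fetched {len(page)} bytes from a single, central location")
```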

The Internet is also not as efficient as it could be, with interconnections needing ever more bandwidth for information to flow smoothly between any two points. The Internet today is largely a layered mish-mash of protocols and devices, each maintaining backward compatibility with the older, lower layers beneath it in order to deliver new services to you, the user, who only interacts with the topmost layer. Underneath is a deep, tumultuous sea of bits, getting choppier and deeper with our continuous effort to keep the world of devices and people afloat and sailing calmly across its surface.

Think back to a time when you couldn’t stream video on the Internet, or when you couldn’t buy things online. Each of these services required computer-science and engineering work to upgrade the Internet’s software and hardware before it could happen. Introducing a new service has to be done carefully so as not to disrupt the services that came before, and it is this continual patching and upgrading that has contributed to the Internet’s technical inefficiency.

With the benefit of hindsight and of developments in computer science over just the last 15 years, the IPFS proposes to make the Internet anew, potentially disrupting society, politics and economics along the way by deconcentrating and disintermediating power and control on today’s Internet. Those 15 years have seen groundbreaking progress in how we solve real-world problems using information and computation. We can excuse ourselves for what came before: prior to this period there was no real alternative but to grow the Internet the way it evolved, following a top-down, command-and-control architecture.

We needed ICANN to coordinate the assignment of unique IP addresses to servers on the Internet. We needed the DNS to give all users a single directory mapping the name of a server to its IP address. We needed Internet service providers around the world to give people local links onto the distribution network. We needed governments and the largest telecommunications carriers to provide the core interconnections between continents, across seas and over mountain ranges. We needed centralised trusted authorities to manage digital identity so that people and servers could trust each other when trading.

But have we gone too far? Many argue that we have. We have walled off most of the Internet so that cyber-boundaries mirror the boundaries of the nation state. We filter and censor the Internet in ways that are open to manipulation, without oversight, at the hands of a limited few. Our interactions are intermediated by a very few, very large businesses, some rivalling nation states on any measure of size. Even reading this post involves dozens of intermediating parties, from ISPs to data-centre providers to domain name hosting to government monitoring (depending on where you are reading it).

Enter Juan Benet, designer of the InterPlanetary File System (IPFS). The IPFS combines advances in computer science from the last 15 years, such as Bram Cohen’s BitTorrent protocol, Linus Torvalds’ Git, and Petar Maymounkov and David Mazières’ Kademlia distributed hash table, into what is described as “a new hypermedia distribution protocol, addressed by content and identities”. The IPFS permits anyone to be both a client and a server, producer and consumer, with roughly the same effort either way, and it encourages you to be both.
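A rough feel for two of those ingredients takes only a few lines of Python. The sketch below is a simplification, not how IPFS actually encodes things: it hashes a blob to get a content address (the Git-style idea; real IPFS wraps this in a multihash/CID) and uses Kademlia’s XOR metric to pick which of a few hypothetical peers is “closest” to that address.

```python
import hashlib

# Content addressing, greatly simplified: the "name" of a piece of
# content is a cryptographic hash of the content itself, so the address
# is the same no matter which machine serves it.
content = b"Hello, distributed web!"
content_id = hashlib.sha256(content).hexdigest()
print("content address:", content_id)

# Kademlia's trick (the DHT behind BitTorrent peer discovery and IPFS
# routing): treat node IDs and content IDs as numbers and measure
# "distance" as their XOR. The nodes closest to a content ID are the
# ones asked to remember who can provide it.
def xor_distance(id_a: str, id_b: str) -> int:
    return int(id_a, 16) ^ int(id_b, 16)

# Three hypothetical peers, identified by hashes of made-up names.
peers = {name: hashlib.sha256(name.encode()).hexdigest()
         for name in ("peer-alpha", "peer-beta", "peer-gamma")}

closest = min(peers, key=lambda n: xor_distance(peers[n], content_id))
print("peer responsible for this content:", closest)
```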

The IPFS not only brings the best of the Internet’s services, protocols, layers and structures together in one architecture, it also permits a smooth migration while leaving behind the patched and inefficient bits that usually get carried over when you move to something newer.

The IPFS provides:

  1. a protocol to locate content and coordinate delivery from one place to another;
  2. a file system to mount on a local system so that you can access distant resources as if they were local;
  3. a modular approach to network functions like routing and virtual circuits;
  4. peer-to-peer transfers of files without needing servers;
  5. a global namespace based on Public Key Infrastructure (PKI);
  6. a system for ensuring the integrity and version control of files; and
  7. an upgrade path for browsers so you can access information using the old way (http://) or the new way (ipfs://).
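To make a few of those points concrete, here is a hedged Python sketch of talking to a local IPFS node: it adds a small file over the daemon’s HTTP API and reads it back through the node’s local gateway, which serves the same content a browser would reach via ipfs://. It assumes a go-ipfs daemon is already running on the default ports (5001 for the API, 8080 for the gateway) and uses the third-party requests library; treat the details as illustrative rather than definitive.

```python
import requests  # third-party: pip install requests

API = "http://127.0.0.1:5001/api/v0"      # default daemon API port
GATEWAY = "http://127.0.0.1:8080/ipfs"    # default local gateway port

# Publish: hand the daemon some bytes and get back their content address.
resp = requests.post(f"{API}/add",
                     files={"file": ("hello.txt", b"Hello from IPFS\n")})
resp.raise_for_status()
cid = resp.json()["Hash"]
print("published as:", cid)

# Retrieve: ask the local gateway for the content by its address.
# Any IPFS gateway (or an ipfs:// aware browser) could serve the same
# bytes, because the address names the content, not a server.
fetched = requests.get(f"{GATEWAY}/{cid}")
fetched.raise_for_status()
print(fetched.text)
```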

The IPFS is not just a theoretical or academic experiment. It is a working software system (although still in alpha) that can be downloaded and switched on right now; you can in fact be up and running in a few minutes. Visit the IPFS home on GitHub to learn more and watch Juan Benet’s demonstration of installing and operating IPFS nodes.
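Getting to that “up and running” state really is short. The sketch below drives the go-ipfs command-line tool from Python purely for illustration; it assumes the ipfs binary from the GitHub releases is already on your PATH, and in practice you would simply type the same two commands in a terminal.

```python
import subprocess

# One-time setup: create the local repository and this node's keypair.
# (Safe to skip if the repo already exists; ipfs will say so and exit.)
subprocess.run(["ipfs", "init"], check=False)

# Start the node: it begins discovering peers and exposes the HTTP API
# on port 5001 and the read-only gateway on port 8080 used above.
# This call blocks while the daemon runs; stop it with Ctrl-C.
subprocess.run(["ipfs", "daemon"], check=True)
```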
