The Magic Of IPFS

Alexander Weinmann
Published in Coinmonks
Mar 18, 2019 · 5 min read


It is very easy to publish a complete website with IPFS. If you have a local IPFS node installed, you just need to run:

ipfs add -r webapp

Here, webapp is the directory in your local file system where your complete web application resides.

But is this really a useful way to publish websites?

Yes, it is, if you can handle it!

The pros and cons are widely discussed on the net, and you have to be aware that it is completely different from ordinary web publication. (Some of the known pitfalls are described quite clearly in this article.)

After you enter the command above, you will end up with nothing more than a bunch of hash codes that might look like this one: QmSH4VDxSY2V3KoDLojHZGkfPtVoVhDoisaLGBL2RPnvqM

As long as your own IPFS node is properly configured and up and running, it will now be possible for everybody to visit your site. With one simple command, you have added all the files of your web application recursively to IPFS. The root of your application is the last hash displayed in the output.
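The output might look something like this — the file names and all hashes except the root one are made up for illustration:

added QmYx3PSpyYXSyPJLW6cZ4BDLRE2H1wgDEBuTQmbdosvQ6N webapp/index.html
added QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn webapp/js/app.js
added QmSH4VDxSY2V3KoDLojHZGkfPtVoVhDoisaLGBL2RPnvqM webapp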

There are public IPFS gateways that provide your content for free, if and only if they happen to be online. At the time of this writing, the Gateway Checker found 33 gateways, 17 of which were online. This seems like poor availability, but in fact you have successfully published your content on 17 different sites. If another 16 went down, you would still remain “online”. You are very close to high availability.

Public gateways work like this: they query IPFS for the hash code you specified and then treat the result just like a static website. Of course, this implies that you cannot execute any server-side logic, as the backend can do nothing but serve static content.
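For example, the main public gateway at ipfs.io would serve the root hash from above under a URL like this:

https://ipfs.io/ipfs/QmSH4VDxSY2V3KoDLojHZGkfPtVoVhDoisaLGBL2RPnvqM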

There is no application server doing the heavy work like reading from and writing to databases. Today this is not as much of a disadvantage as it used to be. In many contexts, application servers can be called dinosaurs. There is an ongoing shift towards client-side applications. Serverless has become the new paradigm. Of course, at present this is not yet a feasible solution for every situation. But things will change.

New developments like IPFS are good indications of what the future will look like. For the first time, we have a data storage system that is not centralized. Its remaining limitations will go away with better infrastructure.

For IPFS, better browser support is needed. Put simply, browsers need to move from a client/server model to a decentralized one. Once that has happened, serverless applications will be much easier to write than they are today. They will be able to communicate with all sorts of peer-to-peer networks: with blockchains or newer file systems like IPFS.

In the meantime, I feel that IPFS is answering only half of the important questions. What about encryption? What about data integrity and permissions? In any local file system, you can limit access, you can grant different rights to different users, you can encrypt. None of that is possible in IPFS at the moment. Its strengths come from a completely different side, not from mimicking the features of conventional file systems. So the more traditional features of older file systems are missing in IPFS.

Only the future will tell how much of a problem that really is. Take encryption: this technology always implies ownership, and thus inequality. A private key needs to remain secret from everyone else. In a way, this contradicts the philosophy of a peer-to-peer network, where all nodes are designed to be equal. Maybe we just need a better way to handle privacy and ownership than old-school encryption. Encryption is of course indispensable (and it is heavily used inside blockchain technology). But will it be enough to enable secure trading in peer-to-peer networks?

For now, we have to accept that neither encryption nor any sort of authorization and authentication is part of IPFS. So we have to find a way to deal with this — or just sit back, relax and wait for better times.

In my experiments, I have tried to solve another, less difficult problem with IPFS: the volatility of URLs. Each time the content of your web page changes even a little bit, it needs to be addressed with a different hash. So, as explained above, different content needs to be requested via a different URL.

There are many ways to handle this, for example IPNS. It is also interesting to read how you can tweak DNS to end up with a stable URL for your homepage.
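As a rough sketch of both options: with IPNS, you publish the current root hash under your node’s peer ID, and the site stays reachable under a constant /ipns/&lt;PeerID&gt; address even after updates:

ipfs name publish /ipfs/QmSH4VDxSY2V3KoDLojHZGkfPtVoVhDoisaLGBL2RPnvqM

The DNS variant (known as DNSLink) works by adding a TXT record to your domain, so that gateways can resolve /ipns/example.com to the current hash. Here, example.com is just a placeholder:

_dnslink.example.com. IN TXT "dnslink=/ipfs/QmSH4VDxSY2V3KoDLojHZGkfPtVoVhDoisaLGBL2RPnvqM"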

All that is advanced stuff, and it already goes beyond the basic usage of IPFS. I find it more interesting to accept the fact of transient addresses and play around with it a little bit.

I used a simple page that changes only rarely and does not itself show any content. The content is distributed separately, as a sequence of pages, like this one. You just combine the two hashes by using a URL like this:

/ipfs/<HASH1>#<HASH2>

As the content consists of many HTML snippets that can be loaded sequentially, it is easy to write a wizard in JavaScript that can be used to browse the content. You end up with this sample page.
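A minimal sketch of such a wizard might look like the following. It assumes that HASH2, taken from the URL fragment, points to an IPFS directory of numbered snippets (0.html, 1.html, …) and that the page contains elements with the ids content, prev and next — the naming scheme and the ids are my assumptions for illustration, not necessarily how the sample page is built.

// Wizard for browsing a sequence of HTML snippets stored under one IPFS hash.
// Assumption: the fragment (#<HASH2>) names a directory with 0.html, 1.html, ...
var contentHash = window.location.hash.slice(1); // strip the leading '#'
var current = 0;

function loadSnippet(index) {
  // A root-relative path works on any path-based public gateway.
  fetch('/ipfs/' + contentHash + '/' + index + '.html')
    .then(function (response) {
      if (!response.ok) throw new Error('no snippet ' + index);
      return response.text();
    })
    .then(function (html) {
      document.getElementById('content').innerHTML = html;
      current = index;
    })
    .catch(function () { /* stay on the current snippet */ });
}

document.getElementById('prev').onclick = function () {
  if (current > 0) loadSnippet(current - 1);
};
document.getElementById('next').onclick = function () {
  loadSnippet(current + 1);
};

loadSnippet(0);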

I think that in this way you can profit from many of the advantages that come with IPFS. Any new content can be published just by publishing (and pinning) the new page sequence to IPFS, as demonstrated in the example. There is nothing more to do but distribute the new URL somewhere. Your content will remain online forever, if and only if the pinning feature of IPFS works as expected.
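Pinning itself is a single command on any node that should keep the content available — using the root hash from above:

ipfs pin add QmSH4VDxSY2V3KoDLojHZGkfPtVoVhDoisaLGBL2RPnvqM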

It works! You can find your content here, here or here, or on any other IPFS gateway that is currently online.

This is not yet really serverless, as the gateway servers are still needed. But it does give you a feeling of what will be possible in the future. The paradigm shift is apparent, and it will become more obvious pretty soon …
