Is peer-to-peer the solution for open source software distribution?

With AppImage, application authors have a format for distributing Linux applications directly to end users, without intermediaries such as distributions or centralized repositories. Yet there is still a need for hosting, mirrors, CDNs, security infrastructure and the like, all of which tend to be either owned by large corporations or cumbersome and costly to set up. Can peer-to-peer technologies solve this while still being easy to use? Can we make open source software distribution fully decentralized?

The challenge

What is complicated about distributing applications for Linux today?

  • You either need to get your application into distributions (which can be very cumbersome due to policies set by distributions), or you need to set up private repositories for each distribution and version (such as personal package archives, which can break the system, are a hassle for users, and are generally frowned upon). With AppImage, however, there is a “one app = one file” format that allows users to run your application on most common desktop Linux systems — Problem largely solved
  • To upload something, you need to have some form of hosting (paid or free), have to set up an account, have to deal with passwords — Needs to be simplified
  • In case of free services, the hosting may go away at any time — Needs to be solved
  • Stuff needs to be digitally signed in a separate, cumbersome step — Needs to be simplified
  • You don’t get mirrors in every country where users are — Needs to be solved
  • Access may be restricted from some countries, e.g., Chinese users may have slow or no access to sites like GitHub — Needs to be solved
  • Making things secure with https is cumbersome, need to pay for and/or fiddle with certificates — Needs to be simplified
  • Users need to know whom they can trust, in order not to download applications from some random sites — Needs to be solved
  • In-house software distribution means downloading the same stuff from the Internet over and over (which can be slow, costly, or both), or setting up local mirrors which is a cumbersome (and hence, expensive) admin task — Needs to be simplified

The solution?


Enter IPFS, “a peer-to-peer hypermedia protocol to make the web faster, safer, and more open”.


According to Wikipedia,

InterPlanetary File System (IPFS) is a protocol designed to create a permanent and decentralized method of storing and sharing files. It is a content-addressable, peer-to-peer hypermedia distribution protocol. Nodes in the IPFS network form a distributed file system. IPFS is an open-source project developed since 2014.
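Content addressing is the key property here: a file’s address is derived from its bytes, not from the server that happens to host it. A minimal sketch of the idea in Python (using a plain SHA-256 digest for illustration; real IPFS content identifiers use a multihash encoding, so the actual hashes differ):

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is a hash of the content itself, so the same bytes
    # always yield the same address, no matter who hosts or mirrors them.
    return hashlib.sha256(data).hexdigest()

release = b"pretend this is an AppImage"
addr = content_address(release)

# Anyone re-adding the identical file gets the identical address...
assert content_address(b"pretend this is an AppImage") == addr

# ...and any tampering changes the address, so integrity checking is
# built in. (Authenticating *who* published a file is a separate
# problem and still needs keys or a web of trust.)
assert content_address(b"tampered payload") != addr
```

This is why mirrors become trivial in such a system: any node holding the bytes can serve them, and the address itself proves the download was not corrupted.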

It may well be the next Internet revolution, as Juan Benet from the IPFS project argues at TEDxSanFrancisco:

Juan Benet at TEDxSanFrancisco. Source: YouTube
IPFS Alpha Demo. Source: YouTube

What makes IPFS interesting for open source software distribution?

  • Seems to really work globally, as an AppImage user from China reports
  • You don’t need logins and passwords (but private keys — you need something, after all)
  • Data is not stored on central servers owned by some big corporations
  • Everyone can upload (but we need to figure out some web of trust so that users will download files coming from trusted upstream authors rather than from some random guy)
  • No fiddling with https, server configuration, certificates
  • No separate signing step, it can be built into the workflow

But there are still open questions:

  • How does the metadata about IPFS files travel through the ecosystem? Can we build a peer-to-peer database to hold the information about the available AppImages?
  • How can we implement different channels (such as release, beta, alpha, nightly, continuous, etc.)?
  • Is IPFS the most suitable choice? Or should we use Dat? IPFS seems to have a longer history, but Dat has this cool peer-to-peer web browser called Beaker Browser. Why are there two systems that appear to become increasingly similar over time? Even after reading the FAQ sections of both projects, I still haven’t found killer arguments why we need two such systems, or which one to use over the other
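To make the metadata and channel questions above more concrete, here is one hypothetical record format such a peer-to-peer AppImageHub database could hold: per application, each release channel points at the content hash (CID) of the current AppImage. All field names and CIDs here are invented for illustration, not an existing schema:

```python
# A made-up metadata record for the proposed peer-to-peer database.
# Each channel maps to the content hash of the AppImage it currently
# points at; updating a channel means publishing a new record.
record = {
    "app": "MyApp",
    "author_key": "<author public key fingerprint>",  # for a future web of trust
    "channels": {
        "release": "Qm-release-example",
        "beta": "Qm-beta-example",
        "nightly": "Qm-nightly-example",
    },
}

def latest(record: dict, channel: str) -> str:
    """Resolve a channel name to the content hash to download."""
    return record["channels"][channel]

print(latest(record, "beta"))  # prints "Qm-beta-example"
```

A client subscribed to the "beta" channel would simply re-resolve the record and fetch whatever hash the channel points at, while "release" users remain untouched.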

Beaker Browser and Dat

Straight from their homepage:

Beaker is an experimental browser for exploring and building the peer-to-peer Web. (It) adds support for a peer-to-peer protocol called Dat. It’s the Web you know and love, but instead of HTTP, websites and files are transported with Dat.

It’s the read-write web, possibly close to what Tim Berners-Lee originally envisioned the World Wide Web to be when he created it on his NeXT computer.

It’s great, it’s easy, and it provides a polished user experience. Beaker Browser is even available for Linux as an AppImage. But why are they not hosting the AppImage on Dat?

It puzzles me why we have two competing technologies here that are similar enough to be confusing, to the point that I have been taking a “wait and see which one everybody uses” stance.

Or are there technical differences in IPFS vs. Dat that make one or the other more suitable for the distribution of binaries, especially AppImages?

Where do we go from here?

As a first step, we need to build a deeper understanding of the paradigm shift that comes with a system like IPFS. What should peer-to-peer open source software distribution look like?

A plan of action could look like this — what do you think about it?

  • Make it easy to use: Integrate IPFS (or Dat, or another similar system) into the AppImage tools to provide an extremely easy workflow, without users needing to know much about the inner workings of IPFS
  • Build decentralized infrastructure for metadata: Think about an IPFS-based, fully decentralized AppImageHub database
  • Make it trustworthy: Build some web of trust
  • Make it efficient: Work with the IPFS authors to make the IPFS chunking mechanism aware of squashfs (the format AppImage uses) in order to maximize deduplication; after all, it would be very cool if a library had to be downloaded only once, for all AppImages that bundle the same version of it… if needed, investigate different compressors for squashfs (or a different format altogether)
  • Make it resilient: Needs to fall back to HTTP(S) seamlessly for users who cannot use IPFS…
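To illustrate the deduplication point above: if the chunker splits files along the same boundaries as the libraries bundled inside the squashfs image, two AppImages sharing a library would share chunks, and those bytes would only be stored and transferred once. A toy sketch with fixed-size chunks (the real IPFS chunker and block store are more sophisticated; the data here is made up):

```python
import hashlib

CHUNK = 4  # tiny chunk size, for demonstration only

def chunk_hashes(data: bytes):
    """Split data into fixed-size chunks and address each by its hash."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

# Two imaginary AppImages whose first two "chunks" are a shared library:
app_a = b"LIB1LIB2AAAAXXXX"
app_b = b"LIB1LIB2BBBBYYYY"

store = {}  # a shared block store: chunk hash -> chunk bytes
for app in (app_a, app_b):
    for i, h in enumerate(chunk_hashes(app)):
        store[h] = app[i * CHUNK:(i + 1) * CHUNK]

# The shared LIB1/LIB2 chunks are stored (and would be fetched) once:
print(len(store))  # 6 unique chunks instead of 8
```

The catch is that compression scrambles byte-identical content into different chunks, which is why the chunker would need to understand squashfs (or the compressor would need to be chosen with deduplication in mind).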

Try it out

Of course we could make this much simpler by integrating it into appimaged and similar tools:

Please try to download QQ-20171129-x86_64.AppImage using ipfs.
The more people do, the faster the download should be.
# first download the go-ipfs release tarball for Linux from https://dist.ipfs.io
tar xvf go-ipfs_*_linux-amd64.tar.gz  # unpack the release tarball
./go-ipfs/ipfs init                   # create a local IPFS repository
./go-ipfs/ipfs daemon &               # start the node in the background
./go-ipfs/ipfs get QmNdcsAskKYYqrrykgtFbxjp4axNMa3j65oGpkbNVJLXLy      # fetch by content hash
./go-ipfs/ipfs pin add QmNdcsAskKYYqrrykgtFbxjp4axNMa3j65oGpkbNVJLXLy  # keep it, and keep sharing it
mv QmNdcsAskKYYqrrykgtFbxjp4axNMa3j65oGpkbNVJLXLy QQ-20171129-x86_64.AppImage
chmod +x QQ-20171129-x86_64.AppImage

There are also HTTPS gateways (such as the one at ipfs.io) if you cannot install ipfs locally.

But if you download from there, you are not sharing, and hence not helping to keep the file available and fast.

NOTE: Since I wrote this article a quarter of a year ago, the files have disappeared. So we cannot rely on “just throwing files out there” and assuming they will somehow magically stay alive.

Maybe all this server-based stuff is still too cumbersome, and we should opt for a purely browser-based solution instead. Which approach has more traction?

Come join us

Some bold ideas. Lots of change. Simplicity and beauty if done right.

Want to shape the future of open source software distribution?

We cannot do this alone. We need people from the community who want to work with us on these challenges. You?

The AppImage developers are at #AppImage on

probono is the founder and lead developer of the AppImage project.