Offline First, the Decentralized Web, and Peer-to-Peer Technologies
I recently attended the first Offline Camp, a gathering of a very diverse group of people interested and invested in the Offline First movement. Thanks to the organizers (and to sponsorship from YLD, IBM Cloud Data Services, Meetup, Hoodie, Bocoup and Make&Model), for three days we convened at a beautiful property in the Catskill Mountains, just a few hours north of New York City. The house sat on top of a hill, overlooking a lake, with a breathtaking view. This scenery prompted a relaxed environment that sparked many interesting conversations, both one-on-one and in groups, in an “unconference” format that lent itself to spontaneity.
Depending on the personalities involved, this type of gathering can sometimes be unproductive, but that was definitely not the case here. As the format dictates, subjects for discussion were proposed and voted on, and interesting conversations sparked the imagination and verbosity of the attendees. At any given time there was more than one subject being discussed in different meeting rooms, but since each group presented a detailed summary of its discussion, I felt I didn’t miss a thing.
One of the sessions I managed to attend discussed the broad topic of the Decentralized Web and related technologies in the context of Offline First applications. Offline First applications typically need to rely on the device’s own resources to access content when there is no internet connection. But connectivity is not an all-or-nothing condition. You can, for instance, be connected to a local network that itself has no internet connectivity, yet still be able to operate partially or completely on the content available within that network. Or you might be connected to a network where one of the devices has some internet connection, giving you somewhat limited access to internet content through it.
Why could this be useful? If you’re in a place with no internet connection (on a plane, say), perhaps you can still access some content: you could gain access to some popular npm packages to continue developing your application, or still read some articles, stream some music, or even watch movies.
Other scenarios go beyond simply accessing content. For instance, if an emergency renders phone and internet access down in a certain area, mobile phones could still operate by forming a mesh network, allowing people to relay messages across it and thereby communicate and coordinate.
I know we’re all probably still far from this, but perhaps not as far as you might think at first. For instance, FireChat is a mobile messaging application that forms a wireless mesh network over Bluetooth and peer-to-peer Wi-Fi in the absence of an internet connection, and one of its stated use cases is communication during disasters and emergencies.
There are many other examples where a decentralized web could work with partial or no internet access. By not relying on a central location on the internet, you can increase the availability and usability of an application without having to rely solely on the local device.
One of the systems that allows you to access content while partially connected is the InterPlanetary File System (IPFS). IPFS is a peer-to-peer protocol where each file block is given a unique fingerprint. Each network node stores only the data it’s interested in, but knows where to get, or whom to ask for, data that it doesn’t have. On top of this, IPFS exposes a file system that allows nodes to access content from other peers whether or not they’re connected to the backbone.
Behind this is the concept of a Merkle Directed Acyclic Graph (Merkle DAG for short). A Merkle DAG is an append-only data structure where each node contains a secure representation of its children. Using this, parties can exchange references to objects (Merkle Links) securely. One of these references is enough to verify the authenticity of the object at a later time, allowing these objects to be sent across untrusted channels without the fear of data being changed along the way, effectively protecting against man-in-the-middle attacks.
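The idea can be illustrated with a few lines of Python. This is a simplified sketch, not IPFS’s actual encoding: each node’s fingerprint is a hash over its own data plus the fingerprints of its children, so a single root reference is enough to later verify everything beneath it.

```python
import hashlib
import json

def node_hash(data, child_hashes):
    """Fingerprint of a node: hash of its data plus its children's hashes."""
    payload = json.dumps({"data": data, "links": sorted(child_hashes)})
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny DAG: two leaf blocks and a root linking to both.
leaf_a = node_hash("block A", [])
leaf_b = node_hash("block B", [])
root = node_hash("root", [leaf_a, leaf_b])

# A peer holding only `root` can verify blocks it later receives over an
# untrusted channel: any tampering produces a different fingerprint.
tampered = node_hash("block A (modified)", [])
assert tampered != leaf_a
```

Because every link is itself a hash, changing any block anywhere in the graph changes the root fingerprint, which is what makes Merkle Links safe to exchange over untrusted channels.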
Conflict-Free Replicated Data Types
Merkle DAGs simplify the construction of secure conflict-free replicated data types (CRDTs), which are used to achieve strong eventual consistency. CRDTs allow several types of Offline First applications to flourish: two or more people can edit the same document without conflicts, or several people can play the same game over an intermittent network connection. No matter their connectivity story, all users will eventually converge on the same result.
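To make the convergence property concrete, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter (G-Counter). Each replica increments only its own slot, and merging takes the per-replica maximum, so replicas reach the same value regardless of the order in which they sync.

```python
# G-Counter: a grow-only counter CRDT. State is a map replica_id -> count.

def increment(counter, replica_id, amount=1):
    counter = dict(counter)
    counter[replica_id] = counter.get(replica_id, 0) + amount
    return counter

def merge(a, b):
    # Element-wise maximum: commutative, associative, and idempotent,
    # which is exactly what guarantees convergence.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(counter):
    return sum(counter.values())

# Two devices update independently while offline, then sync in either order:
alice = increment({}, "alice")
bob = increment(increment({}, "bob"), "bob")
assert merge(alice, bob) == merge(bob, alice)
assert value(merge(alice, bob)) == 3
```

Richer CRDTs (sets, sequences for collaborative text editing) follow the same principle: a merge function that is commutative, associative, and idempotent.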
Distributed Hash Table (DHT)
IPFS and other peer-to-peer protocols implement the Kademlia protocol, a distributed hash table (DHT) for peer-to-peer networks. When looking for a piece of content, you need to know which nodes are responsible for finding it. Also, when you have the address of a node, you need to know how to reach it. Kademlia (and DHTs in general) defines how this information is spread out throughout the nodes on the network and how a client can query it.
A DHT is the fundamental way nodes can know where content resides and how to reach it.
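The core trick in Kademlia is its distance metric: the “distance” between two IDs is simply their bitwise XOR. A node looking for a key queries the peers whose IDs are closest to it in XOR space, narrowing the search in a logarithmic number of hops. A minimal sketch of that lookup step:

```python
# Kademlia-style XOR distance between node/content IDs (simplified sketch;
# real Kademlia IDs are 160-bit and lookups are iterative over the network).

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_peers(key: int, peer_ids, k: int = 2):
    """Return the k known peers closest to `key` in XOR space."""
    return sorted(peer_ids, key=lambda p: xor_distance(key, p))[:k]

peers = [0b0001, 0b0100, 0b1100, 0b1110]
# Looking for key 0b1111, the two closest peers share its high-order bits.
assert closest_peers(0b1111, peers) == [0b1110, 0b1100]
```

A real implementation keeps peers in buckets organized by distance and asks the closest known peers for even closer ones, repeating until the responsible nodes are found.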
Another topic that was discussed was how to separate data from applications. Using the remoteStorage protocol, a user can provide a storage location, either local or remote, to be attached as the data source for any application.
A remoteStorage service works as a basic key-value store from which applications retrieve data and to which they save it. The user defines categories (similar to folders) and access permissions for them (remoteStorage uses OAuth scopes for this).
A remoteStorage-enabled application not only puts the user in control of the data; it also works offline by letting users sync their data across devices.
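The model can be sketched as a toy in a few lines. Note this is purely illustrative and not the real remoteStorage API: the class and scope strings below are made up, but they capture the shape of the idea, a key-value store partitioned into categories where each app holds OAuth-style read or read-write grants.

```python
# Toy model of the remoteStorage idea (illustrative names, not the real API):
# a key-value store split into user-defined categories, gated by scopes
# such as "documents:rw" (read-write) or "documents:r" (read-only).

class ToyRemoteStorage:
    def __init__(self):
        self._data = {}  # (category, key) -> value

    def put(self, scopes, category, key, value):
        if f"{category}:rw" not in scopes:
            raise PermissionError(f"no write access to '{category}'")
        self._data[(category, key)] = value

    def get(self, scopes, category, key):
        if not any(s in scopes for s in (f"{category}:r", f"{category}:rw")):
            raise PermissionError(f"no read access to '{category}'")
        return self._data[(category, key)]

store = ToyRemoteStorage()
app_scopes = {"documents:rw"}  # granted by the user when connecting the app
store.put(app_scopes, "documents", "draft.txt", "Hello, offline world")
assert store.get(app_scopes, "documents", "draft.txt") == "Hello, offline world"
```

The important property is that the store belongs to the user, not the application: any compliant app granted the right scopes can read and write the same data.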
Solid stands for “Socially Linked Data” and is an exciting new project led by Tim Berners-Lee, the inventor of the Web.
With motivations similar to those of the remoteStorage project, it rebels against the fact that people today can’t control their own data. Given that personal data is generally much more valuable to the individual than to any corporation holding it, the project aims to develop a web architecture that gives users back ownership of their data.
And much like remoteStorage, Solid relies on web standards and extends them to provide a platform on top of which applications can act as data consumers and producers, while also giving users mechanisms to control who can access that information, so you can easily share it with friends and family, for instance.
Solid offline support
Web standards, browser caching, and particularly Service Workers can be used to make some of this web data available offline, but enabling changes to occur locally while offline is a different problem.
Although Solid defines a pub/sub mechanism through which clients can be notified in real time of changes to any given container, it does not (yet) define a way to fetch those changes or to sync data between devices, making the protocol not yet well suited to Offline First.
Although it does not, in my view, provide a good offline story (yet), Solid shows promise in using web fundamentals — the URLs and HTTP, sprinkled with Semantic Web data standards — to extend the web itself and free our data from being siloed.
How can companies share and embrace the decentralized web?
Not particular to Offline First, but very relevant to this whole topic, is the problem of how we anticipate or drive companies to embrace the decentralized web.
As Solid stabilizes into a standard, companies can start implementing it internally and eventually open it up so that customers own their data. This could create a whole ecosystem of compliant third-party storage providers, much like Dropbox and its competitors, from which customers can choose for hosting and sharing their data. What starts as a competitive advantage could become a consumer demand, as people opt into the products that let them own their data.
I realize that a lot of this is wishful thinking, but with the right push from the right organizations this could mean that the future of the social web is, after all, bright.
A lot has been said about new decentralized payment protocols like Bitcoin, but there’s a new kid on the block that allows offline payments and is starting to see promising adoption: Stellar. Stellar aims to bring a new financial infrastructure to a world where billions of people cannot operate today. Although still highly experimental, Stellar is seen by some as a step in the right direction toward a basic infrastructure for performing financial transactions efficiently.
The Stellar Development Foundation is a non-profit organization that leads the development of the network and defines the Stellar Consensus Protocol, a way to reliably and safely perform transactions on a global scale.
If you like distributed systems and wish to find out more about Stellar, you can watch this great presentation by David Mazières.
Using the Stellar network, a client can create a transaction (consisting of a destination and an amount), sign it with its key pair, create the transaction envelope, and get it to the merchant in any way possible: the client can, for instance, transmit the signed transaction over Bluetooth, NFC, or even as a QR code.
Once the transaction reaches the merchant, it can use its own connection to the Stellar network to commit the transaction, crediting the merchant’s account and debiting the customer’s.
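The flow above can be sketched conceptually in Python. This is not the Stellar SDK: the envelope format is made up, and HMAC-SHA256 stands in for the Ed25519 signatures Stellar actually uses. The point is that signing happens entirely offline, and the resulting envelope is just bytes that can travel over any channel.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a keypair; Stellar uses Ed25519 public-key
# signatures, so the merchant would verify with the customer's public key.
SECRET_KEY = b"customer-secret-key"

def make_envelope(source, destination, amount):
    """Build and sign a transaction entirely offline."""
    tx = {"source": source, "destination": destination, "amount": amount}
    payload = json.dumps(tx, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"tx": tx, "signature": signature}

def verify_envelope(envelope):
    payload = json.dumps(envelope["tx"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

# Customer signs offline; the envelope can travel over Bluetooth, NFC,
# or a QR code before the merchant submits it to the network.
envelope = make_envelope("customer", "merchant", "25.00")
assert verify_envelope(envelope)
```

Because the signature covers the whole transaction, the merchant (or anyone relaying the envelope) cannot alter the amount or destination without invalidating it.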
The Stellar network itself is also resilient to network partitions between financial institutions. As long as there are enough live nodes, consensus can be reached and the system is alive, even though it’s not as resilient to intermittent failures as a blockchain-based network.
Blockchain-Based Lightning Networks
In the blockchain world there’s the possibility of not using the network for every transaction. Because each on-chain transaction has a real cost in time and money, two or more parties can use Lightning Networks to create private channels and send transactions to each other directly. Only a single final transaction with the net total then needs to be sent to the network at the end of a certain validity period.
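A highly simplified picture of a payment channel, with the cryptography omitted: two parties fund a channel, exchange balance updates off-chain, and only the final state is settled on-chain. (The real protocol adds co-signed commitment transactions, timeouts, and penalties for broadcasting stale states.)

```python
# Toy payment channel: balances are updated off-chain; only the final
# state would be broadcast to the blockchain.

def open_channel(funding_a, funding_b):
    return {"A": funding_a, "B": funding_b}

def pay(channel, sender, receiver, amount):
    assert channel[sender] >= amount, "insufficient channel balance"
    channel[sender] -= amount
    channel[receiver] += amount
    return channel  # in reality, each update is co-signed by both parties

channel = open_channel(funding_a=100, funding_b=100)
pay(channel, "A", "B", 30)
pay(channel, "B", "A", 10)
pay(channel, "A", "B", 5)

# Only this net result hits the blockchain, no matter how many payments
# occurred in between.
assert channel == {"A": 75, "B": 125}
```

Three payments, one on-chain settlement: that ratio is the entire scaling argument.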
Lightning Networks work even if there is no trust between the parties: the protocol is designed so that neither side can profit from cheating.
As a nice side effect, Lightning Networks allow “offline” micro-payments to exist in the blockchain world, finally letting you say “Bitcoin scales” without blushing too much.
Offline First, peer-to-peer, and decentralized web technologies are closely related: increasing decentralization enables applications to work without a connection to the backbone, making connectivity not an all-or-nothing condition.
Several technologies, protocols and techniques that free us from a centralized service will also be the ones that free us from having to be connected all the time.
The author would like to thank David Dias for his insights into P2P networks and protocols, Teri Chadbourne for reviewing this article, and Matt Fowle for summarizing the session.
Editor’s Note: This article is part of a series of unconference session recaps submitted by the awesome folks who participated in our first ever Offline Camp, a unique tech retreat that brings together the Offline First community. You can find more coverage of our initial discussions in our Medium publication. To continue the conversation, join us at our next event or sign up for updates and cast your vote on where we should host future editions of Offline Camp.