Ushering In The New Age Of Data Sovereignty
IPFS (the InterPlanetary File System) leads the way
The Internet has seen three major transitions in the last thirty years. The first came in the early 1990s, when America Online (AOL) introduced support for DOS and Windows, making it easier for the average person to get online and consume information. The second happened in the mid 2000s with the introduction of Facebook, which turned the user into a content creator as well as a consumer. With both AOL and Facebook, however, the consumer gives up personal data and entrusts these providers as digital fiduciaries. In the last year we’ve witnessed how these organizations actually handle our data, whether through perfunctory security measures (the Equifax breach) or through clandestine manipulation and monetization strategies (the Facebook and Cambridge Analytica scandal). We are now in the third major transition, where peer-to-peer technologies are gaining traction thanks to a combination of eroding trust in central providers and the maturing of the technologies themselves. These new distributed systems promise to eliminate the trust element by giving users full control over their data and its digital legacy.
One major component driving what is often referred to as “Web 3” is the progress made in peer-to-peer data distribution. Data storage falls into two broad classes. The first is centralized, where we trust the Facebooks and Equifaxes of the world to host and secure our information. The second is decentralized, where we tap into the millions of computers, laptops, gaming systems, and servers around the world with unused storage and, through economic incentives (i.e., paying them to host small encrypted pieces of our data), distribute our information so it is not controlled by any single entity. Beyond control, decentralization offers enhanced security: instead of breaking into one house to steal your jewelry, a thief would have to break into the entire neighborhood, because the pieces are separated amongst your neighbors. One project leading peer-to-peer data distribution is the InterPlanetary File System (IPFS).
IPFS is leading Web 3 with several components that have already launched: data dissemination mapping, data storage, and data redundancy. It is a protocol that will upgrade the web’s storage architecture to a standard that puts the user in control.
Next, we will explore how IPFS works and the important role it plays in imbrex’s ecosystem. But before we get started, let’s add some clarity by taking a trip back in time to understand the history of the Internet and how data ends up on your computer.
A Brief History
In 1962, a scientist named J.C.R. Licklider conceptualized a “galactic network” of computers able to communicate with one another. This idea led to the development of ARPANET, a primitive network that served as a precursor to the Internet, a network of networks. ARPANET adopted the Transmission Control Protocol and the Internet Protocol (TCP/IP) as its standard networking protocols. Developed by DARPA researchers in the 1970s, TCP/IP is a set of rules specifying how data should be packetized, addressed, transmitted, routed, and received. In other words, there is one standard way data moves between different pieces of hardware, and between hardware and software, so developers know how data will travel between systems and can write applications that function within those expectations. Many other protocols have since been created to handle specific functions on different layers of the Internet and to provide end-to-end data communication: a network design principle in which application-specific features live at the communication endpoints rather than at intermediaries like routers or gateways, so the intermediary has no control over the parameters or rules of a particular process.
A common protocol today’s Internet users may recognize is HTTP, the Hypertext Transfer Protocol. HTTP is a request–response protocol: a user’s web browser makes a “request” (for example, by entering a URL), and a website hosted on a server provides a “response” by returning a page. It defines how clients and servers communicate, and it has become the backbone of data transfer on the Internet, but it is inefficient for today’s data-driven society. In a network where the same file may exist in many places, only one specific server can deliver the requested file if authenticity is to be assured. HTTP can also be unreliable: if the connection is lost or a server malfunctions for any reason, the data becomes inaccessible. The root problem is that the “response”, the piece of data a user wants to view, is stored in one place and accessed from there. Many requests for the same piece of data slow response times and can sometimes shut down responses altogether.
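The request–response pattern described above can be sketched in a few lines of Python. This is a minimal illustration run entirely on localhost (the server and client are both in one process), not anything specific to IPFS or imbrex:

```python
import http.server
import http.client
import threading

class HelloHandler(http.server.BaseHTTPRequestHandler):
    """The 'server' side: answers every GET with a fixed page."""
    def do_GET(self):
        body = b"Hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Start the server on an ephemeral local port, in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The 'client' side: one request, one response.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/")
response = conn.getresponse()
status = response.status
body_text = response.read().decode()
print(status, body_text)
server.shutdown()
```

Note how the client must know the one address where the content lives; if that server goes away, so does the page. That single point of failure is exactly what IPFS is designed to remove.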
How IPFS Works
IPFS has the potential to improve data permanence and transfer efficiency by eliminating the need for central servers. Instead of relying on a single server, users can download a file from multiple servers simultaneously. This is made possible by a distributed hash table: a lookup structure spread across the nodes of the network that maps a file’s identifier to the peers holding it, much as the Dewey Decimal System maps a book to its shelf.
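The lookup idea behind a distributed hash table can be sketched with a toy example: every node and every key is hashed onto the same ring, and a key belongs to the first node at or after its position. This is only an illustration of the principle, not IPFS’s actual implementation (IPFS uses a Kademlia-style DHT based on XOR distance), and the node names are made up:

```python
import hashlib

def ring_position(value: str, ring_size: int = 2**16) -> int:
    """Map any string to a deterministic position on the hash ring."""
    digest = hashlib.sha256(value.encode()).digest()
    return int.from_bytes(digest[:2], "big") % ring_size

# Four hypothetical nodes, each placed on the ring by hashing its name.
nodes = ["node-a", "node-b", "node-c", "node-d"]
positions = sorted((ring_position(n), n) for n in nodes)

def responsible_node(key: str) -> str:
    """The first node at or after the key's ring position holds the key."""
    pos = ring_position(key)
    for node_pos, node in positions:
        if node_pos >= pos:
            return node
    return positions[0][1]  # wrap around to the start of the ring

# Every peer performing this same computation finds the same node,
# so no central index is required to locate a file.
print(responsible_node("QmExampleFileHash"))
```

The key property is that lookup requires no coordinator: any node can compute, from the key alone, where to find the data.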
Each file added to IPFS receives a unique string of characters called a cryptographic hash (a fixed-size string of letters and numbers permanently associated with the file or message uploaded to IPFS). This identifier can be paired with a human-readable name to simplify searches. When a user requests a particular file, the cryptographic hash is used to determine which nodes in the network are storing it. The user can then connect to those nodes and download any specific version of the file, since all changes are recorded. To promote a healthy ecosystem, nodes are incentivized to actively share data through a simple credit-like system.
IPFS Integration on the imbrex Platform
After we deploy imbrex, it will begin to collect a large volume of listing data. Storing data on the blockchain is very expensive, but IPFS lends a convenient solution for controlling costs. When a user uploads a listing, a JSON file (a file format that uses human-readable text to transmit data objects) is created and sent to IPFS, which returns a corresponding cryptographic hash. The Imbrexer (the shared database of all imbrex data, hosted on all nodes) requests the referenced data by scanning the blockchain for these hashes. Once the data is obtained, it can be parsed into ElasticSearch (a search engine that stores documents as plain-text JSON). This allows a data item, say a listing, to be broken into multiple parts so it can be found by searching on any one of them (address, city, etc.) across all searches on the platform.
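The flow above can be sketched end to end: serialize a listing to JSON, derive a content hash standing in for the hash IPFS would return, then index individual fields so the listing can be found by any one of them, as ElasticSearch would do. The field names and values are hypothetical, and a plain dict stands in for both IPFS and the search index:

```python
import hashlib
import json

# A hypothetical listing as it might be uploaded by a user.
listing = {
    "address": "123 Main St",
    "city": "Jersey City",
    "price": 425000,
}

# 1. Serialize to JSON; sorted keys keep the byte stream deterministic,
#    so the same listing always produces the same hash.
payload = json.dumps(listing, sort_keys=True).encode()

# 2. Stand-in for the cryptographic hash IPFS would return.
listing_hash = hashlib.sha256(payload).hexdigest()

# 3. A simple inverted index: field value -> set of listing hashes.
index: dict = {}
for field, value in listing.items():
    index.setdefault(str(value), set()).add(listing_hash)

# Searching on any single field recovers the listing's hash, which can
# then be used to fetch the full record from the network.
print(index["Jersey City"] == {listing_hash})
```

Only the small hash needs to live on the expensive medium (the blockchain); the bulky JSON payload stays on IPFS, which is the cost-control point made above.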
Data is becoming an increasingly valuable asset, so it is worth preserving for as long as possible. Although the current infrastructure of the Internet is not equipped to handle the rise of big data, utilizing tools that improve efficiency and reliability is a step in the right direction. IPFS is an exciting protocol that not only powers the imbrex platform but also builds a better ecosystem in which data can travel.