The Journey to Web 3
How We Got Here
Web 3 is a popular but poorly defined term. There is still a lot of confusion as to what Web 3 actually is and where it came from.
By focusing on the origins of Web 3, this article explains why Web 3 is the next logical wave in the cycle of computing innovation. Although the shift is gradual, its implications will be felt by everyone and could change the way individuals and societies organise themselves.
In a forthcoming article by the Web3 Foundation, Peter Czaban will elaborate on how we realize this vision. As a key part of this, he will introduce the Web3 Technology Stack and the Foundation’s plans for shepherding these technologies into existence.
I have intentionally chosen to review the following three waves because they all represented tectonic shifts in the structure of our computing systems. While there may be others I have left out, I hope that raising awareness of the preceding innovations will enable a deeper understanding of Web 3.
First Wave: Networked Intercommunication
The sharing of computer resources via the interoperation of computing systems was the founding principle of the Internet and, as we shall explore later on, remains one of the key principles of Web 3. As J.C.R. Licklider pondered in his famous memos of 1963:
It seems to me to be interesting and important, nevertheless, to develop a capability for integrated network operation. If such a network as I envisage nebulously could be brought into operation, we would have at least four large computers, perhaps six or eight small computers, and a great assortment of disc files and magnetic tape units — not to mention the remote consoles and teletype stations — all churning away.
It is worth remembering that Licklider’s memo led directly to the development of ARPANET, the forebear of the Internet.
Eleven years later, in 1974, Cerf and Kahn re-emphasised the point: ‘a principal reason for developing such networks has been to facilitate the sharing of computer resources.’ Internetwork communication began with their proposal of a transmission control program (TCP), the model that would become known as TCP/IP.
These initial internetting concepts exhibit a key underlying technical idea, namely that of open architecture networking. In this manner, the choice of any individual network technology was not imposed by a particular network architecture, instead it could be selected freely and made to interwork with other networks through a meta-level ‘Internetworking Architecture.’ This point highlights both the architectural decentralisation and significance of internetwork operation, both key design features of Web 3.
Second Wave: World Wide Web
The second wave of innovation in computing systems arrived in 1989 with the paper ‘Information Management: A Proposal,’ by Sir Tim Berners-Lee. He imagined the web as an open platform that would allow everyone, everywhere to share information, access opportunities and collaborate across geographic and cultural boundaries:
‘The actual observed working structure of the organisation is a multiply connected “web” whose interconnections evolve over time.’
One of the factors that contributed to his success was that, rather than trying to convince anyone at CERN to support anything too radical, he brought together concepts and technologies his colleagues already knew and understood. In many respects, the assemblage of Web 3 infrastructure echoes this principle. It’s worth remembering that one of Berners-Lee’s key design features was decentralisation.
‘A new system must allow existing systems to be linked together without requiring any central control or coordination.’
The fundamental problem is that the web has evolved into an increasingly centralised architecture that strongly impairs the rights of end users and endangers the privacy and confidentiality of information. Web 3 seeks to solve this problem by re-decentralising the web.
Decentralised Origins of The Web
Centralised services were uncommon in the early days of the web. As it became increasingly commercialised, service providers were mostly small and medium-sized enterprises, schools, and cooperatives that made use of the distributed nature of the network. Before wordpress.com, small groups of people would set up a web server of their own, making use of retrofitted software. Similarly, instant messaging was generally done through direct communication between peers.
1. The Dot Com Boom
The dot com boom was ‘characterised by a rush to own infrastructure, to consolidate independent Internet service providers and take control of the network.’ A land grab ensued where investors centralised service providers from low level telecoms infrastructure to high level news aggregation, email, and video.
2. Rise of the Network Effect
The network effect is a phenomenon where the more users there are on a platform, the more valuable the platform is to each user. As De Filippi and McCarthy affirm, in spite of their significance in the context of social networks, network effects are not, as such, a sufficient justification for there to be one centralised social networking platform. Furthermore, the network is fully capable of allowing for decentralised systems, as various peer-to-peer protocols have demonstrated.
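The intuition behind the network effect is often expressed as Metcalfe’s law, which values a network in proportion to the square of its user count. The quadratic value function below is an illustrative assumption, not an empirical result, but it shows both why one large platform appears so much more valuable than several isolated ones, and why interoperability between smaller networks erodes that advantage:

```python
# Metcalfe's law: a network's value grows roughly with n^2 (n = users).
# This is an illustrative assumption; real value functions are debated.

def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections, proportional to n^2."""
    return n * (n - 1) // 2

# One centralised platform with 1,000,000 users...
one_platform = metcalfe_value(1_000_000)

# ...versus ten isolated platforms with 100,000 users each.
ten_silos = 10 * metcalfe_value(100_000)

# If the ten platforms interoperate, users can connect across them,
# recovering the value of a single large network.
ten_federated = metcalfe_value(10 * 100_000)

print(one_platform // ten_silos)      # the single platform is ~10x more valuable
print(ten_federated == one_platform)  # interoperability closes the gap
```

Under this toy model, centralisation is self-reinforcing only as long as the smaller networks cannot interoperate, which is exactly the point De Filippi and McCarthy make.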
3. Lack of Interoperability
The increasing concentration of the market and the consequent concentration of power prevented interoperability from being a feature of the modern web. By extension and because of their dominant position, large service providers have exerted a degree of control and manipulation unimaginable by smaller, more local service providers, and that would be nearly impossible in a peer-to-peer network.
The consolidation of service providers during the dot com boom and the subsequent rise of the network effect drove the centralisation of the web; however, it was the third wave of innovation in computing systems that spearheaded this trend.
Third Wave: Cloud Computing
The launch of Amazon Web Services on 14th March 2006 lit a fire under the trend of web centralisation. This was the moment ‘Cloud Computing’ went mainstream. The aggregation of computing power into a few large metropolises created an imbalance in authority structures not dissimilar to the structural changes witnessed during the Industrial Revolution:
‘Just as the industrial revolution has progressively alienated workers from the means of production, today, most of the means of online production (in terms of hardware, software, content or data) are concentrated within the hands of large Internet service providers.’
Exporting all our infrastructure and data into the Cloud diminished the control users had over their resources. This trend has been gathering force since 2006. Resources have been migrating away from end-users, towards central authorities that possess enormous processing, storage and communication power. They essentially own our global computing systems.
Symptoms of Web Centralisation
1. Loss of Freedom of Choice
In many respects, we gave away our content under the false pretense of community; we gave up our privacy in the hope of a more personalised service; we handed over our rights in the name of accessibility and comfort; but, most importantly, we gave away our freedoms and, more often than not, we didn’t even realise we were doing it.
Facebook, being the first, did realise what it was doing, as did the others who followed. They calculated correctly that the human psyche could be exploited, and set out to consume as much of our conscious attention as possible, rewarding our dopamine systems just enough to hold our concentration so that our choices could be manipulated with targeted advertising, earning billions in the process. At a philosophical level, this represents an infringement of a basic human right, the right to freedom of choice; it also endangers the rights to privacy and confidentiality.
2. Exposure to Hacks
The proliferation of cloud computing exacerbated the physical siloisation of the web, which creates attractive targets for hackers. Millions of personal data records are breached every month; in April 2018 alone, according to itgovernance.co.uk, 72,611,721 records were leaked. The mother of all hacks was, of course, Equifax. Its most recent earnings call revealed a cumulative spend of $242m on the breach. This, I assume, is in addition to the c.$90m severance package that Richard Smith, the outgoing CEO, received for allowing this to happen on his watch. Clearly, the incentives and disciplinary procedures that govern some of our corporations are out of whack.
3. Exposure to Political Propaganda
The Facebook scandal earlier this year, and Zuckerberg’s subsequent roasting in Congress was a serious wake up call for all users of the web. Cambridge Analytica, a company linked to Trump’s 2016 presidential campaign, harvested data on 50m Facebook users. Whether or not CA actually subverted democracy, Facebook users are still exposed to this kind of manipulation. This incident echoes an established pattern of blatant disregard for privacy, tolerance of incompetence and a reluctance to admit wrongdoing and say ‘sorry’.
Shouldn’t we be asking ourselves whether this is really the kind of culture we want to be a part of going forward? And surely any ability to make rational trust assumptions about the computer systems many of us rely on is now gone.
The Loss of Trust
The foundation of every relationship is trust. This lack of trust doesn’t just affect our computing systems; it also extends to the institutions we used to trust for guidance, leadership and support.
The Battle for Truth
In 2018, the world entered a new phase of declining trust in governments, institutions, NGOs, the media and the establishment. The lack of confidence in information channels and sources is representative of this global trust implosion. The foundations of our institutions were compromised during the Global Financial Crisis, when confidence in traditional authority figures and institutions collapsed, and they are yet to recover. Now we live in a world absent of common facts and objective truth.
This raises the question: if we cannot trust our computing systems, our institutions, governments or the media, who can we trust?
Web 3: The New Trust Model
The original web didn’t do anything that required trust. If you put your data on a website, everyone could read it and there were no expectations of privacy. This could work with a decentralised architecture easily. But the functionality of the web has expanded to things that do require trust such as disseminating information to restricted groups of people, private storage and computing.
Centralisation is an imperfect solution to trust. Although the stakes are lower, we trust Facebook, Google, and Amazon for the same reasons that we trust banks. Firstly, everyone else trusts them and there is comfort in groups. And when a big organisation mucks something up, it’s big news, we all hear about it, and this usually translates into enough political will to act on our behalf. However, by then it might be too late.
We gave our personal data to Facebook in the expectation that the only people who would see it were our friends and family (and Facebook itself). We couldn’t do that with the original web. But trusting a centralised third party, especially one with little invested in us, turned out to be a bad idea.
Web 3 aims to provide the functionality of Web 2 but with the decentralisation promised in the original vision of the web. This needs a new model of trust.
Clearly, we need to move trust-requiring activities away from the web as it stands, to systems where integrity is maintained, resilience to suppression is achieved, and trust is minimised, making them difficult to corrupt. These are the defining features of decentralised systems.
One helpful way to measure the degree of decentralisation is via Vitalik Buterin’s three axes of decentralisation:
- Architectural Decentralisation: the number of physical computers in the system. Can the system endure a number of computers conking out at a given moment?
- Political Decentralisation: how many individuals or organisations basically control the network of computers the system is made up of?
- Logical Decentralisation: is the system assembled together or scattered? As a rule of thumb, if you cut the system in half, will both halves continue to operate as independent organisations? Consider the starfish and the spider analogy which refers to the biological nature of the respective organisms. If you cut a spider in half it will die. If you cut certain species of starfish in half, due to their decentralised neural structure, they will regenerate into a new starfish.
What does Web 3 look like under this examination? In theory, Web 3 is politically decentralised, since no one controls its development or operation; it’s architecturally decentralised, since it comprises blockchain and peer-to-peer technologies; and, finally, it is logically decentralised, since the building blocks of computing systems that we expect to make up Web 3 are themselves decentralised and could replicate if cut in half.
Why Decentralise the Web?
The merits of decentralised systems have been observed in society over the last three millennia. Centralised systems have been, are, and will continue to be present in human history, but they are not sustainable in the long run. The antidote is the multiplication of decision centres, which disperses and minimises the consequences of errors. As Vitalik Buterin points out, there are three main arguments for decentralisation:
- Attack resistance. Decentralised systems are much more challenging and expensive to attack because they lack vulnerable central pressure points.
- Fault tolerance. A fault-tolerant design allows the system to keep functioning, possibly at a reduced level, rather than failing completely in the event of partial failure.
- Collusion resistance. It is much more challenging for members of a decentralised system to conspire and collude in selfish ways.
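The fault-tolerance argument can be made concrete with a little probability. Assuming, purely for illustration, that each machine fails independently with probability 0.05, a single server is down 5% of the time, while a replicated system that keeps working as long as any one of ten nodes survives almost never fails:

```python
# Availability of a centralised vs. a replicated (decentralised) system.
# Assumes independent node failures with probability p -- a simplifying
# assumption chosen for illustration, not a real-world measurement.

p_fail = 0.05  # chance any single machine is down at a given moment

# Centralised: one server, one point of failure.
central_downtime = p_fail

# Decentralised: the system is down only if all n replicas fail at once.
n = 10
decentralised_downtime = p_fail ** n

print(f"centralised down:   {central_downtime:.2%}")
print(f"decentralised down: {decentralised_downtime:.2e}")
```

The independence assumption flatters the decentralised case (correlated failures are common in practice), but the orders-of-magnitude gap is the point of the argument.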
Decentralising Base Level Architecture: Web 3
By definition, we cannot point to any one protocol or computing system that is Web 3. Instead, Web 3 comprises the original building blocks of computing systems (storage, processing and communication) with one key distinction: these building blocks have now been decentralised. As alluded to by Gavin Wood in his original blog post describing Web 3:
Web 3.0…is the reimagination of the sorts of things we already use the web for, but with a fundamentally different model for the interactions between parties. Information that we assume to be public, we publish. Information that we assume to be agreed, we place on a consensus-ledger. Information that we assume to be private, we keep secret and never reveal. Communication always takes place over encrypted channels…never with anything traceable (such as IP addresses). In short, we engineer the system to mathematically enforce our prior assumptions since no government or organisation can reasonably be trusted.
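The ‘consensus-ledger’ Wood describes rests on a simple cryptographic property: each record commits to the hash of the one before it, so altering any past entry invalidates every entry after it. A minimal, hypothetical sketch (the ledger format and entry strings are invented for illustration) using only Python’s standard library:

```python
import hashlib

def entry_hash(prev_hash: str, data: str) -> str:
    """Each entry commits to its predecessor's hash and its own data."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny ledger: a list of (data, hash) pairs.
GENESIS = "0" * 64
ledger = []
prev = GENESIS
for data in ["alice->bob:5", "bob->carol:2", "carol->dave:1"]:
    prev = entry_hash(prev, data)
    ledger.append((data, prev))

def verify(entries) -> bool:
    """Recompute every hash from genesis; any tampering breaks the chain."""
    prev = GENESIS
    for data, h in entries:
        prev = entry_hash(prev, data)
        if prev != h:
            return False
    return True

print(verify(ledger))  # True: the untouched ledger checks out

# Tamper with the first entry while keeping its recorded hash...
ledger[0] = ("alice->bob:500", ledger[0][1])
print(verify(ledger))  # False: the chain detects the edit
```

A real consensus-ledger adds a mechanism for many mutually distrusting parties to agree on which chain is authoritative; the hash-chain alone only makes tampering detectable, not impossible.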
One of the best ways to compete with the centralised behemoths that control the market is for a very large number of small players to combine forces in the creation of one large interoperable and integrated infrastructure. This is what the Web 3 movement is beginning to look and feel like. Web 3 aims to restore the web back to its decentralised origins whilst delivering the following characteristics:
- Trustless. Deliver economically strong, peer-to-peer systems that allow our intentions to be carried out while minimising trust among interacting parties.
- Redistribute Wealth & Power. Distribute existing concentrations of wealth and resources among the community; pursue democratic ideals instead of creating a new oligarchy.
- Continuity under Adverse Conditions. The continuity of Web 3 is assured by its decentralised components.
- Resistant to Suppression. Censorship mechanisms such as firewalls become impractical to maintain.
- Balanced Approach to Data. A right to personal, private data and consent, while democratising non-personally-identifiable information.
- Immutability. Web 3 data storage protocols mean that some data, once written, cannot be modified.
- Transparency and Openness. Web 3 is designed to be a transparent and open architecture upon which new decentralised applications can be built.
- Self-Sovereign Identity. Cryptography is used to protect a user’s autonomy and control; users become the rulers of their own identities.
- Interoperability. The interoperation of systems and protocols operated by different projects will likely be at the heart of Web 3.
- Transfer of Value. Web 3 enables the secure transfer of economic value. Digital money, assets, securities and collateral are a natural aspect of Web 3.
This is by no means an exhaustive list. The characteristics of Web 3 are only just beginning to emerge. These are the potential outcomes of a grouping together of seemingly unrelated projects, startups, protocols, teams and individuals who are all working hard to ensure a better future for all of us.
The founders of both the Internet and the web championed decentralisation, interoperability, and openness. Through the passage of time, control of our computing resources was captured and concentrated in the hands of the few. This tendency towards centralisation exposes users to heightened risks and vastly diminishes our ability to trust our systems. Rather than trying to restore and repair the old system, innovators and technologists on the outskirts of the technology industry are building a new architecture, reimagining the ways the fundamental components of computing are assembled. Echoing the design principles of the Internet, openness, interoperability, and decentralisation sit at the heart of this new design: the design of Web 3. This new system promises to rebalance existing power structures, reorganise societies, and restore truth. We are fundamentally optimistic about what comes next.
Written by Josh Oakley.