Announcing the Internet Computer “Mainnet” and a 20-Year Roadmap

Dominic Williams
The Internet Computer Review
67 min read · Jan 6, 2021


The Internet Computer is the world’s first blockchain that runs at web speed and can increase its capacity without bound.

DFINITY Status Update, New Year 2021

I HAVE SOME EXCITING NEWS.

On December 18, 2020, a crucial initial stage of the Internet Computer blockchain’s decentralization occurred. This means that the Internet Computer’s mainnet now exists, hosted by standardized “node machines” that are independently owned, installed within independent data centers, and placed under the control of the Network Nervous System (NNS).

For those new to the project, the Internet Computer is the world’s first blockchain that runs at web speed and can increase its capacity without bound to host any volume of smart contract computations and store any quantity of data, and the NNS is an open, algorithmic governance system that controls the network. The NNS is hosted within the network itself, and is part of the system of protocols that securely weaves together the compute capacity of the node machines to create the Internet Computer blockchain network, allowing the network to be autonomous and adaptive.

Once the mainnet bootstrapped itself by performing initial cryptographic setup routines, the NNS took over responsibility for orchestrating ongoing network management tasks, such as inducting new node machines to increase the network’s compute capacity and upgrading node machines to update the network protocol. I’m pleased to report that the NNS processed two initial proposals soon after launch, inducting node machines into a new subnet (a special kind of blockchain within the Internet Computer network that seamlessly integrates with other subnets to increase capacity) and upgrading the nodes of a subnet. This means that the NNS has already begun building out and evolving the network.

Although this progress has so far remained mostly visible only to those working with the network, the successful passing of this initial decentralization step marks a truly momentous occasion for the not-for-profit DFINITY Foundation, everyone who has contributed to the Internet Computer project generally, the many parties now building out the physical network, and everybody around the world who will benefit from what this new network makes possible. Of course, network bootstrapping took place as part of the Mercury milestone, the fifth of the launch milestones announced in the summer of 2019, all of which we have hit on time, and I am incredibly proud of the DFINITY team and the numerous contributing parties for making that happen.

Reaching Mercury now puts us on a relatively short path to a last “Genesis” decentralization step. This will involve the Network Nervous System releasing ICP utility tokens (previously called “DFN”) to holders in the form of voting neurons, which will occur after it processes a trigger proposal, likely inside Q1 2021. Once Genesis occurs, recipients can begin participating in network governance; dissolve their neurons to release the tokens inside, then convert them into cycles to power computation; or transfer them, as best suits their purpose. However, in order for the NNS to trigger Genesis in the best interests of the network, it is expected that various additional gates must be passed.

Fresh from the holidays, multiple parties shall now help the Internet Computer project past these remaining gates in a rapid ramp-up to Genesis. The gates include, but are not limited to:

  • The DFINITY Foundation releasing all related source code that has not yet been made public.
  • The release of vast quantities of technical and design information pertaining to ICP (Internet Computer Protocol), including full descriptions of the Chain Key cryptography and protocol math that make the Internet Computer network possible.
  • Security audits and stress tests being successfully passed.
  • The release of a few additional features that didn’t quite make Mercury.
  • The release of a complete “open internet service” sample dapp in the form of CanCan (as previously demonstrated).
  • The release of revamped online content, currently being developed, that will better reflect the scope and quality of the Internet Computer project.
  • The dissemination of detailed information about the physical network and its participants.
  • The provision of support services for parties wishing to supply nodes or otherwise participate in network build-out.
  • The provision of detailed information about the DFINITY Foundation and the newly formed Internet Computer Association.
  • And many other things.

These are the final stages. We are almost there!

Mercury represents incredible technical achievements, and the realization of a blockchain vision unlike any other, but even at this stage the network could not have been established without the efforts of large numbers of independent parties. Behind the scenes, despite the difficulties created by the COVID-19 pandemic, several manufacturers have been making the Gen I standardized Internet Computer node machines used to create the physical network, and dozens of independent funding partners have stepped up to finance and control the deployment of node machines into the first data centers. As I write, hundreds of new node machines have been deployed to data centers in a massive effort, and many are already running in a way that allows the Network Nervous System to weave them into the network and expand the capacity of the Internet Computer — the world’s first public blockchain that runs at web speed and can expand its capacity without bound, among many firsts that will soon create enormous impact. The network is expected to grow to millions of nodes running from thousands of data centers in the coming years. We expect history will show that this is a seminal moment for both blockchain and the internet.

Before I write more about Genesis, this is a good moment to review what the Internet Computer project is about — a project that has invested vastly more time and money in advanced blockchain research and engineering than any other, that has built out dedicated research centers around the world, and that ran an initial decentralized fundraiser in 2017 and then disappeared to focus on science and engineering that reaches beyond traditional blockchain to fundamentally reimagine how the world builds not just financial systems, but every system and service.

TIP: Prevailing preconceptions about blockchain make the technological capabilities of the Internet Computer blockchain hard to grasp. Understanding the features as described, without trying to map them to knowledge of pre-existing blockchain architectures and limitations, provides a simpler entry.

The Purposes of the Internet Computer

The DFINITY Foundation was founded to pursue a big question: The Internet is a decentralized network that connects everyone and everything, but might its functionality be extended so that it can also become the primary platform upon which humanity builds information systems? Our answer to this question is the Internet Computer, which extends the functionality of the internet with a novel, advanced blockchain network upon which fast, scalable information systems such as enterprise systems and internet services, as well as financial services such as DeFi, can be directly built, without need for intermediaries or traditional IT.

Naturally, this is achieved by adding new decentralized protocols. The internet is itself created by a decentralized protocol called IP (Internet Protocol) that weaves together millions of private networks to form a single global network that is highly resilient and easy to use, because it frees communicating software from thinking about how data must be routed across the underlying internetwork. The Internet Computer is similarly created by a decentralized protocol, this time of the blockchain variety, called ICP (Internet Computer Protocol), which weaves together the compute capacity of special node machines installed by data centers around the world to create a unified, easy-to-use, seamless universe that hosts an evolution of smart contract software and its data. Because the platform runs at web speed, and has unbounded capacity, and can serve content on the Web, it can be used to build websites, enterprise systems, mass market internet services, pan-industry platforms, DeFi, and much more using smart contracts.

History has shown that, all things being equal, the world prefers to build upon shared public platforms such as the internet that are permissionless, maximize interoperability, and neutralize the platform risks inherent to proprietary infrastructures and products whose vendors aim to create captive customers. The Internet Computer is extending the internet so that it can play the role of a complete technology stack, enabling the world to build using smart contracts hosted in cyberspace without need for traditional IT such as cloud services, server machines, proprietary software stacks, databases, and firewalls.

Freeing the world’s information system builders from proprietary IT is a worthy goal, but this is only a small part of the Internet Computer’s raison d’être: It turns out that once blockchain scalability, speed, and cost limitations are resolved using advanced technology and a novel network architecture, and the smart contract software model is rethought and evolved to make it far more powerful, and smart contracts are enabled to directly serve user experiences into web browsers without intermediaries, blockchain becomes a tamperproof and unstoppable computer with extraordinary advantages when compared against traditional IT. This can facilitate the reinvention of enterprise systems, mass market internet services, and the economy, as well as enable the complete reimagination of how many things work, just as DeFi reimagines finance.

For those of us working on the project, the Internet Computer is the ultimate expression of advanced blockchain science, and nearly all the technology involved is new. One of the greatest challenges met is the provision of a unified, on-chain environment that can process any volume of smart contract computations and maintain any quantity of smart contract data. The network overturns oft-held preconceptions about blockchain limitations, presenting blockchain as a scaling solution that can be used to build mass market, hyperscale internet services using fewer lines of code and with greater ease — not least because it reimagines the very nature of software itself, using innovations such as “orthogonal persistence” and providing developers with a means to write code that scales by creating “canister” objects that each incorporate additional memory into their overall systems. Meanwhile, the Internet Computer evolves core blockchain features such as autonomous code and tokenization, while also enabling developers to build services that end users can interact with without holding tokens, such that they need not even know that such services run entirely from a blockchain.

The project is so wide in scope it can be difficult to comprehend the profound range of advantages building on the Internet Computer provides. Let’s enumerate some of them individually to make the proposition more digestible.

I’ll keep this review to 20–30 minutes, but those with short attention spans can also take a look at my high-level deck by clicking here.

The essential purpose of the Internet Computer is to create a far superior blockchain for humanity to build on. Inside that objective there are many more specific intentions, some of which I review in broad strokes:

  • Public Utility That Grows Exponentially With Builders
  • Systems and Services That Are Unstoppable Like the Internet
  • Systems and Services That Are Secure by Default and Preserve Privacy
  • Crushing Complexity and Scaling Using Reimagined Smart Contracts
  • Blockchain at Web Speed That Runs on the Internet’s “Edge”
  • Removing Troublesome Intermediaries From Blockchain Systems
  • Removing Critical Usability Issues From Blockchain Systems
  • Unleashing Intelligent Governance and Autonomous Evolution
  • “Open Internet Services” With Tokenized Governance Systems
  • A Trustless Programmable Web With Non-Revocable Sharing
  • Democratizing Tech Opportunity by Extending It to the 99 Percent
  • Building a Richer Open Internet That Beats Out Mega Monopolies
  • Using Computation to Provide Stable Liquidity to Contracts
  • Making WebAssembly the World’s Virtual Machine
  • Completing the Blockchain Trinity

Public Utility That Grows Exponentially With Builders

Technology platforms most often grow massive as the result of network effects, which occur when the utility of a system or service increases the more people use it, creating a positive feedback loop in which growth drives more growth. Value and utility can even increase exponentially with users. For reasons I shall describe, the Internet Computer benefits from such network effects, and its value will rapidly increase the more people build on it, enabling the network to provide ever-increasing utility to the world. A key purpose is that it provides for interoperability between code and systems at near zero cost. This is because it provides a seamless universe in which secure running code — in the form of an evolution of smart contracts (called “canisters”) — is hosted, in which, subject to permissions, code can call directly into any other code, incorporating its functionality much like traditional software has historically incorporated functionality from static software libraries. This means, for example, that if service A, which is built on the Internet Computer, shares functions with service B, the code of B can directly call the shared functions of A exactly as though it were calling its own functions, even though services A and B are actually instances of running software, rather than static libraries, and even when they are written in different programming languages — which is revolutionary.
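To make this concrete, here is a minimal Motoko sketch of the pattern, in which service B binds to service A by its canister ID and calls one of its shared functions directly. The interface, function name, and canister ID here are hypothetical, invented purely for illustration:

```motoko
actor ServiceB {
  // Hypothetical public interface of service A, as seen by service B.
  type ServiceA = actor {
    getListing : (id : Nat) -> async Text;
  };

  // Bind to service A by its canister ID (an example ID).
  let serviceA : ServiceA = actor ("ryjl3-tyaaa-aaaaa-aaaba-cai");

  public func describe(id : Nat) : async Text {
    // Reads like a local call, but executes inside service A's
    // canister, which may itself be written in a different language.
    let listing = await serviceA.getListing(id);
    "Found: " # listing
  };
};
```

Because the call site is just a typed function invocation, incorporating A’s functionality into B costs a single line of code, which is what makes composition so cheap.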

Simple function calls can integrate standalone objects, or functionality that is part of some bigger system or service. In conjunction with other features I describe later in this post, such as non-revocable API sharing, this provides for new services to be easily composed by assembling the functionality and data of objects, systems and services already hosted on the Internet Computer, in much the same way developers construct new software by incorporating code from static software libraries today, empowering enterprises, system builders, entrepreneurs and innovators. This effect already exists and is proven: the Ethereum blockchain hosts smart contracts that can directly call the functions of other contracts, which is a key reason for the explosion of DeFi, since it allows financial rails to be easily integrated and extended. But unlike the Internet Computer, existing blockchain platforms cannot scale smart contract computation and data, cannot run them at web speed, cannot process computation and store data at the relatively tiny costs necessary, and cannot provide vastly more powerful software frameworks to developers. By removing these limitations, the Internet Computer will now unlock immensely powerful network effects similar to those that helped the public Internet beat out proprietary networks.

To understand this from an entrepreneurial perspective, suppose you wanted to build a mass market “open internet service” on the Internet Computer that provides the functionality of eBay. While creating a better auction framework would be relatively straightforward, creating the infrastructure necessary to deal with disputes arising between buyers and sellers would normally be burdensome. On the Internet Computer, however, you might solve this need simply by making a function call to an open internet service that provides arbitration and dispute resolution, which would later return the result with another simple function call. This is a bold new paradigm that makes it possible to quickly compose new systems and services by building upon the data, functionality, actors, and users hosted within preexisting services. Through this dynamic, network effects will become increasingly powerful, as developers seek to build on the Internet Computer to leverage preexisting smart contract systems, in turn adding new systems.

Systems and Services That Are Unstoppable Like the Internet

The development of today’s internet is tremendously storied, and the many players involved all had their own reasons for making their contributions. Some just wanted a network that would allow different kinds of mainframe computers to easily communicate with one another, but an important part of the internet’s story will always be inextricably linked with the Cold War and the desire for robust communications systems that could withstand nuclear strikes, essentially by breaking communications into data packets and adaptively routing them over whichever network links remained available, as described in Paul Baran’s seminal 1964 paper.

Whatever role the Cold War played in making the internet robust, its ability to adapt and scale out with demand has been fundamental to how it has become an essential public utility that much of humankind now takes for granted, rather like the water supply. For example, the internet has fulfilled its purpose splendidly during the COVID-19 pandemic, enabling us to work remotely while streaming ever-increasing quantities of media. The design of the Internet Computer network ensures that it will continue this tradition. Not only can the Internet Computer withstand a nuclear strike, but it can scale with demand, and systems and services built using its smart contracts are unstoppable too.

In recent weeks, it has become especially clear why this is important. Our worldwide population of 7.8 billion people can only be supported using efficiencies granted by extensive automation, and online services increasingly play central roles in our everyday lives. Yet, even though COVID-19 behooves everyone in tech to keep society’s core information infrastructure running, there have just been several worldwide outages of proprietary centralized systems run by Big Tech, including:

  • A hyperscale data center owned by Amazon Web Services suffered an outage, taking down a large chunk of the internet’s services with it.
  • Google’s services suffered from widespread outages, and then Gmail failed.
  • Just as I write this, Slack seems to have failed.

The centralized hyperscale data centers we rely on today are vulnerable to everything from terrorist attacks and cyber attacks to acts of God, such as tornadoes ripping through them or electromagnetic pulses caused by solar flares disrupting their operation. We should be grateful that the failures seen thus far have resulted from simple misconfigurations and software bugs, which has greatly limited their severity, and we should aim to rebuild the world’s infrastructure and critical online services in unstoppable form on the Internet Computer as soon as practicable.

Systems and Services That Are Secure by Default and Preserve Privacy

We have been building on traditional IT for so long that we have become inured to its most obvious failing. In the traditional model, we construct new systems and services using an assembly of components, including proprietary cloud services, server machines, databases, middleware, web servers, backup systems, load balancers, CDNs and other accelerators, and many other things, which we combine with our own software written using a wide variety of software stacks. The resulting constructions resemble Rube Goldberg machines and have enormous complexity that cannot be attributed to their far simpler purposes. But perhaps the biggest problem with these assemblies is that they are insecure by default.

Since systems and services built using traditional IT are insecure by default, we must find ways to protect them, typically by adding firewalls, by using SIEM logging and other security systems that an entire industry exists to supply, and by assigning dedicated security personnel and administrators who check software versions and configurations and look for insecure code that might give hackers a portal into our back end. So inured are we to the status quo that we neglect to ask the obvious question: Shouldn’t we be building our systems and services using a tamperproof platform like a blockchain, where they might be secure by default?

The Internet Computer answers this call by enabling us to build upon a web-speed, unbounded blockchain network whose security derives from the underlying mathematics of its protocols, ensuring that hosted code runs in a tamperproof way. This guarantee is possible because the mathematics that underpin blockchain protocols provide fundamentally stronger protection than firewalls, systems administrators, code reviews aiming to identify potential backdoors, and the many other security practices we rely upon today, any of which may fail through even a simple error, because even the most skilled of hackers cannot make 2+2=5. The Internet Computer guarantees that hosted code can only be run in authorized ways, and that only the expected code runs against the correct and expected data, and the math used to form the platform leaves hackers with no means to subvert those guarantees. This continues the approach that the Bitcoin blockchain started, whose ledger hosts more than half a trillion dollars in value as I write yet does not rely on firewalls for protection, and extends this revolutionary property to the construction of arbitrary systems and services for the first time.

In addition to ensuring that hosted systems and services are tamperproof, the Internet Computer also extends security to data privacy. This is possible because it works differently than traditional blockchains, which make the blocks of transactions they process available for download so that interactions can be validated, allowing anyone to reconstruct all the computations and data they host. The Internet Computer does not need to do this, because it builds on something called Chain Key cryptography, which allows anyone to verify correctness by applying a simple “chain key” (which is akin to a public key) to interactions.
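The precise construction is part of the forthcoming technical reveals, but the general shape can be illustrated with a BLS-style threshold signature, one standard way to give a whole network a single, stable public key; the notation below is my own sketch under that assumption, not the published Chain Key specification. Each node i holds a share x_i of a secret key x that no single node knows; any sufficiently large set S of nodes can jointly sign a message m, and anyone can verify the result against the one public "chain key" pk:

```latex
\sigma_i = H(m)^{x_i}, \qquad
\sigma = \prod_{i \in S} \sigma_i^{\lambda_i} = H(m)^{x}, \qquad
e(\sigma, g_2) \stackrel{?}{=} e\big(H(m),\, pk\big), \quad pk = g_2^{x}
```

Here the \lambda_i are Lagrange interpolation coefficients and e is a bilinear pairing. The key point is that verification needs only m, \sigma, and pk, and never the history that produced them.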

With Chain Key cryptography, it is no longer necessary to make historical transactions available for download, and each smart contract is therefore replicated over a specific subset of network nodes, deriving security and resilience from decentralization with far greater efficiency, while also ensuring that the data inside systems and services cannot just be downloaded by whoever wants a copy. Consequently, the only way to obtain data inside systems and services hosted on the Internet Computer is by interacting with them in authorized, tamperproof ways, in which their smart contract logic has complete control over what is shared. The network also incorporates privacy defenses against those who might gain physical access to node machines, such as malicious systems administrators in data centers: the standardized node hardware has features that ensure that if the hardware is accessed, only encrypted bytes can be seen on the memory chips and storage devices (this feature will be switched on after the network has transitioned to beta).

The need for a tamperproof compute platform such as the Internet Computer, which can preserve the correctness of systems and the privacy of data in the face of attacks, could not be clearer. Within the world of traditional software, it has become impossible to create systems and services that are secure against attackers. For example, Edward Snowden was able to exploit holes in the NSA’s internal security systems to rummage through its servers and steal 20,000 documents without leaving a trace (for many, he is a hero whistleblower, but that is irrelevant to the fact that the security of the systems he collected data from failed catastrophically and farcically). Meanwhile, a major superpower now fields fighter jets derived from F-35 designs, worth hundreds of billions of dollars, that were stolen from Lockheed Martin, and hackers generally have extracted so much PII (personally identifiable information) from online services that there is now little of anyone’s personal details and private financial information that is not available on the dark web.

So desperate has the situation become, and so little privacy remains, that the value of PII has fallen, and many hackers have refocused their efforts on disabling enterprise systems using ransomware, which encrypts server machines and then demands payment in bitcoin to restore them. During 2020, the IT systems of governments and entire enterprises were brought down for weeks or months, bringing their normal operations to a crashing halt. Cyberattacks by state actors could be far more destructive — in short, there’s a growing security emergency, which building on the Internet Computer blockchain could solve.

The devastating “SolarWinds” hack, revealed just weeks ago as I write, demonstrates irrefutably that systems must be secure by default, and that we have reached a crisis point. Foreign hackers have had the freedom to roam the private systems and content of many of the Western world’s most critical institutions and corporations for months, stealing unimaginable quantities of sensitive information and content, with calamitous consequences that will now play out over decades on an international scale. This occurred despite, for example, the U.S. spending billions of dollars on a cyber defense system called Einstein. Only now, faced with this ultimate failure, and the clear impossibility of making legacy infrastructure secure, are non-technical people within the mainstream establishment beginning to demand more comprehensive approaches. Those writers at The Hill arguing for a “whole of society” security strategy, and at the Financial Times pointing out that ongoing cyberattacks derive from the failings of our IT infrastructure rather than the sophistication of attacks (which is true), need look no further than the Internet Computer blockchain for a potential solution.

Most importantly, of course, it is the tamperproof nature of blockchain that makes it possible to support autonomous open systems, and DeFi, using smart contracts. By extending the provision of security on blockchains to privacy, the design of systems and services based on smart contracts has been simplified and the scope of their application greatly extended.

Crushing Complexity and Scaling Using Reimagined Smart Contracts

Today, when building upon traditional IT, we generally think about costs in such terms as price per megabyte of storage, or price per hour of cloud computing instance, or software licensing. The great irony, however, is that these costs compose only a relatively small fraction of the world’s $3.8 trillion annual spend on IT. We often forget that the biggest cost is IT operations, which is essentially the cost of human beings employed to do jobs such as computer programming and systems administration.

In fact, around 85 percent of IT costs at a typical Fortune 500 company in the USA are attributed to IT operations. Furthermore, analysis reveals that 90 percent or more of IT operations effort is typically directed towards the soul-destroying work of simply getting the assemblies of components within the systems and services they build and maintain to work together, and towards tasks such as backing up data and maintaining security, rather than towards crafting and evolving the essential logic and user experiences that define their fundamental purpose and functionality. This reveals a dramatic opportunity to reduce costs within IT generally — which will result, in practice, in human technical resources being redirected to more productive purposes — to drastically simplify the development of internet services, and to get technical ventures to market faster.

Early on, DFINITY had the insight that software might be reimagined in the form of an evolution of smart contracts to greatly reduce the complexity involved with the development and maintenance of information systems while also solving for emerging security needs, enabling the world to dramatically reduce wasteful technology costs by building on a fast, efficient, and unbounded blockchain. In 2015, the Ethereum blockchain teased how this might become possible by introducing smart contracts that could directly call into the code of other smart contracts, removing much of the costs involved with integrating co-hosted systems. Smart contracts on Ethereum also “ran forever” and would not crash and reset their data, enabling smart contract developers writing Solidity code to maintain data inside simple program variables and forget about marshaling it in and out of databases and files, simplifying code by removing the need to manage data persistence directly (although coders should note that Solidity compiles to lower-level code that maintains variable data inside the Ethereum state database).

DFINITY took such insights and channeled them into the design of the Internet Computer by evolving traditional smart contracts into software canisters. Canister code can directly call functions shared by any other canisters, providing the expected advantages, but canisters also work differently from traditional smart contracts in various ways. This variety of smart contract is called a “canister” because it is a bundle of software code and the persistent pages of memory that the code runs inside, enabling the network to scale capacity by deterministically running canisters in parallel. (For the technically minded, the code is in the form of WebAssembly bytecode, which can be compiled down from any high-level programming language that describes a software actor.) Again, canisters cannot crash, but this time developers maintain data in a scheme of genuine orthogonal persistence, such that data simply resides within the very variables, objects, collections, and data types that they would naturally create during the course of their programming work, which in turn persist automagically within memory. Such features, together with many other innovations, allow code to be written on the Internet Computer that is reduced to its essence of purpose, greatly stripping away complexity, and dramatically driving down the costs of implementing, maintaining, and administering software systems and services.
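As a small illustration of orthogonal persistence, consider this Motoko sketch (the actor and its function names are mine, not from any shipped example). The notes live in an ordinary in-memory collection, persist between calls, and no database or file code appears anywhere:

```motoko
import Buffer "mo:base/Buffer";

actor Notebook {
  // An ordinary in-memory collection. It simply survives between
  // calls; there is no database to marshal data in and out of.
  let notes = Buffer.Buffer<Text>(8);

  public func add(note : Text) : async () {
    notes.add(note);
  };

  public func count() : async Nat {
    notes.size()
  };
};
```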

In addition, the Internet Computer enables developers to write “code that scales” for the very first time, greatly simplifying, for example, the production of mass market internet services. When traditional IT is used, software must run entirely within the physical memory of the computer that is hosting it, which is shared with other software, such as the operating system, and is naturally bounded by whatever physical memory chips are installed. This creates a burden upon developers of systems that must scale — such as mass market internet services — who must escape these bounds by partitioning computation and data across multiple server computers and standalone systems such as databases using complex schemes such as “sharding.” By contrast, for the first time, the Internet Computer provides a truly seamless environment for code where such partitioning is unnecessary: Whenever developer code instantiates new canister objects, up to 4GB of additional memory pages are incorporated into their overall system, making it possible for the code to maintain exabytes of data in memory as though it were running upon a giant, infinitely powerful server computer — allowing hyperscale internet services to be created using a tiny fraction of the lines of code necessary today.
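A hedged sketch of what “code that scales” can look like in Motoko, assuming a two-file layout and names of my own invention: each instantiation of the actor class below creates a fresh canister, incorporating additional memory pages into the overall system:

```motoko
// Bucket.mo: each instance of this actor class is a new canister
// with its own memory pages.
import Array "mo:base/Array";

actor class Bucket() {
  var items : [Text] = [];

  public func add(item : Text) : async () {
    items := Array.append(items, [item]);
  };
};
```

```motoko
// Index.mo: grows the overall system, one canister at a time.
import Array "mo:base/Array";
import Bucket "Bucket";

actor Index {
  var buckets : [Bucket.Bucket] = [];

  public func grow() : async Bucket.Bucket {
    // Instantiating the class creates a new canister, adding up to
    // 4GB of memory pages to the system.
    let b = await Bucket.Bucket();
    buckets := Array.append(buckets, [b]);
    b
  };
};
```

Sharding logic does not disappear entirely, but it collapses into ordinary object creation rather than standing up servers and databases.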

Those reading may be concerned that the simplifications provided by the Internet Computer model will be offset by the extra cost of the hardware involved in creating the network, which might nullify the savings, or even make the public platform as expensive as traditional IT. This is not the case. Depending on configuration, thanks to the application of Chain Key cryptography within the network, the Internet Computer can replicate computation and data that is not governance-related across as few as seven node machines (drawn from seven independent data centers) while maintaining sufficient security and resilience, which involves only slightly more replication than Google uses. Furthermore, traditional IT actually involves lots of hidden replication, between database indexes and data files, for example, or more significantly to CDN (content distribution network) services that pre-distribute content to the internet’s edge. Thus, even though canister memory on the Internet Computer might be relatively expensive at around $5 per GB per year (which is anyway revolutionary for a blockchain, considering that Ethereum smart contract data costs around $5,000,000 per GB), the canister framework ensures far more efficient application of that memory.

The computational overhead of the cryptography used by the Internet Computer’s underlying blockchain protocols is a remaining concern. This must be processed by the CPUs of node machines, but it remains the case that the network can support the construction and maintenance of systems and services at far lower overall cost than traditional IT, because it focuses first and foremost on reducing the relatively far greater human costs resulting from unnecessary complexity and the difficulty of securing systems. In fact, the designers of the Internet Computer will, where necessary, consciously trade extra hardware usage for human efficiencies that are otherwise far harder to obtain. This makes perfect sense when one considers that the Sinclair ZX81 personal computer I began coding on in 1981 had only 1KB of main memory, while my 16" MacBook Pro can now pack 64GB — multiplying memory capacity tens of millions of times in under 40 years — reflecting how hardware constantly becomes more powerful and less expensive in line with Moore’s Law, while the cost of sufficiently trained humans stays the same or grows.

Those interested in playing with the canister smart contract framework, and novel languages such as Motoko, should check out the SDK provided by DFINITY at sdk.dfinity.org.

Blockchain at Web Speed That Runs on the Internet’s “Edge”

One of the traditional complaints about blockchain is that it is too slow, and one of the greatest preconceptions is that it is necessarily slow. The roots of such thinking began with the design of the very first blockchain, Bitcoin, which takes 30–60 minutes to finalize transactions in expectation. Ethereum incorporated a variant of GHOST, a protocol proposed in 2013, into its Proof-of-Work design to speed things up enormously, lighting a path to the future. A key purpose of the Internet Computer has always been to smash these barriers, and to host smart contracts that can perform even better than code hosted by traditional IT in some applications. This has been achieved through the application of Chain Key cryptography, whose low-level technical workings will be revealed imminently, and by leaning on the observation that blockchain is, and has always been, a native “edge architecture.”

Chain Key cryptography allows the Internet Computer to finalize transactions that update smart contract state (i.e., update data hosted in cyberspace) in 1–2 seconds. This is an enormous improvement, but still insufficient alone to allow blockchain to provide competitive user experiences, which require that responses be provided to users in milliseconds. The Internet Computer solves this by splitting smart contract function execution into two types, known as “update calls” and “query calls.” Update calls are those we are already familiar with, and take 1–2 seconds to finalize their execution, while query calls work differently because any changes they make to state (in this case, the memory pages of canisters) are discarded after they run. Essentially, this allows query calls to execute in milliseconds.
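In Motoko, the distinction between the two call types is a single keyword; here is a minimal sketch with illustrative names:

```motoko
import Array "mo:base/Array";
import Nat "mo:base/Nat";

actor Forum {
  var posts : [Text] = [];

  // Update call: replicated through consensus and finalized in
  // 1-2 seconds; its changes to state are kept.
  public func submitPost(text : Text) : async () {
    posts := Array.append(posts, [text]);
  };

  // Query call: executed in milliseconds by a nearby node; any
  // state changes would be discarded, so it can only ever read.
  public query func latest(n : Nat) : async [Text] {
    let k = Nat.min(n, posts.size());
    Array.tabulate<Text>(k, func(i : Nat) : Text {
      posts[posts.size() - 1 - i]
    })
  };
};
```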

To see how this would work in practice, imagine an open, Internet Computer-based alternative to Reddit. When a user browsed the forum, customized views of the hosted content would be formulated and served into their web browser by the execution of query calls, which an Internet Computer node in near proximity would run in milliseconds, providing a fantastic user experience. But when they occasionally wished to make a post, or tip tokens to the author of some post, this would involve update calls, which finalize in 1–2 seconds — an acceptable delay here, though it might otherwise be hidden in the manner of a one-click payment that succeeds instantly on the assumption that the credit card being used won’t be rejected.

Through this model, the Internet Computer can actually improve user experiences when compared against services built using clouds running from Big Tech’s hyperscale data centers. This is because the Internet Computer replicates smart contract data across subsets of nodes that are distributed across independent data centers around the world. In fact, query calls will often be run by nodes “on the edge” that are in close proximity to the end user (with configurable levels of security, to be described soon). So if a user browses the imagined open version of Reddit from Zurich, for example, the Internet Computer can execute and serve the query calls involved using nodes situated in nearby Swiss data centers, reducing the latency involved and providing an even better user experience. The traditional model does not have the same advantages: custom Reddit content must be created in a hyperscale data center and then transported to the user. Reddit most likely uses a CDN to transparently cache media objects such as photos around the world, so that they can be served from machines in close proximity to users, but ultimately it must still dynamically generate custom content within the central hyperscale data center that hosts the service and carry it across the world to the user, introducing latency that degrades the user experience.

Removing Troublesome Intermediaries From Blockchain Systems

Today’s public blockchain systems often rely on trusted intermediaries, undermining their key purpose. When we interact with systems and services built using Ethereum smart contracts, for example, we typically do so via websites hosted on Amazon Web Services or another Big Tech cloud. These websites are running from insecure cloud accounts, are administered by trusted parties and are vulnerable to the cloud operators, such that we can never really be sure that we are actually interacting with the intended smart contracts on the back end, or that the website has not been compromised and won’t serve malicious code into our web browser — a problem that must be fixed.

The Internet Computer solves this by enabling hosted smart contracts to directly serve content into the web browsers of end users, using mechanisms that enable users to be confident that the content loaded is whatever the contract developer actually created (such as HTML and JavaScript). This enables Internet Computer developers to create systems and services with end-to-end security, without the need for trusted intermediaries that might become corrupt, fail, or try to censor them.
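Concretely, a canister can expose an HTTP interface that is invoked as a query call when a browser requests one of its URLs. A minimal Motoko sketch follows; the record fields mirror the canister HTTP interface as I understand it, but treat the details as illustrative:

```motoko
import Text "mo:base/Text";

actor Website {
  type HttpRequest = {
    method : Text;
    url : Text;
    headers : [(Text, Text)];
    body : Blob;
  };

  type HttpResponse = {
    status_code : Nat16;
    headers : [(Text, Text)];
    body : Blob;
  };

  // Invoked for browser requests: the HTML and JavaScript served
  // to the user come straight from the smart contract.
  public query func http_request(request : HttpRequest) : async HttpResponse {
    {
      status_code = 200;
      headers = [("Content-Type", "text/html")];
      body = Text.encodeUtf8("<h1>Served directly from a canister</h1>");
    }
  };
};
```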

The current malaise involving blockchain intermediaries threatens to become worse with the migration to Proof-of-Stake blockchain architectures, whose economics will drive validators to run their nodes upon a handful of mega monopoly cloud services to reduce costs — something that is already happening at scale. Once a sufficient number of nodes are running on the cloud, blockchains might be brought down by operators stealing their “validator keys” or switching them off — perhaps as the result of some legal action by a hostile corporation or government, or just because the cloud service broke down, as Amazon Web Services recently did. The Internet Computer eschews the cloud: its Network Nervous System only inducts node machines into the network that are sited within identified independent data centers, ensuring that its security and resilience are maintained, and it divides computation and data among them. Moreover, if it were ever possible to fool the NNS and create a node using a cloud instance rather than the correct standardized node hardware, this would be revealed by statistical deviation, ensuring that clouds cannot host nodes anyway. Thus, the Internet Computer network prevents a handful of clouds from becoming intermediaries for compute power. Instead, it runs exclusively upon dedicated node hardware installed in high-quality, independent data centers around the world, and can withstand corruption and failure across multiple geographies and jurisdictions.

At the risk of saying too much, I will offer a final note on this subject: Black swan events are always difficult to convince people to prepare for, since any specific situation proposed seems overly improbable. The hidden truth, however, is that there is often a huge number of such improbable situations, and therefore the chance that at least one improbable black swan event occurs is actually very significant. With respect to public blockchains relying upon Big Tech clouds, it might be that Big Tech feels threatened by their emergence and a trigger action by a regulator provides an excuse to switch them off, or it might be that an electromagnetic pulse caused by a solar flare disables some of the hyperscale data centers in which they live, or something else entirely; there are many different ways it could go catastrophically wrong. Consequently, we can surely say that the practice of using Big Tech cloud services to build public blockchains and related systems and services, which are meant to be open, unstoppable, and tamperproof, is folly and antithetical to this purpose. The Internet Computer provides a solution.

Removing Critical Usability Issues From Blockchain Systems

Blockchain has experienced many challenges during its attempts to go mainstream, and the Cryptokitties craze provides a case in point. This exciting game was powered by smart contract computations on Ethereum, and its popularity grew rapidly within the decentralized community in 2017. Its demise partly occurred because Ethereum ran out of capacity to process its transactions, so that the game ground to a crawl for users, but the scalability of blockchain was only one of many challenges that would have held back the game, or any similar system or service, from going mainstream. In fact, the scalability issues that hit Cryptokitties masked even higher hurdles it would have faced, which ensured that the game became popular almost exclusively within the established blockchain community, with little chance of becoming a mainstream phenomenon. The single biggest obstacle was that to participate, one needed a wallet holding Ethereum’s native token, ether, and thereafter many interactions with the game inevitably involved manually initiating low-level smart contract transactions using the wallet’s interface.

For those in the Ethereum community, with pre-configured wallets such as MetaMask already in hand, who were already familiar with the complexities of tokens and keen to find ways of using them, the requirement presented little challenge, but it presented an enormous hurdle for those outside of blockchain, including journalists reporting on the game, who would typically ask for a demonstration rather than playing it themselves. Anyone who has worked in the games industry knows that viral adoption is crucial to success, and this relies on removing as much friction as possible from the funnel that takes a player on a journey from first experimenting with the game through to becoming an avid user. Now consider that a mainstream user interested in Cryptokitties would have needed ether tokens inside their own wallet to play. To participate, they would first have had to sign up for an account at a cryptocurrency exchange such as Coinbase, then pass laborious KYC procedures, then transfer money in, then purchase ether, then withdraw the ether into their own “unhosted wallet,” and then try to work out how to make transactions within the game. In short, mainstream users were presented with barriers that were effectively insurmountable.

The Internet Computer addresses these issues in two ways. Firstly, it enables users to directly interact with hosted online services through user experiences served by smart contracts into web browsers, for example, without any need to hold tokens whatsoever. This is possible because whereas traditional blockchains require users to pay for the smart contract computations that their interactions create, the Internet Computer uses a “reverse gas model” in which smart contracts pay for their own computation using “cycles” (which are roughly equivalent to gas on the Ethereum blockchain). Secondly, although users must identify themselves to the smart contract systems they are interacting with using a cryptographic key pair, management of such keys is made far easier when compared to the use of traditional blockchain wallets. User experiences can either have users identify themselves by entering traditional usernames and passwords within the browser, which code in the browser deterministically converts into a key pair for them, or, far more securely and even more conveniently, allow them to authenticate themselves using the emerging WebAuthn standard. This allows users to log in quickly using the secure hardware features of modern client devices, such as by simply pressing the fingerprint sensor on their MacBook laptop, or by authenticating themselves to their phone.

Without the features and design of the Internet Computer, it is difficult to see how the adoption of systems and services built on blockchain will be able to go mainstream. With the features it provides, it’s clear that blockchain-based services can easily create user experiences that are as friction-free as those built on traditional IT, and even go one better — because they can free their users from the hassle of having to manually enter usernames and passwords, offering instead the chance to simply press a button on their client device, while greatly improving their security through the transparent application of cryptography and secure hardware.

Unleashing Intelligent Governance and Autonomous Evolution

Governance has always been a thorny issue for blockchain. On the one hand, networks must truly be decentralized in order not to be controlled by any particular actors, whether affiliated persons or organizations, because if they were so controlled, the controllers could be petitioned to close them down, or could become corrupt and subvert the guarantees of security the networks provide to hosted tokens and code. On the other hand, some control should be exerted from somewhere, because such networks are formed by complex technology that must inevitably be fixed, refined, and upgraded in production, and as they become more powerful — as the Internet Computer has become — the possibility that malign actors apply them in nefarious schemes with grave impacts increases. Regarding this latter point, a fair argument is that all technology can be misapplied — a smartphone can be used by a terrorist to detonate an IED, for example. Nonetheless, this does not absolve us of all such considerations.

To date, blockchains have used messy methods to update themselves, which are often highly opaque and questionable. In the case of Bitcoin, whose community is truly without dominant organizing forces, debate can rage with little action, as it is near impossible to create the consensus necessary to get its miners to adopt changes, leading some to create fairly pointless network “forks” out of frustration. Meanwhile, Ethereum has benefited from its foundation, a not-for-profit organization that has helped guide the network, as well as prominent leaders such as Vitalik Buterin, which has made it easier to push through changes and pursue a more innovative trajectory. Nonetheless, the actual process of making changes is still difficult, both for reasons relating to the difficulty of bringing people to consensus, and also because of technical challenges, which make it extremely difficult to push global updates to the nodes hosting the network.

All things being equal, the more complex technology involved in the Internet Computer blockchain would make things much harder still, and moreover, some amount of “intelligence” must be applied to the evolution of its network architecture as it scales out capacity — for example, by creating new subnets from inducted nodes. For such reasons, the Internet Computer introduces an advanced, open algorithmic governance system into its protocols, which effectively controls the entire network. This is known as the Network Nervous System (or NNS) and can be traced back to an early 2017 proposal I published for a “Blockchain Nervous System,” from the time when the DFINITY Foundation was working on a blockchain with less scope than the Internet Computer, intended as a much simpler sister to the Ethereum blockchain. Most of the concepts described in that original post remain, although I could not have imagined how much more complex much of the technical work required to support such a thing would become in practice.

The NNS enables the holders of ICP governance utility tokens to lock them inside voting neurons, which can then be used to vote on proposals submitted to the system that can be automatically executed, and which can be made to follow each other in various ways such that they vote automatically — which, in some sense, represents a form of liquid democracy. The workings will be described more fully in forthcoming technical reveals, but needless to say, many of the proposal types that it can process relate to the management of the Internet Computer’s underlying network, such as pushing upgrades and fixes to node machines, and creating subnets to scale out capacity, and this is where the real complexity currently lies. The Internet Computer cannot really fork, in the traditional sense, and upgrades and network modifications have to work in close and exact sympathy with the workings of its Chain Key cryptography, which is itself composed of tightly interlocking and highly technical cryptography schemes. In all, it is a marvel that the system can control the network as it does, but through this work, the Internet Computer is able to remain autonomous while rapidly evolving its network through the evaluation and adoption of proposals that anyone can submit.
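To give a feel for the following mechanism, here is a deliberately toy Motoko model of neuron vote resolution; the types and the stake-weighted majority rule are my own simplifications, not the NNS data model:

```motoko
module {
  public type Vote = { #yes; #no };

  public type Neuron = {
    stake : Nat;        // locked tokens, weighting the vote
    followees : [Nat];  // indices of neurons this neuron follows
    cast : ?Vote;       // explicit ballot, if one was cast
  };

  // An explicit ballot wins; otherwise adopt the stake-weighted
  // majority of followees. (Assumes the follow graph is acyclic.)
  public func resolve(neurons : [Neuron], i : Nat) : ?Vote {
    switch (neurons[i].cast) {
      case (?v) { ?v };
      case null {
        var yes = 0;
        var no = 0;
        for (j in neurons[i].followees.vals()) {
          switch (resolve(neurons, j)) {
            case (?(#yes)) { yes += neurons[j].stake };
            case (?(#no)) { no += neurons[j].stake };
            case null {};
          };
        };
        if (yes > no) { ?#yes }
        else if (no > yes) { ?#no }
        else { null };
      };
    };
  };
};
```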

What is often missed is that if the code running a blockchain network can be modified, then the tokens, code, and data it hosts are in fact also subject to modification. Since there is no blockchain in existence whose code and protocols have not been modified, everything upon them was, and in fact remains, subject to the communal will of their controlling communities (even if those communities have mostly chosen not to make such modifications, with some exceptions, such as the reversal of the 184-billion-bitcoin bug, and the response to the hack of “The DAO” on Ethereum). If code can be overridden by communities in these extreme cases, then why not also in extreme special cases where life and limb are at stake? For example, imagine that Interpol discovered that a system hosted on the Internet Computer was being used as a marketplace for human trafficking, and that if its data were disgorged, vulnerable and abused individuals might be rescued from unimaginably bad situations. This scenario is not an argument against the Internet Computer, which is a manifestation of technology that on balance will bring a tremendous amount of good into the world; but that good is not by itself a sufficient argument that nothing should be done when something can be, despite the many advantages of “the code is law.”

The Internet Computer and its controller, the NNS, are autonomous by design, and I emphasize that neither I nor DFINITY can control what the NNS does — this will ultimately be a product of the tens of thousands of neurons that will exist at Genesis. However, I hope that it will enable us to better deal with special extreme cases like the one I mention above. I envisage organizations such as the EFF, Mozilla, and the newly formed Internet Computer Association first creating and publishing voting neurons, which others may configure their own neurons to follow to decide how to vote on proposals in the #Ethics category. Then I hope they form ethics committees to whom relevant parties might confidentially submit requests for help, here allowing Interpol to ask for support for a proposal that they plan to submit to retrieve the information inside the hypothetical human trafficking system. Now, of course, this trafficking system would be tamperproof and replicated across node machines that reveal nothing more than encrypted bytes if they are opened — however, the logic of Internet Computer nodes could be upgraded so that, if the NNS adopted such a proposal, they would respond by encrypting the relevant data to the public key of the investigating agency and then making it available for export. Once this action had been taken, the ethics committees involved in supporting the adoption of the proposal would publish why they provided support, in the interests of transparency.

The NNS and its power will of course be the subject of much debate within the blockchain community. Many will reasonably say that if it takes actions in extreme cases, then this will lead to it becoming routinely involved in numerous far less consequential actions, such as trying to reverse the ill effects of bugs in smart contract code or of hackers stealing authentication keys from users. These are valid concerns, and perhaps the answer is that it should, but what is important is that through the NNS, we as a community now have an efficient means to take action when we want to, and to decide the course of action algorithmically. The game theory and economic incentives embedded within the design of the NNS will ensure that it seeks to adopt proposals that chart a course and take actions most concordant with the Internet Computer’s core purpose of providing a compute platform for all of humanity, so even if it algorithmically moderates “the code is law,” it will remain the case that “in code we trust.”

“Open Internet Services” With Tokenized Governance Systems

A key purpose of blockchain has always been to remove the reliance of traditional systems on intermediaries and trusted parties, which greatly reduce security and the sovereignty of individuals, and introduce burdensome overheads. If I hold bitcoins in a native Bitcoin wallet, for example, I can use the internet to directly transfer that digital currency to anyone else’s Bitcoin wallet without needing to ask for permission, and without concern that some intermediary might steal the coins en route or deny my transfer. These blockchain guarantees also provide an excellent foundation for much more sophisticated processes, and so Ethereum introduced smart contracts to allow us to apply them to general computation, providing us with a means to free computation from messy human relations, frailty, and processes. But fully leveraging these benefits with smart contract code can be more involved for developers, because although such code might be made fully autonomous and exist without need for an owner, in such cases it cannot thereafter be updated, and improving and fixing code is a persistent requirement in the vast majority of complex systems.

An important purpose of the Internet Computer is to enable communities of developers, entrepreneurs, investors, and end users to build out successful mass market open internet services using such autonomous smart contract code, enabling services to run as part of the very fabric of the internet rather than in the manner of blockchains themselves. These can provide enormous novel benefits to their users, which will help them successfully compete with legacy Big Tech services in ways I describe further below, but they will necessarily involve highly complex systems that will often incorporate large numbers of smart contracts (or here, “canisters”) that will inevitably need updating. To solve this challenge, the Internet Computer allows an internet service to be converted into an open internet service that runs autonomously by assigning its contracts to a tokenized open governance system that it provides, and which is ultimately owned and controlled by the NNS itself. These governance systems are essentially derived from the same technology that creates the NNS, which is responsible for managing the overall Internet Computer network, and are controlled by their own governance tokens.

When a service is transformed into an open internet service, the NNS initializes a new tokenized governance system to which it passes control. The new governance system initially contains one billion native governance tokens, and these are passed to whoever took the action. To begin with, the service is still not truly an open internet service, since all the governance tokens are held by a single party, whose aim must then be to distribute them far and wide, to key project players such as developers but also more widely within the community, such that as many people as possible can create “voting neurons,” making the system secure and allowing its governance system to operate autonomously without dependence on any large holder or group of holders. In the case of an open internet service, the tokens may of course be sold to raise funding for development, but the technology can also be used for enterprise software systems, allowing control over key infrastructure to be disseminated between multiple parties in ways that are far more secure than those used today. Once a governance system has been initialized and has taken control, all further upgrades to the code and configurations of a service must be performed by submitting proposals, which the governance system will decide to adopt and execute, or to reject.
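
To make the proposal lifecycle concrete, here is a minimal sketch of a toy governance canister in Motoko. It is purely illustrative: the names (ToyGovernance, submit, vote) and the simple-majority rule are my own assumptions rather than the actual NNS or service governance interfaces, and a real system would weight votes by locked token stake rather than counting callers.

```motoko
import Array "mo:base/Array";

// Illustrative only: token holders vote on upgrade proposals and a simple
// majority decides. A real governance system would weight votes by locked
// stake and would actually execute the adopted upgrades.
actor ToyGovernance {
  type Proposal = {
    id : Nat;
    summary : Text; // what the proposed upgrade would change
    yes : Nat;
    no : Nat;
  };

  var proposals : [Proposal] = [];
  var nextId : Nat = 0;

  // Anyone may submit a proposal describing an upgrade.
  public func submit(summary : Text) : async Nat {
    let id = nextId;
    nextId += 1;
    proposals := Array.append(proposals, [{ id = id; summary = summary; yes = 0; no = 0 }]);
    id
  };

  // Record a vote; here one call counts as one vote, purely for illustration.
  public func vote(id : Nat, inFavour : Bool) : async () {
    proposals := Array.map<Proposal, Proposal>(
      proposals,
      func (p : Proposal) : Proposal {
        if (p.id != id) { return p };
        if (inFavour) {
          let updated : Proposal = { id = p.id; summary = p.summary; yes = p.yes + 1; no = p.no };
          updated
        } else {
          let updated : Proposal = { id = p.id; summary = p.summary; yes = p.yes; no = p.no + 1 };
          updated
        }
      }
    );
  };

  // A proposal is adopted when the votes in favour outnumber those against.
  public query func adopted(id : Nat) : async Bool {
    for (p in proposals.vals()) {
      if (p.id == id) { return p.yes > p.no };
    };
    false
  };
}
```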

A Trustless Programmable Web With Non-Revocable Sharing

Even as recently as 2013, Aaron Swartz was writing about the “programmable web.” Essentially, the internet community held a grand vision in which internet services would provide others with online APIs (application programming interfaces) through which their own services could incorporate shared functionality and data. Many of us, myself included, naively assumed that sharing would be the default model, since it would provide the sharer with network effects, and that the programmable web would become endlessly richer, providing more and more ways to extend functionality and, with the permission of users, repurpose data to deliver more value. This proved only partly true. In the early days of Web 2.0 and its explosive growth, many of today’s Big Tech organizations were still large startups, and they did indeed share their data and functionality, both to accelerate their growth through the network effects lent by what others built, and to embed themselves ever more deeply within the overall internet ecosystem. Then something unfortunate happened: those organizations secured monopolistic positions within the ecosystem, the advantages of hijacking the user data they had accumulated became much greater, and they began reneging on the sharing guarantees they had made.

Today, the dream of the programmable web has become a distant one, for reasons best understood through a historical example. LinkedIn started out by sharing the professional profiles it hosts with other services, which often treated it as a database, directing new users to submit their profiles there. Thousands of internet services thus incorporated professional profiles from LinkedIn into their own functionality. One was RelateIQ, based in downtown Palo Alto, which created browsable communications graphs for organizations in which the vertices were people; if you hovered over one, a profile drawn from LinkedIn popped up to show you who was communicating with whom. RelateIQ was becoming popular, it had a fantastic team, and it gained a large valuation, becoming a unicorn. Then in 2014 it became apparent that LinkedIn, which by this time had secured a near-monopoly over professional profiles, had seemingly determined that the benefits of sharing profiles had diminished, and it later sent out notifications to the thousands of services using its APIs that their access was being restricted by new terms (in effect, it was revoked). But in fact, this was not done universally: by virtue of its size, Salesforce was able to maintain access, and RelateIQ was therefore sold to Salesforce so that it could continue functioning, arguably for far less than its former worth.

This is a case of what is known as “platform risk,” which is incurred whenever you build a new system or service in such a way that it depends upon another. It is extremely insidious because it may come out of nowhere. For example, the CEO of Tinder recently requested a meeting with Mark Zuckerberg, the CEO of Facebook, upon whose APIs the functionality of Tinder heavily depends, after learning that Facebook was entering the internet dating game itself. Zuckerberg quickly declined this request, remarking to his staff, “I don’t think he’s that relevant. He probably just wants to make sure we won’t turn off their API.” The potential for this problem should already have been obvious from Zynga’s experiences after becoming a public company, when its shares fell 85 percent in three months after Facebook changed rules relating to the social games published on its platform.

The point is that if you build on the APIs of Big Tech’s services today, you are certainly building on sand, but even when you build using the APIs of those who aren’t monopolies in the making, you still run an enormous risk that one day they will decide to demand payment, revoke your access owing to strategic considerations, or simply fail. Consequently, it has become increasingly difficult to raise money to develop internet services that depend on shared data and functionality, and while the tech world becomes ever more monopolistic, innovation and economic opportunity increasingly suffer, and the original dream of a programmable web is dying on the vine.

Reversing this situation is a key purpose of the Internet Computer, which is bringing back the programmable web in a new, far more powerful and impactful form by enabling hosted open internet services to publish non-revocable, “permanent” APIs. Essentially, this involves their developers marking shared functions as permanent, which ensures that if their controlling governance systems try to push a software upgrade that would remove or change them, it will automatically fail. This is only part of the solution, however, since an open internet service might still degrade the functionality behind a shared permanent API to gain some advantage: an open version of LinkedIn, say, might always return the same professional profile whichever user profile is requested. Here, the Internet Computer leans on the power of the Network Nervous System. Those affected by the loss of a permanent API may apply to the NNS for a remedy. Upon adoption of a relevant proposal, the NNS will then begin progressively inflating the governance tokens controlling the open internet service (by creating new tokens) until the service restores full functionality and honors its guarantees. Clearly, whatever the intent of their governance systems, open internet services will never try such tricks, since the holders of their governance tokens will not want to be diluted.
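
To illustrate the idea, here is a hypothetical Motoko sketch of a shared, permanent profile-lookup function. The “permanent” marking shown in the comments is illustrative only: the post does not specify the actual syntax, so the annotation, the actor name, and the function signatures are all my assumptions rather than real Internet Computer APIs.

```motoko
import Array "mo:base/Array";

// Hypothetical sketch: imagine getProfile is marked "permanent", so any
// upgrade proposal that removes it or changes its signature is rejected
// automatically by the governance machinery.
actor ProfileService {
  stable var profiles : [(Principal, Text)] = [];

  public shared (msg) func setProfile(profile : Text) : async () {
    profiles := Array.append(profiles, [(msg.caller, profile)]);
  };

  // Imagined as permanent: dependent services can build on this call
  // knowing it cannot be revoked or repurposed by a future upgrade.
  public query func getProfile(user : Principal) : async ?Text {
    for ((p, profile) in profiles.vals()) {
      if (p == user) { return ?profile };
    };
    null
  };
}
```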

The power of this new programmable web, which allows developers to build upon shared data and functionality without having to trust the provider, cannot be overstated. It lays the groundwork for massive collaboration and innovation between open internet services, and it ensures that services that become successful by sharing data and functionality, gaining the network effects created by what others build, cannot later reverse course, setting the stage for much more constructive economics.

Democratizing Tech Opportunity by Extending It to the 99 Percent

Today, the world population has grown to more than 7.8 billion people, yet the distribution of wealth and opportunity remains very unequal. Within this mass of people exists an extraordinary wealth of untapped talent, and even genius, which could be brought to bear for the benefit of all humanity. It has long been hoped that blockchain might help address this challenge through financial inclusion and other means, and a key purpose of the Internet Computer is to facilitate the rebuilding of our global society’s core information infrastructure in a more open form that allows talent to participate from anywhere. To achieve this, it must address imbalances that have so far ensured that most technology growth and innovation is driven from Silicon Valley, while 99 percent or more of the world’s talent is situated elsewhere with little chance to participate. The Internet Computer introduces a three-pronged strategy to help change the status quo: mechanisms that distribute access to capital, means to easily build enterprise systems and mass market internet services from anywhere, and mechanisms supporting “open internet services” that can gain decisive advantages in competition with the incumbent proprietary internet services and ecosystems operated by Big Tech.

To distribute access to capital, the Internet Computer enables the creation of open internet services controlled by tokenized governance systems. Upon the network, any team of developers with access to the internet, wherever they are located in the world, is now empowered to start building new open services and, as they develop them, to sell the governance tokens they were granted as a means to fundraise. These tokens can gain value because services can generate revenues through fees, advertising, or other means, and distribute those revenues to token holders in the form of voting rewards, enabling holders to share in the value being created. The advantage is clear. During the dotcom era, I worked with a team of enormously talented developers based in Tomsk, Siberia. These developers had absolutely no access to capital, and anyone presenting themselves in the role of venture capitalist might even have been dangerous to get involved with. Consequently, they could only work as offshore developers, rather than as primary innovators, creating an opportunity cost for humanity by wasting economic potential. With the advent of the Internet Computer, such teams can begin imagining and building their own innovations, applying their talent to building the world’s next generation of information infrastructure and services, thereby unlocking enormous value for the world economy and gaining a fair chance to share in the opportunity tech offers.

Distributing opportunity also involves reinventing how we build systems and services, so that they can be built from anywhere using low-cost tools without their developers incurring significant disadvantage. In practice, that means it must be possible to build a mass market open internet service using only a relatively basic client computer with internet access, such as a cheap Chromebook, or even a smartphone. The Internet Computer makes it possible to write code on such devices and deploy it directly to the internet, without need for accounts on ancillary clouds or other services. In fact, within DFINITY we already have tools that allow Motoko code to be written using a development environment that loads from the Internet Computer into web browsers, and then deployed by “writing it back to the internet.” By default, the Internet Computer automatically generates a web-based user interface for every smart contract it hosts, which allows developers to begin interacting with their functions, enabling anyone to build, deploy, share, and test functionality with ease. This democratizing technology will help level the playing field. It now remains for these systems to be introduced into emerging markets, and to computer science students, so that a new generation of programmers can begin developing their skills, just as I was able to do through my access to early personal computers as a child.
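
To give a sense of how little scaffolding is involved, the following is a complete Motoko canister (the actor name and greeting are mine, purely for illustration). Deployed as-is, its greet function can immediately be invoked through the automatically generated web interface described above.

```motoko
// A complete service in a single file: no servers, databases, or cloud
// accounts required. Once deployed to the Internet Computer, anyone can
// call greet from the auto-generated web UI.
actor HelloInternetComputer {
  public query func greet(name : Text) : async Text {
    "Hello, " # name # "!"
  };
}
```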

Lowering the barriers involved in the development of new systems and services is enormously important, and will help all regions of the world progress the development of their information systems, but a great part of entrepreneurial upside in tech must be derived from platforms that enable mass market internet services and surrounding ecosystems that can win and dominate niches in a highly competitive environment. This is something the Internet Computer has also been designed to facilitate from the very start…

Building a Richer Open Internet That Beats Out Mega Monopolies

The mega monopoly internet ecosystem of Big Tech becomes more entrenched by the day, reducing personal freedoms and sovereignty, narrowing economic opportunity and the growth it provides, and slowing innovation. Nobody can help but notice how the major innovation once associated with services from Google or Facebook has stalled for years, while their operators concentrate on monetizing users ever more efficiently by tracking their habits and desires, extending the foundations of their empires by entering new fields and acquiring competitors, hiring and neutralizing talent that might otherwise build startups, lobbying lawmakers to act in their favor, and goading regulators to introduce new regulations that hinder startup competition in a process of regulatory capture. Meanwhile, as opportunity in the internet’s field of dreams narrows, a colossal supply of investor capital and worldwide entrepreneurial and technical talent seeks to seize back the initiative by building a new open internet, as exemplified by the blockchain and ICO boom. All it needs is the means to win.

The Internet Computer shall provide the essential tool for this purpose. On the one hand, it provides a means for open internet services to deliver unique features and advantages to users that internet services built on traditional IT stacks cannot match. On the other, it provides a means to build a new open internet ecosystem in which trustless sharing of functionality and data will fuel dynamism, ongoing innovation, and network effects, powering low-friction, negotiation-free collaboration between teams of developers and entrepreneurs by making it possible for any party to extend the functionality and data of any service without incurring platform risk. Also, as previously described, the Internet Computer will democratize the access worldwide talent has to funding, such that eventually far more talent might be brought to bear building the open internet than Big Tech’s monopolistic ecosystems, using a technological framework that makes it possible to build internet services with greater ease than ever before. Let’s think about how these things work in practice.

How the Internet Computer provides a platform for the creation of winning features can be understood by considering what open versions of Google Photos, Uber, and TikTok might offer. An “Open Photos” internet service would first of all be secure, which compares well to Apple’s iCloud, say, which was recently completely compromised by white hat hackers who found that they could see the photos in user accounts. But more obviously for consumers, the trustless sharing empowered by the new programmable web functionality on the Internet Computer would ensure that vastly more photo filters are available, and that photos could be exported into a wide variety of additional services, making them more valuable to their owners and providing a far richer user experience. Open Photos could also allow its users to make one-time payments in return for, say, a terabyte of “eternal” photo storage that never has to be paid for again. This would be implemented by installing the deposit, using a simple function call, within a DeFi system such as Compound that generates interest to pay for the ongoing data storage.
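
As a back-of-envelope check on the “eternal storage” idea (the figures below are my own assumptions, not the project’s): a one-time deposit $D$ earning annual yield $r$ can fund a recurring annual storage cost $c$ indefinitely whenever

$$D \cdot r \ge c, \qquad \text{i.e.} \qquad D \ge \frac{c}{r}.$$

If a terabyte-year of storage cost $c = \$5$ and the DeFi yield were $r = 5\%$, a one-time deposit of $D = \$100$ would cover the storage forever, leaving the principal untouched.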

An “Open Rides” service might seek to replace Uber and Lyft in a number of ways. First of all, it would recognize that early drivers and riders would play an essential role in its success, similar to founding team members. To recognize this, and to create incentives for viral adoption, Open Rides would grant early drivers and riders allotments of governance tokens when they gave or took rides, or made referrals, such that if Open Rides became successful, they would share in its success as team members can at startup ventures. If rapid adoption resulted, powerful competitors might be irked and try to slow things down with lawsuits, but here Open Rides would have an advantage: open internet services run autonomously as part of the fabric of the internet, here in the mode of an advanced P2P protocol that connects drivers with riders, and code cannot easily be stopped. As autonomous code on the internet, Open Rides might be made instantly available in all territories around the world, without expensive negotiations with regional governments doing the bidding of local taxi monopolies wishing to protect their turf (a dynamic that has stymied Uber, which still cannot operate in many places), reducing costs further while ensuring that drivers retain an even greater portion of ride fees. Lastly, of course, Open Rides could easily be integrated by other services that might want to organize the transport of people automatically, using simple function calls, and both drivers and riders could be sure that the reputation system accurately recorded reviews, because the system would be tamperproof.

The DFINITY Foundation is developing a sample app called CanCan, which is a reimagining of TikTok as an open internet service. Its initial purpose was to demonstrate how large quantities of user videos and other data could be uploaded to the Internet Computer and then streamed back to users, but work on its “tokenization” is now under way. One aim is to show how tokenization can make the app more compelling for consumers than the one that inspired it; another is to help drive a generational change. Services such as Facebook found new ways to generate profits by making their users the product: they attract users into environments where they can track their interests, keep them engaged by feeding them tailored content, and then sell access to that engagement to advertisers. The Internet Computer’s blockchain and tokenization provide a means to make the situation more equitable by extending the journey: through tokenization, besides being the product, the users of a service can become the team. Let’s look at how this can work.

CanCan introduces the idea of convertible “reward points” that users can earn in various ways. One major change is that in addition to the usual “like” button, which users use to show appreciation for videos, they are also provided with a “super like” button. A maximum of 10 super likes can be made every 24 hours, and users must aim to super like videos that they think will become very popular. When a video is sufficiently successful, CanCan looks at the order in which users super liked it, and those that did so early on participate in a shower of reward points, which are desirable to hold for several reasons. Firstly, every few days there is a “Drop Day” when users can exchange reward points for prizes offered by advertisers (who can then use the points they collect to pay for advertising in the service), or exchange them for CanCan governance tokens, obtaining a kind of ownership stake in the service. Secondly, users are also provided with a “red letter” icon, which enables them to send tips of reward points to the creators of videos to show their appreciation.

Many users of CanCan will naturally want to play an exciting game, in which they browse large volumes of video looking for new posts that might become successful, deploying their super likes in the hope of winning reward points if those videos become popular. This provides the underlying smart contract code with a signal about which content should be highlighted to users when they first open the app, powerfully augmenting other content sorting and selection mechanisms based on techniques such as Bayesian classifiers, and making participating users part of the team running the service. Moreover, by engaging users in a game to win reward points, which they can convert into prizes or a kind of part ownership of the service, the service is made more sticky and the volume of content consumed is increased, providing more opportunities to display tailored advertising.
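
Here is a minimal sketch of the super-like mechanic just described. Every detail (the names, the popularity threshold, the linear split of points among early super-likers) is my own assumption rather than CanCan’s actual code.

```motoko
import Array "mo:base/Array";

// Illustrative sketch: record super likes in arrival order, and when a
// video proves popular, shower reward points on the earliest super-likers,
// with earlier picks earning larger shares.
actor SuperLikes {
  // videoId -> principals that super liked it, in order of arrival
  var superLikes : [(Text, [Principal])] = [];

  public shared (msg) func superLike(videoId : Text) : async () {
    var found = false;
    superLikes := Array.map<(Text, [Principal]), (Text, [Principal])>(
      superLikes,
      func (entry : (Text, [Principal])) : (Text, [Principal]) {
        if (entry.0 == videoId) {
          found := true;
          (entry.0, Array.append(entry.1, [msg.caller]))
        } else { entry }
      }
    );
    if (not found) {
      superLikes := Array.append(superLikes, [(videoId, [msg.caller])]);
    };
  };

  // Assumed payout rule: with n early super-likers, the k-th (0-based)
  // receives (n - k) * 10 points once the video is judged popular.
  public func rewardIfPopular(videoId : Text, views : Nat) : async [(Principal, Nat)] {
    if (views < 100_000) { return [] }; // assumed popularity threshold
    for (entry in superLikes.vals()) {
      if (entry.0 == videoId) {
        let n = entry.1.size();
        var payouts : [(Principal, Nat)] = [];
        var k = 0;
        for (p in entry.1.vals()) {
          payouts := Array.append(payouts, [(p, (n - k) * 10)]);
          k += 1;
        };
        return payouts;
      };
    };
    []
  };
}
```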

Lastly, this mechanism allows CanCan, as an autonomous service, to address the crucial need for content moderation. Without it, especially in a video-sharing service in the mold of TikTok, an extraordinary quantity of filthy content would quickly pollute the environment and ruin the user experience, preventing its popularity from growing beyond a small niche. To solve this, when new video content is uploaded, it is first placed into a randomized “unmoderated” feed, and is only transferred to the “main” feed once it has survived there for an hour. Within the unmoderated feed, users are provided with a flag button through which, in another game, they can earn reward points by flagging early the content that is later taken down upon sufficient consensus. This further incorporates users as team members, and moreover demonstrates a way that work opportunities can be distributed. Today, content moderation on platforms such as Facebook is often performed in North America, where the salaries paid for the often harrowing work are relatively poor. CanCan makes it possible for any user to perform moderation from anywhere, while paying at a level that does not discriminate according to the location of the user, such that free markets can distribute employment so that those performing the work are relatively well paid in their local geographies.
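
The moderation flow might look roughly like the sketch below. The one-hour survival window comes from the description above, but the names and the flag-consensus threshold are my own assumptions.

```motoko
import Array "mo:base/Array";
import Time "mo:base/Time";

// Illustrative sketch of the two-feed scheme: new uploads sit in an
// unmoderated feed; videos that survive one hour below the flag threshold
// graduate to the main feed, while heavily flagged ones are dropped.
actor Moderation {
  type Video = {
    id : Text;
    uploadedAt : Time.Time; // nanoseconds since the epoch
    flags : Nat;
  };

  let hour : Int = 3_600_000_000_000; // one hour in nanoseconds
  let flagThreshold : Nat = 10;       // assumed consensus threshold

  var unmoderated : [Video] = [];

  public func upload(id : Text) : async () {
    unmoderated := Array.append(unmoderated, [{ id = id; uploadedAt = Time.now(); flags = 0 }]);
  };

  // Flaggers would earn reward points when their flags prove correct.
  public func flag(id : Text) : async () {
    unmoderated := Array.map<Video, Video>(unmoderated, func (v : Video) : Video {
      if (v.id == id) {
        let updated : Video = { id = v.id; uploadedAt = v.uploadedAt; flags = v.flags + 1 };
        updated
      } else { v }
    });
  };

  // Returns the ids to promote to the main feed, and retains only the
  // still-young, lightly flagged videos in the unmoderated feed.
  public func promoteSurvivors() : async [Text] {
    let now = Time.now();
    let survivors = Array.filter<Video>(unmoderated, func (v : Video) : Bool {
      now - v.uploadedAt >= hour and v.flags < flagThreshold
    });
    unmoderated := Array.filter<Video>(unmoderated, func (v : Video) : Bool {
      now - v.uploadedAt < hour and v.flags < flagThreshold
    });
    Array.map<Video, Text>(survivors, func (v : Video) : Text { v.id })
  };
}
```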

Of course, open internet services have many other advantages too. They are transparent because they are controlled by open tokenized governance systems: there is simply no opportunity for an open version of Facebook to export data to Cambridge Analytica, or for an open Zoom to export data to Facebook, without users being aware. These governance systems can also be used to distribute rewards and bounties to open source developers, ensuring there is always an army of bright minds extending them. The open internet entrepreneurs of tomorrow should consider, however, that many consumers currently don’t really care about issues of privacy and transparency, or about open source models, and that the creation of winning services on blockchain will primarily be achieved through the provision of more engaging features, better viral growth engines, a richer ecosystem, and tokenization.

Using Computation to Provide Stable Liquidity to Contracts

The Internet Computer network’s main utility token is called ICP (the token takes its name from the ICP protocol, and was earlier called “DFN” after the DFINITY Foundation). It has two purposes. The first is to allow users to participate in network management, by locking tokens inside “voting neurons” within the Network Nervous System, through which they can earn “voting rewards.” The second is to provide a source token that can be transformed into the “cycles” that are needed to power compute on the Internet Computer. Cycles play a role analogous to gas on the Ethereum blockchain, but in contrast exist within the network as an independent token. This is because the Internet Computer uses a “reverse gas” model, in which smart contract software is pre-charged with cycles that it later burns in the manner of fuel to power its own computations and maintain its data, such that, effectively, every unit of software pays for its own compute needs at the point of resource consumption. This means that users do not have to be inconvenienced by paying, as on traditional blockchains. Naturally, cycles must be transferred to smart contracts in advance so that they are available when needed. (For example, a hyperscale mass market internet service might be composed from billions of individual smart contract objects, and use management contracts to perform such distributions of cycles.) Consequently, cycles are transferable between contracts.
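
As a sketch of how such a transfer might look from the receiving side, here is a worker canister in Motoko using the ExperimentalCycles base library (the library exists, though, as its name warns, it may change; the actor name and amounts are my own illustration).

```motoko
import Cycles "mo:base/ExperimentalCycles";

// Receiving side: a worker canister accepts the cycles attached to a call,
// topping up the fuel tank that pays for its future compute and storage.
actor Worker {
  public func acceptCycles() : async Nat {
    let offered = Cycles.available(); // cycles attached by the caller
    Cycles.accept(offered)            // returns the amount actually accepted
  };

  public query func fuelRemaining() : async Nat {
    Cycles.balance()
  };
}

// The sending side, inside a management canister, would look like:
//   Cycles.add(1_000_000_000);                  // attach cycles to next call
//   let accepted = await worker.acceptCycles();
```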

The Internet Computer constantly scales out its compute capacity by inducting new node machines into its network, such that it never runs out, allowing the pricing of the resources used by smart contracts to be closely derived from the underlying cost of the hardware involved in providing them. This works very differently from traditional blockchains, where the compute capacity available to hosted smart contracts is finite, and remains so no matter how much additional hardware is added to their networks, requiring them to auction their finite capacity to whomever will pay the most using “transaction fee markets” (which is why computations on Ethereum can cost tens of dollars to run, while comparable computations on the Internet Computer cost only fractions of a cent). Because the cost of compute resources on the Internet Computer can be made approximately constant, it is far easier to manage the resources needed to run systems and services, whose operational costs become much more predictable. But the provision of compute resources at constant cost is only part of what is needed. On the Internet Computer, smart contracts must be pre-charged with cycles that pay for compute resources at the moment of consumption, which occurs in the future. This means that cycles should also have constant value, so that the number of cycles placed inside a smart contract predicts the amount of compute it can actually pay for.

In blockchain, tokens with constant value, which are often referred to as “stablecoins,” can play a useful role, but creating them has proved challenging in practice. One might hope that tokens like cycles could simply be collateralized by dollars held in a bank account, but a decentralized network cannot take such an approach with native tokens, since this would create a dependency upon fragile banking relationships, the administration of bank accounts, whoever must issue and redeem the tokens, and so on. Meanwhile, creating tokens with “stable value” using truly decentralized mechanisms that do not rely on outside assets has proven exceptionally difficult.

I myself spent much time on the cryptofinance newsgroup in 2014, where the underlying mechanisms involved in today’s DeFi stablecoin schemes were first proposed and discussed. The problem with the designs we explored, which remains the case with the stablecoin schemes in use today, is that they peg the price of stabilized tokens to external measures of value, such as the US dollar, using schemes that rely in one form or another upon other tokens locked inside smart contracts as collateral, such as ether and bitcoin. The value of that collateral is highly volatile, such that during periods of market turbulence, which occur regularly in crypto, the assumptions made about the collateral become false and black swan collapses occur. Thus, none of today’s DeFi stablecoin schemes, nor any that have been proposed in the past, provide a suitable means for ensuring that cycles have constant value. A much simpler and more trustworthy mechanism is needed, in which the stable value of cycles does not depend upon complex securitization schemes involving other tokens.

On the Internet Computer, as it turns out, cycles will tend toward constant value without need for a stablecoin scheme, thanks to the ongoing computation performed upon the network. First of all, the network allows users to convert ICP utility tokens they hold into cycles at a rate that is set by the NNS. The conversion rate will be anchored to IMF SDRs, which are made up from a basket of major fiat currencies, such that ICP utility tokens judged to be worth 0.65 SDR on external markets (currently very roughly equivalent in value to a Swiss franc or a US dollar) can be converted into 1 trillion cycles. Clearly, this provides a ceiling on the value of cycles, as nobody would ever buy them at a higher price when they could simply buy ICP utility tokens and convert them into cycles themselves. But what about the floor? What happens if someone buys a large volume of cycles, for example, and then decides that they do not need them, making them available for sale? Here things get interesting! Such sellers must price their cycles below the ceiling, pushing the market price down. Consequently, those who wish to acquire cycles to power computation, either directly or by reselling them to others, will purchase them because they are cheaper. Naturally, these cheaper cycles will eventually all be removed from the market and burned by computations performed on the Internet Computer, such that they disappear, and new cycles must once again be created from ICP utility tokens, returning their value to the ceiling. For this to hold, it is only necessary that the Internet Computer continue performing computations.
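
To make the ceiling concrete with a worked example (the market price here is hypothetical, chosen for easy arithmetic): since ICP worth 0.65 SDR converts into one trillion cycles, the cycles minted per ICP token scale with its market price,

$$\text{cycles per ICP} = \frac{p_{\text{ICP}} \text{ (in SDR)}}{0.65} \times 10^{12}.$$

If one ICP traded at, say, 1.30 SDR, converting it would yield two trillion cycles, and no rational buyer would pay more than that implied rate on a secondary market; this is precisely the ceiling described above.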

Making WebAssembly the World’s Virtual Machine

In the distant past, high-level computer programming languages were primarily compiled down to software in the form of low-level machine instructions that could run directly on computer hardware under the supervision of an operating system. For example, a program written in C would be compiled down to X86 instructions, an instruction set originally designed by Intel and later adopted by other processor manufacturers, which run directly on market-dominant X86 family silicon. But software in the form of low-level machine instructions has various drawbacks, including that its execution can be hard to sandbox (make secure), and that it must be run on a computer with a particular hardware architecture and arranged in a format that only works on some expected operating system.

To address these issues, interpreted languages were introduced, which allowed people to distribute high-level code without compiling it, but these suffer from poor performance. Consequently, another approach was to have software compilation target a “process virtual machine” — a virtual computer architecture that is implemented in software. For example, the Java language could be compiled down to low-level bytecode, which can be run on any Java Virtual Machine (JVM) implementation. Since a highly optimized virtual machine can be developed for any combination of operating system and computer hardware, and the JVM provides a secure sandbox in which bytecode can be run, this made it possible for software written in Java to run efficiently and securely anywhere.

The JVM was developed by Sun Microsystems, which was acquired by Oracle Corporation, and it has become mired in copyright and patent issues. Meanwhile, it was designed in such a way that it is only suitable for use with programming languages that rely on garbage collection, which popular and efficient languages such as Rust and C++ do not. Furthermore, its complexity and inherent non-determinism make it unsuitable for running smart contract code on blockchains. For such reasons, when the Ethereum project needed a virtual machine for its blockchain to run low-level smart contract bytecode compiled down from high-level languages such as Solidity, it chose to build its own Ethereum Virtual Machine (EVM). This involved a gallant R&D effort, but developing a highly secure and efficient virtual machine that can support the execution of complex and powerful software is a very major technical undertaking requiring substantial ongoing effort, and the EVM now greatly constrains what Ethereum smart contracts are capable of doing. There were few other options, however, when Ethereum was being designed and built in 2014.

In March 2017, an MVP specification for a new low-level instruction format called WebAssembly was proposed, for which anyone might build a virtual machine implementation. This new format brought many advantages. Among other things, it could support a wide variety of high-level programming languages, its bytecode was compact and could easily be run at speeds comparable to native machine instructions, and it intentionally provided a strong platform for advanced features such as formal software verification. The format soon became an increasingly important open standard developed by a W3C Community Group and W3C Working Group, and the project now has a large following. Crucially, WebAssembly code is now supported by all of the major web browser engines, so that it can play a part in the web experiences of billions of people. This has attracted the substantial resources necessary to develop a virtual machine “for the world,” and ensures that it benefits from testing on a vast scale. While the development of WebAssembly continues, it is proving suitable not only for client applications but also for back-end server applications and, most importantly, blockchains. Essentially, WebAssembly looks likely to become the virtual machine format of the internet.

The Internet Computer project was fortunate that early team member Timo Hanke introduced Andreas Rossberg, a co-designer of WebAssembly, to the DFINITY Foundation soon after the MVP standard was published, and Andreas became a Principal Researcher and Engineer. This ensured that WebAssembly became the low-level virtual machine format used by the Internet Computer from early in the project, and the Internet Computer is designed to take full advantage of both the current standard and the future evolutions that are coming. Because the Internet Computer canister framework runs WebAssembly bytecode, smart contracts can potentially be created using almost any programming language. Currently, the canister SDK developed by the DFINITY Foundation supports the development of smart contracts using the Rust language and Motoko, a new language developed by our languages division in an effort led by Andreas. (Support for several other languages is also being developed.)

Motoko is a modern, easy-to-use language that can be quickly learned by anyone who knows JavaScript. It is also powerful and expressive, and has been designed to maximize the value both of novel Internet Computer environment features, such as orthogonal persistence, and of many aspects of WebAssembly itself. The Internet Computer aims to cement WebAssembly as the virtual machine of the back end as well as the front end, while reimagining the back end as a blockchain that runs secure, scalable, efficient, and powerful smart contracts that connect to the web.
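
For a flavor of the language, here is a minimal counter canister (my own toy example). The count lives in an ordinary variable; orthogonal persistence means its value survives between calls without any database, and the stable keyword additionally preserves it across code upgrades.

```motoko
// A complete Motoko canister: `count` persists between calls automatically
// (orthogonal persistence), and `stable` carries it across upgrades.
actor Counter {
  stable var count : Nat = 0;

  public func increment() : async Nat {
    count += 1;
    count
  };

  public query func current() : async Nat {
    count
  };
}
```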

Completing the Blockchain Trinity

The Internet Computer is designed to complete a trinity of public blockchain flavors, which started with Bitcoin, and progressed through Ethereum. The need for this third major innovation derives from the essentially different purposes, design choices, and tradeoffs involved in the three networks, which can complement each other greatly.

Of course, the journey began with Bitcoin, which introduced the first true blockchain network to the world (although precursor concepts such as b-money provided stepping stones; I came across b-money while using Wei Dai’s Crypto++ library in 1999). In this first public network, the blockchain mechanism was more the enabler than the purpose, which was to bootstrap native value in cyberspace, a longstanding aim of cypherpunks. The blockchain mechanism was used to create cryptocurrency tokens within a ledger whose rules guaranteed their all-time supply would be fixed, such that their price would rise with demand, and which could be used to pay those hosting the network so that the network was self-sustaining. These tokens would be fungible, and could be held and transferred directly by anyone with an internet connection, in the manner of a digital substance that plays the roles of store of value and medium of exchange independently of the control, influence, or support of any person or organization.

Now, after more than 12 years of subsequent research and development, the design of Bitcoin seems remarkably simple. This simplicity has proven an enormous strength. As the first cryptocurrency, it seems unlikely that its pseudonymous founder, Satoshi Nakamoto, would have remained unscathed had he been developing such a disruptive and controversial invention as a public person, nor would he have been able to amplify his efforts through an organization, and so it was a great advantage that the project could be developed entirely by the contributions of an open source community of enthusiasts. Further, its simplicity gave it a purpose that is clear and unpolluted, and it is establishing itself as digital gold, with each bitcoin now worth more than $35,000, such that the market capitalization of all bitcoin exceeds $655 billion, making Bitcoin by far the most valuable blockchain network in existence. But the simplicity that is Bitcoin’s strength has also been very limiting for many potential applications of cryptocurrency.

The Bitcoin ledger essentially consists of three columns: an address, which plays the role of a bank account number; a balance of bitcoins at the address; and an access control script that, when unlocked by a new “transaction,” enables the balance of bitcoin at the address to be moved to new addresses. Once Bitcoin had been running for a few years, some people became intrigued with the idea of using the access control scripts as the foundation for other functionality, as per the Mastercoin project that Vitalik Buterin described in Bitcoin Magazine in 2013. Many things were attempted. In 2015, I briefly advised a project that sought to create “mirror assets” on the Bitcoin ledger, whose value would track real-world assets such as stocks and commodities by using interlocking access control scripts to create decentralized contracts for difference through techniques related to the design of the Lightning Network proposed by Joseph Poon. But the Bitcoin network proved an unsuitable foundation for more general purpose blockchain endeavors because its access control scripts support only limited functionality to protect the network against bad logic, the scripts disappear when their bitcoin balances are spent to new addresses, and the network is also relatively slow and expensive.

This led to Vitalik Buterin proposing Ethereum in 2013, inspiring an effort that launched the network in 2015. Essentially, Vitalik described the design of what some have called “highly programmable cryptocurrency.” In his concept, the last two columns of the Bitcoin ledger were effectively swapped around, such that scripts resided permanently at addresses and Ethereum’s balances of ether cryptocurrency could shuttle between them. The scripts were conceived as “smart contract” software and made far more powerful by running them on a new virtual machine that allowed them to be “Turing Complete,” which, in principle, would allow them to be used to implement any system. Since hosted logic could now contain infinite loops or otherwise engage in expensive computation, Ethereum introduced the concept of gas, which limited the amount of computation that any one transaction could perform and required those submitting transactions to pay for it. Meanwhile, Ethereum re-used Bitcoin’s Proof-of-Work mechanism, and other features of the network, primarily contenting itself with speeding it up using the GHOST enhancement previously proposed for use with Bitcoin.
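
The column swap can be pictured with two toy record types, a conceptual sketch of my own rather than either network’s actual data structures:

```motoko
// Conceptual only. Bitcoin: a balance guarded by an access control script
// that is consumed when the balance is spent to new addresses.
type BitcoinEntry = {
  address : Text;  // plays the role of a bank account number
  balance : Nat;   // bitcoins held at the address
  script : Text;   // unlocked by a transaction to move the balance
};

// Ethereum: the script (smart contract) resides permanently at the
// address, while balances of ether shuttle between such accounts.
type EthereumAccount = {
  address : Text;
  code : ?Text;    // contract code resident at the address, if any
  balance : Nat;   // ether held by the account
};
```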

Ethereum explosively expanded the scope of blockchain. Whereas before only cryptocurrency could be hosted inside the tamperproof trust zone in cyberspace that a blockchain creates, it could now be combined with powerful smart contract software within that zone. Very quickly, fascinating DeFi concepts began to appear, such as The DAO, which eventually fell prey to a security flaw but which proved the potential of decentralized models for creating financial enterprises in code using smart contracts. Ethereum would soon go on to power the ICO boom of 2017–18, controversially enabling projects around the world to raise billions of dollars directly from investors over the internet, again fundamentally transforming our world, even if much of the money raised in this emergent Wild West environment was squandered. Now, the network effects enabled by the ease of integrating smart contracts are driving amazing growth of the DeFi ecosystem Ethereum hosts. My view is that Ethereum introduced a new kind of blockchain, and it has proven a tremendous success.

I remain an avid Ethereum supporter and enthusiast, and during the years 2014–16 I regularly spoke at related events on technical matters. One part of the general vision, however, began to particularly enthrall me. Some Ethereum proponents had begun to speak of the concept of a “World Computer,” and this obsessed me, not least because I had devoted much time to studying how blockchains could be sped up using different network consensus schemes, and how blockchain capacity might be scaled out without bound by applying cryptography. Before I began to work full-time on blockchain in 2013, I had created an online game and the distributed systems behind it, which had successfully scaled to support millions of users. For me, to be worthy of the name, a “World Computer” would have to be capable of playing the role of humanity’s primary compute platform, supporting the construction of mass market internet services on-chain using smart contracts. This was the task I set myself to thinking about, and over time I had many conversations with project leaders in the Ethereum community.

Originally, I had no intention of launching a new blockchain network, and I widely proposed ideas with the hope that they might provide the foundation of some future version of Ethereum. By 2015, I was enjoying myself exploring techniques built upon random numbers, which I saw could be generated in an efficient and unstoppable manner using threshold cryptography in a decentralized network. To grab attention, I gave my concept the name DFINITY, an abbreviation of “Decentralized Infinity,” and created a simple website that remains accessible on the Internet Archive’s Wayback Machine. At the time, the concept of a blockchain that could run at web speed and host an unbounded volume of smart contract computation and data seemed truly implausible, and it was too great a leap for many in the blockchain community, especially since Ethereum was already advancing the game so far. This certainly wasn’t helped by the need to apply distributed computing techniques and cryptography in challenging ways that people were unfamiliar with. Eventually, I realized that aspects of the Ethereum project, and the nature of the network that had been created, meant that it could not provide a foundation through which the dream and technical direction I was proposing could be pursued. It was for this reason that the DFINITY project decided to create a new network.

Bitcoin, Ethereum, and the Internet Computer exist on a continuum that starts with pure cryptocurrency, moves through highly programmable cryptocurrency, and arrives at a “blockchain computer” that can play the role of a general purpose public compute platform. Creating a blockchain network that can run at web speed, increase its capacity without bound, host computation and data at a tiny fraction of the previous cost, support much more powerful smart contract software from which easily scalable dapps can be built, and serve content from smart contracts securely into the web browsers of end users, such that it can be used to build a far wider range of systems and services, necessitates radically different approaches across the board. It would be impossible to form the underlying network from nodes running in the cloud using a Proof-of-Stake scheme, for example. The network must use special standardized hardware within identified independent data centers, and it needs a powerful open governance system within its protocols so that it can evolve aspects of its architecture to scale capacity. The science and engineering involved is necessarily far more complex, which has meant that the DFINITY Foundation has had to build out a large R&D operation across multiple international research centers, and it has taken major expenditures and several years of work to reach Mercury.

Finally, after so much waiting, the Internet Computer will complete and dramatically expand the spectrum of powers that public blockchain offers. Going forward, I predict that the Bitcoin, Ethereum, and Internet Computer networks will add value to each other. How this will occur may already be seen in the way that Ethereum systems now wrap bitcoins and use them as collateral within DeFi schemes, in effect driving their utility and value. It will also be the case that the Internet Computer will expand the applications of the Ethereum network and provide its dapps with greater capabilities. In fact, efforts to integrate Ethereum with the Internet Computer, spurred by an earlier post I wrote, are already underway. Cryptographic systems drawn from the Internet Computer’s underlying Chain Key cryptography will be repurposed to enable its smart contracts to create Ethereum transactions. In the other direction, efforts are being made to mirror the entire Ethereum blockchain within Internet Computer smart contracts, which together will enable bi-directional calling between Ethereum and Internet Computer smart contracts, without slow, expensive hubs or bridges. Ethereum dapps might use the Internet Computer to securely serve web experiences to users rather than relying on trusted cloud services such as Amazon Web Services, for example. What is clear is that the Internet Computer will now help make blockchain more interesting and valuable than ever before.

The Remaining Path to Genesis

Mercury launched mainnet in alpha form, decentralizing the network by placing its nodes under the control of its Network Nervous System. The network will transition into beta when the NNS triggers the Genesis event. This will allow those who have acquired ICP utility tokens by contributing to the project, or through community participation, to withdraw them into “voting neurons,” swelling their number, which is currently in the tens, to around 50,000. Utility token holders will then be able to participate in network governance and earn voting rewards or dissolve their neurons to release the tokens locked inside, whereupon they might be converted into cycles that can power smart contract computation or be transferred.

Although the network is now decentralized and running across hundreds of node machines operated by independent parties around the world, transitioning to beta is a major undertaking. The DFINITY Foundation, the Internet Computer Association, and many independent external parties and community contributors will now set themselves to this important task, and we are within striking distance of this step. Genesis will be triggered by the NNS adopting a proposal that anyone can submit. Very likely, however, Genesis will only be triggered when the following important gates have been passed:

  • Releasing Code and Designs: The DFINITY Foundation must release all related source code, technical designs, and novel protocol math and cryptography for public inspection.
  • Educational Materials: The code, technical designs, and math involved are complex, and easy-to-digest educational materials must be made available to make them more accessible to the community, such that the true nature of the project is widely understood.
  • Developer Experience: Additional developer tools will be released to support developers who are building on the Internet Computer, together with an end-to-end implementation of CanCan, an open internet service that reimagines TikTok, which will help entrepreneurs bootstrap new hyperscale internet projects on blockchain.
  • Decentralization: Further information about the physical Internet Computer network and its participants, and about key organizations working to support the Internet Computer project, will be disseminated. The network is also still growing, and it is intended that it will have 896 node machines running from 32 data centers at Genesis.
  • Ecosystem Coordination: The newly formed Internet Computer Association will ramp up to provide a forum and help coordinate community stakeholders, such as independent data centers and network funding partners, and expand access to developer programs, making sure that everybody who wishes to participate has access to the necessary information and community support.
  • Technical Work (Final Features, Security Audits, Stress Tests): Before the network transitions to beta, the R&D team wishes to add a few important final features, and the security team will demand that final security audits and stress tests are passed.
  • Feature Roadmap: Some features will be missing at Genesis. For example, the network will activate a feature that protects the data stored on node machines from those with physical access to them only after Genesis, to make it easier to address any early bugs that arise. A feature roadmap will provide full details to support the effective planning of those building on the network.

The 20-Year Roadmap

Going forward, the DFINITY Foundation, the newly formed Internet Computer Association, and many other organizations shall work tirelessly to improve Internet Computer technology and support those participating in the ecosystem. For its part, the DFINITY Foundation plans to continue scaling out its R&D operations. Currently, its team members have collectively published more than 15,000 scientific papers, received almost 100,000 citations, and filed more than 200 patents. Many of our team members are well-known figures within the field of computer science, such as Jan Camenisch, our VP of Research, a famous ACM-decorated cryptographer, and many of our engineers come from senior roles at tech industry giants such as Google (our most common former employer). Although we are approaching 150 team members, we plan to continue expanding apace, and hope to double in size by the end of 2021 and then continue scaling our organization into the thousands. Our goals are not short-term but long-term, reflecting the nature of the project and the profound positive impact we expect it to have on the world. We will be just as determined and uncompromising in the future as we have been in the past. The launch of the alpha mainnet has kicked off an exciting 20-year roadmap.

Note: This roadmap expresses our aspirations and plans for the future, rather than guarantees. We hope you will help us get there. Perhaps even faster than envisioned here…

5 YEARS

In five years’ time, everybody who is interested in tech will have heard about the Internet Computer network, and there will be widespread understanding of its nature and purpose. Meanwhile, ever-increasing numbers of entrepreneurs and developer teams will be choosing to build mass market open internet services on the Internet Computer rather than using traditional IT. This will enable them to raise money more easily, better recruit and retain teams, and implement features that allow their new services to compete far more effectively. Building on the Internet Computer will have become a mainstream option, after some open internet services have led the way by succeeding on a grand scale, and many investors will insist that it is used. Schools and universities will also be teaching the Internet Computer and Motoko, feeding ever larger numbers of young developers with nothing to lose into the ecosystem. Meanwhile, open internet services will be devising ever more compelling features using tokenization and by leveraging DeFi functionality. Within the enterprise space, integrators and business consultants, inspired by the opportunity to help enterprises reinvent IT to make it secure and unstoppable, will be helping increasing numbers of enterprises build on the public platform, and pan-industry systems will be proliferating, following the lead of organizations such as Origyn.

10 YEARS

In 10 years’ time, it will be widely recognized by the tech community that the Internet Computer is on a likely trajectory that will one day make it humanity’s primary compute platform for building systems and services, and that the “open internet” will now near-certainly predominate over Big Tech’s closed proprietary ecosystem. Further, extraordinary growth within the DeFi ecosystem will have it approaching par with the traditional financial industry, generating more energy. The spirit and enthusiasm within the blockchain community will have spread out far and wide around the world, and vastly more people than ever before will be building on the internet rather than closed systems. The democratization of access to opportunity in tech to the 99 percent outside of Silicon Valley will have leveled the playing field and brought vastly more talent to bear. While Silicon Valley will remain a force to be reckoned with, its investors will increasingly be directing funds abroad to help support exciting and successful new services in far-flung locations where they could never have been built before. The provision of economic opportunity around the world will recruit many fervent new fans to the cause, further catalyzing the ecosystem. Very few computer science students will graduate without first having created a smart contract on the Internet Computer.

20 YEARS

In 20 years’ time, the open internet will finally be significantly bigger than Big Tech’s closed proprietary ecosystem, which will by then be in terminal decline, though it will take forever to disappear, for reasons similar to those that explain why COBOL code is still running. Much of global society’s crucial information infrastructure, systems, and services will by then be running on the open, unstoppable, and tamperproof Internet Computer blockchain network. This will bring about profound transformations in how things work, and support an unimaginably richer internet ecosystem that incorporates more innovation, collaboration, and dynamism, driving positive economic growth around the world. Much of what is considered the developing world today will have skipped the Big Tech ecosystem and will be running entirely on the open internet, providing advantages and efficiencies that help further equalize opportunity. Smart contract technology will by then have delivered deep and meaningful changes to how society operates around the world, improving personal privacy, freedoms, and sovereignty on a massive scale, and DeFi will be very significantly larger than traditional finance. Meanwhile, the network will have incorporated all kinds of new science, from quantum-safe cryptography to new features, such that it looks quite different from how it does today, and a new vanguard of researchers and engineers will be driving it forwards.

Please join us on this journey!

Wishing You All the Best and a Happy 2021, Dominic.
