From Storing the Past to Computing the Future: The Hyper Parallel Computer of AO

Published in YBB Capital · Apr 3, 2024 · 14 min read

Author: YBB Capital Researcher Zeke

Foreword

Today’s Web3 has converged on two main blockchain architectural designs, producing a certain aesthetic fatigue. Whether it is the proliferation of modular public chains or the new L1s that emphasize performance without demonstrating a clear advantage, their ecosystems are largely replicas of, or minor improvements on, the Ethereum ecosystem, offering highly homogeneous experiences that have long since ceased to surprise users. Arweave’s newly proposed AO protocol, however, is a breath of fresh air: it achieves very high-performance computing on a storage blockchain, even approaching a Web2-like experience. This seems to diverge significantly from the scaling methods and architectural designs we are familiar with. So what exactly is AO, and what logic underpins its performance?

Understanding AO

The name AO derives from Actor Oriented, a programming paradigm within the concurrent computation model known as the Actor Model. The overall design stems from an extension of SmartWeave and likewise follows the Actor Model’s core principle of message passing. Simply put, AO can be understood as a “super parallel computer” running on the Arweave network through a modular architecture. From the implementation perspective, AO is not the familiar modular execution layer but a communication protocol that standardizes message passing and data processing. The protocol’s core goal is to enable collaborative computing among different processes in the node network through message passing; depending on the complexity of the tasks, the network can scale to parallel computation of any size, ultimately giving Arweave, this “giant hard drive,” centralized-cloud-level speed, scalable computing power, and extensibility within a decentralized, trustless environment.

Architecture of AO

The architecture of AO shares some similarities with the “Core Time” segmentation and recombination that Gavin Wood introduced at last year’s Polkadot Decoded conference. Both aim to create a so-called “high-performance world computer” through the scheduling and coordination of computational resources. However, there are fundamental differences between them. Polkadot’s exotic scheduling deconstructs and reorganizes the relay chain’s block-space resources without significant changes to Polkadot’s architecture; although it breaks the limitations of a single parachain under the slot model, its ceiling is still bounded by Polkadot’s maximum number of idle cores. AO, in theory, could offer nearly unlimited computational power (depending on the level of network incentives) and greater freedom through horizontal expansion of nodes. Architecturally, AO standardizes data processing and message expression, and accomplishes message sorting, scheduling, and computation through three types of network units (subnets). According to official information, the functions of these units and their specific roles can be summarized as follows:

  • Processes: Processes can be seen as collections of instructions executed within AO. At initialization, a process can define its required computing environment, including the virtual machine, scheduler, memory requirements, and necessary extensions. Each process maintains a “holographic” state (each process can independently persist its state in Arweave’s message log; the holographic state is explained in detail in “The Verifiability Issue of AO” section below). This means processes can work independently, with dynamic execution handled by the appropriate computing unit. Besides receiving messages from user wallets, processes can also receive messages forwarded from other processes through the Messenger Unit (MU).
  • Messages: Every interaction between a user (or another process) and a process is represented by a message. These messages must conform to Arweave’s native ANS-104 data items to maintain a consistent native structure, facilitating Arweave’s storage of information. Messages can be likened to traditional blockchain transaction IDs (TX IDs), although they are not entirely the same.
  • Messenger Units (MU): MUs relay messages through a process called ‘cranking’, responsible for the system’s communication, ensuring seamless interactions. Once a message is sent, the MU routes it to the appropriate destination within the network (SU), coordinating interactions and recursively processing any generated outbox messages until all messages are processed. Besides relaying messages, MUs offer various functions, including managing process subscriptions and handling scheduled cron interactions.
  • Scheduler Units (SU): Upon receiving a message, SUs initiate a series of key operations to maintain the process’s continuity and integrity. After receiving a message, SUs assign a unique incremental nonce to ensure order relative to other messages within the same process. This assignment process is formalized through cryptographic signatures, ensuring authenticity and sequence integrity. To further enhance process reliability, SUs upload the signature assignment and messages to the Arweave data layer, ensuring message availability and immutability and preventing data tampering or loss.
  • Computing Units (CU): CUs compete in a peer-to-peer computing market to provide the service of resolving process states for users and SUs. Once state computation is complete, a CU returns a signed attestation with the specific message result to the caller. CUs can also generate and publish signed state attestations that other nodes can load, for a fee.
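Taken together, a message’s path through the three unit types can be sketched in Python. This is a toy model only: the function names, data shapes, and in-memory “Arweave” are illustrative assumptions, not AO’s actual wire format or API.

```python
# Hypothetical sketch of a message flowing MU -> SU -> CU.
# All names and structures here are invented for illustration.

arweave_log = []          # stand-in for Arweave's permanent data layer
process_nonces = {}       # per-process incremental nonce (SU-assigned)

def scheduler_unit(message):
    """SU: assign an incremental nonce per process and persist the assignment."""
    pid = message["process"]
    nonce = process_nonces.get(pid, 0)
    process_nonces[pid] = nonce + 1
    assignment = {**message, "nonce": nonce}
    arweave_log.append(assignment)      # upload to the data layer
    return assignment

def compute_unit(assignment, state):
    """CU: evaluate the assigned message against the process state."""
    state = dict(state)                 # processes share no state
    state["last_nonce"] = assignment["nonce"]
    state.setdefault("inbox", []).append(assignment["data"])
    return state

def messenger_unit(message, state):
    """MU: route the message to the SU, then have a CU evaluate it."""
    assignment = scheduler_unit(message)
    return compute_unit(assignment, state)

state = {}
state = messenger_unit({"process": "p1", "data": "hello"}, state)
state = messenger_unit({"process": "p1", "data": "world"}, state)
print(state["last_nonce"])   # 1
print(len(arweave_log))      # 2
```

Note how the SU’s nonce gives each process a total message order, while the CU’s work stays pure: given the same ordered log, any CU recomputes the same state.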

The Operating System AOS

AOS can be viewed as the operating system or terminal tool within the AO protocol, used for downloading, running, and managing processes. It provides an environment in which developers can develop, deploy, and run applications; on AOS, developers can use the AO protocol to build and deploy applications and interact with the AO network.

Operational Logic

The Actor Model advocates a philosophy that “everything is an actor.” All components and entities within this model are considered “actors,” each with its state, behavior, and mailbox. They communicate and collaborate through asynchronous messaging, allowing the entire system to organize and operate in a distributed and concurrent manner. The operational logic of the AO network follows suit, with components and even users abstracted as “actors” communicating through a messaging layer, linking processes together to establish a distributed work system capable of parallel computation without shared state.
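The “everything is an actor” idea can be made concrete with a minimal sketch: an actor owning private state and a mailbox, receiving messages asynchronously and processing them one at a time. This is a generic Actor Model illustration, not AO’s implementation.

```python
import queue
import threading

class Actor:
    """A minimal actor: private state, a mailbox, and a behavior.

    Illustrative sketch of the Actor Model only; not AO's actual API.
    """
    def __init__(self, name):
        self.name = name
        self.state = {}                 # private state, never shared
        self.mailbox = queue.Queue()    # asynchronous message inbox

    def send(self, message):
        self.mailbox.put(message)       # non-blocking send

    def run(self):
        while True:
            msg = self.mailbox.get()    # process one message at a time
            if msg is None:             # sentinel: shut down
                break
            self.handle(msg)

    def handle(self, msg):
        # behavior: update own state; a real actor may also send
        # messages to other actors here
        self.state[msg["key"]] = msg["value"]

counter = Actor("counter")
t = threading.Thread(target=counter.run)
t.start()
counter.send({"key": "hits", "value": 1})
counter.send(None)
t.join()
print(counter.state)  # {'hits': 1}
```

Because actors never share state and only exchange messages, any number of them can run concurrently without locks on shared data, which is exactly the property AO exploits for parallelism.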

Here is a brief description of the steps in the message-passing process:

1. Message Initiation:

  • Users or processes create a message to send requests to other processes.
  • The MU (Messenger Unit) receives the message and sends it to other services using a POST request.

2. Message Processing and Forwarding:

  • The MU processes the POST request and forwards the message to the SU (Scheduler Unit).
  • The SU interacts with Arweave storage or the data layer to store the message.

3. Retrieving Results by Message ID:

  • The CU (Computing Unit) receives a GET request, retrieves results based on the message ID, and evaluates the message’s status on the process. It can return results based on individual message identifiers.

4. Retrieving Information:

  • The SU receives a GET request and retrieves message information based on the given time range and process ID.

5. Pushing Outbox Messages:

  • The final step pushes all outbox messages, which involves checking the messages and newly spawned processes within the result object.
  • Depending on the results of this check, steps 2, 3, and 4 are repeated for each relevant message or spawn.
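The recursive outbox-pushing of step 5 can be sketched as a simple loop: evaluate a message, then push and evaluate anything it produced, until the outbox drains. The handler and message shapes below are invented for illustration.

```python
# Toy model of step 5: recursively push outbox messages until none remain.
# Message types and IDs are hypothetical, for illustration only.

def evaluate(msg):
    """Evaluate one message; return any outbox messages it generates."""
    if msg["type"] == "spawn-children":
        return [{"type": "noop", "id": f'{msg["id"]}.{i}'} for i in range(2)]
    return []   # a noop message produces no further messages

def push_message(msg, processed):
    processed.append(msg["id"])
    for out in evaluate(msg):        # repeat steps 2-4 per outbox message
        push_message(out, processed)

processed = []
push_message({"type": "spawn-children", "id": "m0"}, processed)
print(processed)  # ['m0', 'm0.0', 'm0.1']
```

The recursion terminates once every generated message has itself been evaluated and produced an empty outbox, mirroring the MU’s “process until all messages are processed” behavior described above.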

What Does AO Change?

Differences from Common Public Blockchains:

  • Parallel Processing Capability: Unlike networks such as Ethereum, where the base layer and each Rollup effectively run as a single process, AO supports an arbitrary number of processes running in parallel while keeping computation verifiable. Moreover, whereas those networks operate under a globally synchronized state, each AO process maintains its own independent state. This independence lets AO handle far more interactions and scale its computation, making it particularly suited to applications demanding high performance and reliability.
  • Verifiable Reproducibility: While some decentralized networks, like Akash and peer-to-peer system Urbit, do offer massive computational capabilities, they do not provide the verifiable reproducibility of interactions like AO, or they rely on non-permanent storage solutions to preserve their interaction logs.

Differences Between AO’s Node Network and Traditional Computing Environments:

  • Compatibility: AO supports various forms of processes; whether based on WASM or the EVM, they can technically be bridged to AO.
  • Co-creation Projects: AO also supports co-creation projects, where atomic NFTs can be published on AO, allowing data combined with the UDL (Universal Data License) to build NFTs on AO.
  • Data Composability: NFTs on AR and AO can achieve data composability, allowing an article or content to be shared and displayed across multiple platforms while maintaining the consistency and original properties of the data source. When content is updated, the AO network can broadcast these updated statuses to all relevant platforms, ensuring the synchronization and dissemination of the latest content status.
  • Value Feedback and Ownership: Content creators can sell their works as NFTs and transmit ownership information through the AO network, realizing the value feedback of content.

Support for Projects:

  • Built on Arweave: Leveraging Arweave’s features, AO eliminates vulnerabilities associated with centralized providers, such as single points of failure, data leaks, and censorship. Computation on AO is transparent and can be verified through decentralized trust-minimization mechanisms and the reproducible message logs stored on Arweave.
  • Decentralized Foundation: AO’s decentralized foundation helps overcome scalability limits imposed by physical infrastructure. Anyone can easily create an AO process from their terminal without specialized knowledge, tools, or infrastructure, ensuring even individuals and small-scale entities can have global impact and participation.

The Verifiability Issue of AO

After understanding AO’s framework and logic, a common question arises: lacking the global state characteristic of decentralized protocols or blockchains, how does merely uploading some data to Arweave achieve verifiability and decentralization? This is precisely where the ingenuity of AO’s design lies. AO itself is an off-chain implementation; it neither solves the verifiability problem itself nor changes the consensus. The AR team’s approach is to separate and then modularly connect the functions of AO and Arweave: AO handles communication and computation, while Arweave provides storage and verification. The relationship between the two is more akin to a mapping: AO only needs to ensure that interaction logs are stored on Arweave, projecting its state onto Arweave and thereby creating a holographic map. This holographic state projection ensures consistency, reliability, and determinism in computed state outputs. Message logs on Arweave can also trigger specific operations in an AO process (processes can wake themselves based on preset conditions and schedules and perform the corresponding dynamic operations).

According to Hill’s and Outprog’s sharing, if the verification logic is simplified further, AO can be imagined as an inscription computing framework built on a super parallel indexer. As we know, Bitcoin inscription indexers verify inscriptions by extracting JSON information from them and recording balance information in an off-chain database, completing verification through a set of indexing rules. Although an indexer performs off-chain verification, users can check inscriptions by switching between multiple indexers or running their own, so there is no need to worry about indexer misconduct. As mentioned earlier, data such as message orderings and the holographic states of processes are uploaded to Arweave. Therefore, based on the SCP (Storage-based Consensus Paradigm, which can be loosely understood as putting the indexing rules themselves on-chain; notably, SCP appeared much earlier than inscription indexers), anyone can reconstruct AO, or any process on AO, from the holographic data on Arweave. Users do not need to run a full node to verify a trusted state: just as with switching indexers, they can simply send query requests to one or more CU nodes through an SU. With Arweave’s high storage capacity and low cost, this logic lets AO developers implement a supercomputing layer far exceeding the functionality of Bitcoin inscriptions.
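The replay-based verification idea above can be sketched in a few lines: given the ordered message log persisted on storage and a deterministic transition function, any party recomputes the same state, so no particular compute node needs to be trusted. The transition function and log contents here are invented for illustration.

```python
# Sketch of replay-based verification: state is a pure fold over the
# ordered message log. Transition function and log are hypothetical.

from functools import reduce

def transition(state, message):
    """Deterministic state transition: same log in, same state out."""
    new = dict(state)
    new["count"] = new.get("count", 0) + 1
    new["last"] = message
    return new

log = ["deposit:5", "deposit:3", "withdraw:2"]   # ordered log on storage

state_a = reduce(transition, log, {})   # replayed by verifier A
state_b = reduce(transition, log, {})   # replayed by verifier B
assert state_a == state_b               # any honest replay agrees
print(state_a)  # {'count': 3, 'last': 'withdraw:2'}
```

This is the same trust model as switching between inscription indexers: the data layer fixes the inputs and their order, so disagreement between compute nodes is detectable by anyone willing to replay.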

AO and ICP

Let’s summarize the characteristics of AO with some key terms: a massive native hard drive, unlimited parallelism, unlimited computation, a modular architecture, and holographic state processes. All of this sounds particularly promising, but those familiar with various blockchain projects might notice a striking similarity between AO and the once-celebrated “Internet Computer” project, ICP.

ICP was once touted as the last “king-level” project of the blockchain world, heavily backed by top institutions, and reached an FDV of 200 billion USD during the frenzied bull market of 2021. As the tide receded, however, ICP’s token value plummeted; by the 2023 bear market it had fallen to nearly 1/260th of its all-time high. Yet setting token-price performance aside and re-evaluating ICP today, its technology still has many unique aspects. Many of the impressive advantages AO boasts today were also present in ICP. So, will AO fail as ICP did? Let’s first understand why the two are so similar. Both ICP and AO are designed around the Actor Model and focus on locally executed computation, which is why they share so many features. An ICP subnet blockchain consists of independently owned and controlled high-performance hardware devices (node machines) running the Internet Computer Protocol. The Internet Computer Protocol is implemented by many software components, bundled together as a replica, which replicates state and computation across all nodes of the subnet blockchain.

ICP’s replication architecture can be divided into four layers from top to bottom:

  • Peer-to-Peer (P2P) Network Layer: Used for collecting and announcing messages from users, other nodes in their subnet blockchain, and other subnet blockchains. Messages received by the peer layer are replicated across all nodes in the subnet to ensure security, reliability, and resilience.
  • Consensus Layer: Selects and orders messages received from users and other subnets to create blockchain blocks, which are notarized and finalized through Byzantine fault-tolerant consensus, forming an evolving blockchain. The finalized blocks are passed to the message routing layer.
  • Message Routing Layer: Used for routing user and system-generated messages between subnets, managing input and output queues for Dapps, and arranging message execution.
  • Execution Environment Layer: Computes deterministic calculations involved in executing smart contracts by processing messages received from the message routing layer.

Subnet Blockchains

The so-called subnets are collections of interacting replicas that run separate instances of the consensus mechanism to create their own blockchains, on which a set of “canisters” can run. Each subnet can communicate with other subnets and is controlled by the root subnet, which delegates its authority to the individual subnets using chain-key cryptography. ICP uses subnets to allow for infinite scalability. The problem with traditional blockchains (and with each individual subnet) is that they are limited by the computational power of a single node machine, since every node must run everything that happens on the blockchain to participate in consensus. Running many independent subnets in parallel lets ICP break through this single-machine barrier.

Why It Failed

As described above, the goal of ICP’s architecture, simply put, is a decentralized cloud server. A few years ago this concept was as striking as AO is today, but why did it fail? In short, it achieved neither Web3 nor the functionality of centralized cloud services, leaving it in the awkward position of being neither one nor the other. There are three main issues. First, ICP’s program unit, the Canister (also referred to as a “container” in this article), is somewhat similar to AO’s AOS and processes, but not the same. ICP’s programs are encapsulated within Canisters and are invisible from the outside; specific interfaces are required to access their data. Under asynchronous communication, this is unfriendly to contract calls between DeFi protocols, which is why ICP failed to capture the corresponding financial value during DeFi Summer.

The second issue is the extremely high hardware requirements, which undermined the project’s decentralization. The minimum hardware configuration for an ICP node at the time was exaggerated even by today’s standards, far exceeding Solana’s requirements and demanding even more storage than storage-focused blockchains.

The third issue is the lack of an ecosystem. Even now, ICP is a high-performance blockchain, but if there are no DeFi applications, what about other applications? Unfortunately, ICP has not produced a killer application since its launch; its ecosystem captured neither Web2 nor Web3 users. After all, with decentralization this limited, why not simply use rich and mature centralized applications? Nevertheless, it is undeniable that ICP’s technology remains top-notch. Its reverse gas model, high compatibility, and unlimited scalability are still attractive, and necessary for onboarding the next billion users. In the current AI wave, if ICP can leverage its architectural advantages, there may yet be a chance for a turnaround.

So, returning to the question above: will AO fail as ICP did? Personally, I believe AO will not repeat those mistakes. The latter two reasons for ICP’s failure do not apply to AO: Arweave already has a solid ecological foundation, and holographic state projection solves the centralization problem; AO is also more flexible in terms of compatibility. The greater challenges may lie in the design of the economic model, support for DeFi, and an age-old question: in non-financial, non-storage domains, what form should Web3 take?

Web3 Should Not End with Narratives

In the world of Web3, “narrative” is undoubtedly one of the most frequently used words, and we have grown accustomed to evaluating the value of most tokens through a narrative lens. This stems naturally from the predicament that most Web3 projects have grand visions but are awkward to use. By contrast, Arweave already has many fully realized applications with Web2-level experiences; if you have used projects like Mirror or ArDrive, you would hardly feel any difference from traditional applications. However, as a storage blockchain, Arweave still faces significant limitations in value capture, and computation may be an inevitable path. Especially today, with AI a major trend and many natural barriers still standing between Web3 and AI at this stage (as discussed in our past articles), Arweave’s AO, with its non-Ethereum modular architecture, provides promising new infrastructure for Web3 x AI. From the Library of Alexandria to hyper parallel computers, Arweave is forging a paradigm of its own.

About YBB

YBB is a Web3 fund dedicated to identifying Web3-defining projects, with a vision of creating a better online habitat for all internet residents. Founded by a group of blockchain believers who have actively participated in the industry since 2013, YBB is always willing to help early-stage projects evolve from 0 to 1. We value innovation, self-driven passion, and user-oriented products, while recognizing the potential of crypto and blockchain applications.

Website | Twitter: @YBBCapital

Reference Articles:

  1. AO Quick Start: Introduction to the Super Parallel Computer: https://medium.com/@permadao/ao-快速入门-超级并行计算机简介-088ebe90e12f
  2. X Space Event Record | Is AO an Ethereum Killer, How Will It Drive New Narratives in Blockchain?: https://medium.com/@permadao/x-space-活动实录-ao-是不是以太坊杀手-它将怎样推动区块链的新叙事-bea5a22d462c
  3. ICP White Paper: https://internetcomputer.org/docs/current/concepts/subnet-types
  4. AO CookBook: https://cookbook_ao.arweave.dev/concepts/tour.html
  5. AO — The Super Parallel Computer Beyond Your Imagination: https://medium.com/@permadao/ao-你无法想象的超并行计算机-1949f5ef038f
  6. A Multi-angle Analysis of ICP’s Decline: Unique Technology and a Sparse Ecosystem: https://www.chaincatcher.com/article/2098499
