Just some rambling about technology, not even a proper article. Let’s chat about AO.
Global Shared Hard Drive
In early July 2020, we heard about Arweave for the first time. By the second week, we got down to discussing the possibility of trustless computing in a café. Imagine: Arweave as the tape of a Turing machine, with ubiquitous state machines being the clients in users’ hands, where the users’ operating systems and all running programs would be downloaded from Arweave. A global, omnipresent computing unit, sharing a massive blockchain hard drive. Every application running under this system could achieve consensus and trustlessness.
The conclusion of the discussion was that this goal would be hard to achieve: giants like Microsoft and Apple would clearly be unlikely to put their operating systems and applications on Arweave.
Now, AO has made all this possible. Initially, when designing the MSG Protocol (the prototype of AO) with Sam, I thought of it as a Kafka on the blockchain. However, the focus isn’t on providing decentralized message queues for applications, but on replacing HTTP communication in the client/server architecture with trustless msg communication. If both user requests and server responses go through AO, we will create a truly decentralized internet ecosystem on AO, just as envisioned in July 2020.
AO has the potential to move the entire internet onto Arweave. Arweave, known as the Library of Alexandria, is no longer limited to storing the past. With AO, we can document today’s stories and distribute the value of the future in a decentralized manner.
Controversy
Recently, there has been some controversy around AO, mainly focused on two issues:
1. How does AO achieve verifiability?
First and foremost, AO does not address verifiability directly. Verifiability comes from Arweave’s immutable storage. Arweave stores the holographic data of every AO Process (including the holographic data of AO itself). Anyone can recover AO and any thread on AO using this holographic data. This is guaranteed by mathematics and is verifiable! This is based on the storage consensus paradigm, often referred to as SCP.
Key point: We know that UTXO transactions are verified before being added to the blockchain, but under the SCP paradigm, any data can be added to the Arweave blockchain. This is significantly different from BTC’s UTXO model. Doesn’t that open the door to replay attacks or double-spend transactions and break verifiability? No: an SCP program simply replays all of the holographic transactions in order, and duplicate or invalid transactions are discarded by the SCP program itself.
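To make that concrete, here is a minimal sketch of an SCP-style replay in Go. The Tx/State types, field names, and the toy transfer rule are hypothetical, invented for illustration only; the point is simply that the program replays the ordered holographic data and drops duplicates and overspends on its own.

```go
package main

import "fmt"

// Tx is a hypothetical transfer transaction as stored on Arweave.
type Tx struct {
	ID     string // unique transaction ID on Arweave
	From   string
	To     string
	Amount uint64
}

// State is the result of replaying every transaction in stored order.
type State struct {
	Balances map[string]uint64
	Seen     map[string]bool // transaction IDs already processed
}

// Replay walks the transaction log in the order Arweave stored it,
// applies valid transfers, and discards duplicates and overspends.
func Replay(genesis map[string]uint64, txs []Tx) State {
	s := State{Balances: genesis, Seen: map[string]bool{}}
	for _, tx := range txs {
		if s.Seen[tx.ID] {
			continue // replayed/duplicate transaction: discard
		}
		s.Seen[tx.ID] = true
		if s.Balances[tx.From] < tx.Amount {
			continue // would double-spend: discard
		}
		s.Balances[tx.From] -= tx.Amount
		s.Balances[tx.To] += tx.Amount
	}
	return s
}

func main() {
	txs := []Tx{
		{ID: "t1", From: "alice", To: "bob", Amount: 6},
		{ID: "t1", From: "alice", To: "bob", Amount: 6},   // duplicate: discarded
		{ID: "t2", From: "alice", To: "carol", Amount: 9}, // overspend: discarded
	}
	fmt.Println(Replay(map[string]uint64{"alice": 10}, txs).Balances)
	// map[alice:4 bob:6]
}
```

Whoever replays the same ordered data arrives at the same state, which is exactly where the verifiability comes from.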
2. We all know that SCP applications are verifiable. But how does AO address the verifiability issue? Do users need to run a full node?
It’s inevitable that readers will nitpick: do users have to recompute the entire ledger to verify anything? That seems to contradict the very problem BTC/ETH’s Merkle tree is meant to solve, where the Merkle tree is the core of verification and the Merkle root is committed to the blockchain through PoW. Now you’re telling me there is no Merkle tree on AO? So AO cannot be verified, there is no consensus, it must be a scam! This brings the discussion back to question 1. Readers typically get stuck on this ‘verifiability vs. running a full node’ problem instead of diving into AO’s design architecture and principles.
Key point: AO does not address the verifiability issue itself, because the functions of AR and AO are completely separate. AR handles immutable storage to ensure security and verifiability; both Merkle trees and consensus can be found there! The consensus here is over the order of the data, not over the state computed from the data. AO can compute over the ordered data on AR and generate states, but it cannot alter the order of the data on AR, which means AO cannot change the consensus. Verifying computed states through PoW/PoS is the on-chain computation paradigm, which is completely different from SCP.
AO is responsible for computing over the immutable data and displaying the states produced by those computations. We use SCP to solve the verifiability problem instead of Merkle trees or PoW/PoS. If you’re still with us, let’s take a look at AO in practice:
AO has implemented a verifiable Token using SCP, which incentivizes users to provide accurate data through its economic model (similar to Chainlink’s oracle). Please bear in mind that AO focuses on displaying state rather than ensuring verifiability. The Token’s economic model slashes nodes that provide incorrect states and rewards (mints for) those that provide correct ones.
Another key point: if AO doesn’t generate consensus, how can it perform slash and mint actions? Once again, AO is an SCP application. Both the query and the response events are added to the Arweave blockchain, and the AO SCP program loads these two events (query and response) and calculates the results for mint and slash. It’s clearer to show you the code: https://github.com/outprog/slash-demo/blob/main/vm/vm_test.go.
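To show the shape of that logic without reproducing the demo, here is a hedged Go sketch of how an SCP program might settle mint and slash from the two on-chain events. The QueryEvent/ResponseEvent/Ledger types and the compute function are made up for illustration and are not the actual slash-demo API:

```go
package main

import "fmt"

// QueryEvent and ResponseEvent stand in for the query/response records stored
// on Arweave. All types and field names here are hypothetical.
type QueryEvent struct {
	ID    string // ID of the query as recorded on Arweave
	Input string // what the user asked the CU to compute
}

type ResponseEvent struct {
	QueryID  string // links the response back to its query
	Node     string // the CU that signed this response
	Reported string // the state the CU claims to have computed
}

// Ledger tracks each CU's stake and minted rewards.
type Ledger struct {
	Stake  map[string]int64
	Reward map[string]int64
}

// compute is whatever deterministic function the Process defines.
func compute(input string) string { return "state:" + input }

// Settle recomputes the expected state from the query, then mints for a
// correct response or slashes the responder's stake for an incorrect one.
func Settle(l *Ledger, q QueryEvent, r ResponseEvent) {
	if compute(q.Input) == r.Reported {
		l.Reward[r.Node] += 1 // mint: the CU reported the correct state
	} else {
		l.Stake[r.Node] -= 10 // slash: the CU reported an incorrect state
	}
}

func main() {
	l := &Ledger{Stake: map[string]int64{"cu1": 100, "cu2": 100}, Reward: map[string]int64{}}
	q := QueryEvent{ID: "q1", Input: "balance(alice)"}
	Settle(l, q, ResponseEvent{QueryID: "q1", Node: "cu1", Reported: "state:balance(alice)"})
	Settle(l, q, ResponseEvent{QueryID: "q1", Node: "cu2", Reported: "state:wrong"})
	fmt.Println(l.Stake, l.Reward) // cu1 is minted a reward, cu2 is slashed
}
```

Because both events are immutable on Arweave, anyone replaying them reaches the same mint/slash decisions.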
The conclusion is: users don’t need to run a full node for an application; service operators run the CUs (Compute Units). A user submits a query request to AO, which is allocated to a specific CU by the SU (Scheduler Unit). The CU computes the state according to the user’s request, signs it with the compute node’s key (the signed result is also added to the blockchain), and returns the state to the user. If the user doesn’t trust a single CU, they can request more CUs for more trustworthy states. Each state returned by a CU is signed by that CU’s node (and thus verifiable). If a CU provides an incorrect state, its stake will be slashed. For specific practices, see Sam’s X post: https://twitter.com/samecwilliams/status/1764023657058148718.
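A hedged sketch of the user-side check on a single CU’s reply, assuming an invented SignedState layout (AO’s real wire format and key scheme are not shown here); the only point is that the user verifies the CU’s signature locally instead of running a node:

```go
package main

import (
	"crypto/ed25519"
	"fmt"
)

// SignedState is a hypothetical container for what a CU hands back to a user:
// the state it computed plus its signature over that state.
type SignedState struct {
	QueryID string
	State   string
	CUKey   ed25519.PublicKey
	Sig     []byte
}

// Verify checks the CU's signature locally; no node is required.
// The "queryID|state" message layout is an invented convention for this sketch.
func Verify(r SignedState) bool {
	msg := []byte(r.QueryID + "|" + r.State)
	return ed25519.Verify(r.CUKey, msg, r.Sig)
}

func main() {
	// CU side: sign the computed state (key generation stands in for the CU's identity).
	pub, priv, _ := ed25519.GenerateKey(nil)
	msg := []byte("q1|balance(alice)=42")

	// User side: check the returned state against the CU's public key.
	r := SignedState{QueryID: "q1", State: "balance(alice)=42", CUKey: pub, Sig: ed25519.Sign(priv, msg)}
	fmt.Println("signature valid:", Verify(r)) // true
}
```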
No Oracles? Oracles Are Everywhere!
When data needs to interact with a blockchain, we require oracles to attach their “divinations” to the chain. The oracles that blockchains need are produced by a group of users through multi-signature; the divinations that humans need from oracles come from the consensus generated by blockchain algorithms. However, when humans read this consensus, they often need a third party to convey it, a well-known example being infura.io. We tend to trust the messages from Infura, but are these messages authentic? (Is this melon ripe?)
It’s important to note:
- The trust in BTC/ETH is confined to the on-chain environment and is hard to share with off-chain environments. That is, when users request information from BTC/ETH nodes, they can only obtain the state via the HTTP protocol, and they have no way to verify the node they queried. To verify state themselves, users must run a light node or a full node.
- In the AO/SCP network, both user requests and the states returned by nodes are signed, and all records are stored on Arweave as “holographic data” to ensure verifiability. Users don’t need to run any node to obtain a trustworthy state. In the AO/SCP model, all network information, including queries and responses (plain HTTP requests excluded), is recorded on-chain. AO addresses the last-mile problem of trustlessness.
In engineering practice, AO/SCP breaks down the concept of on-chain/off-chain, integrating trusted computing and oracles into a unified system. This system has decentralized characteristics, which eliminates the barriers between Web2 and Web3. AO/SCP and on-chain computing represent two completely different paradigms. This system can be viewed as a global computer entirely composed of oracles, where the divinations provided by oracles are immutable objective truths.
Elastic Verification
I don’t know when consensus became a binary issue: either there is consensus or there isn’t. Is there no middle ground between the two?
This binary perspective is evident in the blockchain industry. You’re either a god or a piece of shit. Much like religious belief, it’s either to be or not to be.
However, AO offers a completely different consensus architecture — rooted in the enduring nature of Arweave and the SCP paradigm, providing a solid foundation of consensus, or what we often refer to as “verifiability.” Yet, the extent to which applications choose to verify this consensus is flexible.
The discussion stems from an ongoing debate on X, where people often get stuck on the two controversies mentioned above. After a series of arguments, the question arose: how do two AO threads verify a msg? It’s well understood that all msgs are signed, so we won’t delve into that. But how does a Process know whether a received msg is trustworthy? We simulate two trust models below to explain this.
Imagine a compute unit CU1, running two processes P1 and P2, denoted as CU1(P1, P2).
Now we have: CU1(P1, P2) & CU2(P3, P4) & CU3(P1) & CU4(P1).
Compute goal: P3 in CU2 requests trustworthy computation information from P1.
Single Trust Model
- The code in P3 sends a computation request to P1.
- The SU scheduler assigns P3’s request to CU4(P1) for computation.
- CU4(P1) responds with the computation result, returning it to P3.
At this point, P3 fully trusts the computation result from P1 in CU4 and proceeds with its operations.
Multi Trust Model
- The code in P3 sends a computation request to P1, and P3 requests the SU to allocate multiple compute units.
- The SU assigns P3’s request to CU1(P1), CU3(P1), and CU4(P1).
- CU1(P1), CU3(P1), and CU4(P1) respond with computation results.
Now, P3 receives multiple results and can compare them to decide on their trustworthiness. For example, P3 may require that all responses from nodes be identical. Or, P3 may require that 2/3 of the results match exactly.
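That judgment can be nothing more than a small comparison function inside P3’s code. A minimal sketch in Go, with an illustrative threshold and plain-string states (each Process defines and implements its own rule):

```go
package main

import "fmt"

// Agree is a sketch of one possible judgment rule for P3: accept a state only
// if at least 2/3 of the queried CUs returned exactly the same result.
func Agree(results []string) string {
	counts := map[string]int{}
	for _, r := range results {
		counts[r]++
	}
	quorum := (2*len(results) + 2) / 3 // ceil(2n/3)
	for state, n := range counts {
		if n >= quorum {
			return state // enough CUs agree: treat this state as trustworthy
		}
	}
	return "" // no quorum: P3 can retry with more CUs or reject
}

func main() {
	fmt.Println(Agree([]string{"s1", "s1", "s2"})) // "s1": 2 of 3 CUs agree
	fmt.Println(Agree([]string{"s1", "s2", "s3"})) // "": no state reaches 2/3
}
```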
The trust model is just one development pattern on AO, and developers can create judgment rules based on their own trust requirements. The program in P3 could even require computations from 100 CUs and demand that all 100 results match exactly. It all depends on how the developers implement the code in P3.
Thus, you can decide the trust model and the verification cost you need. But remember: the ultimate consensus security is still guaranteed by the perpetual nature of Arweave and SCP!