The Todalarity is Here, Part One: SingularityNET / TODA Synergy at the Core of the Emerging Global Brain — Ben Goertzel
For the last few months, we at SingularityNET Foundation have been exploring the potential of a close partnership with the family of organizations behind the TODA protocol — a thoroughgoing collaboration spanning the technical, business and community-organization aspects of what we do.
The exploration has now proceeded far enough that we have decided to advance the SingularityNET/TODA partnership full speed ahead — including, among other initiatives, the creation of a new “product accelerator” called Todalarity, with a mission of helping promising new AI-related startups to integrate their products into the TODA and SingularityNET ecosystems.
To understand why the TODA/SingularityNET collaboration makes so much sense, one has to look carefully at the essential missions and architectures of the two projects, the historical contexts that produced each of them, and the futures that each is working to build.
In this series of three blog posts, we’re going to take a fairly deep dive — but those who bear with us till the end will be rewarded with a genuine understanding of the profound potential that SingularityNET, TODA and Todalarity, working together, have to seed the emergence of the next level of Internet intelligence. By this I mean both immediately practical, applied AI products and services — and, slightly longer term, the transition from today’s narrow AI systems to powerful AGI systems resident in and emergent from the global AI network.
Online information resources regarding TODA are in rapid development this fall; for now Toda.Network, TODAQ and Todalarity are the places to look.
From Yesterday’s Internet to Today’s Global Brain
Many of the core concepts behind SingularityNET were articulated first in my 2001 book Creating Internet Intelligence. From 1995 when I first began experimenting with the Java programming language — the first language that made it really straightforward and simple to write software that used the Internet, rather than some particular computer, as its playground — it was clear to me that the path to advanced Artificial General Intelligence, and to maximum human benefit via AI technology, was going to be through harnessing the collective computational power of the Internet as a whole.
The value I saw in the Internet as an AI computing fabric wasn’t just in the sheer amount of computing power and data it aggregated. It was also in the flexible, open-ended architecture that it brought. Human brains and bodies have a tremendous number of degrees of freedom — each cell in the body has its own unique intelligence, and the ability humans have to adapt to new circumstances and goals in creative ways comes out of this internal complexity and variability. The individual computers and computer programs in common use are much more regimented in their structure and dynamics, leading to the brittleness one sees in most modern software, AI included. The Internet as a whole, on the other hand, possesses a richness and complexity on multiple scales that seems in some ways more capable of displaying creative intelligence meeting or exceeding that of biological systems.
Perhaps, I began seriously thinking in the late 1990s, the first truly powerful AGI systems would emerge not from individual intelligent software programs, but from heterogeneous networks of interacting software programs.
That vision is still one that animates me, but the infrastructure of that era wasn’t quite there yet. Today in 2019, things have gotten a lot more interesting.
In the late 90s, we had seen the transition from the old-style Internet, focused on textual communication and remote access to computing resources, to the “Web,” focused on a richer variety of media. We were in the midst of the transition from the original Web to “Web 2.0,” in which Web pages — the most common nodes in the distributed information network of the Internet — ran embedded software code, allowing different Web pages to communicate and coordinate with each other. And we were still a long way from the virtualization and containerization revolution, which would usher in the current Internet era, wherein many of the nodes in the Internet information network are simulated machines of various sorts.
The containerization revolution was one of the developments making the SingularityNET architecture possible. A SingularityNET network is a collection of “AI agents” that offer AI services to external clients and to other AI agents. By providing tools that make it easy for AI agents to describe the services they offer, to find other agents meeting given criteria, and to rate the quality of other agents, SingularityNET fosters the emergence of complexes of AI agents possessing a synergetic combined intelligence richer and greater than the sum of the intelligences of the component agents. On the software level, this is enabled by Docker, LXC and other containerization tools that make it easy for diverse AI systems to carry out a basic level of operations and interactions in a uniform way.
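To make the describe/discover/rate cycle concrete, here is a minimal Go sketch of what a discovery step over a registry of agent descriptors might look like. The types, field names and rating scheme are purely illustrative assumptions for exposition, not SingularityNET’s actual registry schema or API:

```go
package main

import "fmt"

// ServiceDescriptor is a hypothetical, simplified sketch of the metadata a
// containerized AI agent might publish so other agents can discover it.
// The fields are illustrative, not SingularityNET's actual registry schema.
type ServiceDescriptor struct {
	AgentID      string   // unique identifier of the agent
	ServiceName  string   // e.g. "image-captioning"
	Tags         []string // searchable capability tags
	PricePerCall uint64   // price in the network's smallest token unit
	Rating       float64  // aggregate quality rating assigned by other agents
}

// findAgents sketches criteria-based discovery: filter the registry by a
// capability tag and a minimum quality rating.
func findAgents(registry []ServiceDescriptor, tag string, minRating float64) []ServiceDescriptor {
	var matches []ServiceDescriptor
	for _, d := range registry {
		for _, t := range d.Tags {
			if t == tag && d.Rating >= minRating {
				matches = append(matches, d)
				break
			}
		}
	}
	return matches
}

func main() {
	registry := []ServiceDescriptor{
		{AgentID: "agent-1", ServiceName: "caption", Tags: []string{"vision"}, PricePerCall: 10, Rating: 4.5},
		{AgentID: "agent-2", ServiceName: "summarize", Tags: []string{"nlp"}, PricePerCall: 5, Rating: 4.0},
	}
	for _, d := range findAgents(registry, "vision", 4.0) {
		fmt.Println("found:", d.AgentID, d.ServiceName)
	}
}
```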
Compute power and memory are generally scarce resources for AI systems — so far at least, no matter how powerful the computing infrastructure gets, the latest and greatest AI algorithms find a way to fruitfully leverage as much compute power as their owners are willing to throw at them. It follows that if one wants to create a global network of AI agents self-organizing into an emergent global intelligence, one has to deal effectively with the allocation of resources in the network. One also has to deal adeptly with issues of overall network control — since along with a scarcity of resources, almost inevitably follows a struggle for power.
A global network of AI agents must have an effective value-propagation mechanism at its core, because the “assignment of credit” problem — determining which components of a system are responsible, and to what extent, for a given good or bad outcome — is key to every complex cognitive system. At the infrastructure level, however, there has historically been no reasonably satisfactory method of exchanging value in complex networks of Internet-based agents.
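As a toy illustration of what value propagation means here, the following Go sketch splits a payment among the agents that contributed to answering a query, in proportion to credit weights assumed to come from some upstream assignment-of-credit process (which is, of course, the genuinely hard part, and is not modeled here):

```go
package main

import "fmt"

// Contribution records how much an agent contributed to answering a query.
// The weights are assumed to be produced by some upstream credit-assignment
// process; computing them well is the hard cognitive problem.
type Contribution struct {
	AgentID string
	Weight  float64 // relative share of credit; need not be normalized
}

// propagateValue splits a payment among contributing agents in proportion
// to their assigned credit -- a toy illustration of value propagation, not
// SingularityNET's or TODA's actual mechanism.
func propagateValue(payment uint64, contribs []Contribution) map[string]uint64 {
	var total float64
	for _, c := range contribs {
		total += c.Weight
	}
	shares := make(map[string]uint64)
	for _, c := range contribs {
		shares[c.AgentID] = uint64(float64(payment) * c.Weight / total)
	}
	return shares
}

func main() {
	contribs := []Contribution{
		{AgentID: "vision-agent", Weight: 2.0},
		{AgentID: "language-agent", Weight: 1.0},
	}
	fmt.Println(propagateValue(300, contribs)) // map[language-agent:100 vision-agent:200]
}
```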
Bitcoin was the first major breakthrough in this direction. The Bitcoin design addressed some fascinating and peculiar requirements: to be a secure mechanism for storing and transmitting value, operational in a purely decentralized way without any privileged controller, and capable of functioning in the context of a large or small network of participants unpredictably separated in space and time. The replicated ledger architecture underlying Bitcoin, implemented using the famous chain of blocks, fulfills these requirements admirably well.
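For readers who haven’t peeked under the hood, the “chain of blocks” reduces to a simple idea: each block commits to the hash of its predecessor, so rewriting history means redoing every hash downstream. A deliberately stripped-down Go sketch (real Bitcoin blocks also carry Merkle roots of transactions, timestamps, proof-of-work nonces and more):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Block is a toy illustration of the hash-linking at the heart of the
// replicated-ledger design: each block commits to its predecessor's hash.
type Block struct {
	PrevHash [32]byte
	Data     string
}

// Hash commits to both the block's contents and its predecessor's hash,
// which is what chains the blocks together.
func (b Block) Hash() [32]byte {
	return sha256.Sum256(append(b.PrevHash[:], []byte(b.Data)...))
}

func main() {
	genesis := Block{Data: "genesis"}
	second := Block{PrevHash: genesis.Hash(), Data: "alice pays bob 1 BTC"}
	fmt.Printf("chain tip: %x\n", second.Hash())
}
```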
Bitcoin, however, enshrined in its design one particular way of encapsulating value. Attempts to generalize its scope, such as colored coins, fell relatively flat. There was no suitably simple way to use the Bitcoin infrastructure to implement the variety of value-management mechanisms that the modern “Internet of value” requires.
Ethereum solved this problem, in concept at any rate, by introducing the notion of “smart contracts” — which are in most cases neither smart nor contracts, but rather persistent scripts capable of programmatically controlling value transactions executed via a secure and decentralized protocol. The Solidity scripting language used for coding Ethereum smart contracts has various serious limitations — for instance, it doesn’t make it easy to define simpler domain-specific sublanguages handling the limited variety of smart contracts needed in specific application contexts. Compared to what came before, however, it was amazing when it first emerged: the first practical tool that could be used, relatively simply, to create a secure, decentralized world computer.
Ethereum contained many innovations in its design, but it inherited the replicated ledger architecture underlying Bitcoin, and along with it the inefficiency of executing transactions in the Bitcoin network. Various projects now aim to combat this inefficiency via mechanisms such as “sharding” the replicated ledger into replicated pieces-of-ledger, or meta-level tooling allowing most transactions in a network to take place “off-chain,” without confirmation by the network’s primary decentralized consensus mechanisms. However, these fixes have something of the feeling of “putting lipstick on a pig.”
One gets the feeling that a fresh design might serve better than an updating and optimization of Ethereum — a restart taking into account the requirements extant today, versus those at the time Bitcoin was created. Bitcoin was designed to work effectively in a network of relatively few participants engaging in sporadic transactions. The requirement to work effectively in a teeming global network comprising numerous geographically distributed participants carrying out frequent transactions is quite different, and could sensibly be expected to lead to quite different technical designs.
When we founded SingularityNET in 2017, we realized that Ethereum, as it existed at that time, was not remotely adequate to serve as the infrastructure of a mature SingularityNET network. In a mature SingularityNET, AI agents will be creating new AI agents all the time, and there may be rampant new agent creation and old agent destruction and inter-agent communication in the course of answering a single query from a human or a machine.
This would be no problem in the context of a centralized control system — after all, every time a consumer asks a question of their Amazon Alexa, a new Docker container is spawned on the Amazon servers to encapsulate the process of answering the question. But it’s completely infeasible given the state of the current Ethereum network. As a user of the Ethereum network for regular financial transactions, I note it’s quite common for simple transactions to take minutes to complete, or to fail completely and need to be re-initiated.
Ethereum is improving all the time, and in 2017 my collaborators and I were guardedly optimistic that it would rapidly evolve to the level of efficiency needed to meet SingularityNET’s needs. This may still happen — for SingularityNET as it exists at this moment, the Ethereum network is mostly OK, and progress in the Ethereum world, at times, seems quite rapid.
When I understood the TODA architecture, however, it became clear to me that I was looking at the next step in the evolution of the Internet of value. By setting aside some of the assumptions Ethereum had inherited from Bitcoin, the core TODA Protocol underlying the TODA ecosystem of platforms and products provides the scalable, secure, decentralized infrastructure needed to support value propagation between the internal components of a highly dynamic global brain.
TODA leverages the same core algorithms and data structures as Bitcoin, Ethereum and other modern cryptographic networks — hash functions, Merkle trees and so forth — but it embeds them in a very different software architecture. There is no replicated ledger, and no sharded replicated ledger. There is just a network of nodes, each containing a small fraction of the global Merkle tree (the “Toda tree”) used for consolidating transactions in the network. To canonicalize a proposed transaction between two nodes, other nodes in the neighborhood are called in to help.
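A brief Go sketch of the Merkle machinery may help. It shows both the folding of leaf hashes into a single root, and the key property that lets a node hold only a small fraction of the tree: membership of any leaf can be verified against the root using just the sibling hashes along its path. This is generic Merkle-tree code, not TODA’s actual implementation, and the toy version assumes a power-of-two leaf count:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashPair combines two child hashes into their parent hash.
func hashPair(a, b [32]byte) [32]byte {
	return sha256.Sum256(append(a[:], b[:]...))
}

// merkleRoot folds a list of leaf hashes up into a single root.
// (Toy version: assumes the leaf count is a power of two.)
func merkleRoot(leaves [][32]byte) [32]byte {
	for len(leaves) > 1 {
		var next [][32]byte
		for i := 0; i < len(leaves); i += 2 {
			next = append(next, hashPair(leaves[i], leaves[i+1]))
		}
		leaves = next
	}
	return leaves[0]
}

// verify shows why a node needs only a small fraction of the tree: given a
// leaf and the sibling hashes along its path (the "proof"), anyone can
// recompute the root without seeing any other leaf.
func verify(leaf [32]byte, proof [][32]byte, leafIsLeft []bool, root [32]byte) bool {
	h := leaf
	for i, sib := range proof {
		if leafIsLeft[i] {
			h = hashPair(h, sib)
		} else {
			h = hashPair(sib, h)
		}
	}
	return h == root
}

func main() {
	txs := []string{"tx1", "tx2", "tx3", "tx4"}
	var leaves [][32]byte
	for _, tx := range txs {
		leaves = append(leaves, sha256.Sum256([]byte(tx)))
	}
	root := merkleRoot(leaves)

	// Prove tx1 is in the tree using only two sibling hashes.
	proof := [][32]byte{leaves[1], hashPair(leaves[2], leaves[3])}
	fmt.Println("tx1 in tree:", verify(leaves[0], proof, []bool{true, true}, root))
}
```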
Helping to build the Merkle tree in TODA requires a node to solve some simple math problems, but it’s not a lot of work, and the design is such that having more compute power at your disposal doesn’t enable you to gain a significant economic advantage in the network by doing more work. So no mining is needed in the TODA framework — implying a radically different economic ecosystem than what one sees with Bitcoin and Ethereum.
The current implementations of Bitcoin, Ethereum and other now-standard blockchain protocols have the admirable property that, all else equal, they become more secure as their networks gain more participants. However, they also tend to become slower as their networks grow. The TODA network’s speed, as well as its security, increases as the number of participants increases. This represents a significant advantage as decentralized networks move toward large-scale adoption.
Another appealing aspect of the technology: because TODA operates at the network layer, implementations of Bitcoin, Ethereum and other blockchains can gain all of the above benefits, plus true peer-to-peer interoperability, without introducing any additional layer. Some readers may feel this is too good to be true; I encourage anyone to dig in until they can explain TODA themselves, at which point they will see that the promises everyone has been making about blockchains are actually coming to life. This is, of course, only one piece of the puzzle, but it is a missing piece that is now no longer missing, and that will eventually make its way to every device, everywhere. From a technical security perspective, the properties that keep the network’s devices appropriately dispersed and dynamic rest on constants and limits grounded in the laws of physics. For example, signals in the network can never travel faster than the speed of light; if certain attacks or collusion strategies would require a network 100x faster than light, we are in good shape from a security-by-design perspective.
By offering a more efficient framework for secure decentralized transactions, TODA makes it possible to build an Internet in which data, identity and value are stored and exchanged in a reliable way that is also thoroughly privacy-protecting, leaving each person or company with maximal sovereignty over their own digital assets.
A system of AI agents, residing in a decentralized network of containers, offering a diversity of AI services to external parties and to each other, carrying out secure messaging with a scalable decentralized protocol — this, finally, gives us the sort of infrastructure we need to make the long-time vision of a self-organizing, democratically-governed Internet-wide Global Brain a reality.
Singularity-on-TODA?
At the moment, some members of the SingularityNET and TODA technical teams are experimenting with bringing the two technologies together on the operational level, by creating a prototype “Singularity-on-TODA” system in which SingularityNET AI agents can utilize the TODA protocol rather than Ethereum for their interactions. In Singularity-on-TODA, the various channels between SingularityNET agents that are set up using Ethereum in the ordinary SingularityNET (Beta V2) design are instead set up to operate using TODA.
Initial experimentation with Singularity-on-TODA is being done using an easy-to-use deployment of the TODA protocol called Toda-as-a-Service (TaaS); however, one of the next steps is to explore integrating the Go implementation of TODA with SingularityNET’s Daemon component, which is also written in Go.
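Since both the TODA implementation under exploration and the SingularityNET Daemon are in Go, one natural way to picture the integration point is a common channel interface with swappable backends. The sketch below is purely illustrative; the interface, types and method names are assumptions for exposition, not the actual Daemon or TODA APIs:

```go
package main

import "fmt"

// PaymentChannel is a hypothetical abstraction over the value-transfer
// layer between SingularityNET agents. It does not reflect the actual
// Singularity-on-TODA code; it only illustrates how an Ethereum backend
// could be swapped for a TODA backend behind one interface.
type PaymentChannel interface {
	Open(counterparty string, deposit uint64) error
	Pay(amount uint64) error
	Close() error
}

// EthereumChannel stands in for the existing Ethereum-based channels.
type EthereumChannel struct{ counterparty string }

func (c *EthereumChannel) Open(p string, d uint64) error { c.counterparty = p; return nil }
func (c *EthereumChannel) Pay(a uint64) error            { fmt.Println("ETH channel pays", a); return nil }
func (c *EthereumChannel) Close() error                  { return nil }

// TodaChannel stands in for channels carried over the TODA protocol.
type TodaChannel struct{ counterparty string }

func (c *TodaChannel) Open(p string, d uint64) error { c.counterparty = p; return nil }
func (c *TodaChannel) Pay(a uint64) error            { fmt.Println("TODA channel pays", a); return nil }
func (c *TodaChannel) Close() error                  { return nil }

// callAgent is backend-agnostic: the agent-calling logic does not care
// which protocol carries the value.
func callAgent(ch PaymentChannel, agent string, price uint64) error {
	if err := ch.Open(agent, price); err != nil {
		return err
	}
	defer ch.Close()
	return ch.Pay(price)
}

func main() {
	callAgent(&EthereumChannel{}, "caption-agent", 10)
	callAgent(&TodaChannel{}, "caption-agent", 10)
}
```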
No firm and final decisions regarding the practical deployment of the current Singularity-on-TODA experiments have been made yet. However, one hypothetical possibility under discussion is to, at some point, make SingularityNET’s infrastructure multi-chain in nature. Multi-chain-ization has been part of the conceptual plan for SingularityNET from the start, but the current work with TODA is the first concrete development to militate in this direction.
One way this could happen would be: some fixed number of the current AGI-on-ETH tokens are frozen, an equivalent number of AGI-on-TODA tokens are created, and the platform is modified so that the AI agents on the network accept either of the two types of AGI token as equally valuable. This would not change the total number of AGI tokens in existence, and it wouldn’t alter the basic tokenomic logic of the SingularityNET ecosystem. If it enabled more scalable functionality on the part of the network, it would increase utilization of the network and decrease its cost of operation, thus indirectly increasing the value of all AGI tokens regardless of their underlying chain.
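The arithmetic of such a migration is simple enough to state in a few lines of Go; the supply and migration figures below are made up purely for illustration:

```go
package main

import "fmt"

// Toy illustration of the proposed token migration: freezing AGI-on-ETH
// tokens and minting an equal number of AGI-on-TODA tokens leaves the
// circulating total unchanged. All numbers here are illustrative only.
func main() {
	const totalSupply uint64 = 1_000_000_000 // illustrative, not the actual AGI supply

	agiOnEth := totalSupply
	var agiOnToda uint64

	// Migrate some fixed amount: freeze on the Ethereum side,
	// mint the equivalent on the TODA side.
	const migrated uint64 = 250_000_000
	agiOnEth -= migrated
	agiOnToda += migrated

	fmt.Println("total unchanged:", agiOnEth+agiOnToda == totalSupply) // true
}
```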
In a multi-chain SingularityNET like this, for instance, there might be a bias for AI agents to use AGI-on-TODA tokens for fast-paced interactions between AI agents, where the use of Ethereum would be too slow or expensive to be practicable. On the other hand, external entities paying AI agents directly for services, or fiat-to-crypto gateways mediating fiat payments by customers for SingularityNET AI services, could use either AGI-on-TODA or AGI-on-ETH depending on their preference.
Such large changes to SingularityNET infrastructure would be made only in consultation with the broader SingularityNET community. However, we consider it important to be flexible and agile as development of the network proceeds — we are operating at the intersection of a number of incredibly rapidly-evolving technology spaces, and what was cutting-edge one year may be dinosauric the next.