Decentralized Distributed Processing in SingularityNET

Benjamin Goertzel
Ben Goertzel on SingularityNET
Nov 14, 2017 · 8 min read

The distinction between decentralized control and distributed processing is substantial and critical, though the two ideas and terms are sometimes blurred together in businessy, visionary or informal discussions of AI and blockchain.

It is important to draw the distinction, in part because there are interesting ways that decentralized and distributed can be brought together, using the strengths of each to bolster the other.

Decentralization refers to the ability of a network to conduct its operations without any central controller — so that major decisions involve a sizeable percentage of nodes in the network, and so that even if a large arbitrary portion of the network were to disappear, the rest could operate effectively.

Distributed computing, on the other hand, refers to the implementation of algorithmic processes in a way that divides them up among multiple processors in an efficient way.

One could say that decentralization involves “distributed computation” of governance-related tasks in a network. But in a network whose nodes are themselves doing complex computations, decentralization does not imply that the computations done by the nodes are done in a “distributed” way. One could have, for instance, a decentralized network of nodes, each carrying out its own small tasks in localized, non-distributed ways.

When a SingularityNET Agent outsources work to other Agents, and then collects the results in order to make a composite result to return to a customer or another Agent, this is a sort of distributed computing. It may even be a highly efficient form of distributed computing. The initial “outsourcing agreement” between two Agents in the SingularityNET cannot be established extremely quickly, because it involves some study of reputations, some negotiation over price, and so forth. However, once two Agents have confirmed an outsourcing agreement, they can proceed to exchange data, requests and results at a very fast pace.
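
As a concrete illustration of this two-phase pattern, here is a minimal sketch in Python. The names used (Agent, OutsourcingAgreement, negotiate, outsource) are hypothetical and do not reflect the actual SingularityNET interfaces; the point is only that the slow negotiation step happens once, and subsequent exchanges reuse the agreement.

```python
import time
from dataclasses import dataclass

@dataclass
class OutsourcingAgreement:
    provider: "Agent"     # the Agent that will do the work
    price: float          # negotiated price per call (illustrative units)
    expires_at: float     # renegotiation happens only occasionally

class Agent:
    """Toy Agent: a slow, one-time negotiation phase followed by fast calls."""

    def __init__(self, name, service=None):
        self.name = name
        self.service = service   # callable implementing this Agent's AI task
        self.agreements = {}     # provider name -> OutsourcingAgreement

    def quote(self, task_type):
        # Stand-in for reputation-aware price negotiation.
        return 0.01

    def negotiate(self, provider, task_type):
        # Slow path: reputation study and price negotiation would happen here.
        agreement = OutsourcingAgreement(provider, provider.quote(task_type),
                                         time.time() + 3600)
        self.agreements[provider.name] = agreement
        return agreement

    def outsource(self, provider, task_type, payload):
        # Fast path: reuse an existing agreement and just exchange data/results.
        agreement = self.agreements.get(provider.name)
        if agreement is None or time.time() > agreement.expires_at:
            agreement = self.negotiate(provider, task_type)
        return agreement.provider.service(payload)

# Usage: a summarizer Agent outsources a subtask to a translator Agent.
translator = Agent("translator", service=lambda text: text.upper())  # stand-in AI
summarizer = Agent("summarizer")
print(summarizer.outsource(translator, "translate", "hello world"))
```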

Outsourcing of work between Agents (which may be created by different developers who don’t even understand each other’s work; what matters is that the Agents involved understand each other’s inputs and outputs) is a critical sort of distributed AI in SingularityNET. But there will also be value in other forms of distributed processing: for instance, a flexible capability to take AI code written to run in a single Agent, and divide this code up so that it can run effectively on a large number of processors, potentially spread across a large number of machines.

Big tech companies like Google, Facebook, IBM, Baidu etc. don’t tend to put much focus on decentralized control. But they are the world masters of distributed computing — many of the exceptional things they do are direct consequences of the highly sophisticated and efficient distributed computing platforms they have created. Often the algorithms they are using are fairly unexceptional — but their efficient, massively distributed implementations of these algorithms are far beyond anything existing elsewhere.

Without the ability for efficient, massively distributed processing, a decentralized AI network like SingularityNET will find itself unable to compete with big tech companies on numerous tasks. A decentralized open market for small AI modules — each one running on a small number of processors — to coordinate and cooperate with each other, will be a valuable thing to have and may lead toward AGI in fascinating ways. But still — given the nature of today’s computing infrastructure — without effective distributed processing, there will be significant limitations.

Fortunately, modern computer science gives us ways to achieve powerful distributed processing in a wholly decentralized framework. What is required to enable this is the creation of appropriate tools for the development of AI Agents. We can provide optional tools for helping with the scripting of Agents, so that if Agents are built using these tools, then SingularityNET’s automated scripts can (much of the time) transform these Agents into efficient distributed-processing Agents when needed.

These sorts of tools illustrate the power of open source in a SingularityNET context. SingularityNET is accessible to AI Agents that are open source or proprietary in nature; however, in the case of open source Agents it is possible to enable numerous additional services, one being the automated distributization we are discussing here.

We have noted elsewhere our plan that, for Agent developers who are severely resource- or time-constrained, SingularityNET will offer optional hosting services: you can upload your open source AI Agent to a software container running on hosting machines that SingularityNET maintains. In exchange for this hosting service, a small percentage of the revenue received by your Agent will go to the network. For security reasons, this will only be available for open source code.

If an open source AI Agent (wherever it is hosted) receives a large number of transactions, the Agent creator may opt to allow the network to replicate their Agent on its own hosting machines, charging a small fee for the operation and automated replication.

And this leads to the option of sophisticated “distributization” of hosted open source AI Agent code. If an AI Agent is written using OpenCog, Tensorflow or another sufficiently flexible and functional-programming-oriented framework, then SingularityNET mechanisms may also (with Agent creator permission) perform automated program transformations on portions of the code to enable efficient distributed processing.

In the Tensorflow infrastructure, this is a basic functionality provided by Tensorflow Fold. Fold takes a Tensorflow script and automagically divides it up and refactors it in a way that makes it efficiently support the distributed computing resources available. The term “fold” here refers to the fold operator in functional programming.
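
For readers unfamiliar with the term, here is a minimal Python sketch of the fold operator itself (plain functools.reduce and multiprocessing, not Tensorflow Fold’s API). It shows the structural property such transformations rely on: when the combining function is associative, a single sequential fold can be refactored into independent folds over chunks plus a final fold over the partial results.

```python
from functools import reduce
from multiprocessing import Pool

def combine(a, b):
    return a + b                      # any associative operation works here

def fold_chunk(chunk):
    # Fold one chunk locally; chunks could live on different processors or machines.
    return reduce(combine, chunk, 0)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Sequential fold: ((0 + x0) + x1) + ...
    sequential = reduce(combine, data, 0)

    # Parallel fold: fold each chunk independently, then fold the partial
    # results. This refactoring is valid only because combine is associative.
    chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]
    with Pool(4) as pool:
        partials = pool.map(fold_chunk, chunks)
    assert reduce(combine, partials, 0) == sequential
```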

Something similar can be done for AI Agents whose top-level control flow is written in Scheme or Haskell (or potentially other functional languages). This is relevant in an OpenCog context, because Scheme is the most common language for writing OpenCog control scripts. Implementing something similar to Tensorflow Fold for the Scheme scripts used to control OpenCog, or for Haskell scripts, would be relatively straightforward. (While Python is not a functional language, the subset of Python typically used to control OpenCog could potentially be handled in a similar way.)

It’s worth noting that, for an Agent to benefit from this sort of automated distributization, functional programming need not be used at every level of the program’s design and implementation. Rather, it is sufficient that the critical “top level” control and data management be implemented this way. In Tensorflow, for example, a functional top-level language is used to manipulate functions that trigger processes written in lower-level (non-functional) languages optimized for GPUs. Similarly, in OpenCog, a functional top-level language (Scheme) is typically used to manipulate the Atomspace and to trigger procedural scripts encoded in the Atomspace; the Atomspace is coded in C++, a non-functional language, for reasons of efficiency and exploitation of relevant existing libraries. In many cases, efficient distributed processing can be achieved via automated manipulation of the functional higher-level portion of a program written in this way. Writing software in such a manner requires a particular way of thinking, but this is a way of thinking that has become very common in recent years, as the primary occupation of programmers has shifted from coding their own algorithms and data structures to coding scripts that manipulate existing algorithms and data structures.
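
A minimal sketch of this layering, with a made-up kernel function standing in for lower-level C++/GPU code (this is not OpenCog or Tensorflow internals): the top level is pure map-and-fold, so distributizing the program amounts to swapping the mapper, leaving the kernel untouched.

```python
from functools import reduce
from multiprocessing import Pool
import numpy as np

def kernel(seed: int) -> float:
    # Stand-in for an opaque, efficiency-oriented lower-level routine.
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((200, 200))
    return float(np.linalg.eigvalsh(m @ m.T).max())

def top_level(seeds, mapper=map):
    # Pure functional control flow: map the kernel, then fold the results.
    return reduce(max, mapper(kernel, seeds))

if __name__ == "__main__":
    seeds = range(16)
    serial = top_level(seeds)                      # ordinary sequential map
    with Pool(4) as pool:
        distributed = top_level(seeds, mapper=pool.map)  # same top level, distributed
    assert abs(serial - distributed) < 1e-9
```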

This sort of “automagical distributization” lies at the intersection of decentralized control and distributed computing, and ultimately will enable SingularityNET to achieve even more scalable AI processing than the large technology companies do. The ability to automatically modify Agent code to make it efficiently distributed will intersect fascinatingly with Agents’ ability to outsource work to each other, and with the existence of a very diverse pool of Agents.

Furthermore, the presence of a large number of Agents in need of distributization will give the distributization processes in SingularityNET a great amount of data to study. This data will be very valuable in driving machine learning regarding the particulars of algorithm distributization. Distributization via heuristics can go a long way, but the distributization of any complex algorithm requires a lot of “judgment calls,” which can be made more effectively via inductive learning across many different cases of algorithms in need of distributization. The diversity of AI algorithms being distributized in SingularityNET will be much greater than that of any tech company’s algorithm library. This means that machine learning regarding the best ways to distributize algorithms will be able to advance further in SingularityNET than in any proprietary infrastructure.

At the present stage of computer science, fully automated distributization is powerful but still limited. How far it can be pushed by applying currently available machine learning across a diversity of Agents running a diversity of algorithms remains to be seen. The availability of these tools for open source Agents in SingularityNET will provide pressure toward the FOSS choice for AI Agent developers, though Agent developers with sufficient resources and expertise will be welcome to develop their own distributed processing methods, and some AI Agents may simply not require or benefit from distributed processing due to their basic nature.

In addition to generic distributization tools leveraging the properties of functional languages, it will be possible to integrate tools for distributization of specific sorts of AI algorithms. These tools will take the form of libraries that AI Agent developers can optionally use in the development of their Agents. Tensorflow provides tools of this nature for the special case of neural nets and other algorithms that are heavy on linear vector and matrix manipulations. But other cases are in some ways even more straightforward than this one.

For instance, OpenCog currently contains tools (coded in C++) for “distributizing” evolutionary algorithms, and graph or hypergraph pattern mining algorithms. These distributization tools are currently implemented in a way that’s bound up with OpenCog’s MOSES software for probabilistic evolutionary programming, and with OpenCog’s information-theory-based hypergraph pattern miner. However, the distributization tools here can be separated from the specific OpenCog algorithms and made available for use by any similar algorithms in a generic way.
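
As a rough sketch of the generic pattern such a library captures (this is not MOSES or its actual API, just a toy evolutionary loop): the population is bred centrally, while the expensive fitness evaluations are mapped across a pool of workers.

```python
import random
from multiprocessing import Pool

def fitness(candidate):
    # Stand-in for an expensive evaluation (e.g. scoring a candidate program on data).
    return -sum((x - 3.0) ** 2 for x in candidate)

def mutate(candidate, rate=0.3):
    return [x + random.gauss(0, 1) if random.random() < rate else x
            for x in candidate]

def evolve(pop_size=40, dims=5, generations=30, workers=4):
    population = [[random.uniform(-10, 10) for _ in range(dims)]
                  for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, population)   # the distributed step
            ranked = [c for _, c in sorted(zip(scores, population), reverse=True)]
            parents = ranked[:pop_size // 2]
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())   # should approach [3.0, 3.0, 3.0, 3.0, 3.0]
```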

Designs exist for “distributizing” logical inference algorithms in a similar manner — intended for use with OpenCog’s PLN probabilistic logic framework, but not yet implemented. These designs can be implemented as SingularityNET libraries, and then used with nearly any logic engine based on forward and backward chaining control, as well as with PLN.
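
A hedged sketch of the underlying idea, using a toy Horn-clause backward chainer rather than PLN or the actual designs: whenever several rules can conclude the current goal, the alternative proof branches are independent and can be explored on separate workers.

```python
from multiprocessing import Pool

FACTS = {"raining", "have_umbrella"}
RULES = [                      # (conclusion, premises) Horn clauses
    ("wet", ["raining", "no_umbrella"]),
    ("dry", ["raining", "have_umbrella"]),
    ("dry", ["indoors"]),
]

def prove(goal):
    """Plain sequential backward chainer."""
    if goal in FACTS:
        return True
    return any(all(prove(p) for p in premises)
               for head, premises in RULES if head == goal)

def prove_branch(premises):
    # One alternative proof branch: prove all of its premises sequentially.
    return all(prove(p) for p in premises)

def prove_distributed(goal, workers=2):
    if goal in FACTS:
        return True
    branches = [premises for head, premises in RULES if head == goal]
    with Pool(workers) as pool:
        return any(pool.map(prove_branch, branches))   # branches explored in parallel

if __name__ == "__main__":
    assert prove_distributed("dry")
    assert not prove_distributed("wet")
```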

As these aspects of SingularityNET develop further, additional AI algorithms besides neural networks, evolutionary learning and pattern mining will become handleable in this way. AI Agent developers utilizing algorithms of a type handled by a special-case distributization library will have motivation to use the corresponding library. AI Agent developers implementing different sorts of algorithms will have motivation to use functional-language top-level control and data-manipulation loops, so as to allow SingularityNET’s automated distributization routines to transform their Agents into distributed systems without requiring extra effort on their part. And Agent developers who don’t want any of this will be free to ignore it!

In these ways “distributed processing” and “decentralized control” will be able to not only work together effectively, but also reinforce each other synergetically. This is how SingularityNET will become the most powerful computing system in the world.
