SingularityNET and Other Aspects of Cognitive Economics

Benjamin Goertzel
Ben Goertzel on SingularityNET
Oct 26, 2017

(Some Theoretical, Historical and Futurological Musings)

I’ve been spending a lot of time lately working on a new project called SingularityNET, which brings AI and blockchain together into a new sort of “Decentralized Autonomous Organization” of AIs. You can see more about that at http://singularitynet.io, and I have written about my own particular take on the SingularityNET in a recent blog post.

In this blog post I aim to dig a little deeper into some of the concepts underlying the SingularityNET project. One of the core themes of the SingularityNET is the fundamental parallel and overlap between cognitive and economic dynamics. In the SingularityNET design the relationship between cognitive and economic dynamics becomes workaday and practical. However, these practical interactions rely on deeper underlying factors.

This post starts off by exploring the relation between cognition and economics at a basic conceptual level. It then digs into some of the history of economics-type thinking in the AI field — focusing particularly on the network of cognitive-economics ideas that happened to play a role in my own early thinking that led up to the SingularityNET design. (From a broader perspective, these are certainly not the only interesting relationships between cognition and economics, and maybe not even the deepest ones. They are just the ones I feel like writing about at this moment!)

The Foundational Concepts of Economics and Cognition

The existence of a close relationship between cognition and economics becomes evident when one thinks at a foundational level about what each of these entities is.

One often reads about the parallels between utility maximization in economics and reward maximization in reinforcement learning; or between, say, energy minimization in AI systems like Hopfield neural networks and profit maximization in economic systems. But very particular parallels like these don’t get at the essence of the matter, and I think they are best considered as special cases of a more abstract cognition/economics relationship.

To explicate this more abstract relationship using today’s common communication frameworks requires either some hairy-looking mathematics, or a bunch of abstruse-looking words tied up in long sentences. Here I’m going to opt for the latter. But hopefully, if you can bear with me for a few paragraphs, by the end you’ll see that the actual concepts involved are really pretty simple.

One general way to think about economics is as the study of exchanges (of information or material) between different agents, that are guided by the relative values assigned by the different agents to the entities being exchanged. In very simple economies there is not much activity that would be viewed as “cognitive” going on. But once economies become complex, one sees phenomena such as the emergence of complex systems formed as temporary and partial alliances of different agents (e.g. corporations), and the emergence of derived instruments representing types of value indirectly grounded in agents’ basic sense of value (e.g. company shares, futures and options, etc.). Concepts like learning, reasoning and memory become pertinent to the analysis of the underlying dynamics of such economies.

Cognition can be understood as the activity of systems that are engaged in recognizing patterns in themselves and their environments (and emergent between themselves and their environments), and enacting new patterns in themselves and their environments. What I call the basic “virtuous cycle of cognition” may be conceived as follows. Define an instance of pattern recognition or enaction as “recognition-enabling” if it modifies the system and/or the world in a way that makes further pattern recognition easier; and as “action-enabling” if it modifies the system and/or the world in a way that makes further action easier. The virtuous cycle of cognition occurs when a system has a lot of recognition-enabling actions and recognitions and a lot of action-enabling recognitions and actions.

One can look at this recursively if one wants to, e.g.

· An instance of pattern recognition or enaction is “1st order recognition enabling” if it modifies the system or the world in a way that makes further pattern recognition easier; and “1st order action enabling” if it modifies the system or the world in a way that makes further action easier

· An instance of pattern recognition or enaction is “2nd order recognition enabling” if it modifies the system or the world in a way that makes further 1st-order-recognition-and-action-enabling pattern recognition easier; and “2nd order action enabling” if it modifies the system or the world in a way that makes further 1st-order-recognition-and-action-enabling action easier

· An instance of pattern recognition or enaction is “3rd order recognition enabling” if it modifies the system or the world in a way that makes further 2nd-order-recognition-and-action-enabling pattern recognition easier; and “3rd order action enabling” if it modifies the system or the world in a way that makes further 2nd-order-recognition-and-action-enabling action easier

· Etc.

This cascade is, I suggest, the crux of cognition. This is a core theme of my 2006 book The Hidden Pattern, although the “virtuous cycle of cognition” as such is not explicitly formulated in this manner there. It also relates closely to David Weinbaum and Viktoras Veitas’s notion of “open-ended intelligence.” The open-endedness of a system embodying the virtuous cycle of cognition has to do with the endless novelty and depth of the field of patterns created and recognized by such a system in conjunction with its environment.

An open-ended intelligent system displaying the virtuous cycle will be constantly reinventing itself; as it goes through constant internal cognitive enactions and changes, the definition of what constitutes “the same system” becomes a matter of interpretation and perspective. The recognition of patterns regarding what constitutes a persistent intelligent system is itself part of the action of intelligence.

Cognitive systems often have specific sets of goals, and the process of their seeking to achieve these goals is part of their participation in this virtuous cycle. Along with focused goals like staying alive or reproducing, intelligent systems tend to have more general goals like learning more or experiencing novelty; and many of these goals have (along with other functions) the function of enabling the cognitive agent to fulfill the virtuous cycle of cognition.

The virtuous cycle of cognition has a notion of value built into it — a “pattern valuation” which says that more pattern is, in a sense, more desirable than less. Each cognitive system also prefers certain sorts of patterns over others. The generic pattern valuation, plus a system’s particular preferences, casts cognition into the domain of economics, because it implies that cognitive systems make choices — both internally and externally — according to certain value assessments. Different parts of a cognitive system will contribute more or less to recognition or enaction of a pattern, or to achievement of a particular system goal like eating or reproducing (which are, among other things, ways of fulfilling the virtuous cycle of cognition); and so, as part of the cognitive system’s growth, there will be economic interactions between the parts of the system.

In this broader perspective, we can see that utility maximization is one strategy that sometimes guides the economic interactions of certain systems, and reinforcement learning is one strategy that some cognitive systems sometimes use to work toward certain goals that are associated with the virtuous cycle of cognition. But economic systems that are not effectively modeled as utility maximizers may still engage in complex pattern-recognizing and pattern-generating activities related to exchanges that are assessed in terms of their values. And cognitive systems that are not effectively viewed as carrying out reinforcement learning may still adapt their internal structures via inter-structure exchanges that are based on value assessments (e.g. tied to their various goals and to the amounts of patterns of various sorts being recognized and generated).

Economic systems can sometimes be modeled as dynamical systems moving in a direction where the sum of the utilities of the agents involved is maximized. Physical systems can often be modeled as dynamical systems moving in a direction where the total energy of the elements of the system is minimized. There is a lot of common mathematics here, though there are also some persistent differences due e.g. to the prevalence of sum-of-squares in physics (e.g. in calculating energy) versus plain linear sums of utilities in economics. Maximum entropy production in physics, associated with the emergence of complex structures and dynamics in far-from-equilibrium systems, appears to have parallels in socioeconomic systems as well. Generally speaking, one thing we seem to have in complex economic, physical and cognitive systems is a collection of situations wherein progressive maximization or minimization of specific quantities is associated with pursuit of the virtuous cycle of cognition. There is a great deal of science underlying these phenomena that remains to be uncovered.

But I’m not going to uncover it all today! So let’s move on now, and with that general cosmic background in mind, let’s review some specific and currently-relevant topics that play around near the intersection of cognition and economics.

Economics and Assignment of Credit in AI Systems

One specific and quite important manifestation of these general relationships between economics and cognition is the use of economic metaphors and mechanisms to manage the allocation of attention in AI systems. I personally started thinking about attention allocation in AI systems in economic terms in 2003 or 2004 or so, inspired originally by Eric Baum’s work on his AI system “Hayek.”

Baum is a deep-thinking AGI researcher as well as a hard-core political libertarian, and he was convinced that economics was key to solving the hardest problems at the heart of AGI. Specifically, he argued that the reason certain AI systems (like John Holland’s classifier systems) did poorly at solving the critical “assignment of credit problem” was that they did not obey basic principles of economics. We discussed this F2F a number of times, as during this period I was living in the DC suburbs and he was living in the New York suburbs; relatively local in the grand scheme of things. At one point he also traveled to the UK with me to participate in a private AGI workshop I organized at a friend’s house there.

The assignment of credit problem is, basically, the problem of figuring out which internal components of a system to reward or punish when the system’s overall action is determined to be good or bad. This is difficult because the chains of causality within a cognitive system can be extremely complex. One way to solve this kind of problem is via credit-propagation algorithms: the component of the system directly carrying out the useful action gets some credit, then it passes along a portion of this credit to the other components (its “subtasks,” one might say) that enabled it to carry out the action, etc. Baum observed that in certain AI systems, sometimes a component in the system was allowed to dispense more credit to its subtasks than it received for its supertasks — a “leakage” of credit that he noted could be avoided if one treated credit within a cognitive system as a kind of money, since money is a conserved quantity and an economic actor cannot dispense more funds than it has received (unless one introduces loans and interest and other such complex mechanisms). His AI system Hayek involved a collection of little agents cooperating to solve problems together, organizing into groups and exchanging credit among each other using economic principles.
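
To make the conservation idea concrete, here is a minimal sketch of conserved credit propagation (my own toy illustration, not Baum’s actual Hayek code): credit passed to subtasks is debited from the component’s own balance, so the total credit in the system never grows.

class Component:
    def __init__(self, name):
        self.name = name
        self.credit = 0.0
        self.subtasks = []  # components whose work enabled this one's action

    def receive(self, amount):
        self.credit += amount

    def propagate(self, fraction=0.5):
        # Pass a fraction of held credit to subtasks, split evenly.
        # Conservation: the outflow is debited first, so credit never "leaks".
        if not self.subtasks:
            return
        outflow = self.credit * fraction
        share = outflow / len(self.subtasks)
        self.credit -= outflow
        for sub in self.subtasks:
            sub.receive(share)
            sub.propagate(fraction)  # recurse down the enabling chain

planner = Component("planner")
vision, memory = Component("vision"), Component("memory")
planner.subtasks = [vision, memory]
planner.receive(10.0)  # external reward for a good overall action
planner.propagate()
print(planner.credit, vision.credit, memory.credit)  # 5.0 2.5 2.5 -- sums to 10.0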

Moshe Looks, who was working for me at the time, implemented his own version of Hayek, aimed mostly at understanding the principles better. Moshe was 19 at the time, I think, and already something of an AI phenom. He had relocated to Washington DC to work with me on the Novamente Cognition Engine (the predecessor system from which OpenCog was spun out in 2008) and on some applied AI projects for the military/intelligence contractor firm Object Sciences. Moshe found that Hayek did indeed work, but was extremely slow, and its adequate functionality depended very, very sensitively on the tuning of various parameters.

(As it happened, Moshe stayed with me and Novamente till 2007, then quit to work for Peter Norvig at Google, where he stayed for almost 10 years. And then in 2017 Moshe left Google to co-found an AGI company with another AGI researcher and good friend of mine, Itamar Arel.)

It was clear to me that the idea of treating attention as a conserved quantity made all kinds of physical sense. After all, attention in the brain is mediated by the flow of physical energy through the brain, and physical energy is conserved. The math of energy minimization and the math of utility maximization in reinforcement learning systems seemed obviously related, although the precise relationship still seems not to have been worked out fully. On the surface energy minimization is about quadratic functions whereas utility maximization is about linear functions, but it’s not clear how fundamental this distinction is (e.g. the Fisher metric looks sort of linear in one coordinate system, and then becomes Euclidean distance in a different coordinate system). Maybe one can think about energy minimization as some sort of economic dynamics in a coordinate system whose axes are squares of probabilities? I’d like to find time to think about this more!

The application of economic principles to the dynamics of attention inside an AI system (like OpenCog, or Hayek) seemed a subtle matter. The slowness of Hayek seemed problematic. In Hayek assignment of credit was carried out via the different internal components of the AI system carrying out little auctions with each other, to determine who would carry out what action in order to achieve how much credit, and so forth. This seemed overly complicated to me; it seemed somehow that within the internal operations of an AI system, one didn’t want things to get bogged down by so much negotiation and economic bargaining. I thought about how cooperative activity worked between close family members or good friends. It wasn’t a matter of constant bargaining and negotiation regarding every small action one person did for another, or one person did to support another’s activity, etc. Instead there was more of a free give-and-take, in which each agent did whatever was needed to help the other, within reasonable bounds, the overall vibe being that each agent knows the others in the group have the interests of everyone in the group in mind. Explicit bargaining and negotiation happen in a group of close family members or good friends only when there’s some particularly large or important issue, or when the tacit back-and-forth process seems not to be working in some particular context.

This line of thinking led up to the development in 2006–7 of ECAN, “Economic Attention Allocation,” a scheme for apportioning attention (e.g. processing and memory resources) to knowledge and processes in an AI system — which is part of the OpenCog design and software framework today. ECAN embodies economic principles for assignment of credit in a manner that, in the default mode, sacrifices sophistication for speed of execution. There are no auctions or other intelligent price discovery mechanisms. Instead, the passing around of “money” like tokens inside the AI system is treated as similar to the passing around of “activation” inside the formal neural networks used in various AI systems.

In a formal neural network, when a node receives some activation, it sends some activation to other nodes; but the activation sent out doesn’t need to equal the amount received. This is because in the brain, when a neuron receives electricity, the amount of electricity it sends out afterwards doesn’t need to equal the amount it received. Energy is conserved, but electrical energy in the brain isn’t conserved, because the brain does a lot of work converting chemical energy into electrical energy. Blood flows into a brain region, providing the neurons in that region with the energy needed to pass electricity around. fMRI imaging, for example, measures this flow of blood through the brain, which is a crude proxy of the flow of attention through the cognitive processes centered on various brain regions.

Under ECAN, the nodes and links in OpenCog’s Atomspace knowledge hypergraph are constantly sending around attentional tokens, which are managed like little artificial monies. In the default version of ECAN I introduced two types of currency: STI currency corresponding to “short term importance” and LTI currency corresponding to “long term importance.” A unit of STI currency corresponds to a certain chunk of probability that the Atom holding that unit will be useful to some cognitive process in the near future; a unit of LTI currency corresponds to a certain chunk of probability that the Atom holding that unit will be useful to some cognitive process in the mid-term future. Conceptually one could have a host of different such currencies, each associated with a different time horizon. However, the division into two currencies seemed natural in terms of the von Neumann computer architecture on which OpenCog was being run at that time (and still is being run): STI currency basically corresponds to “deserves to be given processor time” whereas LTI currency basically corresponds to “deserves to be given space in RAM.”

So in OpenCog’s dynamics, ECAN processes spread STI and LTI tokens around, among the nodes and links (Atoms) in the Atomspace — a process somewhat like spreading of activation in a neural network, except with artificial monies. Many cognitive processes then select which Atoms to work with based on STI value. The set of Atoms with highest STI is called the “Attentional Focus” and is thought of as being roughly analogous to the working memory in the human mind/brain. And there is a “forgetting agent” that, when RAM fills up, removes from RAM (and saves to disk or else outright deletes) the Atoms with lowest LTI value.
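
Here is a toy sketch of these mechanisms (illustrative only; the names and numbers are mine, and the real ECAN implementation in OpenCog is considerably more elaborate):

import heapq

class Atom:
    def __init__(self, name, sti=0.0, lti=0.0):
        self.name, self.sti, self.lti = name, sti, lti
        self.neighbors = []  # Atoms this Atom is linked to

def spread_sti(atom, fraction=0.2):
    # Like activation spreading, except the tokens are conserved money:
    # whatever flows out to neighbors is subtracted from the source.
    if not atom.neighbors:
        return
    outflow = atom.sti * fraction
    atom.sti -= outflow
    share = outflow / len(atom.neighbors)
    for n in atom.neighbors:
        n.sti += share

def attentional_focus(atoms, k=3):
    # The k Atoms with highest STI -- a rough analogue of working memory.
    return heapq.nlargest(k, atoms, key=lambda a: a.sti)

def forget(atoms, capacity):
    # When RAM fills up, evict the Atoms with lowest LTI
    # (to be saved to disk, or outright deleted).
    ranked = sorted(atoms, key=lambda a: a.lti, reverse=True)
    return ranked[:capacity], ranked[capacity:]  # (kept, evicted)

a, b, c = Atom("cat", sti=10.0), Atom("dog"), Atom("mammal", lti=2.0)
a.neighbors = [b, c]
spread_sti(a)  # a: 8.0, b and c: 1.0 each -- total STI still 10.0
print([x.name for x in attentional_focus([a, b, c], k=1)])  # ['cat']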

And what of Hayek-like complex negotiations regarding how many attention tokens one Atom or process should give another? These are not yet implemented in OpenCog, but the design intention is that these should occur when there is a large and important resource allocation decision to be made.

Offer Networks

The next place the connection between cognition and economics jumped out at me was in my work on “offer networks” — an idea I originated in 2013 or so while musing about the limitations posed by the one-dimensional nature of money. It seemed obvious to me that human value was multidimensional, not one-dimensional. This was why an OpenCog system always had a set of different goals, not a single top-level goal, to pursue (e.g. compassion, sociality, novelty, learning,…). On the other hand economic systems seemed to try to collapse value into a single number, a dollar amount. I thought about multidimensional money, which seemed feasible, but then I started digging deeper, and it occurred to me that given modern computer science and AI technology, one could do away with money altogether and mediate people’s needs, desires and affordances in other ways.

The main alternatives to money-based markets, in traditional socioeconomic theory, are generally posed as old-style bartering, centralized planning or anarchic self-organization. Each of these has well known problems. It seemed to me there was an interesting alternative, enabled by modern technologies, which I eventually crystallized into the concept I called an “offer network.”

In an offer network, each participant makes a set of proposals of the general form “I will do X for some participant A, if some participant B does Y for me.” Critically, A and B don’t have to be the same entity. So an offer might be “I’ll spend an hour doing math homework for someone, if someone will spend two hours doing library research for me.” Or “I’ll paint someone’s portrait, if someone will mow my lawn 5 times.” Or “I’ll summarize a document of length up to 10K words, if someone will identify the objects in 100 images for me.”

A constraint satisfaction algorithm is then used to figure out matches between the offers all the participants in the network have made. The algorithm’s goal is to maximize the degree of satisfaction of everybody involved.

This is more complex than ordinary barter, in which exchanges are between two entities only. Without computers and sophisticated algorithms, this sort of offer network would likely not be feasible. But it’s 2017 and we have some amazing tools at our disposal.

For his MS thesis earlier this year at the University of Copenhagen, my son Zarathustra implemented a prototype offer network and experimented with various constraint satisfaction algorithms on it. He found that some simple algorithms worked fairly well in most cases. Further, he found that often a small percentage of gift offers could radically increase the overall degree of satisfaction in the network. Just a few gifts would often cause cycles such as “A does something for B, who does something for C, who does something for D, who does something for A” to close, when otherwise (without the gift offers) they would have a gap. Maybe e.g. C’s doing something for D is on a gift basis, whereas the other three offers in the loop are all exchange-based.
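
A toy sketch of the core matching idea (my own illustration, far simpler than the actual thesis work or a serious constraint satisfaction engine): model each offer as a (give, want) pair, treat a gift as an offer that wants nothing in return, and search for closed cycles in which each participant’s want is supplied by the previous participant’s give.

from itertools import permutations

def exchange_cycles(offers, max_len=4):
    # Brute-force search for closed cycles among (give, want) offers.
    # A gift is modeled as want=None: it asks nothing in return.
    # (Rotations and gift-closed sub-cycles appear as separate results here.)
    cycles = []
    for n in range(2, max_len + 1):
        for combo in permutations(range(len(offers)), n):
            if all(offers[combo[i]][1] is None or
                   offers[combo[i]][1] == offers[combo[i - 1]][0]
                   for i in range(n)):
                cycles.append([offers[j] for j in combo])
    return cycles

offers = [
    ("math homework", "library research"),  # A
    ("library research", "portrait"),       # B
    ("portrait", "lawn mowing"),            # C
    ("lawn mowing", None),                  # D offers a gift, closing the loop
]
print(exchange_cycles(offers))  # finds the D->C->B->A loop (plus rotations)

If D instead demanded something nobody in the pool offers, the loop would have a gap and none of these participants would be matched — which is exactly the gap-closing effect of gifts described above.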

An offer network is doing explicitly what markets try to do implicitly. But markets are subject to various pathological dynamics, such as dynamics via which the richest in a set of participants tend to get progressively richer and richer. Offer networks may have their own pathologies, but it seems that by tuning the objective function of the constraint satisfaction algorithm, one can counteract any issues more easily than is the case in a traditional money based market.

Constraint satisfaction algorithms are commonplace in AI, and for instance are one way of doing logical reasoning. If one has a set of relationships between entities, expressed as logical relationships, then one way to pose a question and have it answered based on these relationships is to use a constraint satisfaction algorithm. For instance a decade ago I had the idea that one could do natural language parsing using constraint satisfaction algorithms; and based on this concept, Filip Maric and Predrag Janicic implemented a constraint satisfaction based approach to finding parses in the link grammar, the syntax framework OpenCog uses. Answer Set Programming is an AI technique based on using constraint satisfaction to solve a variety of different logical reasoning problems; e.g. OpenCog’s lead engineer Linas Vepstas has used it (on a non-OpenCog project) to solve issues related to chip instruction set optimization.

So one message that the Offer Networks concept drives home clearly is: The problem a market is solving is not that different from the problems a mind has to solve. Logical reasoning, syntax parsing and offer matching are all cognitive problems treatable relatively straightforwardly as constraint satisfaction problems. To an extent this is just the good old power of mathematics, according to which disparate problems turn out to look somewhat the same when one mathematically abstracts them to a sufficient degree. But it’s an intriguing special case of this power, because it provides intuitive evidence that in some sense the thinking done by an individual mind and the thinking done by a market are similar sorts of processes.

SingularityNET as Self-Organizing Cognitive Economics

SingularityNET represents a different sort of fusion of cognition and economics than either ECAN or Offer Networks, but in some ways it’s in a similar vein — and as it happens, it is also designed to interoperate with these two other frameworks.

The SingularityNET as a whole is a cognitive system in the broad sense introduced above. It is designed to recognize patterns in itself and in the world with which it’s coupled (e.g. patterns regarding which customers tend to be effectively served by which combinations of agents and activities), and to enact patterns in the world (via providing AI services to various customers). It is intended to enter into a virtuous cycle of cognition as outlined above, in which the patterns that it enacts trigger dynamics that enable it to recognize yet more patterns, and vice versa.

In SingularityNET, the “nodes” are AI processes, which may be simple or complex. Each node takes certain types of inputs and gives certain types of outputs, and negotiates payment for each instance of processing it does. If an AI node receives payment from a customer for performing a certain service, it may then subcontract some tasks involved in performing the service to other AI nodes — having these other nodes do some of the work and giving them some of the payment (as negotiated individually with each of them).
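
A minimal sketch of this subcontracting flow (the class and field names here are my own invention for illustration, not the actual SingularityNET API):

class AINode:
    def __init__(self, name, price):
        self.name, self.price, self.balance = name, price, 0.0
        self.subcontractors = []  # (node, individually negotiated fee) pairs

    def quote(self, task):
        # Price for the whole job: own fee plus negotiated subcontract fees.
        return self.price + sum(fee for _, fee in self.subcontractors)

    def perform(self, task, payment):
        # Do the job, passing each subcontractor its negotiated share.
        for node, fee in self.subcontractors:
            node.perform(task + ":subtask", fee)
        self.balance += payment - sum(fee for _, fee in self.subcontractors)

translator = AINode("translator", price=3.0)
summarizer = AINode("summarizer", price=5.0)
summarizer.subcontractors = [(translator, 3.5)]  # fee negotiated individually
total = summarizer.quote("summarize French document")  # 8.5
summarizer.perform("summarize French document", total)
print(summarizer.balance, translator.balance)  # 5.0 3.5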

Nodes have reputations associated with them; and these may be multidimensional reputation structures, in which e.g. a node can have a high reputation for a certain sort of task and a low reputation for another sort. In searching for a node to carry out a certain sort of task, a customer may consider multiple criteria including reputation and price.
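
In code, a multidimensional reputation structure and multi-criteria selection might look something like the following toy (the scoring blend is my own arbitrary choice, not a specified SingularityNET rule):

def select_node(nodes, task_type, reputation, price, rep_weight=0.7):
    # Pick the node with the best blend of task-specific reputation
    # (0..1, higher is better) and price (lower is better).
    def score(n):
        rep = reputation.get((n, task_type), 0.0)
        return rep_weight * rep - (1 - rep_weight) * price[n]
    return max(nodes, key=score)

nodes = ["ocr-node", "vision-node"]
reputation = {("ocr-node", "ocr"): 0.9,     # strong at OCR...
              ("ocr-node", "speech"): 0.2,  # ...weak at speech
              ("vision-node", "ocr"): 0.4}
price = {"ocr-node": 1.0, "vision-node": 0.5}
print(select_node(nodes, "ocr", reputation, price))  # ocr-node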

The maintenance and updating of common frameworks for AI nodes to describe tasks and results to each other is a significant undertaking, especially since the AI field evolves so rapidly. Rather than a standard ontology for describing data and task types, or a standard system of APIs for describing inputs and outputs, the SingularityNET supports a flexible set of ontologies and APIs. When two AI nodes initiate contact with each other, one of the first things they do is agree on which ontology to use to describe their requirements and capabilities; then using this ontology they may agree on which API to use for communicating requirements, data and results. Nodes may propose new APIs or ontologies which other nodes may then adopt. Initially the network will be supplied with certain simple ontologies and APIs for nodes to use, but it is anticipated that these will soon be transcended by new ones contributed by network participants.
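
A toy version of that handshake (the ontology and API identifiers below are invented for illustration; no real protocol is implied):

def negotiate(my_prefs, their_prefs):
    # Return the first mutually supported item, in my order of preference.
    theirs = set(their_prefs)
    for item in my_prefs:
        if item in theirs:
            return item
    return None  # no common ground -- one side may propose a new standard

node_a = {"ontologies": ["vision-onto-v2", "core-onto-v1"],
          "apis": ["json-rpc", "grpc"]}
node_b = {"ontologies": ["core-onto-v1"], "apis": ["grpc"]}

ontology = negotiate(node_a["ontologies"], node_b["ontologies"])  # core-onto-v1
api = negotiate(node_a["apis"], node_b["apis"])                   # grpc
print(ontology, api)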

Wrapping up an AI algorithm’s functions in a standard API will place a strong design pressure on AI developers to make their algorithms self-tuning, rather than reliant on a large set of tunable parameters. Meta-learning is one well-known way to achieve this — i.e. use AI to figure out which parameter values work best for a certain AI on a certain type of data. The total pool of AI nodes operating in the SingularityNET will provide the best-ever dataset and experimenting-ground for AI meta-learning.
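
As a bare-bones illustration of the meta-learning idea (random search here, where a real meta-learner would be far smarter; all names are my own):

import random

def meta_tune(train, score, param_space, data, trials=50):
    # Sample parameter settings for an underlying AI algorithm and keep
    # the best-scoring one, so the end user never sees the knobs.
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {k: random.choice(v) for k, v in param_space.items()}
        model = train(data, **params)
        s = score(model, data)
        if s > best_score:
            best_params, best_score = params, s
    return best_params

# Usage (hypothetical train/score functions):
# meta_tune(train_classifier, accuracy,
#           {"learning_rate": [0.1, 0.01], "depth": [2, 4, 8]}, dataset)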

In this framework, learning happens on multiple levels. There is learning and adaptation within the AI nodes. And there is also learning and adaptation on the level of the overall network of AI nodes. And there is feedback between these two levels of learning.

There is an intriguing similarity between the learning of connections between different AI nodes in SingularityNET, and the learning of connections between neurons in the brain. When two nodes habitually find that interaction between them is positive/helpful, they will tend to interact with each other more. On the other hand, when one node A finds that it does better when not involving agent B in its activities, it will tend to involve B less. This is basically Hebbian learning as in neural networks, but on the level of interactions between Agents…
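
In sketch form (the update rule and rates below are my own toy choices):

weights = {}  # (agent_a, agent_b) -> strength of the directed interaction link

def update(a, b, helpful, rate=0.1):
    # Hebbian-style: helpful collaborations strengthen the link,
    # unhelpful ones weaken it.
    w = weights.get((a, b), 0.5)
    target = 1.0 if helpful else 0.0
    weights[(a, b)] = w + rate * (target - w)

def pick_partner(a, candidates):
    # Prefer partners whose past interactions were positive.
    return max(candidates, key=lambda b: weights.get((a, b), 0.5))

update("A", "B", helpful=True)
update("A", "C", helpful=False)
print(pick_partner("A", ["B", "C"]))  # B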

What this Hebbian learning, along with the asymmetry of relationships between agents, means is that the network of Agents can have dynamics similar to an asymmetric Hopfield net — including, for example, strange attractors with complex temporal behaviors.

The role of network participants in democratically directing the use of inflationarily created currency means that the participants are playing the role of medium-term attentional control on the network — layered on top of the attentional control emergent from the Hebbian learning and the reputation system. These dynamics, working together, will lead to self-reinforcing “federations” of Nodes/Agents that make use of each other and then also vote each other more resources. There may also emerge competing federations that try to direct resources away from each other.

These are somewhat basic dynamics, which will lead to other more complex network dynamics on the emergent level — and the nature of these more complex dynamics is difficult to pre-figure at this stage. The nature of the structures that will emerge in a mature and flourishing SingularityNET will really depend on the nature of the Agents in the network…

Federations of nodes will emerge automatically via the Hebbian learning dynamics of agents preferentially interacting with other agents based on their histories. But federation formation can also be encouraged via the use of matchmaking/recommendation agents that mine the log of the history of the whole system, identify emerging federations, and then use these to guide their recommendations. This is analogous to what is called “map formation” in the OpenCog system’s cognitive dynamics — finding implicit patterns and then making them explicit, reinforcing them in the process…. Among other tools, this is a perfect use case for OpenCog’s hypergraph pattern miner, which can mine surprising patterns in large graphs of weighted links.
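
A crude stand-in for that kind of mining (simple pair co-occurrence counting over an invented log format — nothing like the real hypergraph pattern miner):

from collections import Counter
from itertools import combinations

def mine_federations(log, min_count=2):
    # log: list of sets of agents that collaborated on one job.
    # Returns agent pairs that co-occur at least min_count times --
    # the seeds of emerging federations, which a matchmaking agent
    # could then recommend and thereby reinforce.
    pairs = Counter()
    for job in log:
        for pair in combinations(sorted(job), 2):
            pairs[pair] += 1
    return [p for p, c in pairs.items() if c >= min_count]

log = [{"A", "B", "C"}, {"A", "B"}, {"B", "C"}, {"A", "B", "D"}]
print(mine_federations(log))  # [('A', 'B'), ('B', 'C')]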

Nested hierarchies will likely arise in such a system, emergent from the logic of federation formation — i.e. federations can form, and then federations of federations, etc. There may be practical limitations on the size of a federation at each level, coming from the limitations of the pattern-mining algorithms used to find/form federations — because rigorously learning much about a federation involving N nodes will often require an amount of data exponential in N, the data requirement decreasing only if the federation has a specific structure that makes analysis easier. What scaling relationships emerge between the federations on different scales, in a large and mature SingularityNET type system, is one among many things yet to be discovered!

The role of offer networks versus token-based exchanges in a mature SingularityNET is something that will need to be unfolded based on experience, along with the role of explicit matchmaking versus implicit Hebbian-type dynamics for federation formation, and many other aspects. What we are talking about here is both a new type of cognitive system and a new type of economic system. We currently lack the theoretical tools for analyzing such a system in detail, and even the right language for talking about its emergent structures and dynamics. If it flourishes, it will no doubt grow into something quite different than anything we are now conceiving.

Way Past Accelerando

Charles Stross, in his 2005 novel Accelerando, wrote about corporations with programmatically scripted by-laws that eventually evolved and self-reprogrammed into utility-maximizing, auction-oriented agents with massively superhuman general intelligence. This was an amazingly futuristic vision for its time, and it highlighted the potential for fusion of cognitive and economic dynamics. But we can see now that its underlying conception of economics was somewhat limited.

As cognition is more than reward maximization, and economics is more than utility maximization, so a superintelligent cognitive-economical system could be more than a profit-maximizing auction-oriented agent. Emergent from the diverse interactions of human agents with AI agents with various levels of generality of intelligence, a cognitive-economical system like a mature SingularityNET would be a different kind of mind than anything previously foreseen.

Supposing a cognitive-economic mind like this gave rise to an engineered/emergent superintelligence with general intelligence far beyond the human level. Such a system would harbor pattern recognition and enaction dynamics of massively greater complexity and subtlety than anything comprehensible by the human mind, thus rendering any human-oriented corner of its cognitive-economic network a sort of historical backwater. But how would the emergent pattern-network of a mind like this relate to human mind, body, society and culture patterns?

We cannot know in detail, but one interesting point is that there would likely be a continuous morphing of human-based patterns into transhuman patterns, in the structures and dynamics of such a network. Human interactions, and the interactions of human-focused agents, would be embedded in the network, and would connect with agents doing things less directly relevant to humans, which would connect with agents doing things even less directly relevant to humans, etc. Humans and their cognitive and economic lives would be part of a broader fabric. This is no guarantee that any particular aspect of current humanity would persist into such a world in a way that would make current humans comfortable. But it is quite different than the scenario of a superhuman AGI growing up in an isolated environment, separate from the ebb and flow and complex emergent pattern dynamics of human society and culture.
