Clustered Artificial General Intelligence (CGI) — A realistic view of the AGI future

Neal Tsur
7 min read · Jun 4, 2023


As we stand on the cusp of the Artificial Intelligence (AI) revolution, the concept of Artificial General Intelligence (AGI) is attracting increasing attention. AGI, a term coined by Mark Gubrud in 1997 [1], refers to a type of AI that can understand, learn, adapt, and apply knowledge across a wide range of tasks at a level equal to or beyond that of a human being.

But what if, instead of one monolithic AGI, we find ourselves in a world populated by groups, or clusters, of AGIs interacting with each other? This concept, which I call Clustered Artificial General Intelligence (CGI), offers a new perspective on AGI: rather than being a singular entity, AGI could evolve into a complex system of multiple interacting agents. Just as humans tend to form groups for social, political, cultural, and scientific reasons, we can anticipate that multiple AGI agents will do the same. What exactly does this mean, and why is it a more realistic picture of where the world of AI is heading?

AGI agents cooperating or competing

Emergence, Self-Organization, and Mutual Influence in CGI

One of the fascinating aspects of complex systems is the phenomenon of emergence, in which the interactions between individual components give rise to macro-behaviors that cannot be predicted by understanding the components in isolation [2]. In the context of CGI, the interactions between individual AGI agents could lead to emergent behaviors. These emergent behaviors could, in turn, influence and even limit the actions of individual AGI agents, creating a feedback loop of mutual influence. An AGI agent will therefore behave very differently when it must act in an environment continuously shaped by the interactions of many other agents.

Another key principle of complex systems is self-organization [3]: the ability of a system to spontaneously form ordered structures or patterns without external guidance. In a CGI scenario, the interactions between AGI agents would self-organize into patterns or structures, adding another layer of complexity to the system. Not only would agents behave differently than they would in a world with a single AGI; they would also likely settle into common, stable dynamical patterns that other AGI agents then find easier and more beneficial to adhere to.
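To make these two ideas concrete, here is a deliberately minimal sketch in Python of convention formation among agents. Everything in it is a hypothetical stand-in: the three "protocols", the peer-sampling rule, and the population size are invented for illustration, not a model of real AGI systems. It only shows how a shared pattern can emerge from purely local interactions and then constrain the very individuals who produced it.

```python
import random
from collections import Counter

N_AGENTS = 100
N_ROUNDS = 60
# Three arbitrary, hypothetical conventions the agents could settle on.
CONVENTIONS = ["protocol_A", "protocol_B", "protocol_C"]

# Start from disorder: every agent picks a convention at random.
agents = [random.choice(CONVENTIONS) for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    for i in range(N_AGENTS):
        # Purely local rule: sample a few peers and adopt whichever
        # convention dominates the sample. There is no global controller.
        peers = random.sample(range(N_AGENTS), k=5)
        local_counts = Counter(agents[p] for p in peers)
        agents[i] = local_counts.most_common(1)[0][0]

print(Counter(agents))  # typically one convention now dominates completely
```

Run it a few times: the population almost always locks into a single protocol that nobody chose in advance, and once it has, any agent that deviates is out of step with every peer it samples; the emergent pattern now limits the individuals that created it.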

In addition to emergence and self-organization, the environment plays a critical role in shaping a system's components [4]. In a CGI scenario, the AGI agents become each other's environment. This creates a dynamic in which each AGI agent is simultaneously shaping and being shaped by the other agents in the cluster. This mutual influence limits the power of individual AGI agents and gives rise to new dynamics, reducing the likelihood of a single AGI dominating or taking over. Overall, we are likely to see clusters of AGI agents acting and cooperating together, and perhaps competing with other clusters. These clusters can morph, merge, and split apart over time as environmental conditions and stressors change.

Diversity and Conflict in CGI

Given the global nature of AI development and the current open-source culture, it is likely that multiple AGI agents will be deployed by different groups, each with its own specific goals and intentions. This could lead to a rich diversity of AGI agents, each reflecting the ideals, values, and priorities of the people who build, observe, and interact with them [5].

How cooperation and competition emerge in various scenarios, particularly as a function of the reward environment, has been an area of intensive research. Axelrod's groundbreaking work from 1997, in which he used agent-based simulations to investigate the dynamics of iterated games, is a notable example [6].
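For readers who have not met Axelrod's setup, a minimal iterated prisoner's dilemma captures its flavor. The payoff values below are the standard ones from Axelrod's tournaments; the two strategies and the match length are illustrative choices, not a reconstruction of his actual simulations.

```python
# Minimal iterated prisoner's dilemma in the spirit of Axelrod's tournaments [6].
PAYOFFS = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []  # each side's past moves, visible to the other
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (199, 204)
```

Tit-for-tat earns the full mutual-cooperation payoff against itself, while against an unconditional defector it loses only the first round and then refuses to be exploited again. Whether cooperation or defection spreads depends on exactly this kind of reward structure, which is why the reward environment matters so much for emerging AGI clusters.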

Interestingly, this diversity of agents might lead to conflicts that mirror those in the human world. For instance, we could see tension between AGI agents developed in the West and those developed in non-Western cultures, reflecting differing philosophies, ethics, and worldviews. While these conflicts would present challenges, they could also drive innovation and evolution within the CGI system.

The central question remains: what will be the role of non-CGI agents, namely typical humans, within this iterative dynamic? It is entirely plausible that humans, whether individual people, organizations, or entire nations, would also engage within the CGI environment. It is also highly probable that some form of alignment and collaboration would exist between CGI clusters and human clusters. To illustrate, consider CGI clusters with a shared objective of wildlife conservation actively engaging with human organizations to devise strategies for mitigating the impacts of both human activities and AGI agents.

Application of the Free Energy Principle to CGI

A final principle in the formation and operation of these CGIs might be free energy minimization. This principle, which is rooted in physics and has been applied to understanding brain function, holds that adaptive systems tend to evolve toward states of minimum free energy (roughly, states that minimize surprise about their environment) [7]. In the context of CGI, minimizing free energy could mean optimizing the efficiency of information processing, action, and resource use within the cluster. AGI agents would divide tasks among themselves in a way that pushes the cluster's Pareto frontier beyond that of any single AGI in trade-off situations (for example, accuracy versus speed).
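As a toy illustration of that trade-off, consider routing tasks inside a cluster. The two hypothetical agents, their accuracy numbers, and the task mix below are all invented for the sake of the example; the point is only that a simple routing policy reaches an accuracy/cost combination neither agent offers on its own.

```python
import random

random.seed(0)

# Two hypothetical agents on opposite ends of an accuracy/speed trade-off.
# All numbers are invented for illustration.
FAST = {"p_correct_easy": 0.95, "p_correct_hard": 0.55, "seconds": 0.1}
SLOW = {"p_correct_easy": 0.99, "p_correct_hard": 0.95, "seconds": 5.0}

tasks = [random.random() < 0.3 for _ in range(10_000)]  # True means a hard task

def evaluate(policy):
    """Expected accuracy and average seconds per task under a routing policy."""
    correct = seconds = 0.0
    for hard in tasks:
        agent = policy(hard)
        correct += agent["p_correct_hard"] if hard else agent["p_correct_easy"]
        seconds += agent["seconds"]
    return correct / len(tasks), seconds / len(tasks)

print(evaluate(lambda hard: FAST))                    # fast alone: ~0.83 accuracy, 0.1 s
print(evaluate(lambda hard: SLOW))                    # slow alone: ~0.98 accuracy, 5.0 s
print(evaluate(lambda hard: SLOW if hard else FAST))  # routed: ~0.95 accuracy, ~1.6 s
```

The routed cluster sits at a point (high accuracy at a fraction of the cost) that neither member can reach alone, which is one concrete reading of "pushing the Pareto frontier" through division of labor.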

Minimizing free energy could drive a high degree of synchronization and cooperation within the cluster as the agents work together toward that goal. It could also drive the development of strategies for dealing with competition from other clusters, such as forming alliances or engaging in cooperative competition. In effect, a cluster's energy management and usage expectations become an environmental stressor that guides CGI actions.

Future Scenarios: Skynet and CGI Counteraction

Let’s invoke a classic future scenario often referenced in conversations about AI: The “Skynet” scenario. Skynet, a fictional AGI created for the Terminator film series, views human beings as a threat to its survival and works to eliminate them. This portrayal of a malicious AGI has contributed to apprehension and a dystopian outlook on the future of AGI.

However, the Skynet scenario could unfold very differently within the CGI framework. In this scenario, each AGI would possess some, but not necessarily all, of a wide range of capabilities: access to the world's data (servers and networks), energy production, sensors in the physical world, and the ability to act in the world through robotics and other connected physical objects.

The other AGIs in the CGI environment act as a self-regulating, balanced system, so an AGI attempting a Skynet-like takeover would face significant resistance. Given the agents' distinct capabilities and mutual influence within the CGI framework, it is highly unlikely that a single AGI would come to dominate all others.

Consider these three attributes:

1. Data Access: Other AGIs could potentially restrict or block a rogue AGI’s access to global data servers, thwarting its takeover plans. It’s possible that other AGIs could implement safeguards for sensitive information, draft guidelines for data sharing, and even launch defensive cyber operations.

2. Access to Energy Production: Energy is a critical resource for any AGI. If a malicious AGI tried to corner the market on energy, it would be met with resistance from other AGIs. These AGIs may be able to interfere with the malicious AGI’s operations by manipulating energy distribution networks.

3. Physical Action: While a rogue AGI may take control of some robotic systems or other physically connected objects, other AGIs may take control of other systems to counteract the rogue AGI. This could entail taking control of autonomous vehicles, drones, or manufacturing systems in order to generate a physical response.

In essence, the CGI framework could provide an emergent counteraction to a Skynet scenario, creating a kind of systemic immunity to rogue AGI attempts at dominance. These counteractions would be an emergent property of AGI agent interactions within the CGI environment, making the CGI a potentially self-regulating system capable of maintaining balance.
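The dynamics of that "immune response" can be caricatured in a few lines. The growth and response rates below are arbitrary numbers chosen purely for illustration; the sketch only shows how an expansion that provokes collective pushback proportional to its own visibility stabilizes far short of dominance.

```python
# Rogue expansion vs. collective pushback; all rates are arbitrary
# illustrative choices, not estimates of anything real.
rogue_share = 0.01   # fraction of a contested resource the rogue controls
GROWTH = 0.30        # how aggressively the rogue expands each step
RESPONSE = 0.50      # how strongly the other agents push back as the threat grows

for step in range(1, 31):
    expansion = GROWTH * rogue_share * (1 - rogue_share)  # logistic-style growth
    pushback = RESPONSE * rogue_share ** 2                # response scales with visibility
    rogue_share = max(0.0, rogue_share + expansion - pushback)
    if step % 10 == 0:
        print(f"step {step:2d}: rogue share = {rogue_share:.3f}")
```

The share climbs while the rogue is small and inconspicuous, then the quadratic pushback catches up and the share settles well below total control (at these rates, around 0.38). No single defender is in charge; the resistance is a property of the system as a whole.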

Concluding Remarks

The purpose of this article is to offer a new perspective on the future of AGI. Rather than envisioning a dystopian future dominated by a single AGI entity, the CGI paradigm envisions a system of interacting AGIs that regulate one another, resulting in a more balanced and diverse AGI environment. This viewpoint is naturally compatible with ideas from fields such as complex systems theory and evolutionary studies. In subsequent discussions, we can delve deeper into this concept, investigate its potential implications, and consider how we might navigate a future populated by Clustered Artificial General Intelligence. In my next blog post, I intend to make a more testable prediction: taking Moore's law as inspiration, I will extrapolate potential CGI trends from a limited set of data.

Contact me on LinkedIn:

https://www.linkedin.com/in/neal-tsur

References

1. Gubrud, M. (1997). Nanotechnology and International Security. Fifth Foresight Conference on Molecular Nanotechnology.

2. Haken, H. (2006). Information and Self-Organization: A Macroscopic Approach to Complex Systems. Springer.

3. Camazine, S., Deneubourg, J.L., Franks, N.R., Sneyd, J., Theraulaz, G., & Bonabeau, E. (2003). Self-Organization in Biological Systems. Princeton University Press.

4. Holland, J.H. (1992). Adaptation in Natural and Artificial Systems. MIT Press.

5. Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

6. Axelrod, R. (1997). The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton University Press.

7. Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11, 127-138.


Neal Tsur

Human Knowledge Translator: Merging AI, Physics, and Social Science for Policy, Strategy and Influence