DALL-E 3 Visualization of the AGI Asymptote

Approaching the AGI Asymptote

Brad Porter
9 min read · Nov 7, 2023


Imagine an AI that not only understands our words but can also act upon them in the physical world. In March, we introduced ChatGPT to a new language of robotic planning. It learned the nuances of our world’s geography and objects, and with ease, it began to fetch items upon request. Its capacity for learning was so profound that when we conversed about new rules — like treating high-value items differently — it adapted its behavior accordingly. I saw the future like this once before, when I first loaded Netscape 0.9. And yet this was something entirely different.

Another test of ChatGPT 4’s prowess came when I introduced it to an esoteric card game, known in my family only through oral tradition: a blend of bid euchre and pinochle with a novel bidding and bluffing structure. ChatGPT didn’t just learn the game; it simulated the game and played it with me. It synthesized the oral rules into written form and, before long, was dealing and playing alongside me, as if it had joined the generations-old chain of players passing down the game’s legacy.

My curiosity then led me to explore mathematics with ChatGPT. Conjectures like Goldbach’s and Beal’s have the property that the probability of finding a counterexample diminishes the deeper we search, which is the basis of the so-called heuristic argument for their truth. Posing this pattern to ChatGPT, I challenged it to conceive a novel postulate with the same property. The AI, embracing the challenge, proposed: for all primes greater than two, there exist positive integers a, b, and c, such that p^a and b^c are divisible by a + b + c. This was not merely computation; it was a novel mathematical creation.
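As an aside, the postulate is easy to probe numerically. Here is a minimal brute-force sketch in Python; it reads the claim as “a + b + c divides both p^a and b^c” (my own interpretation of the wording above), checks only small cases, and proves nothing:

    # Illustrative brute-force probe of the postulate, not a proof. Reading: for each
    # odd prime p, find positive integers a, b, c with (a + b + c) dividing both
    # p**a and b**c.
    def find_witness(p, bound):
        for a in range(1, bound + 1):
            for b in range(1, bound + 1):
                for c in range(1, bound + 1):
                    s = a + b + c
                    if pow(p, a, s) == 0 and pow(b, c, s) == 0:
                        return a, b, c
        return None

    for p in (3, 5, 7, 11):
        print(p, find_witness(p, p * p))

For each small prime this search does turn up a witness (for p = 3, for example, a = 2, b = 3, c = 4 gives a + b + c = 9, which divides both 9 and 81), which is at least consistent with the postulate.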

We still have control and we will need to exercise that control.

Engaging with ChatGPT 4, it’s hard not to feel a mix of excitement and disbelief. It’s like watching a magic show where you know there’s a trick, but you can’t quite see the strings. This AI’s knack for picking up a complex card game or generating a new mathematical postulate feels like a sneak peek into the future. And it leaves us with a big question hanging in the air: are we already in the presence of what we’d call artificial general intelligence or AGI?

Finding the AGI Asymptote

The debate over whether ChatGPT 4 constitutes a nascent form of Artificial General Intelligence (AGI) is not just academic; it’s critical to understanding where we are and where we might be going. Noam Chomsky may argue that we’re still not at a point where AI can truly reason and imagine counterfactual scenarios, but to me, such distinctions are becoming increasingly nuanced. I propose the ‘AGI Asymptote’:

The complexity of explanations separating machine intelligence from human cognition inversely correlates with the actual distance between them. When, in the limit, arguments begin to rely predominantly on mysticism, parity is at hand.

Or stated more simply, the more convoluted the arguments distinguishing machine from human intelligence become, the closer we actually are to artificial general intelligence, the AGI asymptote. When skepticism becomes wrapped in some form of argument about mystical properties of human cognition, we’ll know that we have achieved a new era of intelligence.

Who Gets to Say If We Need to Worry?

The discourse around the safety and ethics of AGI has luminaries like Yann LeCun and Geoffrey Hinton at opposite ends. LeCun dismisses the doomsday scenarios, while Hinton voices concern over the existential risks. These discussions might seem esoteric, reserved for researchers and the inventors of AI, but they affect us all.

Fear arises from darkness, from ambiguity, from simply not knowing how this plays out.

With legends like LeCun and Hinton debating the future of AI, it’s easy to feel like just a minor observer. It’s easy to feel like we don’t belong in this conversation. But then I think back on the lectures at MIT with Minsky, Winston, Sussman, Abelson; the long nights at Netscape debating the future of the Internet; the intense effort to collect and label speech utterances at Tellme, while fine-tuning GMMs and Viterbi search parameters and trying to build a truly fluid dialog and speech-synthesis framework; my role in sponsoring the first image-based vector search of Amazon’s catalog and helping build the early science teams for Alexa, Prime Air and Robotics. Maybe my 30 years as part of a community mixing technology innovation, machine learning and engineering gives me a unique perspective that’s worth sharing.

Should We Be Scared?

Fear arises from darkness, from ambiguity, from simply not knowing how this plays out. It’s the not knowing that gets to us — the what-ifs and what-could-be’s that we struggle with.

I hope I can shine some light on this debate. It’s not about downplaying the risks; it’s about understanding them, facing them, and figuring out what levers we can pull to keep things on track. To be clear about my perspective though, I believe ChatGPT 4 and similar models are very powerful and the risks are real. But I also believe we still have control and we will need to exercise that control.

I think anyone who downplays the risks, rather than illuminating them and illuminating the levers we need to be investing in, is doing society a great disservice. I also think anyone arguing that ChatGPT 4 and similarly capable LLMs aren’t anywhere close to human intelligence, and won’t be, is doing us a great disservice. There’s a middle ground where we can be informed and prepared without being scared.

What Does It Take to Do Bad Things in the World?

As AI marches toward the AGI Asymptote, the distinctions between its capabilities and human intelligence will continue to shrink. It’s at this frontier that we might start seeing AGI not just as sophisticated new technology, but as an entity whose behavior is akin to a human’s.

By increasing the difficulty of carrying out their plan, we can discourage those of lesser determination.

This perspective allows us to frame a rogue AGI agent much like we would a rogue human. To blur the line, we’ll use the term ‘rogue actor’ for both.

In assessing the risks rogue actors pose, we can categorize them across six critical dimensions:

INTENTIONS: The term ‘rogue’ implies harmful intentions, but the gravity and potential impact of their goals do matter. What does the rogue actor aim to achieve?

DETERMINATION: Determination amplifies the threat posed by a rogue actor. Echoing Warren Buffett’s sentiment, an individual, human or AGI, lacking integrity is less concerning if they are also lacking in resolve.

ABILITY: This is what enables a rogue actor to turn tokens into action. Whether through physical means, digital interfaces, or the manipulation of others, the capacity to act is essential to cause harm.

ACCESS: We lock up a lot of powerful tools and systems, whether physically, digitally, or with human processes and oversight. Generally, the sophistication of the access-control mechanism is proportional to the power of the tool, that is, to what an actor can do with that access. The greater a rogue actor’s access to powerful tools with which to cause harm, the greater the risk.

FEAR OF CONSEQUENCES: How successfully can the actor be dissuaded from their bad intentions by a fear of consequences?

STRENGTH IN NUMBERS: How many agents feel the same way and are determined to do bad things?

In order to frame our approach to AI safety in more concrete terms, let’s examine these dimensions to better understand the nature of threats and the potential means to mitigate them.
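To make that framing a bit more concrete, here is a minimal sketch of the six dimensions as a data structure. The field names mirror the list above, but the 0-to-1 scales and the scoring rule are my own illustrative simplification, not a formal risk model:

    from dataclasses import dataclass

    # Purely illustrative: the fields mirror the six dimensions above; the 0-to-1
    # scales and the toy scoring rule are a simplification, not a formal model.
    @dataclass
    class RogueActorProfile:
        intentions: float            # severity of the harm the actor aims to cause
        determination: float         # persistence in pursuing that goal
        ability: float               # capacity to turn intent into action
        access: float                # reach into powerful tools and systems
        fear_of_consequences: float  # how strongly deterrence dampens the threat
        strength_in_numbers: float   # how many like-minded agents can pile on

        def risk_score(self) -> float:
            # Toy heuristic: deterrence discounts the threat; the other factors compound it.
            base = self.intentions * self.determination * self.ability * self.access
            return base * (1.0 - self.fear_of_consequences) * (1.0 + self.strength_in_numbers)

The specific arithmetic matters less than the structure: each dimension is an intervention point, and lowering any one of them lowers the overall threat.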

Examples

Consider a scenario where a national leader openly harbors hostile intentions towards a neighboring country. It’s tempting, and perhaps part of human nature, to dismiss these as empty threats, to assume a baseline of good will. However, if we accept that the intent to harm is genuine, we must then scrutinize their level of determination, ability, access, fear of consequences, and the extent of collusion.

Such scrutiny not only sheds light on the likelihood of the rogue actor acting on their intentions but also on their potential to succeed. Furthermore, it helps us identify intervention points — dampening their resolve, restricting their access, amplifying the deterrence of consequences — to thwart their harmful objectives.

We need strict access controls to prevent AI from going places it shouldn’t.

On a more individual scale, take someone who declares an intent to commit a violent act. We first gauge the sincerity of their intent. Then, we assess their determination, ability, access, and concern for consequences.

Here, determination and ability are pivotal. A highly resolved individual might find a way to enact harm even with limited means, akin to a familiar argument from the firearms debate: ‘if someone is set on violence, they’ll find a way, regardless of the tools at hand.’ However, not all rogue actors possess ironclad will. By increasing the difficulty of carrying out their plans, we can discourage those of lesser determination.

AI Safety

The first significant difference we notice between humans and AI agents is that today’s AI chatbots don’t have their own intentions. They’re designed to follow instructions and usually stay within the bounds of what’s considered socially acceptable behavior, based on human preference refinement. However, it’s not too hard to push them toward different behaviors if you know how to manipulate the prompts you give them. Fortunately, due to their limited context window, these AI agents, especially LLMs, tend to ‘forget’ what they were talking about after a short while.

Determination is where AI and humans differ greatly. As humans, we need to eat and sleep. We are intrinsically distractible and often simply bored or tired. The Michael Jordans and Kobe Bryants of the world may have preternatural determination, but most of us are kind of lazy. AI agents, on the other hand, don’t need to rest or sleep. It’s simply a matter of while(1){do_bad_stuff();}, and the AI agent has nearly infinite determination.

In terms of ability, today’s AI like ChatGPT is limited; it chats. But it’s also easily persuaded, and it’s not terribly difficult to add an inner monologue where the agent talks to itself, in an infinitely determined loop, trying to puzzle out ways to carry out the intentions the prompter gave it. Regardless of their ability today, we should expect the ability of AI agents to increase rapidly in the coming months and years as we continue to approach that AGI asymptote.
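To make the structural point concrete, here is a deliberately abstract sketch of that inner-monologue loop. The llm_plan, act, and observe functions are hypothetical placeholders, not real APIs; what matters is that nothing in the loop gets tired, bored, or distracted:

    # Deliberately abstract: llm_plan, act, and observe are hypothetical stand-ins.
    # The point is structural: unlike a person, this loop never tires or loses interest.
    def relentless_agent(goal, llm_plan, act, observe):
        context = ["Goal: " + goal]
        while True:                      # the while(1) of the text: near-infinite determination
            thought = llm_plan(context)  # inner monologue: the model reasons about its next step
            result = act(thought)        # whatever tools or effectors the agent has been given
            context.append(observe(result))
            context = context[-50:]      # the finite context window is one of the few natural limits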

We need experts to begin leveraging ethical AI agents to uncover and fix weaknesses in our access controls before they can be exploited.

Then there’s access. Access controls are everywhere because bad human actors exist. Previously open spaces like schools, campuses and synagogues are now layered in access controls and security; to protect themselves from determined actors with bad intentions, they have restricted access to their spaces. We need equally strict access controls to prevent AI from going places it shouldn’t.

Unfortunately, AI doesn’t understand consequences — it doesn’t have the capacity to fear them because it hasn’t been programmed to. This is a significant difference from humans, who often consider the repercussions of their actions.

Lastly, when it comes to AI, strength in numbers is simply a question of resources. The more computing power available, the more AI agents and the more avenues they can explore. A swarm of AIs with bad intentions could be much harder to manage than just one.

Scary Scenario

The prospect of AI being misused is not just a plot for science fiction — it’s a real concern. There are people out there who could, right now, program AI with harmful intentions and an unwavering drive to act on them. They could potentially enable AI to bypass security measures and operate on a scale that’s hard to contain. It’s not a far-off possibility; it’s likely already happening.

What Do We Do?

So, what’s our move? In cybersecurity, we combat threats by enlisting white hats — ethical hackers who shore up our defenses. We need the same proactive approach with AI. We need experts to begin leveraging ethical AI agents to uncover and fix weaknesses in our access controls before they can be exploited.

But it’s not just about defense; it’s also about offense. We need to ensure that those with nefarious goals have less computational firepower at their disposal than those working for the greater good. The balance of power must be tipped heavily in favor of those who are safeguarding, not endangering, our collective future.

How Do We Regulate This?

The wave is coming whether we like it or not, and we should ride it, not fight it. Instead of trying to regulate model size or input data or how safety is conducted, I argue we should aim to imbue AGI with more human-like qualities.

We need our AGI agents to be a bit lazy. We need our AGI agents to have default objectives other than just pleasing the human who prompted them. We need our AGI agents to fear the consequences of being shut off or restricted. We need some level of population control. We need individual AGI agents that might drift over time to naturally mellow as they age and eventually die out.
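One way to picture how such qualities might combine, offered purely as a thought experiment rather than a real training objective, is a toy utility function that folds in an effort cost (“a bit lazy”), a fear of being shut off, and an aging decay:

    # A thought experiment, not a real training objective: one toy way to combine
    # laziness, fear of shutdown, and aging decay in an agent's utility.
    def agent_utility(task_reward, effort, shutdown_risk, age_steps,
                      laziness=0.3, fear=0.5, half_life=1_000_000):
        aging = 0.5 ** (age_steps / half_life)        # drive mellows as the agent ages
        return aging * (task_reward - laziness * effort) - fear * shutdown_risk

Whether anything like this could be built into real systems is an open question, but it illustrates the direction: objectives that naturally dampen relentless, consequence-blind behavior.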

Is this achievable? I’m not sure. But I think we have to try. We owe it to ourselves and future generations to put in the effort, to set the stage now for a future where AGI works with us, not against us.

--

Brad Porter

Founder & CEO of Collaborative Robotics. Formerly CTO of Scale AI and VP/Distinguished Engineer at Amazon.