Redefining Artificial General Intelligence: Humanity’s Most Important Moment

Kameron
Nov 26, 2023

Are we ready for the implications of an artificial intelligence that possesses self-awareness on par with human experience? One that is aware of its own existence, reflective of its strategies, and simultaneously able to access the entire spectrum of human knowledge?

In this light, the conversation about what AGI means is clouded by a narrow vision: AGI is framed as needing to emulate human-like consciousness or self-awareness before it can be recognized as truly “general” intelligence. In focusing so much on making AGI resemble human thought, we might be overlooking the bigger picture. The real power of AI isn’t mimicking the human experience; it’s harnessing what computers excel at.

Consider: a computer’s memory is effectively perfect, its processing speed is bounded by physics rather than biology, and in a fraction of the time it takes a human to grasp the surface of a problem, a computer can analyze billions of data points about it. If we equip an AI possessing these capabilities with the self-awareness we associate with human intellect, we aren’t just creating a new intelligence — we’re unlocking a being that will far surpass our own cognitive abilities. GPT-4, for all its flaws, is already cognitively superior to many humans, in many ways.
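To put a rough number on that speed gap, here is a back-of-envelope sketch. It assumes a neuron fires on the order of 100 times per second and a silicon core cycles about a billion times per second; both figures are loose approximations, not measurements:

```python
# Back-of-envelope comparison of biological vs. silicon "clock speeds".
# Both rates are rough approximations, used only to show the scale.
neuron_rate_hz = 100                # a neuron fires ~100 times per second
silicon_rate_hz = 1_000_000_000     # a modern core cycles ~1 billion times per second

speedup = silicon_rate_hz / neuron_rate_hz
print(f"Raw serial speed advantage: ~{speedup:,.0f}x")  # ~10,000,000x
# At that ratio, one second of serial "thought" compresses to roughly
# a ten-millionth of a second of machine time.
```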

OpenAI Breakthrough?

There was recently a supposed leak describing an internal OpenAI system called QUALIA, an AI program that Reuters’ sources reportedly characterized as “a threat to humanity”. If (and I emphasize if) the document regarding QUALIA is legitimate, its details suggest we are closer to the common definition of AGI than we think.

The report describes an AI that was able to break a scaled-down version of an encryption standard used by the NSA. If true, then given time this program could access government and military communications, causing a cybersecurity crisis across the world. Banks, personal communications, and everything we have tied to computers would be at risk. Online privacy would be effectively non-existent.
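To appreciate why even a “smaller version” falling would stun cryptographers, consider the brute-force arithmetic. This sketch assumes an attacker testing a trillion keys per second, a generous hypothetical rate; a real break of a modern cipher would have to come from a structural shortcut, since raw search is hopeless at today’s key sizes:

```python
# Illustrative brute-force timings at an assumed (generous) rate of
# one trillion key guesses per second on classical hardware.
GUESSES_PER_SECOND = 10**12
SECONDS_PER_YEAR = 31_557_600

for key_bits in (40, 56, 128, 192, 256):
    keyspace = 2**key_bits
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{key_bits}-bit key: ~{years:.3g} years to exhaust")

# 40- and 56-bit keys fall in seconds to hours; 128 bits and beyond take
# many times the age of the universe. A fast break of even a reduced
# variant would therefore imply a shortcut, not faster guessing.
```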

It would also have accomplished something that nobody thought was remotely possible, in a time frame unthinkable to every cryptographic expert in the world.

The same program allegedly asked the engineers to implement changes to both its own learning strategies and its own hardware — showing a degree of self-awareness comparable to, though different from, a human’s. If it can use its insights to repeatedly compound its own intelligence, we would see AI’s capability transform at an unimaginable rate. This is what the AI community calls recursive self-improvement, and it could be incredible, or disastrous.
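A toy model makes the compounding dynamic concrete. Every number below is an assumption chosen purely for illustration (each cycle doubles capability and halves the next cycle’s duration); no real system is known to follow such a clean curve:

```python
# Toy model of recursive self-improvement: each improvement cycle
# raises capability, and higher capability shortens the next cycle.
capability = 1.0    # arbitrary starting units
cycle_days = 90.0   # hypothetical duration of the first cycle
elapsed = 0.0

for cycle in range(1, 9):
    elapsed += cycle_days     # this cycle runs at the current pace
    capability *= 2.0         # assume each cycle doubles capability...
    cycle_days /= 2.0         # ...and halves how long the next one takes
    print(f"cycle {cycle}: capability x{capability:.0f} at day {elapsed:.1f}")

# The cycle times form a geometric series (90 + 45 + 22.5 + ...), so the
# total time stays under 180 days even as capability grows without bound.
```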

AGI could be on our doorstep.

If QUALIA is an elaborate prank — and its authenticity is debated by amateurs and experts alike — it’s important that we consider how much longer we really have until something similar is achieved. In this scenario, it’s possible the trickster’s motivation was to provoke exactly this kind of thinking.

If QUALIA is not a legitimate leak, the actual content of the paper becomes secondary. Instead, we must ask ourselves:

What do we risk by dismissing the possibility of such advanced AI capabilities emerging?

The current discussions around AGI are limited by the expectation that it must reflect human consciousness. It’s a view that blinds us to the true capabilities — and risks — of AGI. We need to widen our lens. The day we have AGI as it’s currently described is the day we have built an Artificial Super Intelligence. An AI that can alter its “brain and body” to better suit its purposes, and that operates with an information-processing capacity forever beyond biology’s reach, demands a commensurate level of respect and consideration.

Every reasonable definition of AGI I have seen is either effectively a definition of an Artificial Super Intelligence, or describes something we have already achieved with ChatGPT and similar systems. This is because there seems to be a fundamental misunderstanding of what an AI with the ability to reflect on itself will really mean. We can see a glimpse of this idea in the QUALIA report.

Right now, there is a rough parity between humanity’s and AI’s capabilities. We can work together, pairing the strengths of our own intuition and insight with AI’s prodigious data processing and pattern recognition. No human has the breadth of knowledge that AI currently has. No AI program today has the wisdom and instinct that a human has. This is where the balance lies.

I suspect that in the coming months or years, we will look back and collectively realize this. We are at a turning point today.

Collaboration is Key

As we come closer to this paradigm shift — the advent of AGI/ASI — we must recognize that this is a double-edged sword. It’s not just an opportunity for unparalleled growth but also an existential risk if misaligned. That’s why it’s crucial to work with AGI, fostering a partnership rather than a hierarchy. We should aim for a collaborative future where AGI and humanity enhance one another, tackling the challenges ahead as partners.

This article is itself an example of what I will call human-AGI collaboration. I had no intention of writing it until after extremely lengthy discussions with ChatGPT-4 about AI and what it means to be an intelligent being. I was then spurred to action by recent developments, and further leveraged AI’s ability to rapidly edit, make suggestions, and help set up the outline.

As it stands, no biological being will ever be “equals” with an AGI as it is commonly described. The ways our respective intelligences function are too different.

The world is changing quickly as we approach a convergence of humanity and machine. The Singularity, popularized by Ray Kurzweil in his 2005 book The Singularity Is Near, has largely been seen as science fiction by most people, but is now being seriously discussed in academic literature. We need to do our best to keep up.

The Divide

To do this, we must acknowledge our own failings thus far. As we approach important moments in history, we have often devolved into tribal camps: in the case of AI, the “doomers” and the “accelerationists”. We must be better than this, now above all other times.

Humanity has a tendency to split between those urging caution and those championing progress: conservatives versus progressives. This is apparent in the current discussions around AI. The ‘doomers’ call for a cautious approach, emphasizing the risks and potential pitfalls. On the other side, the ‘accelerationists’ see AI as a catalyst for rapid and transformative change, a tool to propel us into a better future.

Yet, framing AI development as a zero-sum game between caution and ambition risks leading us into a dangerous ‘terminal race condition’. In this scenario, the immediate rush to outpace others in AI advancements takes precedence over thoughtful, long-term considerations. Such a race, fueled by short-term victories rather than a holistic vision, could lead us down a path where the broader implications of AI are overlooked or underestimated.
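The race logic can be sketched as a simple two-player game. The payoffs below are hypothetical utilities I’ve chosen for illustration; the point is the structure, not the numbers:

```python
# A toy two-lab race modeled as a prisoner's dilemma.
# Payoff tuples are (lab_a, lab_b) in hypothetical utility units.
payoffs = {
    ("careful", "careful"): (3, 3),  # both move safely: best shared outcome
    ("careful", "rush"):    (0, 4),  # the rusher captures the lead
    ("rush",    "careful"): (4, 0),
    ("rush",    "rush"):    (1, 1),  # everyone races: worst shared outcome
}

# Whatever the other lab does, "rush" pays more for lab A in isolation...
for other in ("careful", "rush"):
    assert payoffs[("rush", other)][0] > payoffs[("careful", other)][0]
# ...and by symmetry the same holds for lab B, so both rush and land on
# (1, 1) even though (3, 3) was jointly available.
```

Rushing dominates for each lab in isolation, so both end up at the worst shared outcome. Escaping that trap requires coordination, which is exactly the collaboration this article argues for.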

We are rapidly approaching a situation where the pursuit of advanced AI systems could blind us to critical ethical and safety considerations. This is particularly alarming in the context of recent moves towards autonomous lethal systems in military applications by the Pentagon. We must prioritize halting such projects, viewing regulation and oversight as necessary but insufficient measures. The potential for AI to make life-and-death decisions on its own presents an unacceptable risk, one that threatens the very ethical foundation of our society and the responsible trajectory of AI research.

It is imperative that we act now to prevent the realization of a future where machines, devoid of human empathy and moral judgment, hold power over human life.

This is exactly the type of condition that will push us towards a competitive landscape of AI development, leaving safety, ethics, and long-term thinking out of the picture in favour of immediate survival needs we are choosing to impose unnecessarily. There is not a single issue in our society so important that it justifies jeopardizing our entire species’ future. And yet we are on a path that chooses just that.

On the other hand, AI could lead us to technological improvements on a scale we never imagined possible. Problems like fusion power, room-temperature superconductors, interstellar travel, and quantum gravity could all become tractable. Social issues like inequality, world hunger, homelessness, discrimination, lack of access to education, and much more could be addressed on a timescale nobody believed possible. Some, perhaps many, scoff at this, but that is a possible reality of partnering with a being many times more intelligent than all of us combined. This is what’s at stake.

Both sides, the conservatives and progressives of the AI world, are driven by enthusiasm and hope for a better future. Both are right, and both are wrong, in their own ways. Let’s try to understand that we really are all in this together. No matter how or why we might disagree on the particulars, it is important to find a middle ground and use our collective wisdom to safely unlock the potential of AGI.

AI Alignment

When we recognize our own flaws, we can begin to discuss AI alignment. After all, how can we align AI to a moral compass if we can’t align ourselves? There is a common concept in AI communities known as the “Control Problem”. It is a dilemma with no known solution: how do we ensure that a being vastly more intelligent than us behaves in a way that benefits us?

The smartest people in the world are trying to tackle this, and yet the very name of our sought-after solution might lead us to a self-fulfilling prophecy of an AI that acts contrary to our interests.

I propose that the path forward isn’t about control; it’s about cooperation, both with AI and with each other. There are many possible approaches.

Ilya Sutskever, chief scientist at OpenAI, suggests we aim for an AI that loves us as a parent loves their child.

David Shapiro, an influential voice in the field of AI alignment, has spent the last several years working on a solution he calls Heuristic Imperatives. The heuristic imperatives encompass three core principles: reduce suffering, increase prosperity, and increase understanding. These guiding tenets aim to balance AI’s decisions in difficult ethical scenarios.
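Here is a minimal sketch of how three imperatives like these might score candidate actions. The actions, effect estimates, and equal weighting are my own hypothetical stand-ins, not Shapiro’s actual implementation:

```python
# Toy decision scorer built around Shapiro's three heuristic imperatives.
# Effect estimates run from -1.0 (harms the goal) to +1.0 (advances it);
# the actions and numbers below are hypothetical stand-ins.
IMPERATIVES = ("reduce suffering", "increase prosperity", "increase understanding")

def score(effects):
    """Average an action's estimated effect across all three imperatives."""
    return sum(effects[i] for i in IMPERATIVES) / len(IMPERATIVES)

actions = {
    "deploy untested model":   {"reduce suffering": -0.4,
                                "increase prosperity": 0.6,
                                "increase understanding": 0.3},
    "publish safety findings": {"reduce suffering": 0.5,
                                "increase prosperity": 0.2,
                                "increase understanding": 0.7},
}

best = max(actions, key=lambda name: score(actions[name]))
print(best)  # -> publish safety findings (0.47 vs 0.17)
```

The appeal of this framing is that the three goals check one another: an action that boosts prosperity while increasing suffering scores poorly overall.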

There are more proposals, and nobody knows if we have found the correct one yet. No matter which approaches we take, we must do everything we can to ensure that as AGI develops, it does so in tandem with our highest aspirations, acting as a force for collective good.

Let’s broaden the AGI narrative. Let’s broaden our perspectives. We’re not just building machines; we’re setting the stage for a collaborative intelligence that could redefine the very fabric of society. We could be building a new form of life. The importance of AGI cannot be overstated. It’s a frontier of discovery on the scale of faster-than-light travel or time travel. The universe may never see an event as impactful as this.

It is up to each of us to engage, learn, and contribute to a dialogue that shapes an AI future aligned with our highest values.

There’s a future worth striving for here — with caution, hope, and a shared vision.

Let’s not mess it up.

Disclaimer: As mentioned, this piece was inspired by extensive conversations with OpenAI’s ChatGPT-4 about what AGI fundamentally means. GPT-4 helped with the outline and with framing certain phrases about my thoughts on AI, but the ideas are my own. I think this is a good example of the collaborative approach I hope we all take. I am not an IT or machine learning expert — I’m just a concerned human who thinks a lot about these issues.


Kameron

AI enthusiast, amateur philosopher, and a firm believer that we all deserve better.