Are we closing in on Artificial General Intelligence?

tarat · 8 min read · Nov 24, 2023

Why are people suddenly talking about AGI, and how is it related to OpenAI’s recent firing of its CEO?

There’s a lot of buzz going around these days about the new and upcoming breakthrough in AI, which will lead to AGI.

AGI stands for Artificial General Intelligence: the ability of a machine to perform any intellectual task that a human can.

But why are people suddenly talking about AGI, and how is it related to OpenAI’s recent firing of its CEO? To understand that, we first need to look at the current state of AI and how we got here.

How did we even get here?

Back in 2013, Word2Vec was released, one of the first models whose word representations captured the meaning of natural language. It was a huge breakthrough in the field of NLP, and it paved the way for later models like BERT and GPT-3.

Word2Vec was trained on a huge corpus of text and learned the meaning of words from the contexts in which they appear. For instance, it learned that the word “king” is related to the word “queen” because the two words appear in similar contexts.

The learned word vectors even support arithmetic: subtracting the vector for “man” from “king” and adding “woman” lands close to the vector for “queen”.
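This behavior can be sketched with toy vectors. The numbers below are made up purely for illustration; real Word2Vec embeddings are learned from data and have hundreds of dimensions:

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical values for illustration;
# real Word2Vec vectors are learned, not hand-written).
# Dimensions loosely encode: [royalty, masculinity, femininity]
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# The famous analogy: king - man + woman ≈ queen
result = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# Find the word whose vector is closest to the result
# (real systems usually exclude the query words from this search)
closest = max(vectors, key=lambda word: cosine(vectors[word], result))
print(closest)  # queen
```

The point is that “meaning” here is nothing but geometry: words used in similar contexts end up near each other, so relationships between words become directions in the vector space.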

After that, we saw a lot of advancements in the field of NLP and we now have models like GPT-3, which can generate text that is almost indistinguishable from human-written text.

Similarly, we have models like DALL-E, which can generate images from text descriptions that are hard to distinguish from real photographs.

OK, so we now have models that can understand natural language and generate images. But is this enough to achieve AGI?

No, it’s not.

The Problem with Current AI Models

Language and vision are two important facets of human intelligence, but they’re not the only ones. There’s another that is at least as important: logical reasoning.

Logical reasoning is closely tied to an understanding of mathematics.

ChatGPT is great at understanding natural language, but it often fails at basic arithmetic, because it doesn’t have a genuine understanding of mathematics.

When you ask it to add 2 and 2, it may answer 4, but it doesn’t know why the answer is 4. It hasn’t learned the concept of addition; it’s effectively recalling patterns from its training data. That’s why it often gives wrong answers to logical questions that involve numbers.
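A crude way to see the difference is to contrast a model that merely stores question-and-answer pairs with one that applies the rule itself. This is only an illustration of the argument, not of how ChatGPT actually works internally:

```python
# Hypothetical "training data": question/answer pairs the model has seen.
training_data = {("2", "2"): "4", ("3", "5"): "8", ("10", "10"): "20"}

def memorizer(a, b):
    # Only recalls answers it has seen; fails on anything new
    return training_data.get((a, b), "unknown")

def real_addition(a, b):
    # Applies the concept of addition, so it generalizes to any input
    return str(int(a) + int(b))

print(memorizer("2", "2"))            # 4 (seen in training)
print(memorizer("1234", "5678"))      # unknown (never seen)
print(real_addition("1234", "5678"))  # 6912
```

A system that truly understood addition would behave like `real_addition` on inputs it has never encountered; the argument above is that today’s language models behave more like `memorizer`, interpolating over patterns rather than executing the rule.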

Math underpins everything

Math is the language of the universe: the language of physics, chemistry, biology, and every other science. We represent images as numbers, sound as numbers, text as numbers, and everything else as numbers. So it’s not surprising that math is a key to AGI.

Once we have a model that truly understands math, we can work on extending its knowledge to logic, reasoning, planning, and other aspects of human intelligence.

Word2Vec was a huge breakthrough back in 2013, but after years of advancements in NLP, we still couldn’t make these models understand math. So people started to think we might need an entirely new approach, a breakthrough algorithm that could help AI understand math the way humans do.

But no one publicly shared any new ideas or research papers on this topic, and no one announced they were investing in this area.

Until recently, when OpenAI fired its CEO, setting off a chain of events that made people wonder whether OpenAI had made a breakthrough toward AGI.

The OpenAI Drama

OpenAI is one of the leading AI research companies in the world. It was founded as a non-profit in 2015, with Sam Altman among its co-founders, and it has produced many breakthroughs in AI with the development of its GPT models.

On November 17, 2023, OpenAI announced in an official blog post that its board had decided to fire CEO Sam Altman because

“he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”

This was a huge shock to the AI community, because Sam Altman had co-founded OpenAI and led the company through so many of its breakthroughs.

Now this drama unfolded in a very interesting way.

  • Sam was hired by Microsoft the very day after he was fired from OpenAI.
  • OpenAI staff threatened to quit unless the board resigned and Sam was reinstated as CEO.
  • Interim CEO Mira Murati pressed the board members on their decision to fire Sam, and they gave no satisfactory answer.
  • The OpenAI board resigned, and Sam was reinstated as CEO.

This entire timeline was very interesting for multiple reasons.

Sam Altman is known for pushing rapid development in AI, and he has been very vocal about pushing the boundaries of AI research to reach AGI as soon as possible.

The board members of OpenAI, however, are known to be more conservative and more focused on the safety side of AI.

So when Sam was fired, people began to wonder whether the board had decided to take a more conservative approach and slow down AI development, possibly because they believed Sam was getting too close to AGI without any plan to safeguard the world from its potential dangers.

Suspicion grew when the board’s newly hired CEO, Emmett Shear, turned out to be known for his caution and for being very vocal about the dangers of AGI and the need to slow down AI development.

These two events together were enough to raise the AI community’s suspicions. But the drama didn’t end there.

Let me introduce you to Jimmy Apples.

Who the hell is Jimmy Apples?

Jimmy Apples is an anonymous account on Twitter (X) that has repeatedly predicted future OpenAI-related events with uncanny accuracy.

He made the following tweet on September 18, 2023.

And then he made the following tweet on October 25, 2023.

And we all know what happened on November 17, 2023. The board’s vague, vibe-check style justification for firing Sam was suspiciously similar to what Jimmy Apples had predicted.

Now, of course, this could all be coincidence, but it’s very hard to believe that someone could predict the future so accurately multiple times.

It’s almost easier to believe that Jimmy Apples is an insider at OpenAI who has been leaking information about the company’s progress in AGI.

I know you aren’t convinced yet that OpenAI has made a breakthrough in AGI. Well, there’s more. Let me introduce you to what people are calling the Q* model.

The Q* Model

A couple of weeks ago, a new research paper was leaked on 4chan. The paper was titled “Q-Networks for Partially Observable Reinforcement Learning” and it was written by a group of researchers at OpenAI.

The paper describes a new algorithm for training AI agents to understand basic mathematical concepts and even reason about them. Models trained with this algorithm are called Q* models.

These models are suspected to have the capabilities to understand mathematics at least as well as a high school student.

Not only that, these models are reportedly good at transferring knowledge and skills from one domain to solve problems in another, something current AI models struggle with.
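Nothing about the leaked algorithm is verified, but in reinforcement learning the name Q* conventionally denotes the optimal action-value function, which classic tabular Q-learning approximates. A minimal sketch on a toy number-line task (everything here is a textbook illustration, not anything from the leaked paper):

```python
import random

# Minimal tabular Q-learning on a toy problem: an agent on a number
# line (states 0..4) must reach state 4. Q[(state, action)] estimates
# the value of taking that action in that state; Q-learning nudges it
# toward the optimal action-value function Q*.
random.seed(0)

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update toward the Bellman optimality target
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy should always move right (+1)
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)}
print(policy)
```

If the rumored name really does nod to this Q* notion, it would suggest a system that learns by planning toward provably better answers rather than by imitating text, but that connection is pure speculation.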

Now of course, this could all be a hoax, but the last time something big leaked on 4chan, it turned out to be real: the weights of Meta’s LLaMA model.

Alright, now before we get carried away, let’s take a step back and look at the bigger picture by collecting all the pieces of the puzzle.

The Bigger Picture

  • Sam Altman was fired by the board of OpenAI.
  • Sam has been an advocate of rapid AI development.
  • The board members are known to be more conservative.
  • The new CEO hired by the board is known to be very conservative.
  • Jimmy Apples predicted the firing of Sam Altman.
  • Jimmy Apples predicted that OpenAI has made a breakthrough in AGI.
  • A research paper leaked on 4chan describes a new algorithm for training AI agents to understand basic mathematical concepts and even reason about them.

Now, if you look at all these pieces of the puzzle together, it’s very hard to believe that there’s no suspicious activity going on at OpenAI.

Plus, one last thing I forgot to mention: OpenAI quietly updated the vision statement on its website.

Conclusion

Phew! That was a lot of information to take in. Let’s take a moment to process all of this.

Combine the Q* model with GPT-4 and DALL-E, and it looks like we’re getting closer and closer to AGI.

Such a system would combine the two sides of the brain: knowing some things from experience, while still being able to reason about facts.

Even if OpenAI has made a breakthrough toward AGI, it will still be a good few years before we can fully understand the capabilities of the Q* model, extend it to other domains, and make it work alongside models like GPT-4 and DALL-E.

It’s hard to say when we’ll reach AGI, but it seems like we’re getting closer and closer every day.

“Are we ready for it?” is a question for another day, and one I’ll leave to the new OpenAI board members.

Now, of course, this is all speculation, and there’s no concrete evidence that OpenAI has made a breakthrough in AGI. We can’t be sure until OpenAI itself makes an announcement.
