What is AGI (Artificial General Intelligence)? Exploring autonomous AI agents

AutoGPT, Baby AGI and AgentGPT

Mehul Gupta
Data Science in your pocket


With the advent of ChatGPT, the AI space is experiencing peak popularity. Everyone, be it a big tech firm or an individual contributor, wants a piece of it. The hype is such that even the dream of AGI looks possible now.

My debut book “LangChain in your Pocket” is out now!

In this post, we will be exploring:

  • What is AGI?
  • Why are LLMs not AGI?
  • AI agents like AutoGPT, Baby AGI and AgentGPT

Let’s start off with

What is AGI?

It’s Artificial Intelligence with an added middle name ‘General’.

Jokes aside, AGI refers to AI that has cognitive abilities similar to those of human beings. By cognitive, I mean the ability of a machine to perform tasks that typically require human intelligence, such as understanding language, recognizing objects, and making decisions. Hence:

AGI is a hypothetical AI that would have the ability to perform any intellectual task that a human being can

Ok, so LLMs are also AGI. Correct?

Nope. Given their current state, LLMs can’t be considered AGI, as they lack major capabilities that an AGI would have:

Reasoning about the world: AGI would be able to reason about the world in the same way that humans do. This would allow it to understand cause and effect, to make predictions, and to plan for the future. LLMs, on the other hand, are only able to generate text based on the patterns that they have learned from data.

Take the task of driving a car: a human driver must be able to reason about the other cars on the road, the traffic laws, and the weather conditions in order to drive safely. An LLM would not be able to do this because it cannot reason about the world in the same way that a human does.

Making decisions: AGI would be able to make decisions in the same way that humans do. This would allow it to weigh the pros and cons of different options and to choose the best course of action. LLMs, on the other hand, are not able to make decisions in the same way.

One example of where an LLM would fail to make decisions in the same way as humans is a medical diagnosis. A human doctor can weigh the pros and cons of different diagnoses and choose the best course of action; an LLM, on the other hand, cannot.

Understanding emotions: AGI would be able to understand emotions in the same way that humans do. This would allow it to empathize with others and build relationships.

An LLM would fail to understand complex emotions, whereas an AGI would not. Complex emotions are made up of a combination of different emotions; for example, someone might feel both happy and sad at the same time.

Being creative: AGI would be able to be creative in the same way that humans do. This would allow it to come up with new ideas and to solve problems in new ways. LLMs, on the other hand, are not that creative.

An AGI would be able to write a poem that is both creative and meaningful, while an LLM may not manage both at the same time.

Now we know two things: what AGI is, and why LLMs are nowhere close to it. Next, we will explore a few AI agents built on top of LLMs that can be called a subset of AGI, performing, if not all, at least a few of the tasks an AGI could do.

AutoGPT

Becoming an instant hit, AutoGPT can do more than the Q&A of a standalone LLM: it can execute tasks, break a big task into multiple smaller ones, and use self-prompting to achieve them. Hence it can generate a fully fledged website on its own, make presentations, train ML models and save the results, and more. Want to see a demo?

The video below explains how to set up AutoGPT locally.
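Before moving on, here is a rough Python sketch of the core idea behind such an agent: ask the LLM to break a goal into sub-tasks, then self-prompt to execute each one. This is not AutoGPT’s actual code, just a minimal illustration; it assumes the official openai package (v1+) and an OPENAI_API_KEY in your environment, and the goal string and model name are only examples.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """One LLM call; 'gpt-4' is just an example model name."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

goal = "Build a simple landing page for a bakery"  # example goal

# Step 1: the agent asks the LLM to break the goal into sub-tasks.
plan = ask(f"Break this goal into a short numbered list of sub-tasks:\n{goal}")
sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]

# Step 2: for each sub-task the agent writes its own follow-up prompt
# (self-prompting) and asks the LLM to carry it out.
for task in sub_tasks:
    result = ask(f"Overall goal: {goal}\nComplete this sub-task: {task}")
    print(f"--- {task} ---\n{result}\n")
```

The real project layers tool use, memory and file output on top of a loop like this.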

Baby AGI

Baby AGI is nothing but a Python script that acts as a task-management system, which:

  • Given a goal, prepares a sub-task list
  • Prioritizes these tasks on its own (a rough sketch of this loop follows below)
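As an illustration only (this is not the actual babyagi.py), the sketch below keeps a task queue, executes the front task, asks the LLM for new sub-tasks, and then re-prioritizes the queue. It assumes the openai package and an OPENAI_API_KEY; the iteration cap exists only so the sketch terminates.

```python
from collections import deque
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

objective = "Research and summarise the top 3 vector databases"  # example goal
tasks = deque(["Make an initial plan for the objective"])

for _ in range(5):  # cap iterations; the real script keeps looping until you stop it
    if not tasks:
        break
    task = tasks.popleft()

    # 1. Execute the current task in the context of the overall objective.
    result = llm(f"Objective: {objective}\nTask: {task}\nComplete this task.")
    print(f"--- {task} ---\n{result}\n")

    # 2. Ask the LLM to create new sub-tasks based on the result.
    new = llm(
        f"Objective: {objective}\nLast result: {result}\n"
        "List any new tasks still needed, one per line."
    )
    tasks.extend(t.strip() for t in new.splitlines() if t.strip())

    # 3. Ask the LLM to re-prioritize the remaining queue.
    reordered = llm(
        f"Objective: {objective}\nReorder these tasks by priority, one per line:\n"
        + "\n".join(tasks)
    )
    tasks = deque(t.strip() for t in reordered.splitlines() if t.strip())
```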

But this is similar to AutoGPT, no?

Yes, the objective may look the same, but the way they work differs.

Apart from GPT-4, Baby AGI also uses other tools like LangChain and vector DBs, which AutoGPT does not.
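For example, completed task results can be pushed into a vector store so that later tasks retrieve relevant context. The sketch below is one way to do this with LangChain, a FAISS index and OpenAI embeddings; the package names (langchain-openai, langchain-community, faiss-cpu) and the example texts are my assumptions, and older LangChain versions import these classes from different modules.

```python
# pip install langchain-openai langchain-community faiss-cpu  (assumed packages)
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = OpenAIEmbeddings()  # assumes OPENAI_API_KEY is set

# Store the result of a completed task so later tasks can retrieve it as context.
memory = FAISS.from_texts(
    ["Task: make a plan | Result: 1) research 2) compare 3) summarise"],
    embedding=embeddings,
)

# Before executing a new task, pull the most relevant earlier results.
context_docs = memory.similarity_search("compare vector databases", k=2)
context = "\n".join(doc.page_content for doc in context_docs)
print(context)

# After finishing the task, write its result back into the vector store.
memory.add_texts(["Task: compare | Result: notes comparing FAISS, Chroma and Pinecone"])
```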

AutoGPT may be more suited for content generation, while BabyAGI might serve better in applications requiring complex decision-making.

I had a chance to use it first-hand, and the experience was not great compared to AutoGPT, for two reasons:

It never ends. If you give the agent a goal, it will keep creating sub-tasks indefinitely unless you stop it.

The results aren’t stored anywhere. Everything ends up in the logs only, be it code or text. So if you wish to persist results or build a web app on top of it, this isn’t suitable.

Note: Similar to AutoGPT, Baby AGI can also be run locally by executing babyagi.py. Detailed instructions are available in the readme.md.

AgentGPT

Another popular AI agent, AgentGPT, has a big plus compared to AutoGPT & Baby AGI: it has a UI, and hence is more suitable for non-programmers. On the other hand, AgentGPT can’t generate its own prompts (self-prompting), as AutoGPT & Baby AGI can.

You can give AgentGPT a try here:

With this, we will be wrapping it up. See you soon with some exciting stuff on Gen-AI.
