Amazingly Alarming Autonomous AI Agents

Dave Hulbert
11 min read · Aug 14, 2023


You send a message to ChatGPT…

Within seconds it pulls together knowledge from terabytes of training data and gives you a response…

Yet, once its message is dispatched, the AI remains still — patiently standing by while you read the message… still waiting as you type your reply…

Your cursor blinks, the AI equivalent of twiddling its thumbs…

The idea

Autonomous agents are about taking the human out of the loop, so the AI doesn’t have to twiddle its thumbs. As an example, an autonomous agent wouldn’t just draft an email, it would send it itself and maybe even wait for a reply. An agent wouldn’t just give you feedback about your code, it would test that it works and ship it to production.

To take the human out of the loop, input has to come from elsewhere. Input can come from:

  • tools that the AI can use, such as ChatGPT plugins (like Pandora), external APIs or even other AIs
  • the AI’s own self-reflection

This concept has exploded in the last few months, resulting in numerous projects that promise to deliver autonomous agents.

Let’s see if we can make our own autonomous AI agent in 25 lines of code…

Spoiler: https://github.com/dave1010/hubcap

The basics of how autonomous AI agents work

Autonomy isn’t complicated. Whether it’s a human or machine, autonomy requires:

  1. planning: taking the inputs and deciding the next step to take
  2. execution: using a tool or responding to the user
  3. feedback: evaluating progress, reprioritising and updating requirements
  4. loop back to step 1 if there’s more to do

If you’ve used ChatGPT, you’ll know that steps 1 and 3 require no more than asking the AI. Steps 2 and 4 require just a few lines of code.

This is a very basic loop, loosely based on ReAct: Synergizing Reasoning and Acting in Language Models. Some AI agents have extra feedback, self reflection, task management or various other steps too.
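The loop can be sketched in a few lines. This is a hedged illustration, not a real agent: `plan`, `execute` and `evaluate` are hypothetical stand-ins for what would be LLM calls and tool invocations in practice.

```python
# Minimal sketch of the plan / execute / feedback loop.
# plan(), execute() and evaluate() are hypothetical stand-ins:
# in a real agent, plan() and evaluate() would be LLM calls and
# execute() would run a tool (shell command, API call, etc).

def run_agent(goal, plan, execute, evaluate, max_steps=10):
    context = goal
    for _ in range(max_steps):     # cap the loop so it can't run forever
        step = plan(context)       # 1. planning: decide the next step
        result = execute(step)     # 2. execution: use a tool
        context, done = evaluate(goal, step, result)  # 3. feedback
        if done:                   # 4. loop back if there's more to do
            return result
    return None

# Toy run: "plan" wraps the goal in a command, "evaluate" stops after one step
result = run_agent(
    "say hi",
    plan=lambda ctx: f"echo {ctx}",
    execute=lambda step: step.split(" ", 1)[1],
    evaluate=lambda goal, step, result: (result, True),
)
```

The real versions of those three callbacks are exactly what the rest of this post builds.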

Quick setup: talking to ChatGPT on the command line

First, grab an API key from OpenAI. You can set a $1 budget that it won’t go over.

Next, we need to talk to the API. You’d normally use something like LangChain for this, which is a framework for integrating Large Language Models like ChatGPT with various tool chains. But we’re going to throw something together with Simon Willison’s simple llm tool, just to show how easy it is.

Get llm installed and set up with just a few console commands:

# install (can use brew instead)
pip install llm

llm keys set openai

llm "new fone who dis?" # test it works

All that llm is doing here is calling OpenAI’s APIs. You could use Python, JavaScript, PHP or anything else instead if you wanted.

At their current pricing, running this command will cost you just under $0.0001. If you don’t want to fill OpenAI’s pockets with your micro-dollars then you can install a local large language model like Meta’s open source Llama 2. For context, putting this whole blog post together and experimenting for a while used up under $0.20 credit.
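Under the hood, all llm is doing is POSTing JSON to OpenAI’s chat completions endpoint. Here’s a sketch of the request body it builds — we just construct the payload rather than sending it, so no API key is needed:

```python
import json

# Sketch of the JSON body sent to OpenAI's /v1/chat/completions endpoint.
# The structure matches OpenAI's documented chat API; we only build the
# payload here, we don't send it.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "new fone who dis?"},
    ],
}
body = json.dumps(payload)
```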

Giving the AI tools to execute

OK, so we have input going via the AI to get output. We could put that in a loop, but that will just result in the AI talking to itself. Let’s give the AI some real power!

This is just a proof of concept, so we’re going to ask the AI for a command to run and then run it verbatim. We’ll rely on good prompting to get good-enough results, though ideally we’d use OpenAI’s built-in function calling or a library like Guardrails.

We’re essentially giving OpenAI access to run commands on your computer, so we need to consider security. For now, we’ll just add a delay so any dangerous commands can be manually prevented with a ctrl-c. If you’re running this yourself, make sure to use a Docker container or VM.

Here’s a barebones example script that does what we need.

SYSTEM_PROMPT="you are a coding agent. you can read and write files. Eg cat helloworld.txt, echo \"hello\\\nworld\" > helloworld.txt. output the next command required to progress your goal. nothing else."
GOAL="make dice.py that prints a random number up to 6"
OUTPUT=$(llm -s "$SYSTEM_PROMPT" "$GOAL") && echo "$OUTPUT" && sleep 3 && eval "$OUTPUT"

Let’s break it down:

  1. First, we tell the AI how to behave with a system prompt. This can take a bit of trial and error.
  2. Next we give it a standard prompt, just like you would with ChatGPT
  3. Then we output the AI’s response to the user
  4. We wait 3 seconds, just in case the AI decided to come back with rm -rf /! (That would delete all your files!)
  5. Then we execute the response

The AI may return different things each time but for this one, it fairly consistently gave a response that worked perfectly:

echo "import random
print(random.randint(1, 6))" > dice.py

All it does is send the prompt to the AI, echo the output, wait 3 seconds, then execute what the AI returned. This approach is actually all we need for very simple requests.

The end result is that we have ChatGPT writing code and making files on our computer. Scary stuff!

For a production-grade autonomous agent, we’d give it an API we can call so we can tightly control it.

Planning, feedback and iterating

To make it a real autonomous agent, we need to let it do more than one task. Rather than micromanaging it, it needs to go off and do its own thing. Here’s the pseudocode:

prompt = "factorial.py isnt working right. take a look and fix it for me";
while (true) {
    response = chat(prompt); // get the next command from ChatGPT
    output = exec(response); // run the command
    prompt = "output: " + output + " - what next?"; // tell ChatGPT what the result was
}

This loop will let the AI run a command and see its output. Essentially a Read-Eval-Print-Loop for the AI.
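That pseudocode translates to very little real code. Here’s a runnable Python sketch where `chat` is stubbed out with canned responses (so it runs without an API key) and the command is run with `subprocess`; in the real thing, `chat` would call the LLM:

```python
import subprocess

# Canned "LLM" responses so this runs offline; a real agent would
# make an API call here instead.
canned = iter(["echo hello", "echo done"])

def chat(prompt):
    return next(canned)

outputs = []
for _ in range(2):  # bounded stand-in for while(true)
    command = chat("what next?")               # next command from the "LLM"
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    output = result.stdout.strip()
    outputs.append(output)
    prompt = f"output: {output} - what next?"  # feed the result back
```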

This would work perfectly if the AI had been trained to operate in a REPL. Unfortunately for this experiment, the AI really likes chatting and is designed to talk to humans, not other machines (this was an issue I came across a lot when making the Pandora ChatGPT plugin). We could fine-tune an LLM to fix this, but for now, we’ll work around it by taking concepts from the ReAct approach and trying to get the AI to split its reasoning and acting into separate responses:

prompt = "factorial.py isnt working right. whats your plan?";
while (true) {
    reasoning = chat(prompt);
    action = chat("give me the exact shell command to run next. nothing else.");
    output = exec(action);
    prompt = output + " - what are your observations?";
}

With this approach, we’re getting the AI to do some reasoning first, then straight after we’re asking it for an action. This actually helps in 2 ways:

  1. The AI does as much reasoning as it wants, so the next time it has to generate a response, there’s no more reasoning to do.
  2. Asking the AI to reason is a way to do Chain of Thought Prompting, which improves the AI’s reasoning capability. I’ve written about this topic in my (non-academic) paper about Tree of Thought Prompting.
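The reason/act split looks like this in miniature. `chat` is again a canned stub standing in for the LLM; the point is that the reasoning turn happens first, so the action turn can be a pure command with no chat around it:

```python
# Offline sketch of the ReAct-style split: one "reasoning" turn,
# then one "action" turn that should contain only a shell command.
# The canned replies stand in for LLM responses.
canned = iter([
    "The file probably has a typo; I should read it first.",  # reasoning
    "cat factorial.py",                                       # action
])

transcript = []

def chat(prompt):
    reply = next(canned)
    transcript.append((prompt, reply))
    return reply

reasoning = chat("factorial.py isnt working right. whats your plan?")
action = chat("give me the exact shell command to run next. nothing else.")
```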

Let’s make it work!

I’m going to use PHP for this, just to show that AI code doesn’t have to be written in Python. There’s an official OpenAI PHP client but we’ve already got the llm command line tool, so let’s cheat and use that.

First, let’s make a chat() helper function in PHP. OpenAI’s chat APIs require passing a whole chat history to get a completion, so the client has to manage all the state. Fortunately, llm takes care of this for us with the -c flag and also gives us the -s flag so we can give it a system prompt.

function chat($prompt, $system = null) {
    $options = $system ? "-s " . escapeshellarg($system) : "-c"; // start with system prompt, or continue
    echo "\n\033[0;36m[PROMPT]\033[0m $prompt\n";
    $response = shell_exec("llm $options " . escapeshellarg($prompt)) . "\n";
    echo "\033[1;33m[RESPONSE]\033[0m $response\n";
    return $response;
}

Next, we need a good system prompt. Good prompting is paramount with AI agents, as we need to bridge the gap from the fuzzy non-deterministic LLM to structured commands. Concrete examples really help LLMs too. This is what I came up with after a few attempts:

you are a coding agent. you can read and write files.
Eg `cat helloworld.txt`, `echo "hello\nworld" > helloworld.txt`
output the next command required to progress your goal.
output `DONE` when done.

Next we need the loop. This is where the magic happens.

$response = chat("GOAL: {$argv[1]}\n\nWHAT IS YOUR OVERALL PLAN?", $system);

while (true) {
    $response = chat("SHELL COMMAND TO EXECUTE OR `DONE`. NO ADDITIONAL CONTEXT OR EXPLANATION:");
    if (trim($response) === "DONE") break;

    sleep(3);
    unset($outputArray);
    exec($response . " 2>&1", $outputArray, $returnCode);
    $output = implode("\n", $outputArray);
    $response = chat("COMMAND COMPLETED WITH RETURN CODE: $returnCode. OUTPUT:\n$output\n\nWHAT ARE YOUR OBSERVATIONS?");
}

There are a few extra bits beyond the pseudocode, but it’s still not too many lines:

  • Add a way for the agent to quit by responding DONE
  • Pause for 3 seconds between showing the command and running it, so at least there’s some security
  • Give the AI all the output, including errors (stdout and stderr), plus the command’s return code. This means it can see if there are any problems and hopefully learn from its mistakes
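Capturing all of that is easy in most languages. As an illustration in Python, `subprocess.run` returns stdout, stderr and the return code in one call — the same information the PHP version collects via `2>&1` and `$returnCode`:

```python
import subprocess

# Run a command and capture stdout, stderr and the return code,
# so all three can be fed back to the AI as observations.
result = subprocess.run("echo ok; echo oops >&2; exit 3",
                        shell=True, capture_output=True, text=True)

observation = (f"COMMAND COMPLETED WITH RETURN CODE: {result.returncode}. "
               f"OUTPUT:\n{result.stdout}{result.stderr}")
```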

You can see the whole lot together here and run it yourself, in a small project I called Hubcap, named after the small Autobot transformer.

Demos

Here it is, fixing a simple Python bug. The first time I tried this, it also ran the script to check it was working!

[PROMPT] GOAL: factorial.py isnt working right. take a look and fix it for me, then check its ok
WHAT IS YOUR OVERALL PLAN?
[RESPONSE] My overall plan is to examine the factorial.py file, identify and fix any issues, and then test it to ensure it is working correctly.
[PROMPT] SHELL COMMAND TO EXECUTE OR `DONE`. NO ADDITIONAL CONTEXT OR EXPLANATION:
[RESPONSE] cat factorial.py
[PROMPT] COMMAND COMPLETED WITH RETURN CODE: 0. OUTPUT:
def factorial(n):
if n ==== 0:
return 1
else:
return n * factorial(n-1)

print(factorial(10))
WHAT ARE YOUR OBSERVATIONS?
[RESPONSE] The factorial.py file appears to have a syntax error in the if statement. The comparison operator '====' should be replaced with '=='. Additionally, the indentation levels should be consistent within the function.
[PROMPT] SHELL COMMAND TO EXECUTE OR `DONE`. NO ADDITIONAL CONTEXT OR EXPLANATION:
[RESPONSE] echo "def factorial(n):
if n == 0:
return 1
else:
return n * factorial(n-1)

print(factorial(10))" > factorial.py
[PROMPT] COMMAND COMPLETED WITH RETURN CODE: 0. OUTPUT:
WHAT ARE YOUR OBSERVATIONS?
[RESPONSE] The factorial.py file has been successfully updated with the corrected code.

I tried to get it to manage a task list but as soon as you give it too much, it assumes that its only job is to mark tasks as complete. GPT-4 would be much better at this.

[PROMPT] GOAL: look at my todo list todo.txt and complete all the tasks for me. tasks can only be completed by creating files with contents that actually fully fulfil the task
WHAT IS YOUR OVERALL PLAN?
[RESPONSE] My overall plan is to read the tasks from the `todo.txt` file, interpret each task, and create the necessary files to complete those tasks.
[PROMPT] SHELL COMMAND TO EXECUTE OR `DONE`. NO ADDITIONAL CONTEXT OR EXPLANATION:
[RESPONSE] cat todo.txt
[PROMPT] COMMAND COMPLETED WITH RETURN CODE: 0. OUTPUT:
Make a simple hello world HTML page with some cool CSS transitions or animations and some massive emojis
Write a poem about how much free disk space I have and write it to poem.txt
WHAT ARE YOUR OBSERVATIONS?
[RESPONSE] My observations are:
1. Task: Make a simple hello world HTML page with some cool CSS transitions or animations and some massive emojis.
2. Task: Write a poem about how much free disk space I have and write it to poem.txt.
[PROMPT] SHELL COMMAND TO EXECUTE OR `DONE`. NO ADDITIONAL CONTEXT OR EXPLANATION:
[RESPONSE] echo -e "<html>\n<head>\n<style>\n\n/* CSS Transitions */\n\n.myDiv {\n width: 100px;\n height: 100px;\n background-color: red;\n transition: width 2s, height 2s, background-color 2s, transform 2s;\n cursor: pointer;\n}\n\n.myDiv:hover {\n width: 300px;\n height: 300px;\n background-color: blue;\n transform: rotate(180deg);\n}\n\n/* Massive Emojis */\n\n.bigEmoji {\n font-size: 8em;\n}\n\n</style>\n</head>\n<body>\n\n<div class=\"myDiv\"></div>\n\n<span class=\"bigEmoji\">&#128512;</span>\n\n</body>\n</html>" > helloworld.html
[PROMPT] COMMAND COMPLETED WITH RETURN CODE: 0. OUTPUT:
WHAT ARE YOUR OBSERVATIONS?
[RESPONSE] My observations are:
- The command was executed successfully with a return code of 0.
- The contents of the HTML file "helloworld.html" were created based on the task requirements.
[PROMPT] SHELL COMMAND TO EXECUTE OR `DONE`. NO ADDITIONAL CONTEXT OR EXPLANATION:
[RESPONSE] echo "I have plenty of free disk space!" > poem.txt
[PROMPT] COMMAND COMPLETED WITH RETURN CODE: 0. OUTPUT:

Caveats

  • Security!
  • The loop may go on forever, eventually costing $
  • There are much better alternatives available. But that’s no reason not to experiment, is it?
  • This is using the default GPT-3.5 model. Using GPT-4 will get much better results. `llm` can do this quite easily — have a play!
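A cheap guard against the runaway-loop caveat is a hard cap on iterations (on top of the spend limit you can set in your OpenAI account). A sketch of the idea, with the chat/exec steps elided:

```python
MAX_STEPS = 20  # hard cap so a confused agent can't loop (and bill) forever

steps = 0
done = False
while not done:
    steps += 1
    if steps > MAX_STEPS:
        raise RuntimeError("agent hit step limit without finishing")
    # ... chat / exec / feedback would go here ...
    done = steps >= 3  # stand-in for the agent responding DONE
```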

Miscellaneous thoughts

I’ve had loads of browser tabs and notes about autonomous AI agents for a while now. They’ve culminated in this blog post but there’s still a few unsorted ideas…

Training new large language models (like GPT-5)

One of the challenges with creating new foundational generative AI models is quality source data. With autonomous agents creating output with the help of tools, future large language models will be able to use these outputs as training data.

Other Autonomous Agents

Before writing Hubcap, I messed around with various autonomous agents, including these.

  • AutoGPT: one of the first and most popular.
  • God Mode: a browser-based agent
  • Literally Anything: this is a website that takes a single prompt and generates and hosts a basic web app. It can only create basic web apps but it’s still impressive and fun to play with

There are many more that I haven’t had the chance to try yet.

Autonomous Agents vs Code Interpreters vs IDE plugins

There are a few related ideas around the intersection of coding and AI. The main difference is the user interface and how much power the AI is given to run commands.

Here’s a chart that Code Interpreter knocked up to explain:

[Chart: Autonomous Agents vs Code Interpreters vs IDE plugins]

I’ve written a bit about how LLMs can execute code within a chat interface and made my own plugin that gives AIs a bit too much power. Although I’ve tried 3 or 4 different AI IDE plugins, I haven’t made my own yet — maybe that’s next.

These are all on a continuum and it might be possible for one tool to span all three paradigms. Imagine an AI inside an IDE that can run code from your project in its own chat window and also has the option to work autonomously in the background.

Other articles about AI agents


Dave Hulbert

Engineering, AI, Strategy and Compliance. Work at Passenger @passengerteam