On “AI Agents” and other terrible word choices
Crypto is agog with AI Agents. But like almost everything else in the crypto world, the marketing folks and their word choices are doing the rest of the industry a great disservice and creating unlimited legal risk.
Bad language choices abound in our industry; ask Prof. Reyes, who has written extensively about this. The pervasive use of language borrowed from other disciplines to describe new, innovative technologies and novel relationships has been a plague on our industry. Think “ICO” and its close kinship to “IPO,” Barnbridge’s “SMARTbond,” and so on; the use of legal language to describe new technologies has brought the wrong kind of attention.
“AI Agents” is just another example.
Whether it’s AI16Z or other projects seizing the zeitgeist of the day and fusing crypto with “AI,” the term “AI Agent” is everywhere.
But… what’s an AI Agent? “AI Agent” seems to mean deployed code that can act “autonomously.” That sounds compelling, but “autonomously” is doing Schwarzenegger-level heavy lifting. One cannot simply equate automation with autonomy. What level of automation establishes autonomous activity? At what point are humans no longer involved? As in all things crypto, there are no consensus definitions. Vitalik Buterin’s view of AI systems is a great place to start. Vitalik, in his seminal blog post, offered a definition of an autonomous agent:
… in an autonomous agent, there is no necessary specific human involvement at all; that is to say, while some degree of human effort might be necessary to build the hardware that the agent runs on, there is no need for any humans to exist that are aware of the agent’s existence. One example of an autonomous agent that already exists today would be a computer virus; the virus survives by replicating itself from machine to machine without deliberate human action, and exists almost as a biological organism. A more benign entity would be a decentralized self-replicating cloud computing service; such a system would start off running an automated business on one virtual private server, and then once its profits increase it would rent other servers and install its own software on them, adding them to its network.
A full autonomous agent, or a full artificial intelligence, is the dream of science fiction; such an entity would be able to adjust to arbitrary changes in circumstances, and even expand to manufacture the hardware needed for its own sustainability in theory. Between that, and single purpose agents like computer viruses, is a large range of possibilities, on a scale which can alternatively be described as intelligence or versatility. … The next level from there would involve upgrading its own software, perhaps using evolutionary algorithms, or being able to adapt to new paradigms of server rental (eg. make offers for ordinary users to install its software and earn funds with their desktops), and then the penultimate step consists of being able to discover and enter new industries (the ultimate step, of course, is generalizing completely into a full AI).
These platforms and their code are not autonomous; in most cases, the code executes the will of, or acts upon data served by, persons or other systems. For instance, AI16Z relies on a bot that interacts on Twitter (or, if you like, Xitter) and a few other social media/chat platforms under the handle @pmairca. @pmairca’s code interacts with other, presumably human, social media users and uses an algorithm to assign value to the information they provide, with the intention that the information inform its investment decisions. AI16Z DAO token holders can also interact with @pmairca to pitch or otherwise attempt to convince the bot to invest treasury assets in various blockchain-enabled opportunities.
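To see why that structure falls short of autonomy, consider a minimal sketch of the decision loop such a bot might run. This is a hypothetical illustration only; the names (Pitch, score_pitch, INVEST_THRESHOLD) and the keyword heuristic are my assumptions, not AI16Z’s actual code or the real @pmairca algorithm.

```python
from dataclasses import dataclass

@dataclass
class Pitch:
    author: str  # the human social-media user making the pitch
    text: str    # the pitch itself, supplied by that human

# Threshold chosen and tuned by the bot's human operators.
INVEST_THRESHOLD = 0.8

def score_pitch(pitch: Pitch) -> float:
    """Toy stand-in for whatever scoring model the real bot uses."""
    keywords = ("treasury", "yield", "token", "protocol")
    hits = sum(word in pitch.text.lower() for word in keywords)
    return hits / len(keywords)

def decide(pitch: Pitch) -> str:
    # The bot only ever reacts: a human supplies the pitch, and humans
    # chose the scoring rule and the threshold. Nothing here originates
    # action on its own.
    return "invest" if score_pitch(pitch) >= INVEST_THRESHOLD else "pass"

if __name__ == "__main__":
    pitch = Pitch(author="@some_user",
                  text="Deploy treasury into this new protocol for yield")
    print(decide(pitch))  # prints "pass": 3 of 4 keywords, score 0.75
```

However sophisticated the real scoring model may be, the shape is the same: every input is supplied by humans and every parameter is chosen by humans. That is code conditioned on human input, not autonomy in Buterin’s sense.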
Is this “autonomous”? Maybe, if you squint. It certainly fails Buterin’s “no human involvement” test. But there’s a lot of detail that isn’t publicly available that weighs heavily on the “autonomous” character of that code, so it may be premature to conclude; still, the threshold is fairly high. Buterin himself noted in the block quote above that a full autonomous agent “is the dream of science fiction.” I think that dream is still just a dream today.
But let’s assume you’re arguing that the code really is “autonomous.” In that case, you should strongly consider dropping the “agent” part, because it is misleading in all the wrong, liability-inducing ways.
We lawyers will tell you that an agent is one who is recognized at law as having the power to create legal rights, duties, or relationships between another person (the principal) and a third party. If you deploy an honest-to-goodness automaton that actually acts on its own, you don’t want to call it an agent; you’d be announcing loudly that YOU are the principal and that YOU are legally responsible for the automaton’s actions. In most cases, that is not the goal. The goal is typically the opposite.
So let’s call these AIs what they are, or perhaps what we aspire them to be: Automatons. Rather than clinging to language that looks good in marketing copy but may invite the presumption that you’ve opted into unchecked legal liability, get your words right. Ditch the Agent; embrace the Automaton.