The Success of ChatGPT Reveals Most Human Communication is Bullshit
Large language models are strings of words and phrases glued together by rolls of the dice, much like your last conference call
We're just shy of a year since ChatGPT launched and changed the world. About a week later I wrote a post recognizing as much, and I believe its transformative power is only beginning to be felt and understood.
I've since gone so far as to predict that these advances in AI capability and adoption will destroy the Internet as we know it: websites will vanish, replaced by personal bots fed directly with all our data.
But this isn't about the AI Revolution. Instead, I'm marveling at how large language models (LLMs) can string together words and phrases so well that we can't tell the difference between a chatbot and other people.
Alan Turing, cracker of the Nazi codes and Big Daddy of the computer, proposed his eponymous "Turing Test" as the easiest and most effective way to determine whether or not a machine could think like us humans.
He imagined two rooms, each equipped to send and receive written messages between them. A real person sat in one, typing questions and receiving responses. The conversation would continue for several iterations.
If the human couldn't tell whether whoever was in the other room was a person or a machine, then it didn't matter. And if the conversation partner was revealed to be a machine, then the machine, de facto, could think.
Fast forward and imagine you're the human in one room, and ChatGPT is in the other. According to Turing (and the Google programmer who went bonkers), today's chatbots are sophisticated enough to pass for human.
Despite the speculation of the smartest and dumbest programmers ever, and the over-hyped and over-hystericized "hallucinations" of these occasionally creepy Bots, none of them are even remotely self-aware.
Even the best and brightest LLM-based Bots are incapable of creative thought and problem solving. My favorite science pundit Sean Carroll illustrates this limitation using a wonderfully simple chess challenge.
Ask a computer to play chess, and it'll kick any grandmaster's ass. But ask ChatGPT to extrapolate the standard rules to a toroidal board and determine whether black or white has the advantage, and epic #fail.
A moderately intelligent human, if given this question and a visualization of the wraparound board similar to the screen in the video game Asteroids, would see how white, moving first, could instantly checkmate black.
Such limits become evident when you understand the basics of how LLMs work. The essence is simple, even if the details and their implementation have taken decades of research and billions of dollars to bring to life.
Step 1: Scrape petabytes of human-written content into a database; Step 2: Break the data into "tokens"; Step 3: Calculate how likely each token is to follow any given string of tokens in the database.
For example, the token "I woke up this morning…" is followed by the token "…and I got myself a beer" only 0.00023% of the time, unless the query also mentions "Jim Morrison," which boosts the frequency to 99.825%.
I made that shit up, but you get the picture. ChatGPT and all its ilk have about as much "intelligence" as a housefly, maybe less. They basically cobble together the words, phrases, and lines of code that best match your ask.
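The three steps above can be caricatured in a few lines of Python. This is a toy bigram model, a drastic simplification I'm using purely for illustration (real LLMs learn neural networks over vast token vocabularies, not literal lookup tables, and the corpus and function names here are invented). It strings tokens together by rolls of the dice, exactly as described:

```python
import random
from collections import defaultdict

# Step 1 (caricatured): a "scraped corpus" of exactly one sentence.
corpus = "i woke up this morning and i got myself a beer".split()

# Steps 2-3: break the text into tokens and record, for each token,
# which tokens have been observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Glue tokens together by sampling from observed continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))  # roll the dice
    return " ".join(out)

print(generate("i"))
```

Every "sentence" it produces is just recombined fragments of what it ingested, which is the whole point: scale that table up by a few trillion tokens and some clever math, and you get something that can pass the bar exam.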
Now here's what amazes and shocks me: despite that boneheaded dice-rolling, the outputs of these next-gen Bots are so realistic that you can chat with them for a long time before realizing they have an IQ of 5.
Not only that, but being that stupid doesn't stop them from passing the bar exam, helping to design new drugs, or doing thousands of other amazing and apparently super-smart things that we humans struggle with or can't do at all.
And all that makes me wonder what "intelligence" actually means, especially when it comes to performing certain tasks, no matter how seemingly complex and important. Dumb machines are rocking it.
Coming full circle, I was recently on one of those interminable conference calls at work. Someone was talking, and I couldn't help thinking, after twenty years in this business, that all I heard was word salad.
Remember those celebrity quote generator memes? The Charlie Sheen ones were particularly funny. Random jumbles of words, they were spot-on because they captured the essence of each celebrity's lexicon of bullshit.
And now I'm wondering if almost all human communication is bullshit, proven by large language models' ability to grind up everything we write, say, and eventually do, and spit it back out as if the machines had just made it up.
My failed romances spring to mind, of course. The same cycles of honeymoon to hell, the same arguments repeating over and over, endless variations of the exact same nonsense, just tap the button for more.
No wonder one of the biggest AI growth markets is SexBots. I think their burgeoning popularity comes not from making porn more engaging, but from implicitly reminding us how unpleasant real relationships usually are.
So if the AI Revolution has shown us anything, it's not to be terrified of machines taking over the world and killing us all; instead, it's how they've revealed that everything we consider human is actually regurgitated noise.
The machines need to feed on our "originality," and if you're a genuinely creative person you don't need the machines to create. Yet everything that makes us uniquely human keeps slipping away, perhaps gone for good.
Have a nice day! And by the way, within seconds after I post this blog, OpenAI and its competitors will scrape its content, slurp it into their database, break it into tokens, and spew it back out in endless ways…
(Of course I'm not going to see a penny of that exercise, either, booyah.)