AI isn’t sufficiently advanced technology … yet

paultyma
Published in Bullpen Capital
5 min read · Aug 1, 2023

Any sufficiently advanced technology is indistinguishable from magic. — Arthur C. Clarke

In many ways, LLMs and ChatGPT have made us rethink Artificial Intelligence. Before they appeared, AI was mostly something many of us heard about that seemingly existed only to get us to click more web ads or move video game monsters around. Less obtrusively, it was also found under the covers of many software products, making refined decisions about improving profit or customer experience (not often at the same time). But it was mostly "under the covers": we felt it more than we saw it.

ChatGPT changed all that.

Truthfully, our lifetime has been spoiled with new technological advancements, but it has been a few years since technology gave us an "It can do that!?" moment. ChatGPT gave us a new and almost personal way to interact with AI. Its results are often astonishing. Sure, there are plenty of results that are stupid or funny too. But when it shines, it shines.

AI has been around for decades, and while it does feel like a recent, huge technological step forward, it is mostly in the sense that it often takes years of work to become an overnight success. Late in the last century, AI was mostly known for making a lot of unfulfilled promises. That's unfortunate, because the researchers of that era built the foundation; the computers of the time simply weren't up to the task of letting them evolve it.

Computers have always been good at fast calculations. For example, it's trivial to program a computer to play TicTacToe and never lose. I didn't say it would always win; I said it'll never lose, because TicTacToe is like that, rather prone to draws. Even a child with sufficient experience playing TicTacToe will quickly learn how to never lose. For a computer, a naive solution could search every possible current move and how each one propagates into every possible move after that (and so on). There are only something like 19,000 possible game states (many of which you can ignore because they don't make sense). There are better ways for a computer to do it, but because the problem is so small, even the brute-force searching approach works.
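That brute-force idea can be sketched directly. Here's a minimal minimax search that examines every reachable position (the board representation and function names are my own, just for illustration); a player that follows best_move() can win or draw, but will never lose:

```python
# All eight winning lines on a 3x3 board, indexed 0..8 left-to-right, top-to-bottom.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score the position for `player`: +1 win, 0 draw, -1 loss.
    Returns (score, best_move_index)."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full, nobody won: draw
    best = (-2, None)
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # score from opponent's view
        board[m] = ' '                        # undo the trial move
        if -score > best[0]:                  # opponent's loss is our gain
            best = (-score, m)
    return best

def best_move(board, player):
    return minimax(board, player)[1]
```

Because TicTacToe's game tree is so small, this search finishes in well under a second even from an empty board; for example, handed a board where it can complete three in a row, it takes the win, and handed a board where the opponent threatens three in a row, it blocks.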

The game Connect-Four represents a bigger problem. There are a few quintillion possible board configurations. Again, the vast majority of those don't make sense: they are either illegal or simply couldn't happen under the rules of the game. So the good news is that instead of a few quintillion possibilities, you only need to consider a few trillion to build a winning system. That is still well past any human's ability to think through every move, but within a computer's grasp even without AI. John Tromp[1] used 40,000 hours of computer time in 1995 to build a database of every possible Connect-Four game board. With that, the system can make its move during a game in just a few seconds.
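A quick back-of-the-envelope check of those counts: each cell of a board holds X, O, or nothing, so an n-cell board has at most 3^n raw configurations. That raw number wildly overshoots; the legal, reachable positions are far fewer (around 4.5 trillion for a standard 7×6 Connect-Four board, per the footnote):

```python
# Naive upper bounds: every cell independently one of three states.
# Most of these configurations are illegal or unreachable in real play.
tictactoe_raw = 3 ** 9    # 3x3 board
connect4_raw  = 3 ** 42   # standard 7x6 board

print(f"TicTacToe raw upper bound:    {tictactoe_raw:,}")   # 19,683
print(f"Connect-Four raw upper bound: {connect4_raw:.2e}")
```

The gap between the raw bound and the few trillion legal positions is exactly why pruning the states that "don't make sense" matters so much.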

John Tromp's solution was a brute-force search of every possible game state, which is what we'd call a strong solution to the game. The truth is, however, that for most problems you don't need a strong solution; a weaker one will do just fine. And a good thing too, because once you get to games like Checkers or Chess, brute-force searching of every possible game situation is no longer feasible even on our fastest computers.

This is where AI comes in. When we have a problem with too many possibilities, we need to start guessing. We can surely train ourselves (or an AI Model) to recognize cues that lead us to a pretty good solution without having to consider every possibility.

Enter ChatGPT and LLMs.

If I asked you "What are you doing tonight?", there are a lot of possible answers. Let's say, though, that you're limited to a 20-word answer, and your answer has to be a sentence that actually makes sense for this question.

Answers like “I’m going bowling” or “Nothing” or “I’m launching in my spaceship to the moon” might all be correct. That last one seems pretty infeasible, but it is at least remotely possible (or at least the person answering might think it’s possible).

So how do you train an AI to know how to respond to English questions? You give it lots of examples, that's how[2]. Once it has seen a few thousand responses to "What are you doing tonight?", it may detect that "I'm going bowling" looks normal, "I'm launching in my spaceship to the moon" doesn't, and "It's raining in Sweden" is just plain wrong.
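The learn-from-examples idea can be illustrated with a toy: count which answers actually followed a question in a corpus, then rank candidate replies by how often they were observed. Real LLMs predict one token at a time over a huge vocabulary rather than whole canned answers, and the tiny "corpus" below is invented for illustration, but the principle of leaning on seen examples is the same:

```python
from collections import Counter

# A made-up corpus of (question, observed answer) pairs.
corpus = [
    ("What are you doing tonight?", "I'm going bowling"),
    ("What are you doing tonight?", "Nothing"),
    ("What are you doing tonight?", "I'm going bowling"),
    ("What are you doing tonight?", "Watching a movie"),
    ("What are you doing tonight?", "I'm going bowling"),
]

def rank_answers(question, corpus):
    """Rank answers to `question` by how often they appeared in the corpus."""
    counts = Counter(a for q, a in corpus if q == question)
    return counts.most_common()

print(rank_answers("What are you doing tonight?", corpus))
```

"I'm going bowling" was seen most often, so it ranks first; "It's raining in Sweden" was never seen as an answer to this question, so it is never proposed at all.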

We like to think we are masters and craftsmen of the English language, and we probably are. But maybe we have overestimated how tough it is to master. It turns out that for any given question there's really only a finite number of good answers. It's nowhere near as hard as winning at Chess. But like Chess, once you've seen enough examples, you can fake some really great responses.

That said, while ChatGPT can sometimes give astonishing answers, it's not putting thought into its responses. It has seen a billion English sentences and is guessing what to say next. Despite how special we think we all are, we humans tend to ask the same things over and over, and many of the answers already exist all over the internet. Of course it's wrong some of the time; some of those input sentences were wrong too. But it's right a lot of the time. And it has read a lot more English writing than you or me.

Its predictions of what to say next are quite educated, but so far AI is not sufficiently advanced technology to be considered magic. So far, LLMs and ChatGPT aren't doing any thinking, much less at some higher level than we're capable of. Now that would look like magic! For the moment, though, it's just a well-designed guessing machine (probably the best the world's ever seen). But it's an amazingly good start, and while it isn't magic just yet, you never know: that could be the next big thing.

[1] Varies by size of the Connect Four board. John Tromp gives about 4 Trillion for a standard game — https://tromp.github.io/c4/c4.html

[2] Where do you get lots of English sentence examples? Reddit and Twitter/X, for example, are excellent repositories of human-written input text; notably, both have since limited access to their data.
