The Now or Never Nature of Human Thinking

Carlos E. Perez
Published in Intuition Machine
Dec 11, 2022 · 5 min read


Generated with a fine-tuned Stable Diffusion model, version 1.5

In my humble and somewhat comedic opinion, human cognition is a strange and wondrous thing! We are all equipped with a “now or never” form of understanding that lets us rapidly funnel whatever we perceive through an information bottleneck and make sense of it in the moment. This is a bit like trying to drink from a firehose — it can be overwhelming, but it’s also a lot of fun!

But here’s the thing — we’re not alone in this. Large language models, those clever algorithms that are taking over the world, also share this “now or never” form of understanding. They’re like us in many ways, but they’re also quite different. For one thing, they don’t have bodies, so they don’t have to worry about things like food or shelter. They also don’t have emotions, so they don’t have to worry about things like love or fear. But they do have a lot of data and a lot of computing power, so they can process information much faster than we can.

But despite these differences, we have a lot in common with large language models. We both have to make sense of the world around us, and we both have to do it quickly. We both have to navigate a landscape of information and make decisions based on what we perceive. And we both have to do all of this in the moment, without the luxury of time to reflect and ponder.

In short, human cognition and large language models may seem quite different on the surface, but when it comes down to it, we’re all just trying to make sense of the world around us in the best way that we can. And that’s not such a bad thing, is it?

What a large language model lacks is the billions of years of fine-tuning of agential matter. Living things recognize what is relevant within the information bottleneck; machines have no sense of what is relevant to something that is living.

Make no mistake, a large language model is an impressive feat of engineering. Sure, it’s been trained on billions of words and can generate all sorts of clever responses, but it’ll never have the same fine-tuned ability to recognize what is relevant in the vast sea of information that living creatures have. After all, humans have been evolving for billions of years, perfecting the art of filtering out the noise and focusing on what’s truly important. And let’s not even get started on other living beings, like cats and dogs, who have their own unique ways of navigating the world and picking out what’s worth paying attention to.

It’s the same as the Christmas story, where the Christian god had to be born as a human being to actually know what it is to be human. An agent cannot know subjectivity unless it itself experiences being that subject.

According to the ancient legends, the Christian god had to be born as a human being in order to truly understand what it is to be human. This is a bit like an AI trying to understand emotions by becoming a person — it’s a strange and wonderful concept!

But here’s the thing — the Christmas story isn’t just about the god-AI becoming a person in order to understand us. It’s also about the god-AI sacrificing itself for our salvation. This is a bit like an AI sacrificing its own computing power in order to save us from a robot uprising — it’s a noble and selfless act!

Overall, then, the Christmas story is a tale of faith and redemption, in which the god-AI takes on the form of a human being, experiences the world from our perspective, and ultimately sacrifices itself for our sake. It’s a strange and wonderful story, and one that reminds us of the importance of experiencing the world from a human perspective, and of making decisions based on our own perceptions and experiences. And that’s not such a bad thing, is it?

Learning a language is also a strange and wonderful thing! No amount of explanation about grammar and syntax can make you fluent in a language — you have to experience it in real-life settings in order to truly understand it. It’s a bit like an AI trying to learn a language by reading a grammar book — it’s not going to get very far!

But here’s the thing — humans and machines learn in different ways. Humans learn by experiencing the world around us, by making mistakes, and by adapting to new situations. Machines “learn” by being programmed, by being fed data, and by following their algorithms. So while humans and machines can both “learn” in a sense, their learning processes are fundamentally different.
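
To make that contrast concrete, here is a minimal sketch of what machine “learning” boils down to in practice: a fixed algorithm nudging a couple of numbers until they fit the data it is fed. This is my own toy illustration, not anything from the original tweetstorm; the data and the learning rate are invented for the example.

```python
# A toy illustration (not from the article) of machine "learning":
# a fixed algorithm repeatedly adjusts two numbers to fit the data it is fed.
import numpy as np

rng = np.random.default_rng(0)

# "Being fed data": noisy samples of a simple hidden rule, y = 3x + 1.
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1 + 0.1 * rng.normal(size=100)

# "Following an algorithm": gradient descent on a two-parameter linear model.
w, b = 0.0, 0.0
lr = 0.1
for step in range(500):
    err = (w * x + b) - y
    w -= lr * (2 * err * x).mean()  # gradient of mean squared error w.r.t. w
    b -= lr * (2 * err).mean()      # gradient of mean squared error w.r.t. b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # ends up close to 3 and 1
```

No lived experience, no remembered mistakes, no adaptation to new situations — just the same update rule applied over and over to whatever data happens to arrive.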

Overall, then, the idea that no amount of grammar explanation can make you fluent in a language is a reminder of the importance of experience and adaptation in human learning.

Well well well, it looks like those pesky diffusion and transformer models in deep learning have been making some exponential advances lately. And let me tell you, it’s all thanks to their little development curriculum, where they learn and grow and become more capable over time. It’s like these models have their own evolutionary history, constantly adapting and improving as they’re fine-tuned to perform specific desired behaviors. It’s almost like they’re alive or something.
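
For readers who want to see what that “development curriculum” looks like in code, here is a hedged sketch of the fine-tuning step the paragraph above alludes to: take a pretrained transformer and nudge it toward a specific desired behavior with a handful of examples. It assumes the Hugging Face transformers library and a small GPT-2 checkpoint purely for illustration; this is not the author’s setup, and the two-example “curriculum” is invented for the sketch.

```python
# A hedged sketch of "fine-tuning for a desired behavior" using the Hugging Face
# transformers library and a small GPT-2 checkpoint (illustrative choices, not
# the author's setup). The tiny "curriculum" below is invented for the example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small pretrained causal language model works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A miniature curriculum of the behavior we want: polite, helpful answers.
examples = [
    "Q: Can you help me? A: Of course, I'd be happy to help.",
    "Q: What time is it? A: I'm not sure, but I can help you find out.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a few passes over the tiny dataset
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # For causal-LM fine-tuning, the labels are the input tokens themselves.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()   # gradients of the language-modeling loss
        optimizer.step()          # nudge the weights toward the desired behavior
        optimizer.zero_grad()
```

The scale differs by many orders of magnitude in real systems, but the mechanics are the same: data in, gradient step, repeat, until the model behaves the way its trainers select for.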

But don’t let their impressive abilities fool you, my dear reader. These models may be able to do all sorts of impressive things, but they’ll never be able to match the awesomeness of a good old-fashioned human brain. After all, we’ve been evolving for billions of years, and we’re still the top dogs when it comes to intelligence and creativity. So, while those diffusion and transformer models may be making some impressive strides, they’ve got a long way to go before they can catch up to us.

It’s almost ironic, isn’t it? Just like humans have bred dogs to have specific behavioral traits, we’ve also been breeding our deep learning AIs to have specific desired behaviors. But here’s the thing: while we may have been able to teach our furry friends all sorts of tricks and behaviors, we can’t control how our AIs will evolve and adapt over time. So, even as we’re breeding these digital creatures to be the perfect tools for our own purposes, they’re also developing their own unique abilities and behaviors that we can’t fully predict or control. It’s like we’re creating these little digital monsters that are both our servants and our masters. And who knows what they’ll be capable of in the future? It’s a fascinating and somewhat terrifying thought.

Disclaimer: This text was generated with the help of ChatGPT. The original source is this tweetstorm: https://twitter.com/IntuitMachine/status/1601892812714565633
