The better LLMs learn to talk, the more they begin to see the world.

AI World Vision
AI World Vision News
4 min read · Aug 20, 2024


Researchers at MIT's CSAIL ran a clever set of experiments and found that large language models build their own little model of the world inside their "heads", suggesting they understand language rather than merely copying it.

Ask GPT-4 to sniff a rainy campsite, and it will tell you it can't. But ask it to describe that smell, and boom: you get lines like "air thick with anticipation" and "a scent that's both fresh and earthy." Despite having no nose and never experiencing rain, it gets it right. What is going on? Perhaps it is just parroting, literally repeating descriptions it read in training without really understanding rain or smell.

But wait: just because an LLM has no eyes, does that mean it can never know a lion is bigger than a house cat? People have long considered understanding language a hallmark of intelligence, yet we still argue over what understanding actually requires. Does meaning have to be grounded in sight, smell, and touch, or can it emerge from text alone?

Well, researchers from MIT CSAIL looked into this mystery and came back with a striking answer: an LLM may develop its own understanding of reality simply as a way to get better at generating text. The team used Karel puzzles, small programs that command a robot through a simulated grid world. They trained a language model on puzzle solutions alone, never showing it how the programs actually executed. Then, using a technique called "probing," they trained simple classifiers on the model's internal activations to check whether it was tracking the robot's state as it solved new puzzles.
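To make "probing" concrete, here is a minimal, self-contained sketch of the idea; it is not the CSAIL team's actual code. A fixed random recurrent network stands in for the trained language model, a few-line toy interpreter stands in for the Karel world, and every name here (facing_trace, hidden_states, the token set) is invented for illustration. The probe itself is just a linear classifier that tries to read the robot's state out of the stand-in model's hidden activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
TOKENS = ["move", "turnLeft", "turnRight"]
DIRS = ["north", "east", "south", "west"]

def facing_trace(program):
    """Toy Karel-style interpreter: the ground-truth facing direction
    (index into DIRS) after each token. 'move' leaves it unchanged."""
    facing, trace = 0, []
    for tok in program:
        if tok == "turnLeft":
            facing = (facing - 1) % 4
        elif tok == "turnRight":
            facing = (facing + 1) % 4
        trace.append(facing)
    return trace

# Stand-in "language model": a fixed random RNN. Its hidden state is a
# function of the token sequence only; it never sees the interpreter.
DIM = 64
W_h = rng.normal(0.0, 0.9 / np.sqrt(DIM), (DIM, DIM))
W_x = rng.normal(0.0, 1.0, (DIM, len(TOKENS)))

def hidden_states(program):
    h, out = np.zeros(DIM), []
    for tok in program:
        x = np.eye(len(TOKENS))[TOKENS.index(tok)]
        h = np.tanh(W_h @ h + W_x @ x)
        out.append(h.copy())
    return out

# Pair every hidden state with the true world state at that step.
X, y = [], []
for _ in range(500):
    program = [str(t) for t in rng.choice(TOKENS, size=8)]
    for h, facing in zip(hidden_states(program), facing_trace(program)):
        X.append(h)
        y.append(facing)
X, y = np.array(X), np.array(y)

# The probe: a deliberately simple linear classifier. If it can read the
# robot's facing direction out of the hidden states, that information is
# encoded there, even though the "model" only ever saw token strings.
probe = LogisticRegression(max_iter=2000).fit(X[:3000], y[:3000])
print("held-out probe accuracy:", probe.score(X[3000:], y[3000:]))
```

The probe is kept deliberately simple on purpose: if a plain linear readout can recover the world state, the credit belongs to the model's representations, not to any cleverness in the probe itself. That is the logic behind the real study's finding that the LLM had come to represent the simulation it was never shown.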
