How Large Language Models (LLMs) Learn: Playing Games
LLMs are just professional Mad Libs Players
Hello dear reader, hope you’re doing super well!
In today’s article I will give you a very concise and intuitive overview of how Large Language Models learn, so sit back, grab a coffee or tea, and enjoy :)
What is a Large Language Model?
Large Language Models, or LLMs, are the statistical models that power applications like ChatGPT.
They get that name because they are generally trained on enormous (hence the “Large”) quantities of text (hence the “Language”), and we normally interact with them through text, which is one of the main vehicles for representing language.
On top of that, we have the usual definition of a model in science: an abstract representation of a real-world phenomenon, system, or process.
So, in essence, the name already gives us a fairly precise grasp of what they are: abstractions of the language we humans use every day, built from a massive amount of text.
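To make the idea of a “statistical model of language built from text” a bit more concrete, here is a toy sketch in Python. This is emphatically not how real LLMs work internally (they use large neural networks, not simple counts), but it shows the same basic principle on a miniature scale: read a lot of text, gather statistics about which words tend to follow which, and use those statistics to guess what comes next. The tiny corpus and the `predict_next` helper are invented for illustration only.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the "enormous quantities of text" an LLM is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word: a bigram model,
# i.e. a drastically simplified "statistical model of language".
counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    followers = counts[word]
    total = sum(followers.values())
    best, freq = followers.most_common(1)[0]
    return best, freq / total

print(predict_next("the"))  # e.g. ('cat', 0.25) -- depends on the toy corpus
print(predict_next("sat"))  # ('on', 1.0)
```

A real LLM does something analogous, just with billions of parameters instead of a lookup table, and with whole sequences of text as context instead of a single previous word.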