Retrieval-Augmented Generation (RAG): Control Your Model’s Knowledge and… Hallucinations!

Super-sized language models are remarkably good at what they do. We all know this from hearing about it constantly in A.I.-related news.

A model like GPT-3 probably knows more than you do, and it may even write better than you.

Admittedly, it is really big (175 billion parameters) and it has seen a lot of text (~500 billion words). But those "parameters" are just numbers; matrices upon matrices of floating-point values. It just knows things, somehow. It's not like it has a database or a lookup table of information.

But maybe it should. Maybe it should have a database of knowledge like a normal “machine” would.

The thing is, these language models know so much, yet they tend not to know what they know. Developers who have worked with models like GPT have almost certainly encountered this problem in one form or another:

these language models hallucinate.
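
To make that "database of knowledge" idea concrete before diving in: the core retrieval-augmented loop is just "look it up, then paste what you found into the prompt." Here is a minimal, hypothetical sketch in Python. The toy corpus, the word-overlap scoring, and the prompt template are all illustrative assumptions, not a real retriever.

```python
# Illustrative sketch of retrieval-augmented prompting.
# Corpus, scoring rule, and prompt template are toy assumptions.

corpus = [
    "GPT-3 has 175 billion parameters.",
    "GPT-3 was trained on roughly 500 billion words of text.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Rank documents by naive word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list) -> str:
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How many parameters does GPT-3 have?", corpus))
```

A real system would swap the word-overlap scorer for learned embeddings and send the assembled prompt to a language model, but the structure stays the same: retrieve first, generate second.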

--