Giving self-reflection capabilities to LLMs

Vishal Rajput
Published in AIGuys · 7 min read · Aug 29, 2023


By now, most of us have used ChatGPT or another LLM. ChatGPT is great, but it still hallucinates, especially on complex problems. LLMs hold tremendous power, yet most of us cannot utilize it to the full extent: the way we prompt can completely change the response. By default, ChatGPT tends to give politically correct, generalized answers that lack nuance and are padded with verbiage, that is, sentences or paragraphs with many words that make no specific point. It is not that ChatGPT can’t answer with nuance; it needs specific instructions to do so. Let’s see how to solve the problem of nuance and how to make ChatGPT better at responding to complex queries.

Photo by ilgmyzin on Unsplash

Problems with LLMs

There are five major problems with ChatGPT and other LLMs.

  1. They give very generalized responses that often lack nuance, and at times they repeat themselves.
  2. There is a lot of verbiage: unnecessary words that don’t say anything.
  3. They tend to phrase things in a politically correct way and can’t make a strong argument from a given worldview.
  4. They hallucinate and often get things wrong on complex problems whose answers would require more than 8k or 16k tokens.
  5. They run out of memory to store the relevant…
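The self-reflection idea the title promises can be sketched as a simple loop: the model answers, critiques its own answer, then revises it using that critique. The sketch below is a minimal, hypothetical illustration; `call_llm` is a stand-in stub for whatever chat-completion API you actually use, so the example runs standalone.

```python
# Minimal sketch of a self-reflection loop. `call_llm` is a hypothetical
# stand-in for a real LLM API call (OpenAI, a local model, etc.); here it is
# stubbed with canned responses so the example is self-contained.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion API."""
    if prompt.startswith("[CRITIQUE]"):
        return "Too vague; add concrete numbers."
    if prompt.startswith("[REVISE]"):
        return "Revised answer with concrete figures."
    return "Initial, somewhat vague answer."

def self_reflect(question: str, rounds: int = 2) -> str:
    """Answer, then alternate critique and revision for a few rounds."""
    answer = call_llm(question)
    for _ in range(rounds):
        # Ask the model to critique its own previous answer.
        critique = call_llm(f"[CRITIQUE] Question: {question}\nAnswer: {answer}")
        # Ask it to revise the answer using that critique.
        answer = call_llm(
            f"[REVISE] Question: {question}\nAnswer: {answer}\nCritique: {critique}"
        )
    return answer

if __name__ == "__main__":
    print(self_reflect("How long is the context window?"))
```

With a real model behind `call_llm`, each round gives the LLM a chance to catch its own verbiage or hallucinations before the final answer is returned; the tag prefixes here exist only so the stub can tell the three prompt types apart.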
