Darren Wallace

AI and LLMs

4 stories

An overview of the RAG pipeline. For document storage: input documents -> text chunks -> encoder model -> vector database. For LLM prompting: user question -> encoder model -> vector database -> top-k relevant chunks -> generator LLM. The LLM then answers the question using the retrieved context.
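As a rough sketch of those two stages, here's a minimal in-memory version. The sentence-transformers encoder, the naive word-count chunker, and the numpy "vector database" are all illustrative assumptions rather than a specific production setup, and the final generator call is left as a placeholder.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Encoder model (an assumed, commonly used choice).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# --- Document storage: input documents -> text chunks -> encoder -> vector DB ---
def chunk(text, size=200):
    """Naive fixed-size chunking by word count; real pipelines use smarter splitters."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

documents = ["..."]  # placeholder: your input documents
chunks = [c for doc in documents for c in chunk(doc)]
# A numpy matrix stands in for the vector database here.
index = encoder.encode(chunks, normalize_embeddings=True)

# --- Prompting: question -> encoder -> vector DB -> top-k chunks -> generator LLM ---
def retrieve(question, k=3):
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since embeddings are normalized
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

question = "What does the document say about X?"
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` is then sent to whatever generator LLM you use.
```

Swapping the numpy matrix for a real vector database (FAISS, Chroma, pgvector, etc.) changes the storage and lookup calls but not the shape of the pipeline.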
Darren Wallace

Develop educational web applications by day. Wannabe tortured artist. Survivor - but only just.