Sasirekha Cota

Pinned: Improving LLM reasoning using prompting — CoT, L2M, PoT, ToT, PHP - Part 1 (Feb 8)
Large language models — in spite of their versatility and impressive behaviour in understanding and emulating human conversation — were not…
Pinned: Everything you need to know about Fine-tuning LLMs — Part 4 — Parameter-efficient fine-tuning… (Jan 3)
PEFT is a collection of techniques designed to adapt LLMs to specific tasks while significantly reducing the computational resources and…
Pinned: Deep learning basics — Part 5 — Word2Vec (Nov 30, 2023)
Word2vec is a watershed moment in the history of NLP, fundamentally changing the way we represent and understand language.
Pinned: Deep learning basics — Part 3 — Discrete Representation of Text (Nov 27, 2023)
Discrete representation refers to a method of representing each data point independently as integers, symbols, or binary vectors.
Improving LLM reasoning using prompting — CoT, L2M, PoT, ToT, PHP - Part 2 (Feb 9)
We covered the Chain-of-Thought (CoT) and Least-to-Most (L2M) prompting techniques that enable LLMs to handle reasoning tasks better in…
Red-Teaming — to make LLMs robust and safer (Jan 20)
LLMs are incredibly powerful and shine in natural language understanding and generation (mimicking human behavior so well), but biases…
Everything you need to know about Fine-tuning LLMs — Part 3 (Jan 21)
There are various approaches to fine-tuning — depending on the training data, learning paradigm, extent of model modification and…
Everything you need to know about Fine-tuning LLMs — Part 2 (Dec 17, 2023)
The Mechanics: the six-step process of fine-tuning.