Published in Data Science Collective · Making LLMs Lighter: A deep dive into LLM quantization with Code · Understand how LLM quantization methods such as LLM.int8() and GPTQ work · Feb 24
Published in Towards AI · From FP32 to INT8: The Science of Shrinking AI Models · Understanding quantization of neural networks along with their implementation · Feb 12
Published in Towards AI · Understanding Agentic RAG and How It’s Different From RAG With Code · In the world of Large Language Models (LLMs), Retrieval Augmented Generation (RAG) has emerged as a game-changer. Traditional RAG, while… · Jan 28
Published in Level Up Coding · Diving Deeper into LLM Agents along with Code · Many of us have seen the Iron Man movies, where Jarvis is an AI assistant carrying out various tasks for Iron Man. Well, with the… · Dec 15, 2024
Published in Towards AI · Extending Context Length of an LLM: Intuition, Implementation & Interview Questions · Understand multiple methods like Positional Interpolation, NTK awareness, and Dynamic NTK for extending context length. · Jul 2, 2024
Published in Towards AI · Rotary Positional Embedding (RoPE): Motivation and Code Implementation · Delve deeper into RoPE along with its code to better understand positional embeddings in LLMs · Jun 13, 2024
Published in Towards AI · Low-Rank Adaptation (LoRA): From Intuition to Implementation to Interview Questions · Delving Deeper into LoRA for LLMs · Jun 3, 2024
Elevate Your Model Training with WandB: A Data Scientist’s Best Friend · Learn how to incorporate WandB into all of your code · Jul 24, 2023
Published in Predict · How AI can make your life better! · AI (artificial intelligence) can be an intimidating term that has suddenly crept into our lives, so let’s see how it affects us. · Feb 1, 2022
Published in TDS Archive · A better way to code, as a data scientist · Even if you are just plotting a graph, clean code is always appreciated, so try out these practices while coding · Dec 15, 2021