Day 6: Why Transformers Normalize & Add Residuals — The Hidden Boosters Behind Their Power. By Isha Khatana. Aug 5
Day 5: The Role of Feedforward Neural Networks in Transformers — Refining Each Word’s Understanding. By Isha Khatana. Aug 4
Day 4: Multi-Head Attention in Transformers — Seeing the Same Sentence from Multiple Angles. In the previous blog, we discussed positional encoding — how Transformers understand the order of words even though they process sequences… Jul 31
Day 3: Positional Encoding in LLMs Demystified: How Sin & Cos Functions Help Transformers Understand… By Isha Khatana. Jul 18
Day 2: From Words to Meaning: A Beginner’s Guide to Attention Mechanism in Transformers. By Isha Khatana. Jul 17
What Business Teams Actually Want from Your ML Model (And What They Don’t Care About). Bridging the gap between model performance and real business impact. Jul 15
The Real-World Rules I Use for Picking Models That Work. The surprising truth: sometimes the best-performing model is not the one you should use. Jul 10
Data Is Never Clean — Embracing Imperfect Inputs in AI Systems. There’s no such thing as perfect data — only data that’s good enough to be useful. Jul 9
🕵️♀️ Thinking Like a Data Detective: How I Approach Unstructured Problems. In today’s fast-moving, solution-driven world (especially in tech and AI), there’s a strong tendency to jump straight to building — a… Jul 8