Pinned · Hesam Sheikh in Towards Data Science · The Smarter Way of Using AI in Programming · avoid the outdated methods of integrating AI into your coding workflow by going beyond ChatGPT · Aug 29
Pinned · Hesam Sheikh in Towards Data Science · What We Still Don't Understand About Machine Learning · Machine Learning unknowns that researchers struggle to understand, from Batch Norm to what SGD hides · Jul 26
Pinned · Hesam Sheikh in Towards AI · Learn Anything with AI and the Feynman Technique · study any concept in four easy steps by applying AI and a Nobel Prize winner's approach · Jul 8
Hesam Sheikh in Towards AI · How I Stay Up to Date With the Latest AI Trends [2024] · Stay ahead of the AI game with a concise list of resources… · Aug 15
Hesam Sheikh in Towards Data Science · Create Synthetic Dataset Using Llama 3.1 to Fine-Tune Your LLM · Using the giant Llama 3.1 405B and Nvidia Nemotron 4 reward model to create a synthetic dataset for instruction fine-tuning · Aug 7
Hesam Sheikh in Towards Data Science · A Comprehensive Guide to Collaborative AI Agents in Practice · the definition, and building a team of agents that refine your CV and Cover Letter for job applications · Jul 3
Hesam Sheikh · Why Medium is the Easiest Way to Start Up Your Personal Brand in 2024 · It's not too late to grow your brand and start your one-person business on Medium · Jul 11
Hesam Sheikh in Towards Data Science · Understanding Buffer of Thoughts (BoT) — Reasoning with Large Language Models · New prompt engineering tool for complex reasoning, compared with Chain of Thought (CoT) and Tree of Thought (ToT) · Jun 14
Hesam Sheikh in Towards AI · Understanding MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning · the math and intuition behind a novel parameter-efficient fine-tuning method · Jun 7
Hesam Sheikh in Towards AI · LoRA Learns Less and Forgets Less · a walkthrough of LoRA (Low-Rank Adaptation of Large Language Models) and how it compares to full fine-tuning · Jun 7