Dorian Drost in Towards Data Science

Take a Look Under the Hood: Using Monosemanticity to understand the concepts a Large Language Model learned (Jun 13)
An Overview of the LoRA Family: LoRA, DoRA, AdaLoRA, Delta-LoRA, and more variants of low-rank adaptation (Mar 10)
The German Tank Problem: Estimating your chances of winning the lottery with sampling (Mar 6)
How Nightshade Works: Confusing image-generating AI with poisoned data (Nov 3, 2023)
The Capture-Recapture Method: Estimating a population size without counting it (Sep 13, 2023)
Multilevel Regression Models and Simpson's Paradox: Avoiding false conclusions with the proper tooling (Aug 8, 2023)
Different Ways of Training LLMs: And why prompting is none of them (Jul 21, 2023)
Code Understanding on Your Own Hardware: Setting up an LLM to talk about your code, with LangChain and local hardware (Jul 5, 2023)
Interacting with Large Language Models: Enriching prompts to steer the model, explained for non-experts (May 24, 2023)
A Brief History of Language Models: Breakthroughs on the way towards GPT, explained for non-experts (May 12, 2023)