Pinned · Michael Wood in Cubed
100% Accurate AI Claimed by Acurai — OpenAI and Anthropic Confirm Acurai’s Discoveries
Acurai’s audacious claims to have discovered how LLMs operate are now confirmed by studies conducted by OpenAI and Anthropic.
Aug 26
Pinned · Michael Wood in Cubed
Eliminating Hallucinations Lesson 1: Named Entity Filtering (NEF)
Named Entity Filtering
Sep 28
Pinned · Michael Wood
Creating Accurate AI: Coreference Resolution with FastCoref
Introduction
Oct 15, 2023
Pinned · Michael Wood
GPT-4 Hallucination Rate Is 28.6% on a Simple Task: Citing Title, Author, and Year of Publication
The all-too-common myth of GPT-4 having only a 3% hallucination rate is shattered by a recent study that found GPT-4 has a 28.6%…
Jun 26
Pinned · Michael Wood
Stop Saying RAG Solves Hallucinations — You’re Hurting the AI Industry
Too many companies (and data scientists) are claiming that RAG eliminates hallucinations. Consider the leading providers of legal research…
Jun 29
Michael Wood in Cubed
OpenAI’s o1 Model Is a Disaster
Before you buy the inevitable hype regarding the brand-new o1 model, read OpenAI’s stunning admission in the o1 System Card (page 5)…
4d ago
Michael Wood
Respectfully, no.
In fact, you can empirically demonstrate this yourself by replicating the demonstrations regarding magnesium and calcium in the video…
Sep 5
Michael Wood
Excellent article on how to implement standard chunking techniques.
The basic methodology is to first assign each sentence an index number. Then process the sentences to transform them into independent…
Aug 19
Michael Wood
Another great, useful article.
We've found a third search/ranking method to also be essential—a method based on synonyms. Although it is true that "cat" and "feline" are…
Aug 17