Meir Michanie

- Offload Overfitting to Gradient (Jul 3). This article is part of a series in which we review overfitting an LLM through fine-tuning versus using RAG. The idea is that fine tuning…
- Bringing Your Fine-Tuned MLX Model to Life with Ollama Integration (Jul 2). In this series, we have learned how to train our own model using MLX. The first article, "Overfitting is all you need," explained the…
- Training for overfitting (Jun 30). How to create a dataset for LLM memorization without generalization.
- Overfitting is all you need (Jun 30). How you can fine-tune any LLM to memorize your data.
- AI Won't Take Your Job — But People Using AI Will (Jun 6). In the ever-evolving landscape of technology, one fear looms large: the rise of artificial intelligence (AI) and its potential to replace…
- The fastest way to empower your LLM to fetch data and run multiple tasks (Jun 4). Use OpenAPI actions with agentic LLMs.
- Parallel runs for LlamaIndex agents (May 22). Speed up your tasks by running agents in parallel when possible.
- LlamaIndex for agentic RAG and more (May 20). Learn how to use LlamaIndex agents to perform multiple tasks.
- I won't change "Her" for GPT-4o (May 15). Why "Her" is more than just a movie: the future of AI as an OS.
- @llm_func, a decorator from llm_wrapper for including AI calls in Python (Apr 30). How to run virtual functions without writing code.