Pere Martrain · Towards AI · How to Create a Medical Agent / RAG System (3d ago)
Learn how to build a ReAct agent powered by a RAG system using LangChain, ChromaDB, and OpenAI, with a user-friendly Gradio interface.
Pere Martrain · Level Up Coding · Influencing a Large Language Model response with in-context learning (Aug 27)
A simple introduction to prompt engineering with OpenAI, covering some differences between gpt-3.5 and gpt-4o.
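The idea behind in-context learning is that the model's response is steered by example input/output pairs placed directly in the prompt, with no fine-tuning involved. A minimal sketch, assuming a hypothetical sentiment-classification task (the task, examples, and helper name are illustrative, not from the article):

```python
# In-context (few-shot) learning sketch: behavior is shaped purely by
# examples embedded in the prompt text. All examples are hypothetical.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, label) example pairs."""
    lines = ["Classify the sentiment of each sentence as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Sentence: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The final, unanswered item is the one the model should complete.
    lines.append(f"Sentence: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("I loved this movie.", "positive"),
    ("The service was terrible.", "negative"),
]
prompt = build_few_shot_prompt(examples, "What a wonderful day!")
print(prompt)
```

The resulting string would typically be sent as the `content` of a user message in an OpenAI chat-completion request.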
Pere Martrain · Towards AI · First NL2SQL Chat with OpenAI and Gradio (Aug 14)
You are going to create a simple Natural Language to SQL translator using models from OpenAI and create the interface with Gradio.
Pere Martrain · Artificial Intelligence in Plain English · Create a simple Chatbot with OpenAI and Gradio (Aug 8)
This is not about creating a chatbot. It is about understanding how roles work in OpenAI, how memory is maintained, and how to create a…
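The roles-and-memory mechanics mentioned in that teaser can be sketched briefly: the OpenAI chat API is stateless, so "memory" is simply the full message history (with `system`, `user`, and `assistant` roles) re-sent on every turn. A minimal sketch, with the class name illustrative and the assistant reply hard-coded as a stand-in for a real API call:

```python
# "Memory" with the OpenAI chat format: the API keeps no state, so the
# whole conversation is passed again as a message list on each request.
# The assistant reply below is a hard-coded stand-in for a real API call.

class SimpleChatMemory:
    def __init__(self, system_prompt):
        # The "system" role sets the assistant's overall behavior.
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

chat = SimpleChatMemory("You are a helpful assistant.")
chat.add_user("Hi, my name is Ana.")
chat.add_assistant("Nice to meet you, Ana!")
chat.add_user("What is my name?")

# chat.messages is what you would pass as `messages=` to the API, which
# is how the model can "remember" the name from an earlier turn.
print(len(chat.messages))  # 4
```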
Pere Martrain · Towards AI · Evaluating LLM Summaries using Embedding Distance with LangSmith (Feb 19)
LangSmith is the new tool from LangChain for tracing and evaluating models. In this article, we will explore how to use it to assist in…
Pere Martrain · DataDrivenInvestor · Decoding Risk: Transforming Banks with Customer Embeddings (Nov 20, 2023)
In this article, we explore the transformative power of embeddings and large language models (LLMs) in customer risk assessment and product…
Pere Martrain · Towards AI · How To Set up a NL2SQL System With Azure OpenAI Studio (Nov 9, 2023)
We’ll see how to use Azure OpenAI Studio to set up an inference endpoint we can call to generate SQL commands.
Pere Martrain · Towards AI · Create a SuperPrompt for Natural Language to SQL Conversion for OpenAI (Nov 3, 2023)
One of the things that has changed most in recent months, since the ChatGPT boom, is the emergence of massive large language models able to…
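The core of a NL2SQL prompt of this kind is embedding the table schema, a few question/SQL example pairs, and an instruction to answer with SQL only. A minimal sketch, assuming a hypothetical `employees` table (the schema, examples, and function name are illustrative, not taken from the article):

```python
# NL2SQL prompt sketch: schema plus few-shot question/SQL pairs, with an
# instruction to return SQL only. Schema and examples are hypothetical.

def build_nl2sql_prompt(schema, examples, question):
    parts = [
        "You are a SQL assistant. Given the schema below, answer each",
        "question with a single valid SQL query and nothing else.",
        "",
        "Schema:",
        schema,
        "",
    ]
    for q, sql in examples:
        parts.append(f"Question: {q}")
        parts.append(f"SQL: {sql}")
        parts.append("")
    parts.append(f"Question: {question}")
    parts.append("SQL:")
    return "\n".join(parts)

schema = "CREATE TABLE employees (id INT, name TEXT, salary REAL);"
examples = [
    ("How many employees are there?", "SELECT COUNT(*) FROM employees;"),
]
prompt = build_nl2sql_prompt(schema, examples, "Who earns the most?")
print(prompt)
```

The returned string would then be sent to the model (e.g. as a chat message), and the reply treated as a SQL query.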
Pere Martrain · Towards AI · QLoRA: Training a Large Language Model on a 16GB GPU (Oct 20, 2023)
Let’s explore how Quantization works and provide an example of fine-tuning a Llama-3-8B model on a T4 16GB GPU in Google Colab.
Pere Martrain · Level Up Coding · Efficient Fine-Tuning with LoRA. Optimal training for Large Language Models (Oct 5, 2023)
LoRA is one of the most efficient and effective fine-tuning techniques applicable to Large Language Models. In this post we will take a…
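The efficiency claim behind LoRA can be made concrete with simple arithmetic: instead of updating a full d × k weight matrix, LoRA trains two low-rank factors of shapes d × r and r × k, so trainable parameters drop from d·k to r·(d + k). A minimal sketch with illustrative numbers (the layer size and rank are assumptions, not values from the article):

```python
# LoRA parameter arithmetic: a full d x k update costs d*k trainable
# parameters; LoRA's two factors B (d x r) and A (r x k) cost r*(d + k).

def lora_trainable_params(d, k, r):
    """Trainable parameters for a rank-r LoRA adapter on a d x k layer."""
    return r * (d + k)

# Illustrative numbers: one 4096 x 4096 attention projection, rank 8.
d = k = 4096
r = 8
full = d * k
lora = lora_trainable_params(d, k, r)
print(full, lora, round(100 * lora / full, 2))
# full: 16,777,216 params; LoRA: 65,536 params (~0.39% of the full update)
```

This is why LoRA adapters fit comfortably in GPU memory where full fine-tuning would not.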