Articles by Ingrid Stevens

Pinned · Extract Structured Data from Unstructured Text using LLMs · Using LangChain’s create_extraction_chain and PydanticOutputParser · Jan 13

Pinned · Quantization of LLMs with llama.cpp · Understanding and Implementing n-bit Quantization Techniques for Efficient Inference in LLMs · Mar 15

Building a Simple GenAI Chatbot with Google Mesop · A Step-by-Step Guide to Creating a Generative AI Chatbot Using Google Mesop and LLMs · Aug 20

Regulating AI: The Limits of FLOPs as a Metric · An Argument for (if one must) Regulating Applications, Not Math · May 12

Llama 3's Performance Benchmark Values Explained · Understand the Acronyms: MMLU, GPQA, HumanEval, GSM-8K, MATH · Apr 19

PrivateGPT v0.4.0 for Mac: LM Studio & Ollama · Run PrivateGPT Locally with LM Studio and Ollama — updated for v0 · Mar 31

In Artificial Intelligence in Plain English · LLM Jailbreak: Comparing DrAttack, ArtPrompt, and Morse Code · Red teaming LLMs to Reveal “Forbidden” Information · Mar 10

Streaming Local LLM Responses with LM Studio Inference Server · Streaming with Streamlit, using LM Studio for local LLM inference on Apple Silicon · Mar 9

Chat with your Local Documents | PrivateGPT + LM Studio · 100% Local: PrivateGPT + 2-bit Mistral via LM Studio on Apple Silicon · Feb 24

Chat with your Local Documents · 100% Local: PrivateGPT + Mistral via Ollama on Apple Silicon · Feb 23