AnythingLLM + Ollama: Bring Together All LLM Runners and All Large Language Models - Part 02: Install AnythingLLM and Connect with Ollama.
Learn to connect KoboldCpp/Ollama/llama.cpp/oobabooga LLM runners, databases, TTS, and search engines, and run various large language models.
Previous Article
The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
A full-stack application that lets you turn any document, resource, or piece of content into context that any LLM can use as a reference during chat. It lets you pick which LLM and vector database to use, and it supports multi-user management and permissions.
Product Overview
AnythingLLM is a full-stack application where you can use commercial off-the-shelf LLMs or popular open-source LLMs and vector-database solutions to build a private ChatGPT with no compromises. You can run it locally or host it remotely, and chat intelligently with any documents you provide it.
AnythingLLM divides your documents into objects called workspaces. A workspace functions much like a thread, but adds containerization of your documents. Workspaces can share documents, yet they do not talk to each other, so the context of each workspace stays clean.
Some cool features of AnythingLLM
- Multi-user instance support and permissioning
- Agents inside your workspace (browse the web, run code, etc)
- Custom Embeddable Chat widget for your website
- Multiple document type support (PDF, TXT, DOCX, etc)
- Manage documents in your vector database from a simple UI
- Two chat modes: conversation and query. Conversation retains previous questions and amendments; query is simple Q&A against your documents
- In-chat citations
- 100% Cloud deployment ready.
- “Bring your own LLM” model.
- Extremely efficient cost-saving measures for managing very large documents. You’ll never pay to embed a massive document or transcript more than once. 90% more cost effective than other document chatbot solutions.
- Full Developer API for custom integrations!
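The Developer API from the feature list above can be scripted. Below is a minimal Python sketch that creates a workspace over that API; the port (3001) and the `/api/v1/workspace/new` route are assumptions based on AnythingLLM's default local setup, so verify both against the Swagger docs bundled with your instance and generate an API key in the UI first.

```python
import json
import urllib.request

# Assumptions: AnythingLLM's default local API port and v1 route prefix.
BASE_URL = "http://localhost:3001/api/v1"
API_KEY = "YOUR_ANYTHINGLLM_API_KEY"  # generated in the AnythingLLM UI

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request for the Developer API."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def create_workspace(name: str) -> dict:
    """Create a new workspace and return the server's JSON response."""
    req = build_request("/workspace/new", {"name": name})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires a running AnythingLLM instance):
# create_workspace("docs")
```

The same `build_request` helper can be reused for any other POST endpoint the Developer API exposes.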
Supported LLMs, Embedder Models, Speech models, and Vector Databases
Large Language Models:
- Any open-source llama.cpp compatible model
- OpenAI
- OpenAI (Generic)
- Azure OpenAI
- Anthropic
- Google Gemini Pro
- Hugging Face (chat models)
- Ollama (chat models)
- LM Studio (all models)
- LocalAI (all models)
- Together AI (chat models)
- Perplexity (chat models)
- OpenRouter (chat models)
- Mistral
- Groq
- Cohere
- KoboldCPP
- LiteLLM
- Text Generation Web UI
Embedder models:
- AnythingLLM Native Embedder (default)
- OpenAI
- Azure OpenAI
- LocalAI (all)
- Ollama (all)
- LM Studio (all)
- Cohere
Audio Transcription models:
- AnythingLLM Built-in (default)
- OpenAI
TTS (text-to-speech) support:
- Native Browser Built-in (default)
- OpenAI TTS
- ElevenLabs
STT (speech-to-text) support:
- Native Browser Built-in (default)
Vector Databases:
- LanceDB (default)
- Astra DB
- Pinecone
- Chroma
- Weaviate
- Qdrant
- Milvus
- Zilliz
Ollama: Large Language Model Runner
https://hub.docker.com/r/ollama/ollama
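Once the Ollama container (or desktop app) is running, it exposes a small REST API on port 11434. As a sketch, the helper below builds a non-streaming call to Ollama's `/api/generate` endpoint; the model name `llama3` is only an example of a model you have already pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama model and return its response text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running with the model pulled):
# print(generate("llama3", "Why is the sky blue?"))
```

This is the same endpoint AnythingLLM talks to behind the scenes once you select Ollama as the LLM provider in the steps below.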
How to Install AnythingLLM and Connect with Ollama
Step 01: Visit the official website and download the installer for Mac/Windows/Linux.
Step 02: Double-click the installer.
Step 03: If you only want to test the app without installing it, double-click it; otherwise, drag and drop it into the Applications folder.
Step 04: Once you reach the workspace screen, click on Add New Workspace.
Step 05: Give your workspace a name and save it.
Step 06: Click on Settings for Ollama.
Step 07: Click on Chat Settings for your LLM provider.
Step 08: Choose Ollama as the LLM provider.
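Before choosing Ollama here, it helps to confirm which models AnythingLLM will actually see in the next step. Ollama's `/api/tags` endpoint lists the locally pulled models; the small parser below extracts their names (the response shape follows Ollama's API docs).

```python
import json
import urllib.request

def parse_model_names(tags_response: dict) -> list:
    """Extract model names from Ollama's /api/tags JSON response."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models(base_url: str = "http://localhost:11434") -> list:
    """Ask a running Ollama instance which models are available locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.loads(resp.read()))

# Example (requires Ollama running):
# print(list_local_models())  # e.g. ["llama3:latest", "mistral:latest"]
```

If a model you expect is missing from this list, pull it with Ollama first; AnythingLLM's dropdown only shows what Ollama already has.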
Step 09: Choose the workspace's Ollama model.
Step 10: Click on Update Workspace.
Step 11: Click on Workspace Agent Configuration.
Step 12: Choose Ollama.
Step 13: Click on Update Workspace Agent.
Step 14: Click on Configure Agent Skills.
Step 15: The following skills are available; choose as per your requirements.
a) RAG & Longterm Memory (Default)
b) View and Summarize data (Default)
c) Scrape Websites (Default)
d) Generate and Save Files to Browser
e) Generate Charts
f) Web Search
g) SQL Connector
Step 16: Choose your LLM preference.
Step 17: Choose TTS (text-to-speech).
Step 18: Choose a transcription model.
Step 19: Choose an embedder preference.
Step 20: Choose a vector database and click the back button to return to the workspace.
Step 21: Now start asking questions and get answers with TTS, transcription, SQL connector, browser, and other functionality.
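The two chat modes from the feature list (conversation and query) can also be exercised programmatically. A hedged sketch, assuming the v1 `/workspace/{slug}/chat` route on the default port 3001 and that the API names the modes "chat" and "query" (verify both in your instance's Swagger docs):

```python
import json
import urllib.request

BASE_URL = "http://localhost:3001/api/v1"  # assumed default local port
API_KEY = "YOUR_ANYTHINGLLM_API_KEY"       # generated in the AnythingLLM UI

def build_chat_body(message: str, mode: str = "query") -> dict:
    """Build the chat request body.

    "chat" keeps conversational history; "query" is one-shot Q&A
    against the workspace documents.
    """
    if mode not in ("chat", "query"):
        raise ValueError("mode must be 'chat' or 'query'")
    return {"message": message, "mode": mode}

def chat(slug: str, message: str, mode: str = "query") -> dict:
    """Send one message to the workspace identified by its slug."""
    req = urllib.request.Request(
        f"{BASE_URL}/workspace/{slug}/chat",
        data=json.dumps(build_chat_body(message, mode)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires a running instance with a workspace whose slug is "docs"):
# chat("docs", "Summarize the uploaded PDF", mode="query")
```

In-chat citations from the UI appear in the JSON response as well, so the same script can surface source documents alongside each answer.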
Stay tuned for more updates on AnythingLLM
Here is a quick YouTube video for visual reference.