Open WebUI: The LLM Web UI

Omar Alva
5 min read · May 21, 2024


Futuristic Web UI — AI generated by author

Introduction

Open WebUI, formerly known as Ollama WebUI, is an extensible, feature-rich, and user-friendly self-hosted web interface designed to operate entirely offline. It supports various Large Language Model (LLM) runners, making it a versatile tool for deploying and interacting with language models.

Open WebUI provides a ChatGPT-style interface, allowing users to chat with language models served locally or on remote servers. This web UI is particularly useful for those who want to run language models locally or in a self-hosted environment, ensuring data privacy and control.

Concepts

Extensibility and Features

Open WebUI is built with extensibility in mind. It supports multiple LLM runners, which means it can be configured to work with different language models and frameworks. This flexibility allows users to choose the best model for their specific needs. The web UI is designed to be user-friendly, with a clean interface that makes it easy to interact with the models.

Self-Hosted and Offline Operation

One of the key features of Open WebUI is its ability to operate entirely offline. This is particularly important for users who are concerned about data privacy and security. By running the web UI locally, users can ensure that their data is not sent to external servers. This self-hosted approach also provides greater control over the deployment and management of language models.

Community-Driven Development

Open WebUI is a community-driven project, which means it benefits from contributions and feedback from a diverse group of users and developers. This collaborative approach helps ensure that the web UI continues to evolve and improve over time, incorporating new features and addressing any issues that arise.

Usage

Installation and Setup

To get started with Open WebUI, users need to install the necessary software and configure their environment. The installation process typically involves setting up Docker, as Open WebUI runs inside a Docker container. This ensures that the web UI is isolated from the host system and can be easily managed.

Install Docker: Ensure that Docker is installed on your system. Docker provides a convenient way to package and run applications in isolated containers.
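A quick way to confirm Docker is installed and the daemon is running (these are standard Docker CLI commands; the exact output will vary by version and platform):

```shell
# Check that the Docker CLI is on the PATH and print its version
docker --version

# Verify the daemon is running by listing containers (fails if it is not)
docker ps

# Optional end-to-end smoke test: run and remove the hello-world image
docker run --rm hello-world
```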

Launch Open WebUI: Use Docker commands to pull the Open WebUI image and start the container (instructions below). This will set up the web UI and make it accessible via a web browser.

# If Ollama is on your computer, use this command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

# If Ollama is on a different server, use this command,
# changing OLLAMA_BASE_URL to that server's URL:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

# To run Open WebUI with Nvidia GPU support, use this command:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
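After running one of the commands above, you can sanity-check the deployment. This sketch assumes the default port mapping from the commands, with the UI exposed on host port 3000:

```shell
# Confirm the container is up
docker ps --filter name=open-webui

# Follow the container logs while it starts
docker logs -f open-webui

# Once started, the UI should respond on the mapped host port
curl -I http://localhost:3000
```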

Create an Admin Account: The first user to sign up on Open WebUI will be granted administrator privileges. This account will have comprehensive control over the web UI, including the ability to manage other users and configure settings.

Open WebUI Sign Up — Image by author

Connecting to Language Models

Once Open WebUI is up and running, users can connect it to various language models. This involves configuring the web UI to communicate with the servers running the models.

Configure Ollama: Set up your Ollama instances that Open WebUI will connect to. This may involve specifying the server addresses.

Open WebUI Settings — Image by author
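If Ollama runs on the same host, you can verify that it is reachable and pull a model for Open WebUI to list. Ollama listens on port 11434 by default; the model name below (llama3) is only an example:

```shell
# List the models the local Ollama instance already has installed
curl http://localhost:11434/api/tags

# Pull a model so it appears in Open WebUI's model selector
ollama pull llama3
```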

Configure OpenAI (optional): Set the OpenAI API key. This allows Open WebUI to connect to OpenAI directly.

Open WebUI Settings — Image by author
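Alternatively, the key can be passed to the container at launch via the `OPENAI_API_KEY` environment variable, which Open WebUI reads for this purpose. The key value below is a placeholder:

```shell
docker run -d -p 3000:8080 \
  -e OPENAI_API_KEY=sk-your-key-here \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```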

Demo

To demonstrate the capabilities of Open WebUI, let’s walk through a simple example of setting up and using the web UI to interact with a language model.

Access the Web UI: Open a web browser and navigate to the address where Open WebUI is running. You will be prompted to create an admin account if this is the first time accessing the web UI.

Open WebUI Sign In — Image by author

Start a Chat Session: Once logged in, you can start a chat session with the language model. The interface is designed to be intuitive, with a text input field for entering your queries and a chat window for displaying the model’s responses.

Open WebUI Welcome page — Image by author
Open WebUI Select a model — Image by author
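Beyond the browser, Open WebUI also exposes an OpenAI-compatible chat endpoint, so the same conversation can be driven programmatically. This sketch assumes an API key generated from your account settings and a model named llama3 pulled into Ollama:

```shell
curl http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```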

Manage Settings: As an admin, you have access to various settings and configurations. You can add or remove model instances, adjust load balancing settings, and manage user accounts.

Open WebUI Admin settings — Image by author

Conclusion

Open WebUI, formerly the Ollama WebUI, is a powerful and flexible tool for interacting with language models in a self-hosted environment. Its extensibility, user-friendly interface, and offline operation make it an ideal choice for users who value data privacy and control. By leveraging the capabilities of Open WebUI, users can deploy and manage language models with ease, ensuring optimal performance and a seamless user experience.

Whether you are a researcher, developer, or enthusiast, Open WebUI provides the tools you need to harness the power of language models in a secure and efficient manner. With its community-driven development and robust feature set, Open WebUI is poised to become a leading solution for self-hosted language model interfaces.
