Ollama with open-webui with a fix

Majed Ali
Feb 10, 2024


DALL·E 3 generated image

NOTE: Edited on 11 May 2024 to reflect the naming change from ollama-webui to open-webui

Before delving into the solution, let’s first understand the problem, since it will come up again whenever we go deeper into using Docker.

Lately, I have started playing with Ollama and some tasty LLMs such as Llama 2, Mistral, and TinyLlama. Nothing is easier than installing Ollama; it’s only one line:

curl -fsSL https://ollama.com/install.sh | sh
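
To confirm the install worked, the CLI should answer a version check (the exact output depends on the release you got):

ollama --version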

Downloading the language models is even easier: choose a model from their library and run the following command:

ollama run llama2
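
Ollama also keeps track of what you have downloaded; ollama list should show the models available locally:

ollama list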

But I got bored with the command-line interface; I wanted to work with a richer UI.

A while spent on Ollama’s GitHub page landed me on “open-webui”, which gives a ChatGPT-like interface.

A good thing about it is the Docker image I can use for installation, an approach I prefer over cloning the repository and installing all the libraries, contaminating my OS with cache files everywhere.

After trying multiple times to run the open-webui Docker container using the command available on its GitHub page, it failed to connect to the Ollama API server on my Linux host. The problem arose from the fact that Open-WebUI tries to connect to Ollama on http://localhost:11434 from inside the Docker container, assuming Ollama exists inside the container itself. But that is not true, because Ollama is installed on the host OS itself.
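
You can confirm the API is reachable from the host itself; hitting the /api/tags endpoint should return a JSON list of the models you have pulled:

curl http://localhost:11434/api/tags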

So I needed a way to tell Docker to redirect the connection outside the container, and after some research, I created a very simple setup to test the connection between my OS and the container.

First, I created a simple web server using Python:

python -m http.server 8000

This is arguably the fastest web server anyone can spin up with a single line; it simply serves a page listing the files and folders in the current directory and offers them for download.

Then I created a temporary Ubuntu container, which gave me a command line running inside the container:

docker run -it --rm ubuntu /bin/bash

Inside the container, I installed curl so I could test the connection:

apt update && apt install curl

Then I tested the connection with the following command:

curl http://localhost:8000

But it gave the following error:

curl: (7) Failed to connect to localhost port 8000 after 0 ms: Connection refused

The reason is that, by default, Docker attaches the container to its own bridge network, so localhost inside the container refers to the container itself, not the host. I made a small change by adding the option --network="host", which makes the container share the host’s network stack:

docker run -it --rm --network="host" ubuntu /bin/bash

Then I ran the curl command again:

curl http://localhost:8000

This time it connected successfully, showing the HTML content served by the Python web server:

<!DOCTYPE HTML>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
...

So, from the above findings and trials, I distilled the following command:

docker run -d --network="host" -v open-webui:/app/backend/data -e OLLAMA_API_BASE_URL=http://localhost:11434/api --name open-webui ghcr.io/open-webui/open-webui:main

It prints a long string of characters and numbers representing the container ID, which means our container is now running in the background.
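
If you want to double-check, the usual Docker commands will show the running container and its startup logs (the name here matches the --name option explained below):

docker ps --filter name=open-webui
docker logs -f open-webui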

You can access the open-webui on the URL:

http://localhost:8080/

Regarding the command above, here is a breakdown of its options:

  • -d runs the container in the background (detached mode).
  • --network="host" lets the container access the host OS’s localhost.
  • -v open-webui:/app/backend/data mounts a volume inside the container, so we don’t lose our data whenever the container shuts down.
  • -e OLLAMA_API_BASE_URL=http://localhost:11434/api defines an environment variable inside the container, which the open-webui app uses to connect to the Ollama server.
  • --name open-webui is the name we want to give to the container.
  • ghcr.io/open-webui/open-webui:main is the image we want to pull from GitHub’s container registry to create the container from.
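
As a quick sanity check that the -e option took effect, you can read the variable back from inside the running container (this assumes the image ships the standard printenv utility, which most Python-based images do):

docker exec open-webui printenv OLLAMA_API_BASE_URL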

One last thing: if you intend to use open-webui on a daily basis and want the container to start automatically without launching it manually every time, you may want to add --restart always to the command, so Docker restarts it whenever it stops for any reason. The command then becomes:

docker run -d --network="host" -v open-webui:/app/backend/data -e OLLAMA_API_BASE_URL=http://localhost:11434/api --restart always --name open-webui ghcr.io/open-webui/open-webui:main
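
And if you already created the container without that flag, you should not need to recreate it; docker update can change the restart policy in place, and docker inspect shows what is currently set:

docker update --restart always open-webui
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' open-webui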

You can now explore Ollama’s LLMs through a rich web UI. Ollama is a powerful platform, but you want some convenience with that power, and that’s what I’ve tried to accomplish in this minimal tutorial.
