Ajeet Singh Raina

How to setup Open WebUI with Ollama and Docker Desktop

With over 50K GitHub stars, Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, check out the Open WebUI documentation.

If you’re exploring how to deploy and customize Open WebUI, this guide provides a straightforward, hands-on approach. Let’s get started!

Step 1: Clone the repository

git clone https://github.com/open-webui/open-webui/
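Then switch into the repository, which contains the docker-compose.yaml used in the next step:

cd open-webui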

Step 2: Examine the Compose file

services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - ${OPEN_WEBUI_PORT-3001}:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}

This Docker Compose file defines two services, ollama and open-webui, with named volumes for data persistence.

The ollama service runs a container named ollama from the ollama/ollama image (defaulting to the latest tag if OLLAMA_DOCKER_TAG is not set). It pulls a fresh image on every start (pull_policy: always), keeps a TTY attached, stores its data in the ollama volume, and restarts automatically unless explicitly stopped.

The open-webui service builds its image from the local Dockerfile, passing OLLAMA_BASE_URL='/ollama' as a build argument. It runs a container named open-webui, maps a host port (default 3001) to the container's port 8080, and stores its data in the open-webui volume. It depends on the ollama service, and its OLLAMA_BASE_URL environment variable points to http://ollama:11434, the Ollama API on the shared Compose network. The extra_hosts entry maps host.docker.internal to the host gateway so the container can reach services on the host, and it likewise restarts unless stopped.

The ollama and open-webui volumes declared at the bottom provide the persistent storage for both services.

Key Modifications:

I changed the default port from 3000 to 3001 to avoid conflicts with my existing applications. The Compose file allows customization, so feel free to adjust as needed.
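Because the Compose file references ${OPEN_WEBUI_PORT-3001}, you can also override the port from the environment instead of editing the file; a minimal sketch:

# set the port inline for a single run
OPEN_WEBUI_PORT=3001 docker compose up -d

# or persist it in a .env file next to docker-compose.yaml,
# which Docker Compose reads automatically
echo "OPEN_WEBUI_PORT=3001" > .env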

Step 3: Start the Services

Bring the services up with Docker Compose:

docker compose -f docker-compose.yaml up
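The first start can take a while because Compose pulls (or builds) the images. If you prefer to keep your terminal free, run the stack detached and tail the logs separately:

docker compose -f docker-compose.yaml up -d
docker compose logs -f open-webui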


Step 4: Verify the Services

Ensure all services are running as expected:

docker compose ps

Both the ollama and open-webui containers should show as running.

Step 5: Access the Open WebUI

Open http://localhost:3001 in your browser (or whichever host port you mapped). On first launch, Open WebUI prompts you to create an account; once signed in, you land in the chat window.
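If you want to confirm the UI is reachable before opening the browser, a quick check (assuming the default port from this Compose file):

curl -I http://localhost:3001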

Step 6: Download a Model

Downloading and managing AI models is straightforward: open the model selector in the chat window, search for a model such as llama2, and pull it. Once the download completes, the Llama2 model appears in the model list, ready to select.
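If you prefer the command line, you can also pull models directly through the ollama container; a minimal sketch using the container name from the Compose file above:

docker exec -it ollama ollama pull llama2

Anything pulled this way shows up in Open WebUI's model selector, since the UI lists whatever models the connected Ollama instance serves.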

Step 7: Configure Settings

Adjust the configuration for your models and application settings:


Step 8: Configure the permissions


Step 9: Select the Right Model

If you encounter an error like Ollama: 500, message='Internal Server Error', it might be due to pulling a model your setup doesn't support. Refer to this discussion for solutions.

Try pulling the llama3.2:1b model instead; it's small and should work without issues.
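For example, a sketch that pulls the model through the ollama container and then confirms it is available:

docker exec -it ollama ollama pull llama3.2:1b

# list the models Ollama currently serves
docker exec -it ollama ollama list

After that, select llama3.2:1b from the model dropdown in Open WebUI and start chatting.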


By following these steps, you’ll successfully deploy and customize Open WebUI using Docker Compose. With flexibility and ease of use, this setup is ideal for exploring AI-powered solutions. Happy building!
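When you're done experimenting, docker compose down stops and removes the containers; the named volumes keep your models and chat data for the next run (add -v only if you want to wipe them too):

docker compose down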
