How to Use Open WebUI with Ollama

Set up Open WebUI with Ollama to chat with LLMs like Llama 3.1 in a browser. Save histories, store prompts, and upload docs with this beginner-friendly guide!

Ashley Goolam

21 May 2025


Would you like to chat with powerful language models like Llama 3.1 or Mistral without getting stuck in a terminal? Open WebUI is your ticket to a sleek, ChatGPT-like interface that makes interacting with Ollama’s LLMs fun and intuitive. It lets you save chat histories, store prompts, and even upload documents for smarter responses—all in your browser. In this beginner-friendly guide, I’ll walk you through installing Ollama, testing a model in the terminal, and then leveling up with Open WebUI for a more user-friendly experience. We’ll use Docker for a quick setup and test it with a fun prompt. Ready to make AI chats a breeze? Let’s get rolling!

💡
Need to document your APIs? Try APIdog for a polished, interactive way to design and share API docs—perfect for your AI projects!

What is Open WebUI? Your LLM Command Center

Open WebUI is an open-source, self-hosted web interface that connects to Ollama, letting you interact with large language models (LLMs) like Llama 3.1 or Mistral in a browser-based dashboard. Unlike Ollama’s command-line interface, Open WebUI feels like ChatGPT, offering:

- Saved, searchable chat histories
- A library of reusable prompts
- Document uploads for retrieval-augmented generation (RAG)
- Quick switching between your installed models

With over 50K GitHub stars, Open WebUI is a hit for developers and AI enthusiasts who want a collaborative, graphical way to work with LLMs locally. First, let’s get Ollama running to see why Open WebUI is worth adding!


Installing and Testing Ollama

Before we dive into Open WebUI, let’s set up Ollama and test a model like Llama 3.1 or Mistral in the terminal. This gives you a baseline to appreciate Open WebUI’s intuitive interface.

1. Check System Requirements:

- OS: Windows, macOS, or Linux.
- RAM: at least 8GB for smaller models like Llama 3.1 (8B) or Mistral (7B); more is better.
- Storage: roughly 10GB free for Ollama and a model or two.
- Optional: a GPU speeds up responses, but CPU-only works fine for testing.

2. Install Ollama: Download and install Ollama from ollama.com for your OS. Follow the installer prompts—it’s a quick setup. Verify installation with:

ollama --version

Expect a version like 0.1.44 (April 2025). If it fails, ensure Ollama is in your PATH.


3. Download a Model: Choose a model like Llama 3.1 (8B) or Mistral (7B). For this guide, we’ll use Llama 3.1:

ollama pull llama3.1

This downloads ~5GB, so grab a coffee if your internet’s slow. Check it’s installed:

ollama list

Look for llama3.1:latest. Mistral (ollama pull mistral) is another great option if you want a lighter model (~4GB).
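If you’d rather script this check than eyeball the terminal, a locally running Ollama server exposes the same list over HTTP at /api/tags. Here’s a minimal Python sketch (the helper names are mine, not part of Ollama):

```python
import json
from urllib import request

def parse_tags(data: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in data.get("models", [])]

def installed_models(base: str = "http://localhost:11434") -> list[str]:
    """Query a locally running Ollama server for its installed models."""
    with request.urlopen(f"{base}/api/tags") as resp:
        return parse_tags(json.loads(resp.read()))

if __name__ == "__main__":
    # Requires `ollama serve` to be running locally.
    print(installed_models())
```

With llama3.1 pulled, you should see something like `['llama3.1:latest']` in the list.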


4. Test the Model in Terminal: Try a simple prompt to see Ollama in action:

ollama run llama3.1

At the prompt (>>>), type: “Tell me a dad joke about computers.” Hit Enter. You might get: “Why did the computer go to the doctor? It had a virus!” Exit with /bye. I ran this and got a chuckle-worthy joke, but typing in the terminal felt clunky—no chat history, no saved prompts. This is where Open WebUI shines, offering a visual interface to save conversations, reuse prompts, and upload documents for richer responses. Let’s set it up!
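The same chat also works over Ollama’s HTTP API, which is what Open WebUI talks to behind the scenes. Here’s a small Python sketch against the /api/generate endpoint (a non-streaming request; the `build_payload` and `ask` helpers are my own names):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's generate endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming /api/generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` (and the llama3.1 model) to be available locally.
    print(ask("llama3.1", "Tell me a dad joke about computers."))
```

Handy if you ever want to wire Ollama into your own scripts rather than a UI.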


Setting Up Your Environment for Open WebUI

Now that you’ve seen Ollama’s terminal interface, let’s prep for Open WebUI to make your LLM experience more intuitive. We’ll assume you have Docker installed, as it’s required for the Open WebUI setup.

1. Verify Docker: Ensure Docker is installed and running:

docker --version

Expect something like Docker 27.4.0. If you don’t have Docker, download and install Docker Desktop from their official website—it’s a quick setup for Windows, macOS, or Linux.


2. Create a Project Folder: Keep things organized:

mkdir ollama-webui
cd ollama-webui

This folder will be your base for running Open WebUI.

3. Ensure Ollama is Running: Start Ollama in a separate terminal:

ollama serve

This runs Ollama’s API at http://localhost:11434. Keep this terminal open, as Open WebUI needs it to connect to your models.
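If you want to confirm the server is reachable before moving on, a quick Python check does the trick (just a sketch; any HTTP client, or even a browser visit to that URL, works too):

```python
from urllib import request

def ollama_is_up(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at `url`."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200  # the root URL replies "Ollama is running"
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())
```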

Installing Open WebUI with Docker

With Ollama and Llama 3.1 ready, let’s install Open WebUI using a single Docker command for a fast, reliable setup.

1. Run Open WebUI: In your ollama-webui folder, execute:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

This command:

- Runs the container in the background (-d).
- Maps port 3000 on your machine to port 8080 inside the container.
- Adds host.docker.internal so the container can reach Ollama’s API on your host.
- Persists accounts and chats in the open-webui Docker volume.
- Restarts the container automatically (--restart always), e.g. after a reboot.

It takes a minute to download. Check it’s running with docker ps—look for the open-webui container.
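If you prefer Docker Compose, the same flags translate to a docker-compose.yml along these lines (my own sketch of the command above, not an official file):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui:
```

Save it in your ollama-webui folder and start it with `docker compose up -d`.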

2. Access Open WebUI: Open your browser and go to http://localhost:3000. You’ll see Open WebUI’s welcome page. Click “Sign Up” to create an account (the first user gets admin privileges). Use a strong password and save it securely. You’re now ready to chat! If the page doesn’t load, ensure the container is running (docker logs open-webui) and port 3000 is free.


Using Open WebUI: Chatting and Exploring Features

With Open WebUI running, let’s dive into chatting with Llama 3.1 and exploring its awesome features, which make it a huge upgrade over the terminal.

1. Start Chatting:

- Select llama3.1:latest from the model dropdown at the top of the page.
- Type the same prompt as before: “Tell me a dad joke about computers.”
- Hit Enter and watch the response stream in.


The interface is clean, with your prompt and response saved automatically in the chat history.

2. Save and Organize Chats: In the left sidebar, click the pin icon to save the chat. Rename it (e.g., “Dad Jokes”) for easy access. You can archive or delete chats via the sidebar, keeping your experiments organized—way better than terminal scrolling!

3. Store Prompts: Save the dad joke prompt for reuse:

- Open Workspace > Prompts and create a new prompt.
- Give it a title (e.g., “Dad Joke”) and paste in the prompt text.
- Next time, insert it into any chat in a couple of clicks instead of retyping.

4. Upload a Document for RAG: Add context to your chats:

- Click the “+” icon in the chat input and upload a document (e.g., a PDF).
- Ask a question about it, like “Summarize this document.”
- The model answers using the document’s content as added context.

I tested this with a Python tutorial PDF, and Open WebUI nailed context-aware answers, unlike the terminal’s basic responses.

5. Explore More Features:

- Switch models mid-conversation via the dropdown (e.g., from llama3.1 to mistral).
- Adjust response settings, such as the system prompt, under Settings.
- As admin, create extra user accounts to share the instance with teammates.


Documenting Your APIs with APIdog

Using Open WebUI to interact with Ollama’s API and want to document your setup? APIdog is a fantastic tool for creating interactive API documentation. Its sleek interface and self-hosting options make it ideal for sharing your AI projects—check it out!


Troubleshooting and Tips

- Page won’t load? Check the container with docker logs open-webui and make sure nothing else is using port 3000.
- No models listed? Confirm ollama serve is still running in its own terminal and that you pulled a model with ollama pull.
- Responses slow? Larger models need more RAM; try a lighter one like Mistral.
- New to Ollama? Check out our guide “How to Download and Use Ollama to Run LLMs Locally” to help you get started!

Why Choose Open WebUI?

Open WebUI transforms Ollama from a clunky terminal tool into a powerful, user-friendly platform:

- An intuitive, ChatGPT-style interface with no command-line wrangling.
- Persistent, organized chats you can save, pin, and rename.
- A prompt library, so favorites never need retyping.
- Document-aware answers via RAG-style uploads.

After testing both the terminal and Open WebUI, I’m sold on the GUI’s ease and features. It’s like upgrading from a flip phone to a smartphone!

Wrapping Up: Your Open WebUI Adventure Awaits

You’ve gone from terminal chats to a full-blown Open WebUI setup with Ollama, making LLM interactions smooth and fun! With Llama 3.1, saved chats, and document uploads, you’re ready to explore AI like never before. Try new models, store more prompts, or document your APIs with APIdog. Share your Open WebUI wins on the Open WebUI GitHub—I’m excited to see what you create! Happy AI tinkering!
