How to Use RAGFlow (Open-Source RAG Engine): A Complete Guide

Discover how to use RAGFlow to create AI-powered Q&A systems. This beginner’s guide covers setup, document parsing, and querying with tips!

Ashley Goolam

18 June 2025

Hey, AI enthusiasts! Ready to unlock the power of your documents with RAGFlow? This open-source Retrieval-Augmented Generation (RAG) engine makes building smart, citation-backed Q&A systems a breeze, even for beginners. I set up RAGFlow in ~30 minutes, and it turned my chaotic PDFs into a searchable knowledge base—mind blown! In this beginner’s guide, I’ll show you how to install RAGFlow on Linux or Windows, configure model providers, create assistants, and even build a websearch agent. Let’s jump into the RAGFlow magic!

💡
Want a great API Testing tool that generates beautiful API Documentation?

Want an integrated, All-in-One platform for your Developer Team to work together with maximum productivity?

Apidog delivers all your demands, and replaces Postman at a much more affordable price!

What is RAGFlow? Your AI-Powered Data Sidekick

RAGFlow is an open-source RAG engine that combines deep document understanding with large language models (LLMs) to produce accurate, cited answers. It excels at handling complex formats like PDFs, Word docs, and tables, making it ideal for businesses, researchers, or anyone drowning in documents. Highlights include template-based chunking, grounded citations that reduce hallucinations, and support for heterogeneous data sources (documents, slides, spreadsheets, images, scanned files, and more).

Users rave about RAGFlow’s enterprise-grade workflows for complex docs. Ready to try it? Let’s get started!

Why Use RAGFlow?

RAGFlow is a game-changer for anyone needing reliable answers from their data. Benefits include traceable, citation-backed responses, solid parsing of messy real-world files, and a visual agent builder for more advanced workflows.

I used RAGFlow to query a stack of research PDFs, and it saved me hours of manual searching!

[Screenshot: the RAGFlow official website]

How to Use RAGFlow: Step-by-Step Guide

Let’s set up RAGFlow using Docker on Linux (Ubuntu) or Windows (via WSL2) and explore its features. You’ll need a decent machine and some setup time, but no AI expertise required—follow along!

1. Prerequisites

Ensure your system meets RAGFlow's documented minimums: CPU >= 4 cores, RAM >= 16 GB, disk >= 50 GB, and Docker >= 24.0.0 with Docker Compose >= v2.26.1 (see the RAGFlow README for the current figures).

Check Docker:

docker --version
docker compose version
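If you'd rather script the check, here's a small sketch; the version minimum baked in below comes from the RAGFlow README, so double-check it against the current docs:

```python
import re
import subprocess

def parse_version(text):
    """Extract the first x.y.z version number from command output."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    return tuple(map(int, m.groups())) if m else None

def docker_meets_minimum(min_version=(24, 0, 0)):
    """Return True if `docker --version` reports at least min_version."""
    try:
        out = subprocess.run(["docker", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return False  # Docker is not installed or not on PATH
    version = parse_version(out)
    return version is not None and version >= min_version

# Example: "Docker version 24.0.7, build afdd53b" parses to (24, 0, 7)
```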

2. Configure System Settings

Linux (Ubuntu):

sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
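To confirm the setting took effect, you can read it straight from /proc. This is just a convenience sketch, not part of RAGFlow:

```python
def read_max_map_count(path="/proc/sys/vm/max_map_count"):
    """Return the kernel's current vm.max_map_count, or None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

value = read_max_map_count()
if value is not None and value < 262144:
    print(f"vm.max_map_count is {value}; raise it to at least 262144")
```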

Windows:

Install WSL2 (if you haven't already), then raise vm.max_map_count inside your WSL distro:

wsl --install
sudo sysctl -w vm.max_map_count=262144

To persist the setting across WSL restarts, add this to %UserProfile%\.wslconfig:

[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"

3. Install Docker and Docker Compose

Linux:

sudo apt update
sudo apt install -y docker.io docker-compose
sudo systemctl enable --now docker

Verify:

docker --version
docker-compose --version

Windows:

Install Docker Desktop with the WSL2 backend enabled, then verify from your WSL terminal:

docker --version
docker-compose --version

4. Clone the RAGFlow Repository

Clone and checkout the latest stable version:

git clone https://github.com/infiniflow/ragflow.git
cd ragflow
git checkout -f v0.19.0

5. Configure Docker Image

Navigate to the Docker folder:

cd docker

Edit .env:

nano .env

Set:

RAGFLOW_IMAGE=infiniflow/ragflow:v0.19.0-slim
SVR_HTTP_PORT=80
MYSQL_PASSWORD=your_secure_password
MINIO_PASSWORD=your_secure_password
HF_ENDPOINT=https://hf-mirror.com

Note: the slim image ships without bundled embedding models (use infiniflow/ragflow:v0.19.0 for the full edition), and HF_ENDPOINT points model downloads at a Hugging Face mirror. You only need that line if huggingface.co is slow or unreachable from your network.
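If you script your deployments, a small helper like this (my own sketch, not part of RAGFlow) can patch the .env without hand-editing:

```python
from pathlib import Path

def set_env_values(path, updates):
    """Set KEY=VALUE pairs in a docker .env file, replacing existing keys
    in place and appending any that are missing."""
    p = Path(path)
    lines = p.read_text().splitlines() if p.exists() else []
    seen = set()
    out = []
    for line in lines:
        key = line.split("=", 1)[0].strip()
        if key in updates:
            out.append(f"{key}={updates[key]}")
            seen.add(key)
        else:
            out.append(line)
    for key, value in updates.items():
        if key not in seen:
            out.append(f"{key}={value}")
    p.write_text("\n".join(out) + "\n")

# Usage (keys as shown in the .env above):
# set_env_values("docker/.env", {"SVR_HTTP_PORT": "8080"})
```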

6. Start RAGFlow Server

Run:

docker compose -f docker-compose.yml up -d

For GPU acceleration:

docker compose -f docker-compose-gpu.yml up -d

Check containers:

docker ps

7. Troubleshoot Port Conflicts

If port 80 is busy:

sudo lsof -i :80

Stop conflicting services (e.g., Apache):

sudo service apache2 stop

Or change SVR_HTTP_PORT in .env (e.g., to 8080). Restart:

docker compose -f docker-compose.yml up -d
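Before settling on a new port, you can probe it from Python. This is a quick illustrative sketch, not an official RAGFlow tool:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Example: warn before pointing SVR_HTTP_PORT at a busy port
if port_in_use(80):
    print("Port 80 is taken; stop the conflicting service or "
          "change SVR_HTTP_PORT in .env")
```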

8. Verify and Access RAGFlow

Check logs:

docker logs -f ragflow-server

Look for the RAGFlow banner and running status. Access the UI:

http://localhost

On first visit, sign up with an email address and password, then log in (see ragflow.io/docs for details).
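If you're scripting the setup, you can wait for the UI to come up instead of refreshing the browser. A standard-library sketch (the URL and timings are assumptions you can tune):

```python
import time
import urllib.error
import urllib.request

def wait_for_ragflow(url="http://localhost", timeout=120, interval=3):
    """Poll the RAGFlow UI until it answers an HTTP request or time runs out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as e:
            if e.code < 500:  # the server is up, even if it answers 4xx
                return True
        except (urllib.error.URLError, OSError):
            pass  # not listening yet; retry
        time.sleep(interval)
    return False
```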

9. Configure Model Providers

[Screenshot: the Model providers settings page]

In the RAGFlow UI, open your avatar menu, go to Model providers, add an API key for your provider of choice (e.g., OpenAI, DeepSeek, or a local Ollama), and pick default chat and embedding models under the system model settings.

I added OpenAI for more model options—super flexible!

10. Create a Knowledge Base

In the Knowledge Base tab, create a knowledge base, give it a name, choose an embedding model and a chunking method suited to your documents, then upload a PDF and start parsing.

[Screenshots: naming the knowledge base, selecting its embedding model, and adding a PDF]

RAGFlow nailed the tables in my PDF—impressive!

11. Create a Chat Assistant

In the Chat tab, create an assistant, link it to your knowledge base, and pick a chat model in the model settings.

[Screenshot: selecting the chat model]

12. Create a Websearch Agent

In the Agent tab, start from a template or a blank canvas and wire up nodes (for example, a web search tool feeding an LLM node) in the n8n-style visual builder.

[Screenshot: the RAGFlow agent node builder interface]

13. (Optional) Install Ollama for Local LLMs

Prefer running models locally? Start Ollama in Docker and pull a chat model plus an embedding model:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama pull llama3.2
docker exec -it ollama ollama pull bge-m3

Then add Ollama as a model provider in RAGFlow, using this base URL so the RAGFlow container can reach Ollama on the host:

http://host.docker.internal:11434
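To confirm both models were pulled before wiring RAGFlow up, you can query Ollama's /api/tags endpoint, which lists downloaded models. This helper is my own illustration, not part of either project:

```python
import json
import urllib.request

def missing_models(tags_json, required=("llama3.2", "bge-m3")):
    """Given the JSON from Ollama's GET /api/tags endpoint, return the
    required models that have not been pulled yet."""
    pulled = {m["name"].split(":")[0] for m in tags_json.get("models", [])}
    return [name for name in required if name not in pulled]

def check_ollama(base_url="http://localhost:11434"):
    """Fetch the tag list from a running Ollama and report missing models."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return missing_models(json.load(resp))
```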

Troubleshooting RAGFlow Issues

If the server fails to start or searches error out, Elasticsearch is the usual suspect (often a too-low vm.max_map_count). Check its container logs:

docker logs ragflow-elasticsearch

See ragflow.io/faqs.

Customizing and Extending RAGFlow

Level up: tune chunking templates per knowledge base, enable features like keyword extraction on your documents, or integrate RAGFlow into your own apps through its HTTP and Python APIs.

I added keyword extraction, and my queries got sharper!

Conclusion: Why RAGFlow is a Must-Have for Beginners

RAGFlow makes RAG accessible, turning documents into smart Q&A systems with minimal effort. Its deep document understanding and node-based agent builder (like n8n!) beat simpler RAG tools, though Docker setup might stump newbies. Compared to LangChain, RAGFlow’s UI and citations are beginner-friendly. The RAGFlow docs and community are gold.

Ready to unleash RAGFlow? Spin it up, query your data, and share your setup—I’m stoked to see your AI wins!

