Pyspur: Visual AI Workflow Builder for Fast, Transparent Agent Development
Are you building complex AI agents and tired of slow iteration cycles, opaque workflows, and debugging headaches? Pyspur is an open-source, node-based platform that brings transparency and speed to AI workflow development. With Pyspur, you can visually design, debug, and deploy advanced AI systems—without losing control or clarity.
If your team also needs a robust API testing tool to streamline your development pipeline, Apidog offers integrated API documentation, collaborative features, and a cost-effective alternative to Postman. Maximize productivity with a unified platform trusted by developer teams.
What is Pyspur?
Pyspur is an open-source visual environment designed for building, debugging, and deploying modular AI workflows. Using a drag-and-drop canvas, engineers connect logical nodes that represent each step in their AI agent's process. Pyspur's key advantages:
- Visual Workflow Design: Compose agents from modular blocks with a no-code/low-code interface.
- Real-Time Debugging: Inspect inputs and outputs of every node as your workflow runs—no more "prompt hell" or hidden pipeline issues.
- Advanced Patterns: Out-of-the-box support for Retrieval-Augmented Generation (RAG), human-in-the-loop checkpoints, and best-of-N evaluation strategies.
- Seamless Deployment: Turn any workflow into a production-ready API with a single click.
Pyspur helps backend engineers, AI developers, and API-focused teams build more reliable, debuggable, and production-ready AI systems—faster.
1. Setting Up Pyspur: Local and Docker Install
Choose your setup path:
A. Local pip Installation
Ideal for: Experimentation, prototyping, or solo development.
Prerequisites: Python 3.11+
pip install pyspur
# Initialize your project and enter its directory
pyspur init my-pyspur-project && cd my-pyspur-project
# Launch the Pyspur server (uses SQLite for simplicity)
pyspur serve --sqlite
- Access the UI: Open http://localhost:6080 in your browser.
B. Docker-Based Setup
Ideal for: Teams, scalable deployments, production, or reproducible environments.
Prerequisites: Docker Engine
# Download and execute the setup script (configures Docker Compose)
curl -fsSL https://raw.githubusercontent.com/PySpur-com/pyspur/main/start_pyspur_docker.sh | bash -s pyspur-project
- Access the UI: Open http://localhost:6080.
2. Building Your First Workflow: The Joke Generator Example
Pyspur makes designing multi-stage AI workflows simple. Let’s explore by loading a real-world template.
Load the "Joke Generator" Template
- On the Pyspur dashboard, click "New Spur".
- Switch to the "Templates" tab.
- Select the "Joke Generator" template.
The canvas loads a workflow designed to generate and refine jokes using LLMs.
Workflow Breakdown
Key Nodes:
- input_node (InputNode): Defines the workflow entry point. Accepts:
  - topic (string): The joke topic
  - audience (string): The target audience
- JokeDrafter (BestOfNNode): First-stage joke creation.
  - Generates 10 joke drafts using an LLM.
  - Each draft is rated by another LLM call (scale 0–10).
  - The highest-rated joke is selected.
- JokeRefiner (BestOfNNode): Refines the best joke from the previous step.
  - Produces 3 variations, each rated.
  - Outputs the best, most concise version.
Workflow Data Flow:
input_node → JokeDrafter → JokeRefiner
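The best-of-N pattern used by both nodes can be sketched in a few lines of Python. Note that `generate` and `rate` below are hypothetical stand-ins for LLM calls, not Pyspur's actual node API:

```python
def best_of_n(generate, rate, n):
    """Generate n candidates, rate each one, return the highest-rated."""
    candidates = [generate(i) for i in range(n)]
    scored = [(rate(c), c) for c in candidates]
    return max(scored)[1]  # tuples compare by rating first

# Stubbed "LLM" calls for illustration only
drafts = ["draft-a", "a much longer draft-b", "draft-c!"]
generate = lambda i: drafts[i % len(drafts)]
rate = lambda text: len(text)  # toy heuristic: pretend longer = funnier

best = best_of_n(generate, rate, n=3)
print(best)  # -> "a much longer draft-b"
```

In Pyspur, the same select-the-winner loop runs with real LLM calls for both generation and rating, and the intermediate drafts and scores stay visible in the node inspector.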
Testing and Debugging
- Use the test panel to enter custom input (e.g., Topic: "AI assistants", Audience: "Developers").
- Click Run.
- Inspect any node's outputs in real time—see all generated drafts, ratings, and how the final result is chosen.
Why it matters: Pyspur’s transparency lets you trace every step, making debugging and optimization far easier than with traditional code-only approaches.
3. Retrieval-Augmented Generation (RAG) in Pyspur
For production-grade AI agents, grounding LLMs in custom data is essential. Pyspur streamlines RAG pipelines:
How to Add Knowledge Retrieval:
- Document Ingestion:
  - In the RAG section, create a "Document Collection" and upload a file (PDF, etc.).
  - Pyspur parses, chunks, and stores the text with metadata.
- Vectorization:
  - Build a "Vector Index" from the collection.
  - Pyspur calls an embedding model (e.g., OpenAI’s text-embedding-ada-002) to vectorize each chunk and upserts the vectors into a vector database (ChromaDB or PGVector).
- Semantic Retrieval in Workflows:
  - Add a "Retriever Node" to your workflow.
  - At runtime, queries are embedded and matched to the most relevant document chunks, which are then passed as context to downstream nodes.
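The retrieval step can be sketched with a toy in-memory index. This is a minimal illustration of embed-and-match, not Pyspur's implementation: `embed` is a word-count stand-in for a real embedding model, and a production setup would use a vector database as described above.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a word-count vector (a real pipeline calls an embedding model)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = [
    "Pyspur supports drag and drop workflow design",
    "Retrieval augmented generation grounds LLMs in custom data",
    "Deploy any workflow as an HTTP API",
]
index = [(chunk, embed(chunk)) for chunk in chunks]  # precomputed at "ingestion" time

def retrieve(query, k=1):
    """Embed the query and return the k most similar chunks."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

print(retrieve("how does retrieval augmented generation work"))
```

The retrieved chunks would then flow into the prompt of a downstream LLM node as grounding context.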
This modular approach lets you build enterprise-ready RAG agents with robust inspection and easy scaling.
4. Deploying Pyspur Workflows as APIs
Once your workflow is production-ready, deploying it as an HTTP API is seamless.
Deployment Steps
- Click the "Deploy" button in Pyspur’s top navigation.
- Choose the API type:
  - Blocking (Synchronous): For quick workflows.
    POST /api/wf/{workflow_id}/run/?run_type=blocking
  - Non-Blocking (Asynchronous): For workflows with many LLM calls (avoids client timeouts).
    Start: POST /api/wf/{workflow_id}/start_run/?run_type=non_blocking
    Status: GET /api/runs/{run_id}/status/
Integration Example:
Pyspur generates ready-to-use client code. Here’s a Python snippet for the Joke Generator workflow:
import requests
import json
import time

PYSPUR_HOST = "http://localhost:6080"
WORKFLOW_ID = "your_workflow_id_here"  # Replace with your actual ID

payload = {
    "initial_inputs": {
        "input_node": {
            "topic": "Python decorators",
            "audience": "Senior Software Engineers"
        }
    }
}

# Start a non-blocking run
start_url = f"{PYSPUR_HOST}/api/wf/{WORKFLOW_ID}/start_run/?run_type=non_blocking"
start_resp = requests.post(start_url, json=payload)
start_resp.raise_for_status()
run_id = start_resp.json()["id"]

# Poll until the run finishes, then print its outputs
status_url = f"{PYSPUR_HOST}/api/runs/{run_id}/status/"
while True:
    status_resp = requests.get(status_url)
    data = status_resp.json()
    if data.get("status") in ("COMPLETED", "FAILED"):
        print(json.dumps(data.get("outputs"), indent=2))
        break
    time.sleep(2)
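For quick workflows, the blocking endpoint returns the outputs in a single request. The sketch below wraps it in helpers; `blocking_url` and `run_blocking` are illustrative names, not part of any generated Pyspur client:

```python
PYSPUR_HOST = "http://localhost:6080"  # adjust to your deployment

def blocking_url(host: str, workflow_id: str) -> str:
    """Build the synchronous run endpoint for a workflow."""
    return f"{host}/api/wf/{workflow_id}/run/?run_type=blocking"

def run_blocking(workflow_id: str, inputs: dict, host: str = PYSPUR_HOST) -> dict:
    """Run a workflow synchronously and return the response body."""
    import requests  # same HTTP client used in the snippet above
    resp = requests.post(blocking_url(host, workflow_id),
                         json={"initial_inputs": inputs},
                         timeout=120)  # quick workflows only; poll the async API otherwise
    resp.raise_for_status()
    return resp.json()

# Example (requires a running Pyspur server and a deployed workflow):
# outputs = run_blocking("your_workflow_id_here",
#                        {"input_node": {"topic": "Python decorators",
#                                        "audience": "Senior Software Engineers"}})
```

If a workflow routinely exceeds the client timeout, switch to the start/status pair shown earlier instead of raising the timeout.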
Conclusion: Visual AI Development with Pyspur
Pyspur empowers developers to design, debug, and deploy sophisticated AI workflows with full transparency and minimal friction. Its modular, inspectable approach is ideal for teams seeking to move from prototype to production with confidence.
For API-first organizations, pairing Pyspur’s AI workflow management with Apidog’s API documentation and collaborative testing tools keeps your end-to-end development process efficient and reliable. Boost your team’s productivity with Apidog, a more affordable alternative to Postman.