open-webui

Complete Open WebUI API integration for managing LLM models, chat completions, Ollama proxy operations, file uploads, knowledge bases (RAG), image generation, audio processing, and pipelines. Use this skill when interacting with Open WebUI instances via REST API - listing models, chatting with LLMs, uploading files for RAG, managing knowledge collections, or executing Ollama commands through the Open WebUI proxy. Requires OPENWEBUI_URL and OPENWEBUI_TOKEN environment variables or explicit parameters.

3,891 stars
Complexity: easy

About this skill

The Open WebUI API Skill provides a comprehensive interface for AI agents to programmatically interact with any Open WebUI instance. Open WebUI serves as a unified platform for managing and interacting with various Large Language Models (LLMs), including those powered by Ollama and OpenAI, offering a user-friendly web interface. This skill extends that functionality to AI agents, allowing them to remotely control and utilize the powerful features of an Open WebUI setup. Agents can leverage this skill to list available LLM models, send requests for chat completions, upload documents for Retrieval Augmented Generation (RAG) within knowledge bases, and manage these collections. Furthermore, it facilitates operations like using Ollama proxy endpoints for model generation, embeddings, or pulling new models. The skill also supports advanced features such as image generation, audio processing, checking Ollama status, and creating or managing complex AI pipelines directly through the Open WebUI REST API.

Best use case

This skill is primarily for AI agents needing to programmatically interact with self-hosted or remote LLMs managed through Open WebUI. It benefits developers, researchers, and power users who automate AI workflows, require dynamic model interaction for applications, or build complex RAG systems without direct server access, enabling efficient integration into larger automated processes.

Users can expect their AI agent to successfully query, manage, and interact with an Open WebUI instance, performing tasks like model listing, chat completions, RAG operations, and multimedia generation.

Practical example

Example input

List all available LLM models on my Open WebUI instance and then ask the `ollama/llama3` model 'What is the capital of France?'

Example output

Models available: `ollama/llama3`, `openai/gpt-4`, `ollama/mistral`. Response from `ollama/llama3`: 'The capital of France is Paris.'

When to use this skill

  • To list available LLM models or check Ollama status via Open WebUI.
  • To send chat completion requests or generate content using models in Open WebUI.
  • To upload files and manage knowledge bases for Retrieval Augmented Generation (RAG).
  • To use Ollama proxy operations, generate images, or process audio through Open WebUI.

When not to use this skill

  • For installing or configuring the Open WebUI server itself.
  • For general questions about what Open WebUI is.
  • For troubleshooting Open WebUI server issues.
  • For local file operations unrelated to the Open WebUI API.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/open-webui/SKILL.md --create-dirs "https://raw.githubusercontent.com/openclaw/skills/main/skills/0x7466/open-webui/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/open-webui/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How open-webui Compares

| Feature / Agent | open-webui | Standard Approach |
|-----------------|------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | easy | N/A |

Frequently Asked Questions

What does this skill do?

It provides complete Open WebUI API integration: listing and managing LLM models, chat completions, Ollama proxy operations, file uploads, knowledge bases (RAG), image generation, audio processing, and pipelines, authenticated via the OPENWEBUI_URL and OPENWEBUI_TOKEN environment variables or explicit parameters.

How difficult is it to install?

The installation complexity is rated as easy. You can find the installation instructions above.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Open WebUI API Skill

Complete API integration for Open WebUI - a unified interface for LLMs including Ollama, OpenAI, and other providers.

## When to Use

**Activate this skill when the user wants to:**
- List available models from their Open WebUI instance
- Send chat completions to models through Open WebUI
- Upload files for RAG (Retrieval Augmented Generation)
- Manage knowledge collections and add files to them
- Use Ollama proxy endpoints (generate, embed, pull models)
- Generate images or process audio through Open WebUI
- Check Ollama status or manage models (load, unload, delete)
- Create or manage pipelines

**Do NOT activate for:**
- Installing or configuring Open WebUI server itself (use system admin skills)
- General questions about what Open WebUI is (use general knowledge)
- Troubleshooting Open WebUI server issues (use troubleshooting guides)
- Local file operations unrelated to Open WebUI API

## Prerequisites

### Environment Variables (Recommended)

```bash
export OPENWEBUI_URL="http://localhost:3000"  # Your Open WebUI instance URL
export OPENWEBUI_TOKEN="your-api-key-here"    # From Settings > Account in Open WebUI
```

### Authentication

- Bearer Token authentication required
- Token obtained from Open WebUI: **Settings > Account**
- Alternative: JWT token for advanced use cases

## Activation Triggers

**Example requests that SHOULD activate this skill:**

1. "List all models available in my Open WebUI"
2. "Send a chat completion to llama3.2 via Open WebUI with prompt 'Explain quantum computing'"
3. "Upload /path/to/document.pdf to Open WebUI knowledge base"
4. "Create a new knowledge collection called 'Research Papers' in Open WebUI"
5. "Generate an embedding for 'Open WebUI is great' using the nomic-embed-text model"
6. "Pull the llama3.2 model through Open WebUI Ollama proxy"
7. "Get Ollama status from my Open WebUI instance"
8. "Chat with gpt-4 using my Open WebUI with RAG enabled on collection 'docs'"
9. "Generate an image using Open WebUI with prompt 'A futuristic city'"
10. "Delete the old-model from Open WebUI Ollama"

**Example requests that should NOT activate this skill:**

1. "How do I install Open WebUI?" (Installation/Admin)
2. "What is Open WebUI?" (General knowledge)
3. "Configure the Open WebUI environment variables" (Server config)
4. "Troubleshoot why Open WebUI won't start" (Server troubleshooting)
5. "Compare Open WebUI to other UIs" (General comparison)

## Workflow

### 1. Configuration Check

- Verify `OPENWEBUI_URL` and `OPENWEBUI_TOKEN` are set
- Validate URL format (http/https)
- Test connection with GET /api/models or /ollama/api/tags
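
The configuration check can be sketched as a small helper (a minimal sketch; `validate_config` is a hypothetical name, not part of the bundled CLI tool):

```python
import os
import re

def validate_config(url: str, token: str) -> list:
    """Return a list of configuration problems; an empty list means OK."""
    problems = []
    if not url:
        problems.append("OPENWEBUI_URL is not set")
    elif not re.match(r"^https?://", url):
        problems.append("OPENWEBUI_URL must start with http:// or https://")
    if not token:
        problems.append("OPENWEBUI_TOKEN is not set")
    return problems

# Read the documented environment variables and report any issues.
problems = validate_config(os.environ.get("OPENWEBUI_URL", ""),
                           os.environ.get("OPENWEBUI_TOKEN", ""))
for p in problems:
    print(p)
```

A connection test against `GET /api/models` should only run once this returns an empty list.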

### 2. Operation Execution

Use the CLI tool or direct API calls:

```bash
# Using the CLI tool (recommended)
python3 scripts/openwebui-cli.py --help
python3 scripts/openwebui-cli.py models list
python3 scripts/openwebui-cli.py chat --model llama3.2 --message "Hello"

# Using curl (alternative)
curl -H "Authorization: Bearer $OPENWEBUI_TOKEN" \
  "$OPENWEBUI_URL/api/models"
```

### 3. Response Handling

- HTTP 200: Success - parse and present JSON
- HTTP 401: Authentication failed - check token
- HTTP 404: Endpoint/model not found
- HTTP 422: Validation error - check request parameters
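
The status handling above can be expressed as a lookup table (a sketch; names are illustrative):

```python
STATUS_HINTS = {
    200: "Success: parse and present JSON",
    401: "Authentication failed: check OPENWEBUI_TOKEN",
    404: "Endpoint or model not found",
    422: "Validation error: check request parameters",
}

def explain_status(code: int) -> str:
    """Map an HTTP status code to the guidance above."""
    return STATUS_HINTS.get(code, f"Unexpected HTTP status {code}")
```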

## Core API Endpoints

### Chat & Completions

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/chat/completions` | POST | OpenAI-compatible chat completions |
| `/api/models` | GET | List all available models |
| `/ollama/api/chat` | POST | Native Ollama chat completion |
| `/ollama/api/generate` | POST | Ollama text generation |
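
Since `/api/chat/completions` is OpenAI-compatible, a request can be assembled with the standard library alone (a sketch; the payload fields follow the usual OpenAI chat schema):

```python
import json
import urllib.request

def build_chat_request(base_url, token, model, messages, stream=False):
    """Build a POST request for /api/chat/completions (OpenAI-compatible body)."""
    payload = {"model": model, "messages": messages, "stream": stream}
    return urllib.request.Request(
        f"{base_url}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "http://localhost:3000", "sk-example", "llama3.2",
    [{"role": "user", "content": "Explain quantum computing"}],
)
# urllib.request.urlopen(req) would send it; omitted here.
```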

### Ollama Proxy

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/ollama/api/tags` | GET | List Ollama models |
| `/ollama/api/pull` | POST | Pull/download a model |
| `/ollama/api/delete` | DELETE | Delete a model |
| `/ollama/api/embed` | POST | Generate embeddings |
| `/ollama/api/ps` | GET | List loaded models |
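
The `/ollama/api/embed` body follows the Ollama embed API, where `input` may be a single string or a list of strings; a small builder can normalize it (a sketch; `embed_body` is a hypothetical helper name):

```python
def embed_body(model, inputs):
    """Build the JSON-ready body for POST /ollama/api/embed."""
    if isinstance(inputs, str):
        inputs = [inputs]  # normalize a single string to a one-element list
    return {"model": model, "input": inputs}
```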

### RAG & Knowledge

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/v1/files/` | POST | Upload file for RAG |
| `/api/v1/files/{id}/process/status` | GET | Check file processing status |
| `/api/v1/knowledge/` | GET/POST | List/create knowledge collections |
| `/api/v1/knowledge/{id}/file/add` | POST | Add file to knowledge base |

### Images & Audio

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/v1/images/generations` | POST | Generate images |
| `/api/v1/audio/speech` | POST | Text-to-speech |
| `/api/v1/audio/transcriptions` | POST | Speech-to-text |

## Safety & Boundaries

### Confirmation Required

Always confirm before:
- **Deleting models** (`DELETE /ollama/api/delete`) - Irreversible
- **Pulling large models** - May take significant time/bandwidth
- **Deleting knowledge collections** - Data loss risk
- **Uploading sensitive files** - Privacy consideration

### Redaction & Security

- **Never log the full API token** - Redact to `sk-...XXXX` format
- **Sanitize file paths** - Verify files exist before upload
- **Validate URLs** - Ensure HTTPS for external instances
- **Handle errors gracefully** - Don't expose stack traces with tokens
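
Token redaction from the first rule takes only a tiny helper (a sketch; `redact_token` is a hypothetical name):

```python
def redact_token(token: str, keep: int = 4) -> str:
    """Reduce a secret to the `sk-...XXXX` form described above."""
    if len(token) <= keep:
        return "sk-...****"  # too short to safely show a suffix
    return f"sk-...{token[-keep:]}"
```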

### Workspace Safety

- File uploads default to workspace directory
- Confirm before accessing files outside workspace
- No sudo/root operations required (pure API client)

## Examples

### List Models

```bash
python3 scripts/openwebui-cli.py models list
```

### Chat Completion

```bash
python3 scripts/openwebui-cli.py chat \
  --model llama3.2 \
  --message "Explain the benefits of RAG" \
  --stream
```

### Upload File for RAG

```bash
python3 scripts/openwebui-cli.py files upload \
  --file /path/to/document.pdf \
  --process
```

### Add File to Knowledge Base

```bash
python3 scripts/openwebui-cli.py knowledge add-file \
  --collection-id "research-papers" \
  --file-id "doc-123-uuid"
```

### Generate Embeddings (Ollama)

```bash
python3 scripts/openwebui-cli.py ollama embed \
  --model nomic-embed-text \
  --input "Open WebUI is great for LLM management"
```

### Pull Model (Confirmation Required)

```bash
python3 scripts/openwebui-cli.py ollama pull \
  --model llama3.2:70b
# Agent must confirm: "This will download ~40GB. Proceed? [y/N]"
```

### Check Ollama Status

```bash
python3 scripts/openwebui-cli.py ollama status
```

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| 401 Unauthorized | Invalid or missing token | Verify OPENWEBUI_TOKEN |
| 404 Not Found | Model/endpoint doesn't exist | Check model name spelling |
| 422 Validation Error | Invalid parameters | Check request body format |
| 400 Bad Request | File still processing | Wait for processing completion |
| Connection refused | Wrong URL | Verify OPENWEBUI_URL |

## Edge Cases

### File Processing Race Condition

Files uploaded for RAG are processed asynchronously. Before adding to knowledge:
1. Upload file → get file_id
2. Poll `/api/v1/files/{id}/process/status` until `status: "completed"`
3. Then add to knowledge collection
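
The upload-then-poll sequence can be sketched as follows (assumes the status endpoint returns JSON with a `status` field, per the steps above; timeout and interval values are illustrative):

```python
import json
import time
import urllib.request

def status_url(base_url: str, file_id: str) -> str:
    """URL of the processing-status endpoint for an uploaded file."""
    return f"{base_url}/api/v1/files/{file_id}/process/status"

def wait_for_processing(base_url, token, file_id, timeout=300, interval=2.0):
    """Poll until status is "completed"; raise on failure or timeout."""
    deadline = time.monotonic() + timeout
    headers = {"Authorization": f"Bearer {token}"}
    while time.monotonic() < deadline:
        req = urllib.request.Request(status_url(base_url, file_id),
                                     headers=headers)
        with urllib.request.urlopen(req) as resp:
            status = json.load(resp).get("status")
        if status == "completed":
            return
        if status == "failed":
            raise RuntimeError(f"processing failed for file {file_id}")
        time.sleep(interval)
    raise TimeoutError(f"file {file_id} not processed within {timeout}s")
```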

### Large Model Downloads

Pulling models (e.g., 70B parameters) can take hours. Always:
- Confirm with user before starting
- Show progress if possible
- Allow cancellation

### Streaming Responses

Chat completions support streaming. Use `--stream` flag for real-time output or collect full response for non-streaming.
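
OpenAI-compatible streaming arrives as server-sent-events `data:` lines; parsing one line might look like this (a sketch assuming the usual SSE chunk format):

```python
import json

def parse_sse_line(line: str):
    """Decode one streamed line: a JSON chunk, "[DONE]", or None to skip."""
    line = line.strip()
    if not line.startswith("data:"):
        return None  # blank lines and comments between events
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return "[DONE]"  # end-of-stream sentinel
    return json.loads(data)
```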

## CLI Tool Reference

The included CLI tool (`scripts/openwebui-cli.py`) provides:
- Automatic authentication from environment variables
- Structured JSON output with optional formatting
- Built-in help for all commands
- Error handling with user-friendly messages
- Progress indicators for long operations

Run `python3 scripts/openwebui-cli.py --help` for full usage.

Related Skills

All of the following are from openclaw/skills (3,891 stars):

- **openclaw-youtube** (Content & Documentation): YouTube SERP Scout for agents. Search top-ranking videos, channels, and trends for content research and competitor tracking.
- **openclaw-search** (Data & Research): Intelligent search for agents. Multi-source retrieval with confidence scoring - web, academic, and Tavily in one unified API.
- **openclaw-media-gen** (Content & Documentation): Generate images & videos with AIsa. Gemini 3 Pro Image (image) + Qwen Wan 2.6 (video) via one API key.
- **OpenClaw Mastery — The Complete Agent Engineering & Operations System** (DevOps & Infrastructure): Built by AfrexAI, the team that runs 9+ production agents 24/7 on OpenClaw.
- **openclaw-safe-change-flow** (DevOps & Infrastructure): Safe OpenClaw config change workflow with backup, minimal edits, validation, health checks, and rollback. Single-instance first; secondary instance optional.
- **jqopenclaw-node-invoker** (DevOps & Infrastructure): Invokes JQOpenClawNode capabilities through the Gateway's node.invoke (file.read, file.write, process.exec, process.manage, system.run, process.which, system.info, system.screenshot, system.notify, system.clipboard, system.input, node.selfUpdate). Use for remote file read/write, file move/delete, directory create/delete, process management (list/search/kill), remote process execution, command availability probing, system info collection, screenshots, system notifications, clipboard read/write, input control (mouse/keyboard), node self-update, diagnosing node command availability, or fixing node.invoke parameter errors.
- **openclaw-stock-skill** (Data & Research): Queries daily bars, minute bars, financial indicators, and other stock data via the data.diemeng.chat API; supports A-shares and other markets.
- **openclaw-whatsapp** (Workflow & Productivity): WhatsApp bridge for OpenClaw — send/receive messages, auto-reply agents, QR pairing, message search, contact sync.
- **polymarket-openclaw-trader** (Trading Automation): Reusable Polymarket + OpenClaw trading operations skill for any workspace. Use when the user needs to set up, run, tune, monitor, and deploy an automated Polymarket trading project (paper/live), including env configuration, risk controls, reporting, and dashboard operations.
- **openclaw-version-monitor** (Workflow & Productivity): Monitors OpenClaw GitHub releases, fetches the latest release notes, translates them into Chinese, and pushes them to Telegram and Feishu. Used for: (1) scheduled version-update checks, (2) pushing update notifications, (3) generating Chinese release notes.
- **OpenTangl Plugin for OpenClaw**: This is an **OpenClaw plugin** (not a plain skill). It registers native tools into the OpenClaw agent runtime so you can operate OpenTangl entirely from chat.
- **OpenClaw Connect Enterprise — Node**: **Version**: 0.1.5