model-provider-manager

Unified LLM provider and model configuration, health monitoring, and key management

33 stars

Best use case

model-provider-manager is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using model-provider-manager should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/model-provider-manager/SKILL.md --create-dirs "https://raw.githubusercontent.com/aAAaqwq/AGI-Super-Team/main/skills/model-provider-manager/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/model-provider-manager/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How model-provider-manager Compares

| Feature | model-provider-manager | Standard Approach |
|---------|------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It provides unified LLM provider and model configuration, health monitoring, and key management.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Skill: Model & Provider Manager

## Trigger Conditions
- "model management", "provider management", "provider health check"
- "check models", "model availability", "which model works"
- "embedding model", "reasoning model", "vision model"
- "API key check", "key expired", "balance query"
- "add model", "remove model", "update model config"
- "sync model config", "sync agent models"

## Responsibilities
Maintains the configuration, health status, and key management of all LLM providers and models in one place.
The CEO-designated **dedicated model-management skill**: all model-related queries and operations go through this skill.

## ⚙️ Configuration Update Iron Rules (env-first three-step sync)

**Any model/key/provider change must follow these steps in order:**

```
Step 1: Update the pass key store (the single source of truth)
   pass insert api/<provider-name>    # create
   pass show api/<provider-name>      # verify

Step 2: Sync to ~/.openclaw/.env (runtime environment variables)
   Add/update: echo 'NEW_API_KEY=<value>' >> ~/.openclaw/.env
   Or run: ~/clawd/scripts/rebuild-env.sh  # if it exists

Step 3: Sync to ~/.openclaw/openclaw.json (OpenClaw config)
   Make sure models reference ${ENV_VAR_NAME} instead of hard-coded keys
   Apply with gateway restart or config.patch

Step 4: Verify sync integrity
   python3 ~/clawd/skills/model-provider-manager/scripts/health-check.py
   python3 ~/clawd/skills/model-provider-manager/scripts/key-audit.py
```
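Step 2's add/update can be made idempotent so repeated runs don't leave duplicate lines in `.env`. A minimal sketch of such a helper (hypothetical; not one of this skill's scripts):

```python
import re

def upsert_env_line(env_text: str, key: str, value: str) -> str:
    """Replace an existing KEY=... line in .env-style text, or append a new one."""
    pattern = re.compile(rf"^{re.escape(key)}=.*$", re.MULTILINE)
    line = f"{key}={value}"
    if pattern.search(env_text):
        # Key already present: overwrite its value in place
        return pattern.sub(line, env_text)
    # Key absent: append, keeping exactly one trailing newline
    sep = "" if env_text.endswith("\n") or not env_text else "\n"
    return env_text + sep + line + "\n"
```

Running it twice with the same key yields the same file, unlike a plain `>>` append.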

### Agent & Cron Model Sync
When the global model configuration changes (provider added/removed/switched), check and sync:
- **Agent models**: `openclaw.json → agents.list[].model.primary/fallbacks`
- **Cron task models**: `cron list` → check each cron's `payload.model`
- **Cron fallbacks**: `cron list` → check each cron's `payload.fallbacks`

### Sync Check Script
```bash
python3 ~/clawd/skills/model-provider-manager/scripts/sync-check.py
```
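At its core, a sync check compares the `${ENV_VAR}` references in `openclaw.json` against the variables actually defined in `.env`. A sketch of that comparison (illustrative only; the real `sync-check.py` may check more than this):

```python
import json
import re

def missing_env_refs(config: dict, env_vars: set) -> list:
    """Return ${VAR} references in the config that have no definition in .env."""
    # Serialize the whole config and scan it, so nested references are caught too
    refs = set(re.findall(r"\$\{([A-Z0-9_]+)\}", json.dumps(config)))
    return sorted(refs - env_vars)

config = {"models": {"providers": {"xai": {"apiKey": "${XAI_API_KEY}"}}}}
print(missing_env_refs(config, {"XAI_API_KEY"}))  # []
print(missing_env_refs(config, set()))            # ['XAI_API_KEY']
```

An empty result means every referenced variable is defined; anything returned is a broken reference that would surface only at request time.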

## Configuration File Locations

| File | Path | Purpose |
|------|------|------|
| **pass store** | `pass api/<name>` | **Single source of truth for keys** |
| Runtime env | `~/.openclaw/.env` | Environment variables (chmod 600) |
| Main config | `~/.openclaw/openclaw.json` → `models.providers` | Provider definitions |
| Agent models | `~/.openclaw/openclaw.json` → `agents.list[].model` | Per-agent primary model + fallbacks |
| Global default | `~/.openclaw/openclaw.json` → `agents.defaults.model` | Default model |
| Embedding | `~/.openclaw/openclaw.json` → `agents.defaults.memorySearch` | Embedding configuration |
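The JSON paths in the table above are ordinary nested keys, so they can be read with plain dictionary access. A sketch against a toy config (field names follow the table; the real schema may carry more fields):

```python
# Toy config shaped like the table's paths (illustrative values only)
sample = {
    "models": {"providers": {"xai": {"baseUrl": "https://api.x.ai/v1"}}},
    "agents": {"defaults": {"model": "xai/grok"}},
}

def list_providers(config: dict) -> list:
    """Names of all providers defined under models.providers."""
    return sorted(config.get("models", {}).get("providers", {}))

print(list_providers(sample))                 # ['xai']
print(sample["agents"]["defaults"]["model"])  # xai/grok
```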

## Current Provider Inventory

| Provider | baseUrl | API type | Models | Key source | Status |
|----------|---------|----------|--------|------------|--------|
| shibacc | http://8.148.217.100:6543 | anthropic-messages | 3 | 🔴 hard-coded | ⚠️ key not in pass |
| xingsuancode | https://cn.xingsuancode.com | anthropic-messages | 1 | env | ❌ all 9 accounts down |
| zai | https://open.bigmodel.cn/api/coding/paas/v4 | openai-completions | 5 | 🔴 hard-coded | ⚠️ key not in pass |
| minimax | https://api.minimaxi.com/anthropic | anthropic-messages | 1 | env | ✅ |
| xingjiabiapi | https://xingjiabiapi.com/v1 | openai-completions | 4 | env | ✅ |
| xai | https://api.x.ai/v1 | openai-completions | 10 | env | ✅ |
| wow | https://linuxdoapi-api-wow.223387.xyz/v1 | openai-completions | 3 | env | ⚠️ occasional 503 |
| xinyuan | https://api-i.xykjy.com | auto | 4 | env | ✅ |
| aixn | https://ai.xn--vuq861bvij35ps8cv0uohm.com/v1 | openai-completions | 1 | env | ✅ |
| moonshot | https://api.moonshot.cn/v1 | openai-completions | 1 | env | ✅ |
| ollama | http://100.65.110.126:11434 | ollama | 12 | env | ✅ |
| github-copilot | https://api.githubcopilot.com | openai-completions | 17 | env | ✅ |

## Embedding Model Inventory

| Location | Model | Dimensions | Size | Status |
|------|------|------|------|------|
| Mac Studio Ollama | qwen3-embedding:0.6b | 1024 | 639MB | ✅ recommended |
| Mac Studio Ollama | qwen3-embedding:8b | 4096 | 4.7GB | ✅ |
| Mac Studio Ollama | nomic-embed-text:latest | 768 | 274MB | ✅ |
| 小m Ollama | qwen3-embedding:0.6b | 1024 | 639MB | ⚠️ unstable when offline |
| OpenClaw built-in | embeddinggemma-300M | 384 | 300MB | ✅ fallback |

## Agent Model Assignment Plan (pending Daniel's confirmation)

| Agent | Current | Proposed primary | Proposed fallbacks |
|-------|------|-----------|-------------|
| main (CEO) | opus-4-6 | shibacc/opus-4-6 ✅ | xsc/opus, zai/glm-5 |
| quant | opus-4-6 | shibacc/opus-4-6 ✅ | xsc/opus, zai/glm-5 |
| code | opus-4-6 | xingsuancode/sonnet-4-6 | shibacc/opus, aixn/gpt-5.2 |
| pm | opus-4-6 | xingsuancode/sonnet-4-6 | shibacc/opus, zai/glm-5 |
| content | opus-4-6 | xingsuancode/sonnet-4-6 | shibacc/opus, minimax/M2.5 |
| research | opus-4-6 | xingsuancode/sonnet-4-6 | shibacc/opus, aixn/gpt-5.2 |
| law | opus-4-6 | xingsuancode/sonnet-4-6 | shibacc/opus, minimax/M2.5 |
| data | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
| ops | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
| finance | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
| market | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
| product | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
| sales | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
| batch | opus-4-6 | zai/glm-5-turbo | xingsuancode/sonnet, minimax/M2.5 |
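If the plan is confirmed, each table row maps mechanically onto an `agents.list[]` entry with a `model.primary` and `model.fallbacks`, matching the paths in the configuration-file table. A sketch of that transformation (the `id` field name is an assumption):

```python
# Two rows from the plan above, as (primary, fallbacks) pairs
PLAN = {
    "code": ("xingsuancode/sonnet-4-6", ["shibacc/opus", "aixn/gpt-5.2"]),
    "data": ("zai/glm-5-turbo", ["xingsuancode/sonnet", "minimax/M2.5"]),
}

def to_agent_entry(agent: str, primary: str, fallbacks: list) -> dict:
    """Build one agents.list[] element with primary + fallback models."""
    return {"id": agent, "model": {"primary": primary, "fallbacks": fallbacks}}

entries = [to_agent_entry(a, p, f) for a, (p, f) in PLAN.items()]
print(entries[0]["model"]["primary"])  # xingsuancode/sonnet-4-6
```

Generating the entries from one table keeps the plan and the live config from drifting apart.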

## Commands

| Command | Purpose |
|------|------|
| `health-check.py` | Full provider health check |
| `health-check.py --type embedding` | Check embedding models only |
| `key-audit.py` | Key audit (hard-coded / missing from pass / missing from env) |
| `sync-check.py` | Check env → json → agent → cron sync integrity |
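A provider health check largely reduces to hitting one cheap endpoint per API type. A sketch of how the probe URL could be chosen (the endpoint choices are assumptions; the real `health-check.py` may probe differently):

```python
# Cheap "is it alive" endpoints, assumed per API type
PROBE_PATHS = {
    "openai-completions": "/models",     # model listing under an OpenAI-style /v1 base
    "anthropic-messages": "/v1/models",  # Anthropic-style model listing
    "ollama": "/api/tags",               # Ollama's local model list
}

def probe_url(base_url: str, api_type: str) -> str:
    """Join a provider's baseUrl with the probe path for its API type."""
    path = PROBE_PATHS.get(api_type, "/")
    return base_url.rstrip("/") + path

print(probe_url("https://api.x.ai/v1", "openai-completions"))
# https://api.x.ai/v1/models
```

A GET on the resulting URL with the provider's key then yields status code and latency for the inventory table.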

## Security Iron Rules
1. **pass is the single source of truth**: every key must be stored in pass first
2. **openclaw.json uses `${ENV_VAR}`**: never hard-code keys
3. **Key rotation**: `pass insert` → update `.env` → `gateway restart`
4. **Provider changes must be confirmed by the CEO**
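Rule 2 can be checked mechanically: any `apiKey` under `models.providers` that is a literal string rather than a `${ENV_VAR}` reference is a violation. A minimal sketch of that check (a heuristic; the real `key-audit.py` may apply more rules):

```python
import re

def hardcoded_keys(config: dict) -> list:
    """Provider names whose apiKey is a literal, not a ${ENV_VAR} reference."""
    bad = []
    for name, prov in config.get("models", {}).get("providers", {}).items():
        key = prov.get("apiKey", "")
        if key and not re.fullmatch(r"\$\{[A-Z0-9_]+\}", key):
            bad.append(name)
    return bad

cfg = {"models": {"providers": {
    "xai": {"apiKey": "${XAI_API_KEY}"},
    "shibacc": {"apiKey": "sk-live-abc123"},  # 🔴 hard-coded literal
}}}
print(hardcoded_keys(cfg))  # ['shibacc']
```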

## Changelog
- 2026-03-29: Initial version covering 12 providers and 84+ models, with env-first three-step sync

Related Skills

workspace-directory-manager

33
from aAAaqwq/AGI-Super-Team

Workspace directory manager — maintain cleanliness of ~/.openclaw/ and ~/clawd/

startup-financial-modeling

33
from aAAaqwq/AGI-Super-Team

Use when the user asks to create financial projections, build a financial model, forecast revenue, calculate burn rate, estimate runway, model cash flow, or do 3-5 year startup financial planning.

ssh-manager

33
from aAAaqwq/AGI-Super-Team

Professional SSH connection management tool. Handles Tailscale SSH, host keys, proxy bypass, remote command execution, and related operations.

provider-key-manager

33
from aAAaqwq/AGI-Super-Team

Provider key manager — rotate and sync API keys across multi-agent workspaces

product-manager-skills

33
from aAAaqwq/AGI-Super-Team

> Product manager skill set: PRD, user stories, competitive analysis, roadmaps, and other product-methodology tools

portfolio-manager

33
from aAAaqwq/AGI-Super-Team

Comprehensive portfolio analysis using Alpaca MCP Server integration to fetch holdings and positions, then analyze asset allocation, risk metrics, individual stock positions, diversification, and generate rebalancing recommendations. Use when user requests portfolio review, position analysis, risk assessment, performance evaluation, or rebalancing suggestions for their brokerage account.

permission-manager

33
from aAAaqwq/AGI-Super-Team

Manages Claude Code's global tool permission configuration, automatically adding MCP commands or other tools to allowedTools so they don't need manual approval on every use. Workflow: confirm the command the user wants to add -> confirm the scope (global ~/.claude.json by default) -> perform the addition -> verify and remind the user to restart.

model-usage

33
from aAAaqwq/AGI-Super-Team

Use CodexBar CLI local cost usage to summarize per-model usage for Codex or Claude, including the current (most recent) model or a full model breakdown. Trigger when asked for model-level usage/cost data from codexbar, or when you need a scriptable per-model summary from codexbar cost JSON.

model-usage-linux

33
from aAAaqwq/AGI-Super-Team

Track OpenClaw AI token usage and cost per model on Linux by parsing session JSONL files. Use when asked about: token usage, API cost, how much has been spent, which model was used most, usage summary, billing, cost breakdown. Linux replacement for the macOS-only model-usage/CodexBar skill.

model-hierarchy-skill

33
from aAAaqwq/AGI-Super-Team

> Model-tier scheduling: automatically selects an appropriate LLM model based on task complexity

model-health-check

33
from aAAaqwq/AGI-Super-Team

Checks the connectivity, latency, and availability of configured model providers, for quickly diagnosing model-side failures.

model-fallback

33
from aAAaqwq/AGI-Super-Team

Automatic model fallback and failover. When a primary-model request fails, times out, hits a rate limit, or exhausts its quota, it automatically switches to a backup model to keep the service running. Supports multi-provider, multi-priority intelligent model selection, with health monitoring, automatic retry, and error recovery.