Best use case
model-health-check is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Checks the connectivity, latency, and availability of configured model providers, for quickly diagnosing model-side failures.
Teams using model-health-check can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/model-health-check/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
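The manual steps above can be sketched as a short shell session. The exact GitHub URL is not given on this page, so the download step is shown only as a comment, and a placeholder file stands in for the real SKILL.md:

```shell
# Work from the project root. The path follows the manual-install steps above.
SKILL_DIR=".claude/skills/model-health-check"
mkdir -p "$SKILL_DIR"

# In practice you would download SKILL.md from the skill's GitHub repository,
# e.g.:  curl -fsSL <raw-github-url-of-SKILL.md> -o "$SKILL_DIR/SKILL.md"
# Here a placeholder file stands in so the steps are reproducible:
printf '# Model Health Check\n' > "$SKILL_DIR/SKILL.md"

# Confirm the file is in place, then restart the agent to auto-discover it.
ls "$SKILL_DIR"
```

After the restart, the agent scans `.claude/skills/` and registers the skill by its directory name.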
How model-health-check Compares
| Feature / Agent | model-health-check | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Checks the connectivity, latency, and availability of configured model providers, for quickly diagnosing model-side failures.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Model Health Check

Checks the connectivity and latency of all configured model providers.

## Trigger conditions

The user sends `/model-check`, `模型检查`, `model health`, `检查模型`, `供应商状态`, or similar.

## Execution

```bash
bash ~/clawd/scripts/model-health-check.sh
```

Run the script directly and return its output to the user as-is. No extra processing is needed.

## Timeout

The script takes roughly 20-30 seconds in total (up to a 15-second timeout per provider). Use `timeout: 120` to ensure it completes.
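The referenced `model-health-check.sh` itself is not shown on this page. A minimal sketch of such a per-provider check, assuming providers expose an HTTP endpoint and using placeholder URLs (the real script's endpoints and output format may differ), could look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch only -- not the actual model-health-check.sh.
# Probes each configured provider endpoint with a 15-second per-provider
# timeout, matching the timeout behavior described above.
# The URLs below are placeholders, not real provider APIs.
endpoints=(
  "https://api.example-provider-a.invalid/v1/models"
  "https://api.example-provider-b.invalid/v1/models"
)

results=""
for url in "${endpoints[@]}"; do
  start=$(date +%s%N)                              # nanoseconds (GNU date)
  if curl -fsS --max-time 15 -o /dev/null "$url" 2>/dev/null; then
    end=$(date +%s%N)
    results+="OK   $url  $(( (end - start) / 1000000 )) ms"$'\n'
  else
    results+="FAIL $url  (unreachable or timed out)"$'\n'
  fi
done
printf '%s' "$results"
```

Because each provider is probed sequentially with its own 15-second cap, total runtime grows with the number of providers, which is why the skill asks the agent to allow a generous overall `timeout: 120`.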
Related Skills
telegram-check
Check inbound Telegram messages
startup-financial-modeling
Use when the user asks to create financial projections, build a financial model, forecast revenue, calculate burn rate, estimate runway, model cash flow, or do 3-5 year startup financial planning.
skill-config-checker
Scans all local skills, detects the API keys, tokens, secrets, and other settings they require, and generates a configuration checklist with setup instructions.
security-compliance-compliance-check
You are a compliance expert specializing in regulatory requirements for software systems including GDPR, HIPAA, SOC2, PCI-DSS, and other industry standards. Perform compliance audits and provide im...
model-usage
Use CodexBar CLI local cost usage to summarize per-model usage for Codex or Claude, including the current (most recent) model or a full model breakdown. Trigger when asked for model-level usage/cost data from codexbar, or when you need a scriptable per-model summary from codexbar cost JSON.
model-usage-linux
Track OpenClaw AI token usage and cost per model on Linux by parsing session JSONL files. Use when asked about: token usage, API cost, how much has been spent, which model was used most, usage summary, billing, cost breakdown. Linux replacement for the macOS-only model-usage/CodexBar skill.
model-provider-manager
Unified LLM provider and model configuration, health monitoring, and key management
model-hierarchy-skill
> Model-tier scheduling: automatically selects an appropriate LLM model based on task complexity
model-fallback
Automatic model fallback and failover. When a request to the primary model fails, times out, hits a rate limit, or exhausts its quota, traffic switches automatically to a backup model to keep the service running. Supports intelligent multi-provider, multi-priority model selection with health monitoring, automatic retries, and error recovery.
cron-model-migration
Safely change models for OpenClaw cron jobs without leaving behind session/model mismatch errors. Use when creating or editing cron jobs with `payload.model`, when moving a job between models/providers, when diagnosing `LiveSessionModelSwitchError`, or when deciding whether a cron should run in `sessionTarget: "isolated"`, `"current"`, `"main"`, or a custom persistent session.
agent-model-switcher
View and switch the model configuration of sub-agents in bulk, for adjusting provider/model settings uniformly across multiple agents.
wemp-operator
> Full-featured WeChat Official Account operations: API wrappers for drafts, publishing, comments, users, media, broadcasts, statistics, menus, and QR codes