model-usage
Use CodexBar CLI local cost usage to summarize per-model usage for Codex or Claude, including the current (most recent) model or a full model breakdown. Trigger when asked for model-level usage/cost data from codexbar, or when you need a scriptable per-model summary from codexbar cost JSON.
Best use case
model-usage is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using model-usage can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/model-usage/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How model-usage Compares
| Feature / Agent | model-usage | Standard Approach |
|---|---|---|
| Platform Support | macOS (CodexBar) | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It summarizes per-model usage and cost from CodexBar's local cost logs for Codex or Claude, reporting either the current (most recent) model or a full per-model breakdown, and produces scriptable output from codexbar cost JSON.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Model usage
- Author: Daniel Li
- Copyright © Daniel Li. All rights reserved.
## Overview
Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.
TODO: add Linux CLI support guidance once CodexBar CLI install path is documented for Linux.
## Quick start
1. Fetch cost JSON via CodexBar CLI or pass a JSON file.
2. Use the bundled script to summarize by model.
```bash
python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty
```
## Current model logic
- Uses the most recent daily row with `modelBreakdowns`.
- Picks the model with the highest cost in that row.
- Falls back to the last entry in `modelsUsed` when breakdowns are missing.
- Override with `--model <name>` when you need a specific model.
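The selection rules above can be sketched as follows. This is an illustrative sketch, not the bundled script's actual implementation: the field names `modelBreakdowns` and `modelsUsed` come from the description, but the surrounding JSON shape (a list of daily rows, each breakdown carrying a `cost` value) is an assumption.

```python
def pick_current_model(daily_rows, override=None):
    """Pick the 'current' model per the rules above (sketch under assumed JSON shape)."""
    if override:
        # --model <name> takes precedence over the heuristics.
        return override
    # Most recent daily row that has modelBreakdowns: highest-cost model wins.
    for row in reversed(daily_rows):
        breakdowns = row.get("modelBreakdowns") or {}
        if breakdowns:
            return max(breakdowns, key=lambda m: breakdowns[m].get("cost", 0))
    # Fallback when breakdowns are missing: last entry in modelsUsed.
    for row in reversed(daily_rows):
        models = row.get("modelsUsed") or []
        if models:
            return models[-1]
    return None

# Hypothetical rows for illustration (oldest first).
rows = [
    {"modelsUsed": ["gpt-4o"]},
    {"modelBreakdowns": {"gpt-5": {"cost": 1.2}, "gpt-4o": {"cost": 0.3}}},
]
print(pick_current_model(rows))  # → gpt-5
```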
## Inputs
- Default: runs `codexbar cost --format json --provider <codex|claude>`.
- File or stdin:
```bash
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
```
## Output
- Text (default) or JSON (`--format json --pretty`).
- Values are cost-only per model; tokens are not split by model in CodexBar output.
## References
- Read `references/codexbar-cli.md` for CLI flags and cost JSON fields.

Related Skills
startup-financial-modeling
Use when the user asks to create financial projections, build a financial model, forecast revenue, calculate burn rate, estimate runway, model cash flow, or do 3-5 year startup financial planning.
openrouter-usage
Track OpenRouter API spending — credit balance, per-model cost breakdown, and spending projections from OpenClaw session logs.
model-usage-linux
Track OpenClaw AI token usage and cost per model on Linux by parsing session JSONL files. Use when asked about: token usage, API cost, how much has been spent, which model was used most, usage summary, billing, cost breakdown. Linux replacement for the macOS-only model-usage/CodexBar skill.
model-provider-manager
Unified LLM provider and model configuration, health monitoring, and key management
model-hierarchy-skill
> Model-tier scheduling: automatically selects an appropriate LLM model based on task complexity.
model-health-check
Checks connectivity, latency, and availability of configured model providers, for quickly diagnosing model-side failures.
model-fallback
Automatic model fallback and failover. When a request to the primary model fails, times out, hits a rate limit, or exhausts its quota, it automatically switches to a backup model to keep the service running. Supports intelligent model selection across multiple providers and priority levels, with health monitoring, automatic retries, and error recovery.
cron-model-migration
Safely change models for OpenClaw cron jobs without leaving behind session/model mismatch errors. Use when creating or editing cron jobs with `payload.model`, when moving a job between models/providers, when diagnosing `LiveSessionModelSwitchError`, or when deciding whether a cron should run in `sessionTarget: "isolated"`, `"current"`, `"main"`, or a custom persistent session.
agent-model-switcher
View and switch sub-agent model configurations in batch, for uniformly adjusting provider/model settings across multiple agents.
wemp-operator
> Full-featured WeChat Official Account (微信公众号) operations: API wrappers for drafts, publishing, comments, users, media, broadcasts, statistics, menus, and QR codes.
zsxq-smart-publish
Publish and manage content on 知识星球 (zsxq.com). Supports talk posts, Q&A, long articles, file sharing, digest/bookmark, homework tasks, and tag management. Use when publishing content to 知识星球, creating/editing posts, uploading files/images/audio, managing digests, batch publishing, or formatting content for 知识星球.
zoom-automation
Automate Zoom meeting creation, management, recordings, webinars, and participant tracking via Rube MCP (Composio). Always search tools first for current schemas.