Best use case
tldr-stats is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Show full session token usage, costs, TLDR savings, and hook activity
Teams using tldr-stats should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/tldr-stats/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
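For a Unix-like shell, the manual steps above amount to something like the following sketch (the raw GitHub URL is a placeholder, not taken from this page; substitute the actual link from the top of the page):

```shell
# Placeholder URL: replace with the real GitHub raw link for this skill.
SKILL_URL="https://raw.githubusercontent.com/OWNER/REPO/main/SKILL.md"

# Create the skill directory and fetch SKILL.md into it.
mkdir -p .claude/skills/tldr-stats
curl -fsSL "$SKILL_URL" -o .claude/skills/tldr-stats/SKILL.md
```

After restarting the agent, the skill should appear automatically; no further registration step is described.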
How tldr-stats Compares
| Feature / Agent | tldr-stats | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Show full session token usage, costs, TLDR savings, and hook activity
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# TLDR Stats Skill
Show a beautiful dashboard with token usage, actual API costs, TLDR savings, and hook activity.
## When to Use
- See how much TLDR is saving you in real $ terms
- Check total session token usage and costs
- Before/after comparisons of TLDR effectiveness
- Debug whether TLDR/hooks are being used
- See which model is being used
## Instructions
**IMPORTANT:** Run the script AND display the output to the user.
1. Run the stats script:
```bash
python3 $CLAUDE_PROJECT_DIR/.claude/scripts/tldr_stats.py
```
2. **Copy the full output into your response** so the user sees the dashboard directly in the chat. Do not just run the command silently - the user wants to see the stats.
### Sample Output
```
╔══════════════════════════════════════════════════════════════╗
║ 📊 Session Stats ║
╚══════════════════════════════════════════════════════════════╝
You've spent $96.52 this session
Tokens Used
1.2M sent to Claude
416.3K received back
97.8K from prompt cache (8% reused)
TLDR Savings
You sent: 1.2M
Without TLDR: 2.5M
💰 TLDR saved you ~$18.83
(Without TLDR: $115.35 → With TLDR: $96.52)
File reads: 1.3M → 20.9K █████████░ 98% smaller
TLDR Cache
Re-reading the same file? TLDR remembers it.
█████░░░░░░░░░░ 37% cache hits
(35 reused / 60 parsed fresh)
Hooks: 553 calls (✓ all ok)
History: █▃▄ ▇▃▇▆ avg 84% compression
Daemon: 24m up │ 3 sessions
```
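The arithmetic behind the savings lines is straightforward subtraction and a compression ratio; a minimal sketch reproducing the sample output's figures (the variable names are illustrative, not from the actual script):

```python
# Dollar amounts and token counts mirror the sample dashboard above.
with_tldr = 96.52      # actual session spend
without_tldr = 115.35  # estimated spend without TLDR compression
saved = without_tldr - with_tldr
print(f"TLDR saved you ~${saved:.2f}")            # ~$18.83

raw, compressed = 1_300_000, 20_900               # file-read tokens before/after
pct_smaller = (1 - compressed / raw) * 100
print(f"File reads: {pct_smaller:.0f}% smaller")  # 98% smaller
```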
## Understanding the Numbers
| Metric | What it means |
|--------|---------------|
| **You've spent** | Actual $ spent on Claude API this session |
| **You sent / Without TLDR** | Actual tokens vs what it would have been |
| **TLDR saved you** | Money saved by compressing file reads |
| **File reads X → Y** | Raw file tokens compressed to TLDR summary |
| **Cache hits** | How often TLDR reuses parsed file results |
| **History sparkline** | Compression % over recent sessions (█ = high) |
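As a worked example of the "You've spent" line: the Notes section below lists per-model rates of Opus $15/1M, Sonnet $3/1M, and Haiku $0.25/1M. A hedged sketch of a cost estimate using those rates (the function name is hypothetical, and real pricing distinguishes input from output tokens, which the listed rates do not):

```python
# Rates per 1M tokens, as listed in the Notes section of this skill.
RATES = {"opus": 15.0, "sonnet": 3.0, "haiku": 0.25}

def estimate_cost(tokens: int, model: str) -> float:
    """Estimate spend in dollars for a token count at the listed rate."""
    return tokens * RATES[model] / 1_000_000

# e.g. the sample session's 1.2M tokens sent, priced at the Sonnet rate:
print(f"${estimate_cost(1_200_000, 'sonnet'):.2f}")
```

The real script presumably also prices received tokens and cache reads, which is why the sample dashboard's total is higher than this single-term estimate.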
## Visual Elements
- **Progress bars** show savings and cache efficiency at a glance
- **Sparklines** show historical trends (█ = high savings, ▁ = low)
- **Colors** indicate status (green = good, yellow = moderate, red = concern)
- **Emojis** distinguish model types (🎭 Opus, 🎵 Sonnet, 🍃 Haiku)
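Sparklines like the History row are simple to reproduce: map each value onto the eight Unicode block characters. A minimal sketch (function name and 0-100 scale are illustrative assumptions, not taken from the actual script):

```python
BLOCKS = "▁▂▃▄▅▆▇█"  # eight Unicode block elements, low to high

def sparkline(values, lo=0.0, hi=100.0):
    """Render values (e.g. compression %) as one block character each."""
    span = hi - lo
    out = []
    for v in values:
        clamped = min(max(v, lo), hi)
        idx = int((clamped - lo) / span * (len(BLOCKS) - 1))
        out.append(BLOCKS[idx])
    return "".join(out)

print(sparkline([100, 30, 45, 95, 35, 90, 80]))
```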
## Notes
- Token savings vary by file size (big files = more savings)
- Cache hit rate starts low, increases as you re-read files
- Cost estimates use: Opus $15/1M, Sonnet $3/1M, Haiku $0.25/1M
- Stats update in real-time as you work

Related Skills
tldr-router
Map code questions to the optimal tldr command by detecting intent and routing to the right analysis layer.
tldr-overview
Get a token-efficient overview of any project using file tree, code structure, and call graph analysis.
tldr-deep
Run full 5-layer analysis (AST, call graph, CFG, DFG, slice) on a specific function for deep debugging or understanding.
tldr-code
Token-efficient code analysis via 5-layer stack (AST, Call Graph, CFG, DFG, PDG). 95% token savings.
workflow-router
Goal-based workflow orchestration - routes tasks to specialist agents based on user goals
wiring
Wiring Verification
websocket-patterns
Connection management, room patterns, reconnection strategies, message buffering, and binary protocol design.
visual-verdict
Screenshot comparison QA for frontend development. Takes a screenshot of the current implementation, scores it across multiple visual dimensions, and returns a structured PASS/REVISE/FAIL verdict with concrete fixes. Use when implementing UI from a design reference or verifying visual correctness.
verification-loop
Comprehensive verification system covering build, types, lint, tests, security, and diff review before a PR.
vector-db-patterns
Embedding strategies, ANN algorithms, hybrid search, RAG chunking strategies, and reranking for semantic search and retrieval.
variant-analysis
Find similar vulnerabilities across a codebase after discovering one instance. Uses pattern matching, AST search, Semgrep/CodeQL queries, and manual tracing to propagate findings. Adapted from Trail of Bits. Use after finding a bug to check if the same pattern exists elsewhere.
validate-agent
Validation agent that validates plan tech choices against current best practices