tldr-stats

Show full session token usage, costs, TLDR savings, and hook activity

422 stars

Best use case

tldr-stats is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using tldr-stats can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/tldr-stats/SKILL.md --create-dirs "https://raw.githubusercontent.com/vibeeval/vibecosystem/main/skills/tldr-stats/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/tldr-stats/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How tldr-stats Compares

| Feature | tldr-stats | Standard Approach |
|---------|------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Show full session token usage, costs, TLDR savings, and hook activity

Where can I find the source code?

The source code lives in the vibeeval/vibecosystem repository on GitHub.

SKILL.md Source

# TLDR Stats Skill

Show a beautiful dashboard with token usage, actual API costs, TLDR savings, and hook activity.

## When to Use
- See how much TLDR is saving you in real $ terms
- Check total session token usage and costs
- Before/after comparisons of TLDR effectiveness
- Debug whether TLDR/hooks are being used
- See which model is being used

## Instructions

**IMPORTANT:** Run the script AND display the output to the user.

1. Run the stats script:
```bash
python3 $CLAUDE_PROJECT_DIR/.claude/scripts/tldr_stats.py
```

2. **Copy the full output into your response** so the user sees the dashboard directly in the chat. Do not just run the command silently; the user wants to see the stats.

### Sample Output

```
╔══════════════════════════════════════════════════════════════╗
║  📊 Session Stats                                            ║
╚══════════════════════════════════════════════════════════════╝

  You've spent  $96.52  this session

  Tokens Used
        1.2M sent to Claude
      416.3K received back
       97.8K from prompt cache (8% reused)

  TLDR Savings

    You sent:               1.2M
    Without TLDR:           2.5M

    💰 TLDR saved you ~$18.83
    (Without TLDR: $115.35 → With TLDR: $96.52)

    File reads: 1.3M → 20.9K █████████░ 98% smaller

  TLDR Cache
    Re-reading the same file? TLDR remembers it.
    █████░░░░░░░░░░ 37% cache hits
    (35 reused / 60 parsed fresh)

  Hooks: 553 calls (✓ all ok)
  History: █▃▄ ▇▃▇▆ avg 84% compression
  Daemon: 24m up │ 3 sessions
```

## Understanding the Numbers

| Metric | What it means |
|--------|---------------|
| **You've spent** | Actual $ spent on Claude API this session |
| **You sent / Without TLDR** | Actual tokens vs what it would have been |
| **TLDR saved you** | Money saved by compressing file reads |
| **File reads X → Y** | Raw file tokens compressed to TLDR summary |
| **Cache hits** | How often TLDR reuses parsed file results |
| **History sparkline** | Compression % over recent sessions (█ = high) |
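
As a rough illustration, the percentages and savings in the table reduce to simple arithmetic. This is a sketch only, not the script's actual implementation, and the function names are hypothetical:

```python
def pct_smaller(raw_tokens: int, tldr_tokens: int) -> int:
    """Compression shown in the 'File reads X -> Y' line."""
    return round(100 * (1 - tldr_tokens / raw_tokens))

def cache_hit_pct(reused: int, fresh: int) -> int:
    """Share of file parses served from the TLDR cache."""
    return round(100 * reused / (reused + fresh))

def dollars_saved(cost_without: float, cost_with: float) -> float:
    """Money saved by compressing file reads."""
    return round(cost_without - cost_with, 2)

# Numbers taken from the sample dashboard above:
print(pct_smaller(1_300_000, 20_900))  # 98   (File reads 1.3M -> 20.9K)
print(cache_hit_pct(35, 60))           # 37   (35 reused / 60 parsed fresh)
print(dollars_saved(115.35, 96.52))    # 18.83
```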

## Visual Elements

- **Progress bars** show savings and cache efficiency at a glance
- **Sparklines** show historical trends (█ = high savings, ▁ = low)
- **Colors** indicate status (green = good, yellow = moderate, red = concern)
- **Emojis** distinguish model types (🎭 Opus, 🎵 Sonnet, 🍃 Haiku)
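
Sparklines like the history trail are typically built from an eight-glyph lookup table. A minimal sketch of the idea, not the script's actual rendering code:

```python
BLOCKS = "▁▂▃▄▅▆▇█"  # eight bar heights, low to high

def sparkline(values: list[float]) -> str:
    """Map each value to one of eight block glyphs, scaled to the series range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for a flat series
    return "".join(BLOCKS[round((v - lo) / span * 7)] for v in values)

# A compression-history trail like the dashboard's:
print(sparkline([90, 40, 55, 84]))
```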

## Notes

- Token savings vary by file size (big files = more savings)
- Cache hit rate starts low, increases as you re-read files
- Cost estimates use: Opus $15/1M, Sonnet $3/1M, Haiku $0.25/1M
- Stats update in real-time as you work
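
Given the rates in the note above, a cost estimate is a flat per-million-token multiplication. This sketch assumes a single blended rate per model and is not the script's real pricing logic:

```python
# $ per 1M tokens, rates listed in the note above
RATES_PER_MTOK = {"opus": 15.00, "sonnet": 3.00, "haiku": 0.25}

def estimate_cost(model: str, tokens: int) -> float:
    """Flat estimate: tokens priced at the model's per-million rate."""
    return RATES_PER_MTOK[model] * tokens / 1_000_000

# The sample session's 1.2M tokens sent on Sonnet:
print(f"${estimate_cost('sonnet', 1_200_000):.2f}")  # $3.60
```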

Related Skills

All from vibeeval/vibecosystem:

  • tldr-router: Map code questions to the optimal tldr command by detecting intent and routing to the right analysis layer.
  • tldr-overview: Get a token-efficient overview of any project using file tree, code structure, and call graph analysis.
  • tldr-deep: Run full 5-layer analysis (AST, call graph, CFG, DFG, slice) on a specific function for deep debugging or understanding.
  • tldr-code: Token-efficient code analysis via a 5-layer stack (AST, Call Graph, CFG, DFG, PDG); 95% token savings.
  • workflow-router: Goal-based workflow orchestration that routes tasks to specialist agents based on user goals.
  • wiring: Wiring verification.
  • websocket-patterns: Connection management, room patterns, reconnection strategies, message buffering, and binary protocol design.
  • visual-verdict: Screenshot-comparison QA for frontend development. Takes a screenshot of the current implementation, scores it across multiple visual dimensions, and returns a structured PASS/REVISE/FAIL verdict with concrete fixes. Use when implementing UI from a design reference or verifying visual correctness.
  • verification-loop: Comprehensive verification system covering build, types, lint, tests, security, and diff review before a PR.
  • vector-db-patterns: Embedding strategies, ANN algorithms, hybrid search, RAG chunking strategies, and reranking for semantic search and retrieval.
  • variant-analysis: Find similar vulnerabilities across a codebase after discovering one instance, using pattern matching, AST search, Semgrep/CodeQL queries, and manual tracing. Adapted from Trail of Bits. Use after finding a bug to check if the same pattern exists elsewhere.
  • validate-agent: Validation agent that checks a plan's tech choices against current best practices.