context-compressor
Intelligently compress context — conversations, code, logs. Preserve key information while reducing token usage. Auto-detects content type and applies optimal compression.
Best use case
context-compressor is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using context-compressor should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/ctx-compress/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Intelligently compress context — conversations, code, logs. Preserve key information while reducing token usage. Auto-detects content type and applies optimal compression.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Context Compressor
Fit unbounded conversation memory into a limited context window.
Fit more context into fewer tokens. Not truncation — intelligent compression.
## Core Idea
```
Naive truncation:  "This is a long conversation..." → "This is a lo..." (information lost)
Smart compression: "This is a long conversation..." → "Discussed XX, decided on YY" (core preserved)
```
**Different content, different compression strategies:**
- 💬 Conversations → extract decisions, issues, conclusions
- 💻 Code → strip noise, keep the logic
- 📋 Logs → keep only errors and key events
- 📄 Text → summary-style compression
## Command Reference
COMPRESS = python <skill_dir>/scripts/ctxcompress.py
### Compress text
```bash
# from a file
$COMPRESS compress --input conversation.txt --level medium --type chat
# from a pipe
echo "some long text..." | $COMPRESS compress --type text
# inline text
$COMPRESS compress --text "your text here..." --level aggressive
```
**Compression levels:**
- `light` — strip blank lines and comments, keep everything else
- `medium` — denoise + summarize + deduplicate (default)
- `aggressive` — keep only the core information
**Content types (auto-detected or specified manually):**
- `chat` — conversation / chat transcripts
- `code` — source code
- `log` — log files
- `text` — plain text
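As a rough illustration, the `light` level can be pictured as a simple line filter (a sketch only; `light_compress` is a hypothetical name, not the actual ctxcompress.py implementation):

```python
def light_compress(text: str, content_type: str = "code") -> str:
    """Drop blank lines and full-line # comments; keep everything else verbatim."""
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # blank line
        if content_type == "code" and stripped.startswith("#"):
            continue  # full-line comment
        kept.append(line)
    return "\n".join(kept)
```

Because nothing is summarized or deduplicated, this pass is the safest: every non-blank, non-comment line survives unchanged.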
### Extract changes
Compare two files and report the differences:
```bash
$COMPRESS diff --old v1.txt --new v2.txt
```
Output: what was added, what was removed, and what was changed.
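A minimal sketch of how such a diff summary could be built on Python's stdlib `difflib` (the function name and output shape are assumptions, and this covers only added/removed lines, not the full changed-line report):

```python
import difflib

def summarize_diff(old: str, new: str) -> dict:
    """Classify line-level changes between two texts using a unified diff."""
    added, removed = [], []
    for line in difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm=""):
        if line.startswith("+") and not line.startswith("+++"):
            added.append(line[1:])   # strip the leading "+"
        elif line.startswith("-") and not line.startswith("---"):
            removed.append(line[1:])  # strip the leading "-"
    return {"added": added, "removed": removed}
```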
### Extract key information
Pull structured information out of text:
```bash
$COMPRESS extract --input chat_history.txt --type chat
```
Output JSON:
```json
{
  "decisions": ["Decided to use the Flask framework"],
  "errors_and_fixes": [{"error": "port 5000 in use", "fix": "switched to 8080"}],
  "commands": ["pip install flask"],
  "urls": ["https://flask.palletsprojects.com"],
  "key_terms": ["Flask", "deploy", "gunicorn"]
}
```
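Since the skill does pattern matching rather than semantic analysis, the extraction step can be pictured as regex and prefix heuristics like these (an illustrative sketch covering only two of the fields; the real script's rules may differ):

```python
import re

def extract_key_info(text: str) -> dict:
    """Pattern-based extraction: pull URLs and shell-looking commands from text."""
    urls = re.findall(r"https?://\S+", text)
    commands = [
        line.strip().lstrip("$ ")  # drop a leading "$ " prompt if present
        for line in text.splitlines()
        if line.strip().startswith(("$ ", "pip ", "git ", "python "))
    ]
    return {"urls": urls, "commands": commands}
```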
### Chained compression
Compress multiple files into a single summary:
```bash
$COMPRESS chain file1.py file2.log conversation.txt --level medium
```
Useful when a task spans several files and you want the full picture in one pass.
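Chained compression can be pictured as compressing each file and joining the results under per-file headers (a sketch; `chain_compress` and the header format are assumptions, not the tool's actual output):

```python
def chain_compress(paths, compress_fn):
    """Compress each file with compress_fn and join the results into one summary."""
    parts = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            parts.append(f"## {path}\n{compress_fn(f.read())}")
    return "\n\n".join(parts)
```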
### Analyze compression potential
```bash
$COMPRESS stats --input big_file.txt
```
Shows: total line count, blank-line ratio, comment ratio, and the estimated compressible percentage.
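These stats can be estimated from simple line-level counts, roughly like this (a sketch, assuming blank and comment lines are what the `light` level would remove):

```python
def compression_stats(text: str) -> dict:
    """Estimate how compressible a file is from line-level signals."""
    lines = text.splitlines()
    total = len(lines) or 1  # avoid division by zero on empty input
    blank = sum(1 for l in lines if not l.strip())
    comment = sum(1 for l in lines if l.strip().startswith("#"))
    return {
        "total_lines": total,
        "blank_pct": round(100 * blank / total),
        "comment_pct": round(100 * comment / total),
        # blank + comment lines are removable even at the light level
        "est_reduction_pct": round(100 * (blank + comment) / total),
    }
```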
## Agent Usage Scenarios
### Scenario 1: Conversation history compression
When a conversation grows long, compress the old messages to free up context:
```bash
# export the old conversation
cat old_messages.txt | $COMPRESS compress --type chat --level medium > compressed.txt
# replace the original with the compressed version
```
After compression, it keeps:
- ✅ what decisions were made
- ✅ what problems came up and how they were fixed
- ✅ key code snippets
- ❌ small talk, repeated discussion, and intermediate steps are dropped
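One plausible way to picture this kind of chat filtering is a keyword pass that keeps only decision- and issue-looking lines (the marker lists are illustrative assumptions, not the script's actual heuristics):

```python
DECISION_MARKERS = ("decided", "we'll use", "let's go with")
ISSUE_MARKERS = ("error", "fix", "failed")

def compress_chat(text: str) -> str:
    """Keep only chat lines that look like decisions or issues."""
    kept = [
        line for line in text.splitlines()
        if any(m in line.lower() for m in DECISION_MARKERS + ISSUE_MARKERS)
    ]
    return "\n".join(kept)
```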
### Scenario 2: Debug-session compression
20 rounds of trial and error → compressed into one paragraph:
```bash
$COMPRESS compress --input debug_session.log --type log --level aggressive
```
Sample output:
```
📊 Log Summary: 15 errors, 3 warnings, 8 key events
ERROR: Connection refused on port 5432
ERROR: Permission denied /var/log/app.log
(above error repeated 8x)
SUCCESS: Service started on port 8080
```
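Collapsing repeated errors like the "(above error repeated 8x)" line can be sketched with a counter over severity-prefixed lines (illustrative only; the real tool's log parsing may differ):

```python
from collections import Counter

def compress_log(text: str) -> str:
    """Keep only ERROR/WARNING/SUCCESS lines and collapse exact repeats."""
    counts = Counter(
        line.strip() for line in text.splitlines()
        if line.strip().startswith(("ERROR", "WARNING", "SUCCESS"))
    )
    out = []
    for line, n in counts.items():  # Counter preserves first-seen order
        out.append(line if n == 1 else f"{line} (repeated {n}x)")
    return "\n".join(out)
```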
### Scenario 3: Cross-session memory handoff
Carry the essentials of the previous session into a new one:
```bash
$COMPRESS chain session1_notes.md session1_code.py session1_chat.txt --level aggressive
```
Output: a compact context summary to inject into the new session.
### Scenario 4: Code-review compression
Compress a long file to make it quicker to understand:
```bash
$COMPRESS compress --input big_module.py --type code --level medium
```
Strips blank lines, comments, and docstrings, keeping only the logic.
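Docstring removal can be sketched with the stdlib `ast` module on Python 3.9+ (comments disappear as a side effect of parsing, since `ast.unparse` cannot reproduce them); this is an illustration, not the tool's actual approach:

```python
import ast

def strip_docstrings(source: str) -> str:
    """Remove module/class/function docstrings; parsing already discards comments."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            body = node.body
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                node.body = body[1:] or [ast.Pass()]  # keep the body non-empty
    return ast.unparse(tree)
```

Note the trade-off named in the Limitations section below: an AST round-trip also normalizes formatting, so `light` mode is the safer choice when comments matter.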
## Auto Type Detection
The content type is inferred from the file extension and contents:
| Extension | Type |
|-----------|------|
| `.py`, `.js`, `.ts`, `.go` | code |
| `.log`, `.out`, `.err` | log |
| contains "User:" / "Agent:" | chat |
| anything else | text |
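The detection table above translates directly into a small heuristic function (a sketch; the real script may weigh extension and content signals differently):

```python
def detect_type(filename: str, text: str) -> str:
    """Infer content type from file extension first, then content markers."""
    ext = "." + filename.rsplit(".", 1)[-1] if "." in filename else ""
    if ext in {".py", ".js", ".ts", ".go"}:
        return "code"
    if ext in {".log", ".out", ".err"}:
        return "log"
    if "User:" in text or "Agent:" in text:
        return "chat"
    return "text"
```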
## Output Format
Compression results include metadata:
```
🗜️ Compressed: 5234 → 876 chars (83% reduction)
Type: chat | Level: medium
────────────────────────────────────────
## Decisions
- Decided to use PostgreSQL instead of MySQL
## Issues
- Error: port 5432 already in use
- Fix: switched to 5433
## Code
- [342 chars of code]
```
## Directory Structure
```
context-compressor/
├── SKILL.md           # this file
└── scripts/
    └── ctxcompress.py # CLI tool
```
## Design Philosophy
- **Lossy, but core-preserving**: compresses text the way JPEG compresses images
- **Type-aware**: different content gets a different compression strategy
- **Composable**: supports pipes, chained compression, and information extraction
- **Zero dependencies**: pure Python, no extra libraries
## Limitations
- Not semantic compression (it matches patterns; it does not understand meaning)
- Chat compression relies on format recognition (nonstandard formats may fail)
- Code compression may drop important comments (`light` mode is safer)
- No decompression (compression is an irreversible, lossy process)