context-optimizer
Advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window. Features intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and hierarchical memory system with context archive. Logs optimization events to chat.
Best use case
context-optimizer is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using context-optimizer should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/context-optimizer/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How context-optimizer Compares
| Feature / Agent | context-optimizer | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Advanced context management with auto-compaction and dynamic context optimization for DeepSeek's 64k context window. Features intelligent compaction (merging, summarizing, extracting), query-aware relevance scoring, and hierarchical memory system with context archive. Logs optimization events to chat.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Context Pruner
Advanced context management optimized for DeepSeek's 64k context window. Provides intelligent pruning, compression, and token optimization to prevent context overflow while preserving important information.
## Key Features
- **DeepSeek-optimized**: Specifically tuned for 64k context window
- **Adaptive pruning**: Multiple strategies based on context usage
- **Semantic deduplication**: Removes redundant information
- **Priority-aware**: Preserves high-value messages
- **Token-efficient**: Minimizes token overhead
- **Real-time monitoring**: Continuous context health tracking
## Quick Start
### Auto-compaction with dynamic context:
```javascript
import { createContextPruner } from './lib/index.js';
const pruner = createContextPruner({
  contextLimit: 64000,        // DeepSeek's limit
  autoCompact: true,          // Enable automatic compaction
  dynamicContext: true,       // Enable dynamic relevance-based context
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  queryAwareCompaction: true, // Compact based on current query relevance
});
await pruner.initialize();
// Process messages with auto-compaction and dynamic context
const processed = await pruner.processMessages(messages, currentQuery);
// Get context health status
const status = pruner.getStatus();
console.log(`Context health: ${status.health}, Relevance scores: ${status.relevanceScores}`);
// Manual compaction when needed
const compacted = await pruner.autoCompact(messages, currentQuery);
```
### Archive Retrieval (Hierarchical Memory):
```javascript
// When something isn't in current context, search archive
const archiveResult = await pruner.retrieveFromArchive('query about previous conversation', {
  maxContextTokens: 1000,
  minRelevance: 0.4,
});

if (archiveResult.found) {
  // Add relevant snippets to current context
  const archiveContext = archiveResult.snippets.join('\n\n');
  // Use archiveContext in your prompt
  console.log(`Found ${archiveResult.sources.length} relevant sources`);
  console.log(`Retrieved ${archiveResult.totalTokens} tokens from archive`);
}
```
## Auto-Compaction Strategies
1. **Semantic Compaction**: Merges similar messages instead of removing them
2. **Temporal Compaction**: Summarizes older conversations by time windows
3. **Extractive Compaction**: Extracts key information from verbose messages
4. **Adaptive Compaction**: Chooses best strategy based on message characteristics
5. **Dynamic Context**: Filters messages based on relevance to current query
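The strategy list above can be pictured as a dispatch on message characteristics. The sketch below is an illustrative assumption, not the library's actual selection logic; the thresholds, the `timestamp`/`content` fields, and the 4-characters-per-token estimate are all hypothetical.

```javascript
// Hypothetical sketch of how an adaptive compactor might pick a strategy.
// Thresholds and field names are illustrative assumptions only.
function chooseStrategy(message, now = Date.now()) {
  const ageMs = now - message.timestamp;
  const tokenCount = Math.ceil(message.content.length / 4); // rough token estimate

  if (ageMs > 60 * 60 * 1000) return 'temporal'; // old: summarize by time window
  if (tokenCount > 500) return 'extractive';     // verbose: extract key points
  return 'semantic';                             // default: merge near-duplicates
}

const strategy = chooseStrategy({
  content: 'x'.repeat(4000), // long, recent message
  timestamp: Date.now(),
});
console.log(strategy); // 'extractive'
```

The point of the adaptive layer is that no single strategy fits every message: summarizing a fresh, dense message loses detail, while merging stale small talk wastes effort.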
## Dynamic Context Management
- **Query-aware Relevance**: Scores messages based on similarity to current query
- **Relevance Decay**: Relevance scores decay over time for older conversations
- **Adaptive Filtering**: Automatically filters low-relevance messages
- **Priority Integration**: Combines message priority with semantic relevance
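To make the decay concrete, here is a minimal sketch combining similarity with the 0.95-per-step decay from the configuration below. The keyword-overlap similarity is a stand-in assumption; the library's real scoring method is not documented here.

```javascript
// Illustrative query-aware relevance with time decay. The keyword-overlap
// similarity is an assumption standing in for the library's real scorer.
function relevanceScore(message, query, stepsOld, decay = 0.95) {
  const queryWords = new Set(query.toLowerCase().split(/\s+/));
  const msgWords = message.toLowerCase().split(/\s+/);
  const overlap = msgWords.filter((w) => queryWords.has(w)).length;
  const similarity = overlap / Math.max(msgWords.length, 1);
  return similarity * Math.pow(decay, stepsOld); // decay 5% per time step
}

const fresh = relevanceScore('deploy the api server', 'api server deploy', 0);
const stale = relevanceScore('deploy the api server', 'api server deploy', 10);
console.log(fresh > stale); // true: identical content, but older messages score lower
```

Messages whose decayed score falls below `minRelevanceScore` (0.3 by default) become candidates for filtering, regardless of how relevant they once were.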
## Hierarchical Memory System
The context archive provides a RAM vs Storage approach:
- **Current Context (RAM)**: Limited (64k tokens), fast access, auto-compacted
- **Archive (Storage)**: Larger (100MB), slower but searchable
- **Smart Retrieval**: When information isn't in current context, efficiently search archive
- **Selective Loading**: Extract only relevant snippets, not entire documents
- **Automatic Storage**: Compacted content automatically stored in archive
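Selective loading amounts to packing the highest-relevance snippets into a fixed token budget. The following sketch mirrors the `maxContextTokens` and `minRelevance` options from the retrieval example above; the snippet shape and token counts are assumptions for illustration.

```javascript
// Minimal sketch of selective snippet loading under a token budget.
// The candidate shape ({ text, tokens, relevance }) is a hypothetical example.
function selectSnippets(candidates, { maxContextTokens, minRelevance }) {
  const picked = [];
  let used = 0;
  const sorted = [...candidates].sort((a, b) => b.relevance - a.relevance);
  for (const s of sorted) {
    if (s.relevance < minRelevance) break;            // sorted, so the rest are lower too
    if (used + s.tokens > maxContextTokens) continue; // skip snippets that overflow the budget
    picked.push(s.text);
    used += s.tokens;
  }
  return { snippets: picked, totalTokens: used };
}

const result = selectSnippets(
  [
    { text: 'A', tokens: 600, relevance: 0.9 },
    { text: 'B', tokens: 500, relevance: 0.7 }, // would overflow the budget
    { text: 'C', tokens: 300, relevance: 0.5 },
    { text: 'D', tokens: 200, relevance: 0.2 }, // below minRelevance
  ],
  { maxContextTokens: 1000, minRelevance: 0.4 }
);
console.log(result.snippets);    // ['A', 'C']
console.log(result.totalTokens); // 900
```

This is why the archive can be 100MB while retrieval stays cheap: only a bounded, relevance-ranked slice ever re-enters the 64k window.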
## Configuration
```javascript
{
  contextLimit: 64000,                     // DeepSeek's context window
  autoCompact: true,                       // Enable automatic compaction
  compactThreshold: 0.75,                  // Start compacting at 75% usage
  aggressiveCompactThreshold: 0.9,         // Aggressive compaction at 90%
  dynamicContext: true,                    // Enable dynamic context management
  relevanceDecay: 0.95,                    // Relevance decays 5% per time step
  minRelevanceScore: 0.3,                  // Minimum relevance to keep
  queryAwareCompaction: true,              // Compact based on current query relevance
  strategies: ['semantic', 'temporal', 'extractive', 'adaptive'],
  preserveRecent: 10,                      // Always keep last N messages
  preserveSystem: true,                    // Always keep system messages
  minSimilarity: 0.85,                     // Semantic similarity threshold

  // Archive settings
  enableArchive: true,                     // Enable hierarchical memory system
  archivePath: './context-archive',
  archiveSearchLimit: 10,
  archiveMaxSize: 100 * 1024 * 1024,       // 100MB
  archiveIndexing: true,

  // Chat logging
  logToChat: true,                         // Log optimization events to chat
  chatLogLevel: 'brief',                   // 'brief', 'detailed', or 'none'
  chatLogFormat: '📊 {action}: {details}', // Format for chat messages

  // Performance
  batchSize: 5,                            // Messages to process in batch
  maxCompactionRatio: 0.5,                 // Maximum 50% compaction in one pass
}
```
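The two compaction thresholds imply a simple usage check. This sketch shows the assumed relationship between them; the actual library may implement the decision differently.

```javascript
// Assumed mapping from context usage to compaction behavior, based on the
// compactThreshold / aggressiveCompactThreshold settings above.
function compactionMode(usedTokens, config) {
  const usage = usedTokens / config.contextLimit;
  if (usage >= config.aggressiveCompactThreshold) return 'aggressive';
  if (usage >= config.compactThreshold) return 'normal';
  return 'none';
}

const config = { contextLimit: 64000, compactThreshold: 0.75, aggressiveCompactThreshold: 0.9 };
console.log(compactionMode(50000, config)); // 'normal' (50000 / 64000 ≈ 0.78)
```

With the defaults, normal compaction begins around 48,000 tokens and aggressive compaction around 57,600, leaving headroom before the 64k hard limit.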
## Chat Logging
The context optimizer can log events directly to chat:
```javascript
// Example chat log messages:
// 📊 Context optimized: Compacted 15 messages → 8 (47% reduction)
// 📊 Archive search: Found 3 relevant snippets (42% similarity)
// 📊 Dynamic context: Filtered 12 low-relevance messages
// Configure logging:
const pruner = createContextPruner({
  logToChat: true,
  chatLogLevel: 'brief', // Options: 'brief', 'detailed', 'none'
  chatLogFormat: '📊 {action}: {details}',
  // Custom log handler (optional)
  onLog: (level, message, data) => {
    if (level === 'info' && data.action === 'compaction') {
      // Send to chat
      console.log(`🧠 Context optimized: ${message}`);
    }
  },
});
```
## Integration with Clawdbot
Add to your Clawdbot config:
```yaml
skills:
context-pruner:
enabled: true
config:
contextLimit: 64000
autoPrune: true
```
The pruner will automatically monitor context usage and apply appropriate pruning strategies to stay within DeepSeek's 64k limit.
Related Skills
geo-optimizer
Optimize content for AI citation (GEO). Use when user says "GEO", "generative engine optimization", "AI citation", "get cited by AI", "AI-friendly content", or creating content for ChatGPT/Claude/Perplexity visibility.
context7
Context7 MCP - Intelligent documentation search and context for any library
context7-2
Fetch up-to-date library documentation via Context7 API. Use PROACTIVELY when: (1) Working with ANY external library (React, Next.js, Supabase, etc.) (2) User asks about library APIs, patterns, or best practices (3) Implementing features that rely on third-party packages (4) Debugging library-specific issues (5) Need current documentation beyond training data cutoff Always prefer this over guessing library APIs or using outdated knowledge.
context-recovery
No description provided.
context-manager
AI-powered context management for Clawdbot/Moltbot sessions
context-compressor
No description provided.
portfolio-watcher
Monitor stock/crypto holdings, get price alerts, track portfolio performance
portainer
Control Docker containers and stacks via Portainer API. List containers, start/stop/restart, view logs, and redeploy stacks from git.
portable-tools
Build cross-device tools without hardcoding paths or account names
polymarket
Trade prediction markets on Polymarket. Analyze odds, place bets, track positions, automate alerts, and maximize returns from event outcomes. Covers sports, politics, entertainment, and more.
polymarket-traiding-bot
No description provided.
polymarket-analysis
Analyze Polymarket prediction markets for trading edges. Pair Cost arbitrage, whale tracking, sentiment analysis, momentum signals, user profile tracking. No execution.