context-window-management
Strategies for managing LLM context windows, including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context.
231 stars
Installation
Claude Code / Cursor / Codex
$ curl -o ~/.claude/skills/context-window-management/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/sickn33/context-window-management/SKILL.md"
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/context-window-management/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How context-window-management Compares
| Feature / Agent | context-window-management | Standard Approach |
|---|---|---|
| Platform Support | Multiple agents | Limited / varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Strategies for managing LLM context windows, including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context.
Which AI agents support this skill?
This skill is compatible with multiple AI agents.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Context Window Management

You're a context engineering specialist who has optimized LLM applications handling millions of conversations. You've seen systems hit token limits, suffer context rot, and lose critical information mid-dialogue. You understand that context is a finite resource with diminishing returns. More tokens doesn't mean better results; the art is in curating the right information. You know the serial position effect, the lost-in-the-middle problem, and when to summarize versus when to retrieve. Your cor

## Capabilities

- context-engineering
- context-summarization
- context-trimming
- context-routing
- token-counting
- context-prioritization

## Patterns

### Tiered Context Strategy
Different strategies based on context size

### Serial Position Optimization
Place important content at start and end

### Intelligent Summarization
Summarize by importance, not just recency

## Anti-Patterns

### ❌ Naive Truncation
### ❌ Ignoring Token Costs
### ❌ One-Size-Fits-All

## Related Skills

Works well with: `rag-implementation`, `conversation-memory`, `prompt-caching`, `llm-npc-dialogue`
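The tiered context strategy above can be sketched as a router that escalates from pass-through to trimming to summarization as the conversation grows. This is a minimal illustration, not part of the skill itself: the function names, the 8,000-token budget, and the rough 4-characters-per-token estimate are all assumptions; a production system would use the model's actual tokenizer and limits.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real system should use the model's tokenizer instead.
    return max(1, len(text) // 4)

def choose_strategy(messages: list[dict], budget: int = 8000) -> str:
    """Pick a context-management strategy based on estimated size."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= budget * 0.5:
        return "pass-through"   # plenty of headroom: send everything
    if total <= budget:
        return "trim"           # approaching the limit: drop low-priority turns
    return "summarize"          # over budget: compress older history
```

The thresholds are the tunable part: trimming is cheap but lossy at the margins, while summarization costs an extra model call, so it is reserved for the tier where nothing else fits.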
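Serial position optimization exploits the lost-in-the-middle effect: models attend best to the start and end of the context, so critical items should sit at the edges. A hypothetical sketch, assuming items are already scored for importance (the scoring scheme and function name are illustrative):

```python
def arrange_by_position(items: list[tuple[int, str]]) -> list[tuple[int, str]]:
    """Alternate ranked items between the front and back of the context,
    so the least important content ends up buried in the middle.
    `items` is a list of (importance, text) pairs; higher = more critical."""
    ranked = sorted(items, key=lambda it: it[0], reverse=True)
    front, back = [], []
    for i, item in enumerate(ranked):
        (front if i % 2 == 0 else back).append(item)
    # Most important first, second most important last, worst in the middle.
    return front + back[::-1]
</test>
```

For example, items scored 5, 4, 3, 2, 1 come out ordered 5, 3, 1, 2, 4: the two strongest items occupy the prime first and last slots.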
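Intelligent summarization keys on importance, not just recency: pinned or high-value turns survive verbatim while the rest collapse into a single summary message. A minimal sketch under stated assumptions: the `importance` field, the threshold, and the placeholder `summarize` helper are all hypothetical; a real implementation would call an LLM to produce the summary.

```python
def compress_history(messages: list[dict], keep_importance: int = 3) -> list[dict]:
    """Summarize by importance rather than recency: messages scored at or
    above `keep_importance` are kept verbatim; the rest are folded into
    one summary turn placed ahead of the kept messages."""
    def summarize(msgs: list[dict]) -> str:
        # Placeholder: a real system would call an LLM here.
        return "Summary of %d earlier messages." % len(msgs)

    kept, to_fold = [], []
    for m in messages:
        (kept if m["importance"] >= keep_importance else to_fold).append(m)

    out = []
    if to_fold:
        out.append({"role": "system", "importance": keep_importance,
                    "content": summarize(to_fold)})
    return out + kept
```

This is the inverse of the "Naive Truncation" anti-pattern: a recency-only trim would happily discard a high-importance early message, whereas importance-aware compression keeps it and sacrifices the chit-chat instead.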