agent-sort
Build an evidence-backed ECC install plan for a specific repo by sorting skills, commands, rules, hooks, and extras into DAILY vs LIBRARY buckets using parallel repo-aware review passes. Use when ECC should be trimmed to what a project actually needs instead of loading the full bundle.
Best use case
agent-sort is best used when you need a repeatable AI agent workflow rather than a one-off prompt: it sorts ECC skills, commands, rules, hooks, and extras into DAILY and LIBRARY buckets based on evidence gathered from the target repository, so a project loads only what it actually needs.
Teams using agent-sort should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/agent-sort/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
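The manual steps above can be sketched in shell. This is a minimal illustration, not an official installer: `SKILL_SRC` stands in for the SKILL.md you download from the GitHub link at the top of the page, and `PROJECT` stands in for your project root.

```shell
# Stand-in for the downloaded SKILL.md (in practice, fetch it from GitHub).
SKILL_SRC=$(mktemp)
printf '# Agent Sort\n' > "$SKILL_SRC"

# Stand-in for your project root.
PROJECT=$(mktemp -d)

# Create the expected skill directory and place the file where the
# agent auto-discovers it.
mkdir -p "$PROJECT/.claude/skills/agent-sort"
cp "$SKILL_SRC" "$PROJECT/.claude/skills/agent-sort/SKILL.md"
```

After restarting the agent, the skill is picked up from that path with no further registration.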
How agent-sort Compares
| Feature / Agent | agent-sort | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
agent-sort builds an evidence-backed ECC install plan for a specific repository. It sorts skills, commands, rules, hooks, and extras into DAILY and LIBRARY buckets using parallel repo-aware review passes, so ECC is trimmed to what the project actually needs instead of loading the full bundle.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
ChatGPT vs Claude for Agent Skills
Compare ChatGPT and Claude for AI agent skills across coding, writing, research, and reusable workflow execution.
SKILL.md Source
# Agent Sort

Use this skill when a repo needs a project-specific ECC surface instead of the default full install. The goal is not to guess what "feels useful." The goal is to classify ECC components with evidence from the actual codebase.

## When to Use

- A project only needs a subset of ECC and full installs are too noisy
- The repo stack is clear, but nobody wants to hand-curate skills one by one
- A team wants a repeatable install decision backed by grep evidence instead of opinion
- You need to separate always-loaded daily workflow surfaces from searchable library/reference surfaces
- A repo has drifted into the wrong language, rule, or hook set and needs cleanup

## Non-Negotiable Rules

- Use the current repository as the source of truth, not generic preferences
- Every DAILY decision must cite concrete repo evidence
- LIBRARY does not mean "delete"; it means "keep accessible without loading by default"
- Do not install hooks, rules, or scripts that the current repo cannot use
- Prefer ECC-native surfaces; do not introduce a second install system

## Outputs

Produce these artifacts in order:

1. DAILY inventory
2. LIBRARY inventory
3. install plan
4. verification report
5. optional `skill-library` router if the project wants one

## Classification Model

Use two buckets only:

- `DAILY`
  - should load every session for this repo
  - strongly matched to the repo's language, framework, workflow, or operator surface
- `LIBRARY`
  - useful to retain, but not worth loading by default
  - should remain reachable through search, router skill, or selective manual use

## Evidence Sources

Use repo-local evidence before making any classification:

- file extensions
- package managers and lockfiles
- framework configs
- CI and hook configs
- build/test scripts
- imports and dependency manifests
- repo docs that explicitly describe the stack

Useful commands include:

```bash
rg --files
rg -n "typescript|react|next|supabase|django|spring|flutter|swift"
cat package.json
cat pyproject.toml
cat Cargo.toml
cat pubspec.yaml
cat go.mod
```

## Parallel Review Passes

If parallel subagents are available, split the review into these passes:

1. Agents - classify `agents/*`
2. Skills - classify `skills/*`
3. Commands - classify `commands/*`
4. Rules - classify `rules/*`
5. Hooks and scripts - classify hook surfaces, MCP health checks, helper scripts, and OS compatibility
6. Extras - classify contexts, examples, MCP configs, templates, and guidance docs

If subagents are not available, run the same passes sequentially.

## Core Workflow

### 1. Read the repo

Establish the real stack before classifying anything:

- languages in use
- frameworks in use
- primary package manager
- test stack
- lint/format stack
- deployment/runtime surface
- operator integrations already present

### 2. Build the evidence table

For every candidate surface, record:

- component path
- component type
- proposed bucket
- repo evidence
- short justification

Use this format:

```text
skills/frontend-patterns | skill | DAILY | 84 .tsx files, next.config.ts present | core frontend stack
skills/django-patterns | skill | LIBRARY | no .py files, no pyproject.toml | not active in this repo
rules/typescript/* | rules | DAILY | package.json + tsconfig.json | active TS repo
rules/python/* | rules | LIBRARY | zero Python source files | keep accessible only
```

### 3. Decide DAILY vs LIBRARY

Promote to `DAILY` when:

- the repo clearly uses the matching stack
- the component is general enough to help every session
- the repo already depends on the corresponding runtime or workflow

Demote to `LIBRARY` when:

- the component is off-stack
- the repo might need it later, but not every day
- it adds context overhead without immediate relevance

### 4. Build the install plan

Translate the classification into action:

- DAILY skills -> install or keep in `.claude/skills/`
- DAILY commands -> keep as explicit shims only if still useful
- DAILY rules -> install only matching language sets
- DAILY hooks/scripts -> keep only compatible ones
- LIBRARY surfaces -> keep accessible through search or `skill-library`

If the repo already uses selective installs, update that plan instead of creating another system.

### 5. Create the optional library router

If the project wants a searchable library surface, create:

- `.claude/skills/skill-library/SKILL.md`

That router should contain:

- a short explanation of DAILY vs LIBRARY
- grouped trigger keywords
- where the library references live

Do not duplicate every skill body inside the router.

### 6. Verify the result

After the plan is applied, verify:

- every DAILY file exists where expected
- stale language rules were not left active
- incompatible hooks were not installed
- the resulting install actually matches the repo stack

Return a compact report with:

- DAILY count
- LIBRARY count
- removed stale surfaces
- open questions

## Handoffs

If the next step is interactive installation or repair, hand off to:

- `configure-ecc`

If the next step is overlap cleanup or catalog review, hand off to:

- `skill-stocktake`

If the next step is broader context trimming, hand off to:

- `strategic-compact`

## Output Format

Return the result in this order:

```text
STACK - language/framework/runtime summary
DAILY - always-loaded items with evidence
LIBRARY - searchable/reference items with evidence
INSTALL PLAN - what should be installed, removed, or routed
VERIFICATION - checks run and remaining gaps
```
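The evidence-table step in the SKILL.md above can be sketched as a small shell check. This is an illustrative sketch only, not the skill's full heuristic: the two marker files below are a hypothetical subset of the evidence sources the skill actually consults.

```shell
# Bucket two language rule sets into DAILY vs LIBRARY based on marker
# files actually present in the repo, mirroring the evidence table.
classify() {
  repo="$1"
  if [ -f "$repo/package.json" ]; then
    echo "rules/typescript/* | rules | DAILY | package.json present"
  else
    echo "rules/typescript/* | rules | LIBRARY | no package.json"
  fi
  if [ -f "$repo/pyproject.toml" ]; then
    echo "rules/python/* | rules | DAILY | pyproject.toml present"
  else
    echo "rules/python/* | rules | LIBRARY | no pyproject.toml"
  fi
}

repo=$(mktemp -d)
touch "$repo/package.json"   # simulate a TypeScript-only repo
classify "$repo"
```

On the simulated repo this prints a DAILY row for the TypeScript rules and a LIBRARY row for the Python rules, each with its evidence column, which is exactly the shape the skill's evidence table expects.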
Related Skills
workspace-surface-audit
Audit the active repo, MCP servers, plugins, connectors, env surfaces, and harness setup, then recommend the highest-value ECC-native skills, hooks, agents, and operator workflows. Use when the user wants help setting up Claude Code or understanding what capabilities are actually available in their environment.
ui-demo
Record polished UI demo videos using Playwright. Use when the user asks to create a demo, walkthrough, screen recording, or tutorial video of a web application. Produces WebM videos with visible cursor, natural pacing, and professional feel.
token-budget-advisor
Offers the user an informed choice about how much response depth to consume before answering. Use this skill when the user explicitly wants to control response length, depth, or token budget. TRIGGER when: "token budget", "token count", "token usage", "token limit", "response length", "answer depth", "short version", "brief answer", "detailed answer", "exhaustive answer", "respuesta corta vs larga", "cuántos tokens", "ahorrar tokens", "responde al 50%", "dame la versión corta", "quiero controlar cuánto usas", or clear variants where the user is explicitly asking to control answer size or depth. DO NOT TRIGGER when: user has already specified a level in the current session (maintain it), the request is clearly a one-word answer, or "token" refers to auth/session/payment tokens rather than response size.
skill-comply
Visualize whether skills, rules, and agent definitions are actually followed — auto-generates scenarios at 3 prompt strictness levels, runs agents, classifies behavioral sequences, and reports compliance rates with full tool call timelines
santa-method
Multi-agent adversarial verification with convergence loop. Two independent review agents must both pass before output ships.
safety-guard
Use this skill to prevent destructive operations when working on production systems or running agents autonomously.
repo-scan
Cross-stack source code asset audit — classifies every file, detects embedded third-party libraries, and delivers actionable four-level verdicts per module with interactive HTML reports.
project-flow-ops
Operate execution flow across GitHub and Linear by triaging issues and pull requests, linking active work, and keeping GitHub public-facing while Linear remains the internal execution layer. Use when the user wants backlog control, PR triage, or GitHub-to-Linear coordination.
product-lens
Use this skill to validate the "why" before building, run product diagnostics, and pressure-test product direction before the request becomes an implementation contract.
openclaw-persona-forge
Forges a complete lobster-soul package for an OpenClaw AI Agent. Based on user preferences or a random gacha draw, it outputs an identity positioning, a soul description (SOUL.md), in-character baseline rules, a name, and avatar image-generation prompts. If the current environment provides an approved image-generation skill, it can automatically produce avatar images in a unified style. Use when the user wants to create, design, or customize an OpenClaw lobster soul. Not for: fine-tuning an existing SOUL.md, character design for non-OpenClaw platforms, or purely tool-like agents with no personality. Trigger words: lobster soul, OpenClaw soul, lobster character, lobster positioning, lobster murder-mystery role, lobster game character, lobster NPC, lobster personality, lobster backstory, random lobster, lobster SOUL, gacha.
manim-video
Build reusable Manim explainers for technical concepts, graphs, system diagrams, and product walkthroughs, then hand off to the wider ECC video stack if needed. Use when the user wants a clean animated explainer rather than a generic talking-head script.
laravel-plugin-discovery
Discover and evaluate Laravel packages via LaraPlugins.io MCP. Use when the user wants to find plugins, check package health, or assess Laravel/PHP compatibility.