when-analyzing-skill-gaps-use-skill-gap-analyzer
Analyze skill library to identify coverage gaps, redundant overlaps, optimization opportunities, and provide recommendations for skill portfolio improvement
Best use case
when-analyzing-skill-gaps-use-skill-gap-analyzer is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that want to analyze a skill library for coverage gaps, redundant overlaps, and optimization opportunities, and to get actionable recommendations for improving the skill portfolio.
Users can expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "when-analyzing-skill-gaps-use-skill-gap-analyzer" skill to help with this workflow task. Context: Analyze skill library to identify coverage gaps, redundant overlaps, optimization opportunities, and provide recommendations for skill portfolio improvement
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/when-analyzing-skill-gaps-use-skill-gap-analyzer/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Analyze skill library to identify coverage gaps, redundant overlaps, optimization opportunities, and provide recommendations for skill portfolio improvement
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Skill Gap Analyzer
**Purpose:** Perform comprehensive analysis of skill library to identify missing capabilities, redundant functionality, optimization opportunities, and provide actionable recommendations for skill portfolio improvement.
## When to Use This Skill
- When building a new skill library
- Quarterly skill portfolio reviews
- Before large refactoring efforts
- When considering new skill additions
- After major project pivots
- When optimizing resource allocation
## Analysis Dimensions
### 1. Coverage Gap Analysis
- Domain coverage mapping
- Missing capability identification
- Use case scenario testing
- Workflow completeness assessment
- Integration point analysis
### 2. Redundancy Detection
- Duplicate functionality identification
- Overlapping capability mapping
- Consolidation opportunity analysis
- Version conflict detection
- Naming collision identification
### 3. Optimization Opportunities
- Under-utilized skill detection
- Over-complex skill identification
- Composability improvement suggestions
- Dependency optimization
- Performance bottleneck analysis
### 4. Usage Pattern Analysis
- Frequency metrics
- Co-occurrence patterns
- Success rate tracking
- Token efficiency measurement
- Agent utilization patterns
### 5. Recommendation Generation
- Prioritized action items
- Consolidation strategies
- New skill proposals
- Deprecation candidates
- Restructuring plans
## Execution Process
### Phase 1: Library Inventory
```bash
# Initialize analysis session
npx claude-flow@alpha hooks pre-task --description "Analyzing skill library gaps"
# Scan skill directories
find ~/.claude/skills -name "SKILL.md" -o -name "*.skill.md"
```
**Inventory Script:**
```javascript
// Note: findSkillFiles, parseYAMLFrontmatter, and extractCategory are assumed
// helpers (file discovery, frontmatter parsing, path-based categorization).
function inventorySkills(skillDirectory) {
  const inventory = {
    totalSkills: 0,
    categories: {},
    capabilities: {},
    agents: {},
    complexity: {},
    tags: {}
  };

  // Parse each SKILL.md file
  const skillFiles = findSkillFiles(skillDirectory);
  for (const file of skillFiles) {
    const metadata = parseYAMLFrontmatter(file);
    inventory.totalSkills++;

    // Categorize by path
    const category = extractCategory(file);
    inventory.categories[category] = (inventory.categories[category] || 0) + 1;

    // Track capabilities
    const capabilities = extractCapabilities(metadata.description);
    capabilities.forEach(cap => {
      inventory.capabilities[cap] = inventory.capabilities[cap] || [];
      inventory.capabilities[cap].push(metadata.name);
    });

    // Track required agents
    if (metadata.agents_required) {
      metadata.agents_required.forEach(agent => {
        inventory.agents[agent] = (inventory.agents[agent] || 0) + 1;
      });
    }

    // Track complexity
    const complexity = metadata.complexity || 'UNKNOWN';
    inventory.complexity[complexity] = (inventory.complexity[complexity] || 0) + 1;

    // Track tags
    if (metadata.tags) {
      metadata.tags.forEach(tag => {
        inventory.tags[tag] = (inventory.tags[tag] || 0) + 1;
      });
    }
  }

  return inventory;
}

function extractCapabilities(description) {
  // Extract action verbs and key nouns
  const verbs = description.match(/\b(analyz|creat|generat|optimiz|manag|coordinat|orchestrat|deploy|monitor|test|review|document|integrat|automat|validat|secur|perform|debug|refactor|migrat|transform)\w+/gi) || [];
  return [...new Set(verbs.map(v => v.toLowerCase()))]; // Deduplicate
}
```
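A quick way to sanity-check the verb extractor is to run it on a sample description. The snippet below re-states `extractCapabilities` with a trimmed verb list and a hypothetical input; note that inflected forms such as "analyzes" and "analyzing" survive as distinct capability keys, which is worth normalizing if exact-match lookups matter later.

```javascript
// Trimmed copy of the extractor above, for a quick local check.
function extractCapabilities(description) {
  const verbs = description.match(/\b(analyz|optimiz|monitor)\w+/gi) || [];
  return [...new Set(verbs.map(v => v.toLowerCase()))]; // deduplicate
}

const caps = extractCapabilities("Analyzes gaps, optimizing overlap, monitoring usage");
console.log(caps); // → ["analyzes", "optimizing", "monitoring"]
```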
**Store Inventory:**
```bash
npx claude-flow@alpha memory store --key "gap-analysis/inventory" --value "{
  \"totalSkills\": <count>,
  \"categories\": {...},
  \"capabilities\": {...},
  \"timestamp\": \"<ISO8601>\"
}"
```
### Phase 2: Coverage Gap Detection
**Domain Coverage Matrix:**
```javascript
function analyzeCoverageGaps(inventory, requiredDomains) {
  const gaps = [];

  // Define comprehensive domain requirements
  const domains = {
    "Development": [
      "code-generation", "testing", "debugging", "refactoring",
      "documentation", "code-review", "architecture"
    ],
    "DevOps": [
      "deployment", "monitoring", "ci-cd", "infrastructure",
      "security", "scaling", "backup-recovery"
    ],
    "Project Management": [
      "planning", "estimation", "tracking", "reporting",
      "risk-management", "stakeholder-communication"
    ],
    "Data": [
      "data-analysis", "data-transformation", "data-validation",
      "data-migration", "data-visualization"
    ],
    "AI/ML": [
      "model-training", "inference", "optimization",
      "evaluation", "deployment", "monitoring"
    ],
    "Integration": [
      "api-integration", "webhook-handling", "event-processing",
      "message-queue", "service-mesh"
    ]
  };

  // Check coverage for each domain
  for (const [domain, capabilities] of Object.entries(domains)) {
    const coverage = capabilities.map(cap => {
      const covered = inventory.capabilities[cap]?.length > 0;
      const skills = inventory.capabilities[cap] || [];
      return { capability: cap, covered, skills };
    });

    const missingCaps = coverage.filter(c => !c.covered);
    if (missingCaps.length > 0) {
      gaps.push({
        domain: domain,
        coverage: ((capabilities.length - missingCaps.length) / capabilities.length * 100).toFixed(1) + "%",
        missingCapabilities: missingCaps.map(c => c.capability),
        priority: calculatePriority(domain, missingCaps.length, capabilities.length)
      });
    }
  }

  return gaps;
}

function calculatePriority(domain, missingCount, totalCount) {
  const coverageRatio = 1 - (missingCount / totalCount);
  const domainImportance = {
    "Development": 1.0,
    "DevOps": 0.9,
    "Project Management": 0.7,
    "Data": 0.8,
    "AI/ML": 0.8,
    "Integration": 0.9
  };
  const score = coverageRatio * (domainImportance[domain] || 0.5);

  if (score < 0.3) return "critical";
  if (score < 0.6) return "high";
  if (score < 0.8) return "medium";
  return "low";
}
```
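Tracing `calculatePriority` by hand makes the thresholds concrete. A minimal sketch, re-stating the scoring logic above with a shortened importance table ("Marketing" is a hypothetical domain used to show the default path):

```javascript
// Copy of the priority scoring above, with a trimmed importance table.
function calculatePriority(domain, missingCount, totalCount) {
  const coverageRatio = 1 - missingCount / totalCount;
  const domainImportance = { "Development": 1.0, "DevOps": 0.9 };
  const score = coverageRatio * (domainImportance[domain] || 0.5);
  if (score < 0.3) return "critical";
  if (score < 0.6) return "high";
  if (score < 0.8) return "medium";
  return "low";
}

// Development missing 3 of 7 capabilities:
// coverageRatio = 1 - 3/7 ≈ 0.571, score ≈ 0.571 → "high"
console.log(calculatePriority("Development", 3, 7)); // → "high"

// An unknown domain falls back to importance 0.5, so even full
// coverage scores 0.5 and can never reach "medium" or "low".
console.log(calculatePriority("Marketing", 0, 5)); // → "high"
```

The fallback behavior is worth noting when adding domains: any domain missing from `domainImportance` is permanently pinned at "high" or worse.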
**Use Case Scenario Testing:**
```javascript
function testScenarioCoverage(inventory) {
  const scenarios = [
    {
      name: "Full-stack web app development",
      requiredCapabilities: [
        "code-generation", "testing", "database-design",
        "api-integration", "deployment", "monitoring"
      ]
    },
    {
      name: "ML model training and deployment",
      requiredCapabilities: [
        "data-analysis", "model-training", "evaluation",
        "optimization", "deployment", "monitoring"
      ]
    },
    {
      name: "GitHub workflow automation",
      requiredCapabilities: [
        "code-review", "testing", "ci-cd", "release-management",
        "issue-tracking", "documentation"
      ]
    },
    {
      name: "Prompt engineering and optimization",
      requiredCapabilities: [
        "prompt-analysis", "optimization", "testing",
        "documentation", "version-control"
      ]
    }
  ];

  const scenarioResults = scenarios.map(scenario => {
    const coverage = scenario.requiredCapabilities.map(cap => ({
      capability: cap,
      covered: inventory.capabilities[cap]?.length > 0,
      skills: inventory.capabilities[cap] || []
    }));

    const coveragePercent = (coverage.filter(c => c.covered).length /
      coverage.length * 100).toFixed(1);

    return {
      scenario: scenario.name,
      coverage: coveragePercent + "%",
      missing: coverage.filter(c => !c.covered).map(c => c.capability),
      canExecute: coverage.every(c => c.covered)
    };
  });

  return scenarioResults;
}
```
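The per-scenario arithmetic can be exercised standalone with a toy inventory. The skill names and the scenario below are hypothetical, but the coverage math is the same as in `testScenarioCoverage`:

```javascript
// Toy inventory: two capabilities covered, one missing.
const inventory = {
  capabilities: {
    "testing": ["unit-test-runner"],
    "deployment": ["deploy-helper"]
  }
};

const scenario = {
  name: "Ship a bug fix",
  requiredCapabilities: ["testing", "deployment", "monitoring"]
};

const coverage = scenario.requiredCapabilities.map(cap => ({
  capability: cap,
  covered: (inventory.capabilities[cap] || []).length > 0
}));
const percent = (coverage.filter(c => c.covered).length / coverage.length * 100).toFixed(1);
const missing = coverage.filter(c => !c.covered).map(c => c.capability);

console.log(percent + "%", missing); // → "66.7%" ["monitoring"]
```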
### Phase 3: Redundancy Detection
**Overlap Analysis:**
```javascript
// Note: loadSkillDetails, generateConsolidationRecommendation, and
// findCommonWords are assumed helpers.
function detectRedundancy(inventory) {
  const redundancies = [];

  // Find capabilities handled by several skills
  for (const [capability, skills] of Object.entries(inventory.capabilities)) {
    if (skills.length > 2) {
      // Analyze actual overlap
      const skillDetails = skills.map(name => loadSkillDetails(name));
      const overlap = analyzeOverlap(skillDetails);
      if (overlap.percentage > 70) {
        redundancies.push({
          capability: capability,
          skillCount: skills.length,
          skills: skills,
          overlapPercentage: overlap.percentage,
          recommendation: generateConsolidationRecommendation(skillDetails)
        });
      }
    }
  }

  // Naming-collision detection is left as a sketch: the inventory above maps
  // categories to counts, so a name-pattern pass would need the raw file list.

  return redundancies;
}

function analyzeOverlap(skills) {
  // Compare descriptions, capabilities, processes
  const descriptions = skills.map(s => s.description);
  const commonWords = findCommonWords(descriptions);

  // Jaccard-style similarity: shared words over all distinct words
  const allWords = new Set(descriptions.flatMap(d => d.split(/\s+/)));
  const overlap = commonWords.size / allWords.size * 100;

  return { percentage: overlap, commonWords: Array.from(commonWords) };
}
```
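For exactly two descriptions, the similarity above reduces to plain Jaccard on word sets: words appearing in both, divided by all distinct words in either. A self-contained sketch with invented example strings:

```javascript
// Pairwise Jaccard similarity on word sets, as a percentage.
function jaccard(a, b) {
  const setA = new Set(a.toLowerCase().split(/\s+/));
  const setB = new Set(b.toLowerCase().split(/\s+/));
  const intersection = [...setA].filter(w => setB.has(w));
  const union = new Set([...setA, ...setB]);
  return intersection.length / union.size * 100;
}

// Intersection {review, requests} = 2, union = 4 → 50%
console.log(jaccard("review pull requests", "review merge requests")); // → 50
```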
### Phase 4: Optimization Opportunities
**Researcher Agent Task:**
```bash
# Spawn researcher agent for optimization analysis
# Agent instructions:
# 1. Analyze usage patterns from memory
# 2. Identify under-utilized skills (low frequency)
# 3. Identify over-complex skills (high token cost, low success rate)
# 4. Suggest composability improvements
# 5. Recommend dependency optimizations
# 6. Store findings in memory
npx claude-flow@alpha memory store --key "gap-analysis/optimization" --value "{
  \"underutilized\": [...],
  \"overcomplicated\": [...],
  \"composability\": [...],
  \"dependencies\": [...]
}"
```
**Optimization Detection:**
```javascript
// Note: identifyComposablePatterns is an assumed helper.
function identifyOptimizations(inventory, usageMetrics) {
  const optimizations = [];

  // Under-utilized skills
  const underutilized = usageMetrics.filter(m =>
    m.frequency < 0.05 &&  // Less than 5% usage
    m.lastUsed > 90        // Days since last use
  ).map(m => ({
    skill: m.name,
    frequency: m.frequency,
    lastUsed: m.lastUsed + " days ago",
    recommendation: "Review for deprecation or promotion"
  }));

  optimizations.push({
    type: "under-utilized",
    count: underutilized.length,
    skills: underutilized
  });

  // Over-complex skills
  const overcomplex = usageMetrics.filter(m =>
    m.avgTokens > 5000 &&  // High token usage
    m.successRate < 0.7    // Low success rate
  ).map(m => ({
    skill: m.name,
    avgTokens: m.avgTokens,
    successRate: (m.successRate * 100).toFixed(1) + "%",
    recommendation: "Break into smaller skills or simplify"
  }));

  optimizations.push({
    type: "over-complex",
    count: overcomplex.length,
    skills: overcomplex
  });

  // Composability improvements
  const composable = identifyComposablePatterns(inventory);
  optimizations.push({
    type: "composability",
    count: composable.length,
    opportunities: composable
  });

  return optimizations;
}
```
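The two filters can be exercised with sample metrics; all numbers here are invented for illustration. `legacy-converter` trips the under-utilized rule and `full-stack-architect` the over-complex rule, matching the concrete example later in this document:

```javascript
// Hypothetical usage metrics for three skills.
const usageMetrics = [
  { name: "legacy-converter", frequency: 0.02, lastUsed: 127, avgTokens: 1200, successRate: 0.9 },
  { name: "full-stack-architect", frequency: 0.4, lastUsed: 2, avgTokens: 8743, successRate: 0.64 },
  { name: "code-review", frequency: 0.6, lastUsed: 1, avgTokens: 2100, successRate: 0.95 }
];

// Same thresholds as identifyOptimizations above.
const underutilized = usageMetrics.filter(m => m.frequency < 0.05 && m.lastUsed > 90);
const overcomplex = usageMetrics.filter(m => m.avgTokens > 5000 && m.successRate < 0.7);

console.log(underutilized.map(m => m.name)); // → ["legacy-converter"]
console.log(overcomplex.map(m => m.name));   // → ["full-stack-architect"]
```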
### Phase 5: Recommendation Generation
**Report Format:**
```markdown
## Skill Gap Analysis Report
**Date:** <timestamp>
**Total Skills Analyzed:** <count>
**Analysis Duration:** <time>
---
## Executive Summary
### Coverage
- Overall coverage: <percentage>%
- Critical gaps: <count>
- High-priority gaps: <count>
### Redundancy
- Duplicate functionality: <count> instances
- Consolidation opportunities: <count>
- Potential savings: <tokens/storage>
### Optimization
- Under-utilized skills: <count>
- Over-complex skills: <count>
- Composability improvements: <count>
---
## Coverage Gaps
### Critical Priority
1. **Domain:** [name]
   - Coverage: [percentage]%
   - Missing capabilities:
     - [capability 1]
     - [capability 2]
   - Recommended action: Create skill "[proposed-name]"
   - Impact: [high/medium/low]
### High Priority
...
---
## Redundancy Analysis
### Duplicate Functionality
1. **Capability:** [name]
   - Handled by: [skill1], [skill2], [skill3]
   - Overlap: [percentage]%
   - Recommendation: Consolidate into "[new-skill-name]"
   - Estimated savings: [tokens] tokens, [storage] MB
---
## Optimization Opportunities
### Under-Utilized Skills
| Skill | Frequency | Last Used | Recommendation |
|-------|-----------|-----------|----------------|
| [name] | [%] | [days] ago | [action] |
### Over-Complex Skills
| Skill | Avg Tokens | Success Rate | Recommendation |
|-------|------------|--------------|----------------|
| [name] | [count] | [%] | [action] |
### Composability Improvements
1. **Pattern:** [description]
   - Current approach: [details]
   - Improved approach: [details]
   - Benefits: [list]
---
## Scenario Coverage
| Scenario | Coverage | Missing | Can Execute? |
|----------|----------|---------|--------------|
| Full-stack web app | [%] | [list] | [yes/no] |
| ML deployment | [%] | [list] | [yes/no] |
| GitHub automation | [%] | [list] | [yes/no] |
---
## Prioritized Recommendations
### Immediate Actions (This Week)
1. [ ] Create skill: [name] - [justification]
2. [ ] Consolidate: [skills] → [new-skill]
3. [ ] Deprecate: [skill] - [reason]
### Short-Term (This Month)
1. [ ] Optimize: [skill] - [changes]
2. [ ] Document: [skill] - [missing-docs]
3. [ ] Test: [scenario] - [coverage-improvement]
### Long-Term (This Quarter)
1. [ ] Refactor: [domain] - [architecture]
2. [ ] Integrate: [external-tool] - [capability]
3. [ ] Research: [emerging-technology] - [potential]
---
## Metrics Comparison
| Metric | Current | Target | Gap |
|--------|---------|--------|-----|
| Total skills | [count] | - | - |
| Domain coverage | [%] | 90% | [%] |
| Redundancy rate | [%] | <10% | [%] |
| Avg complexity | [level] | MEDIUM | - |
| Under-utilization | [%] | <5% | [%] |
```
## Concrete Example: Real Analysis
### Input: Skill Library (Fragment)
**Inventory:**
- Total skills: 47
- Categories: development (15), github (12), optimization (8), testing (5), meta-tools (3), misc (4)
- Capabilities mapped: 127
- Agents used: 18 distinct types
### Analysis Output
**Coverage Gaps Detected:**
```json
{
  "domain": "Data Engineering",
  "coverage": "23.1%",
  "missingCapabilities": [
    "data-transformation",
    "data-validation",
    "data-migration",
    "data-visualization",
    "etl-pipeline"
  ],
  "priority": "high",
  "recommendation": "Create 'data-engineering-workflow' skill"
}
```
**Redundancy Detected:**
```json
{
  "capability": "code-review",
  "skillCount": 4,
  "skills": [
    "code-review-assistant",
    "github-code-review",
    "pr-review-automation",
    "code-quality-checker"
  ],
  "overlapPercentage": 78,
  "recommendation": "Consolidate into unified 'code-review-orchestrator' with specialized sub-skills"
}
```
**Optimization Opportunities:**
```json
[
  {
    "type": "under-utilized",
    "skills": [
      {
        "skill": "legacy-converter",
        "frequency": 0.02,
        "lastUsed": "127 days ago",
        "recommendation": "Archive or promote with use-case documentation"
      }
    ]
  },
  {
    "type": "over-complex",
    "skills": [
      {
        "skill": "full-stack-architect",
        "avgTokens": 8743,
        "successRate": "64.3%",
        "recommendation": "Break into: backend-architect, frontend-architect, database-architect"
      }
    ]
  }
]
```
**Recommendations:**
1. **Critical:** Create data engineering skill (coverage: 23% → 85%)
2. **High:** Consolidate 4 code review skills (save ~15K tokens, reduce confusion)
3. **Medium:** Break full-stack-architect into 3 focused skills
4. **Low:** Archive legacy-converter or add promotion documentation
**Expected Impact:**
- Coverage improvement: 67% → 89%
- Redundancy reduction: 18% → 7%
- Avg token efficiency: +32%
- Maintenance overhead: -40%
## Integration with Development Workflow
### Quarterly Review Process
```bash
# 1. Run gap analysis
npx claude-flow@alpha hooks pre-task --description "Quarterly skill gap analysis"
# 2. Spawn researcher agent for analysis
# Agent performs comprehensive inventory and analysis
# 3. Review recommendations
npx claude-flow@alpha memory retrieve --key "gap-analysis/recommendations"
# 4. Create action plan
# Prioritize and schedule improvements
# 5. Track progress
npx claude-flow@alpha hooks post-task --task-id "gap-analysis-q1-2025"
```
### Continuous Monitoring
```bash
# Track skill usage
npx claude-flow@alpha hooks post-task --skill-used "[name]"
# Aggregate metrics monthly
npx claude-flow@alpha memory aggregate --pattern "skills/usage/*" --period "monthly"
```
## Success Metrics
- Domain coverage: >85%
- Redundancy rate: <10%
- Under-utilization: <5%
- Scenario execution: 100% of core scenarios
- Optimization adoption: >80% of recommendations implemented
## Related Skills
- `when-optimizing-prompts-use-prompt-optimization-analyzer` - Optimize individual skills
- `when-managing-token-budget-use-token-budget-advisor` - Budget impact analysis
- `skill-forge` - Create new skills based on recommendations
## Notes
- Run quarterly or after major changes
- Involve team in recommendation review
- Track recommendation adoption rate
- Update analysis criteria as needs evolve
- Share findings across teams