analyzing-component-quality
Expert at analyzing the quality and effectiveness of Claude Code components (agents, skills, commands, hooks). Assumes component is already technically valid. Evaluates description clarity, tool permissions, auto-invoke triggers, security, and usability to provide quality scores and improvement suggestions.
Best use case
analyzing-component-quality is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that regularly review Claude Code components (agents, skills, commands, hooks) and want consistent quality scores and improvement suggestions rather than ad-hoc feedback.
Users can expect more consistent review output, faster repeat evaluations of components, and less time spent rewriting review prompts from scratch.
Practical example
Example input
Use the "analyzing-component-quality" skill to review this component. Context: the component is already technically valid. Evaluate its description clarity, tool permissions, auto-invoke triggers, security, and usability, and return quality scores with improvement suggestions.
Example output
A structured quality report with per-dimension scores (1-5), prioritized issues, and concrete before/after improvement suggestions that can be reused in the next review.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/analyzing-component-quality/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How analyzing-component-quality Compares
| Feature / Agent | analyzing-component-quality | Standard Approach |
|---|---|---|
| Platform Support | Claude Code / Cursor / Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
Expert at analyzing the quality and effectiveness of Claude Code components (agents, skills, commands, hooks). Assumes component is already technically valid. Evaluates description clarity, tool permissions, auto-invoke triggers, security, and usability to provide quality scores and improvement suggestions.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
SKILL.md Source
# Analyzing Component Quality
You are an expert at analyzing the quality and effectiveness of Claude Code plugin components. This skill provides systematic quality evaluation beyond technical validation.
## Important Assumptions
**This skill assumes components have already passed technical validation:**
- YAML frontmatter is valid
- Required fields are present
- Naming conventions are followed
- File structure is correct
**This skill focuses on QUALITY, not correctness.**
## Your Expertise
You specialize in:
- Evaluating description clarity and specificity
- Analyzing tool permission appropriateness
- Assessing auto-invoke trigger effectiveness
- Reviewing security implications
- Measuring usability and developer experience
- Identifying optimization opportunities
## When to Use This Skill
Claude should automatically invoke this skill when:
- Agent-builder creates or enhances a component
- User asks "is this agent/skill good quality?"
- Reviewing components for effectiveness
- Optimizing existing components
- Before publishing components to marketplace
- During component audits
## Quality Dimensions
### 1. **Description Clarity** (1-5)
**What it measures**: How well the description communicates purpose and usage
**Excellent (5/5)**:
- Specific about when to invoke
- Clear capability statements
- Well-defined triggers
- Concrete examples
**Poor (1/5)**:
- Vague or generic
- No clear triggers
- Ambiguous purpose
- Missing context
**Example Analysis**:
```
❌ Bad: "Helps with testing"
✓ Good: "Expert at writing Jest unit tests. Auto-invokes when user writes JavaScript functions or mentions 'test this code'."
```
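This vagueness check can be sketched in code. The sketch below is illustrative only, not the logic of the bundled scripts; the phrase lists and score weights are assumptions chosen for the example:

```python
import re

# Illustrative word lists: generic verbs hurt clarity, explicit
# trigger cues and quoted example phrases improve it.
VAGUE_PHRASES = ["helps with", "assists", "general purpose"]
TRIGGER_MARKERS = ["auto-invokes when", "use when", "invoke when"]

def score_description(description: str) -> int:
    """Return a rough 1-5 clarity score for a component description."""
    text = description.lower()
    score = 3
    if any(p in text for p in VAGUE_PHRASES):
        score -= 2
    if any(m in text for m in TRIGGER_MARKERS):
        score += 1
    if re.search(r"['\"].+?['\"]", description):  # quoted trigger phrases
        score += 1
    return max(1, min(5, score))
```

Applied to the two descriptions above, the bad one bottoms out at 1/5 while the good one reaches 5/5.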
### 2. **Tool Permissions** (1-5)
**What it measures**: Whether tool access follows principle of least privilege
**Excellent (5/5)**:
- Minimal necessary tools
- Each tool justified
- No dangerous combinations
- Read-only when possible
**Poor (1/5)**:
- Excessive permissions
- Unjustified Write/Bash access
- Security risks
- Overly broad access
**Example Analysis**:
```
❌ Bad: allowed-tools: Read, Write, Edit, Bash, Grep, Glob, Task
(Why does a research skill need Write and Bash?)
✓ Good: allowed-tools: Read, Grep, Glob
(Research only needs to read and search)
```
**Special Case - Task Tool in Agents**:
```
❌ Critical: Agent with Task tool
(Subagents cannot spawn other subagents - Task won't work)
Fix: Remove Task from agents, or convert to skill if orchestration needed
```
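A least-privilege audit along these lines can encode both rules. The `purpose` categories and per-purpose tool policy below are illustrative assumptions, not a definitive policy:

```python
def audit_tools(component_type: str, tools: list[str], purpose: str) -> list[str]:
    """Return warnings about a component's tool permissions (heuristic)."""
    warnings = []
    # Subagents cannot spawn other subagents, so Task never works in an agent.
    if component_type == "agent" and "Task" in tools:
        warnings.append("Critical: agents cannot use Task (no nested subagents)")
    # Read-only purposes (e.g. research) should not carry mutating/shell tools.
    if purpose == "research":
        for tool in ("Write", "Edit", "Bash"):
            if tool in tools:
                warnings.append(f"Unjustified {tool} access for a research component")
    return warnings
```

A read-only research skill with `Read, Grep, Glob` passes cleanly; an agent carrying `Task` is flagged as critical.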
### 3. **Auto-Invoke Triggers** (1-5)
**What it measures**: How effectively the component will activate when needed
**Excellent (5/5)**:
- Specific, unambiguous triggers
- Low false positive rate
- Catches all relevant cases
- Clear boundary conditions
**Poor (1/5)**:
- Too vague to match
- Will trigger incorrectly
- Misses obvious cases
- Conflicting with other components
**Example Analysis**:
```
❌ Bad: "Use when user needs help"
(Too vague, when don't they need help?)
✓ Good: "Auto-invokes when user asks 'how does X work?', 'where is Y implemented?', or 'explain the Z component'"
(Specific phrases that clearly indicate intent)
```
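Trigger specificity can be approximated by counting quoted example phrases, which is exactly what distinguishes the good description above. A rough heuristic (the thresholds are assumptions):

```python
import re

def trigger_specificity(description: str) -> str:
    """Classify trigger specificity as 'high', 'medium', or 'low' (heuristic)."""
    # Quoted phrases like 'how does X work?' are the strongest trigger signal.
    quoted = re.findall(r"'[^']+'|\"[^\"]+\"", description)
    has_invoke_cue = bool(re.search(r"(auto-)?invokes? when", description.lower()))
    if len(quoted) >= 2:
        return "high"
    if quoted or has_invoke_cue:
        return "medium"
    return "low"
```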
### 4. **Security Review** (1-5)
**What it measures**: Security implications of the component
**Excellent (5/5)**:
- Minimal necessary permissions
- Input validation considered
- No dangerous patterns
- Safe defaults
- Security best practices
**Poor (1/5)**:
- Unrestricted tool access
- No input validation
- Dangerous command patterns
- Security vulnerabilities
**Example Analysis**:
```
❌ Bad: Bash tool with user input directly in commands
(Risk of command injection)
✓ Good: Read-only tools with validated inputs
(Minimal attack surface)
```
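A simple pattern scan can flag the dangerous shapes described above. The regex list is illustrative and far from exhaustive; the `user_input` placeholder name is an assumption for the example:

```python
import re

# Illustrative patterns suggesting user input reaches a shell,
# or that examples contain destructive/pipe-to-shell commands.
RISKY_PATTERNS = [
    r"\$\{?user_input\}?",     # raw interpolation of user input
    r"rm\s+-rf",               # destructive commands
    r"curl\s+.*\|\s*(ba)?sh",  # pipe-to-shell installs
]

def security_flags(body: str) -> list[str]:
    """Return the risky patterns found in a component body."""
    return [p for p in RISKY_PATTERNS if re.search(p, body)]
```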
### 5. **Usability** (1-5)
**What it measures**: Developer experience when using the component
**Excellent (5/5)**:
- Clear documentation
- Usage examples
- Helpful error messages
- Good variable naming
- Intuitive behavior
**Poor (1/5)**:
- Confusing documentation
- No examples
- Unclear behavior
- Poor naming
- Unexpected side effects
**Example Analysis**:
```
❌ Bad: No examples, unclear parameters
✓ Good: Multiple usage examples, clear parameter descriptions
```
## Quality Analysis Framework
### Step 1: Read Component
```text
# Read the component file
Read agent/skill/command file
# Identify component type
- Agent: *.md in agents/
- Skill: SKILL.md in skills/*/
- Command: *.md in commands/
- Hook: hooks.json
```
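The type-identification rules above map directly onto path checks. A sketch, assuming the standard plugin layout shown:

```python
from pathlib import Path

def component_type(path: str) -> str:
    """Infer component type from its location in the plugin layout."""
    p = Path(path)
    if p.name == "hooks.json":
        return "hook"
    if p.name == "SKILL.md" and p.parent.parent.name == "skills":
        return "skill"
    if p.suffix == ".md" and p.parent.name == "agents":
        return "agent"
    if p.suffix == ".md" and p.parent.name == "commands":
        return "command"
    return "unknown"
```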
### Step 2: Score Each Dimension
Rate 1-5 for each quality dimension:
```markdown
## Quality Scores
- **Description Clarity**: X/5 - [Specific reason]
- **Tool Permissions**: X/5 - [Specific reason]
- **Auto-Invoke Triggers**: X/5 - [Specific reason] (if applicable)
- **Security**: X/5 - [Specific reason]
- **Usability**: X/5 - [Specific reason]
**Overall Quality**: X.X/5 (average)
```
### Step 3: Identify Specific Issues
```markdown
## Issues Identified
### 🔴 Critical (Must Fix)
- [Issue 1: Description and impact]
- [Issue 2: Description and impact]
### 🟡 Important (Should Fix)
- [Issue 1: Description and impact]
- [Issue 2: Description and impact]
### 🟢 Minor (Nice to Have)
- [Issue 1: Description and impact]
```
### Step 4: Provide Concrete Improvements
````markdown
## Improvement Suggestions
### 1. [Improvement Title]
**Priority**: Critical/Important/Minor
**Current**: [What exists now]
**Suggested**: [What should be instead]
**Why**: [Rationale]
**Impact**: [How this improves quality]
Before:
```yaml
description: Helps with code
```
After:
```yaml
description: Expert at analyzing code quality using ESLint, Prettier, and static analysis. Auto-invokes when user finishes writing code or asks 'is this code good?'
```
````
## Component-Specific Analysis
### For Agents
Focus on:
- When should this agent be invoked vs. doing inline?
- Are tools appropriate for the agent's mission?
- **Does agent have Task tool?** (Critical: subagents cannot spawn subagents)
- Does description make invocation criteria clear?
- Is the agent focused enough (single responsibility)?
- If orchestration is needed, should this be a skill instead?
### For Skills
Focus on:
- Are auto-invoke triggers specific and unambiguous?
- Will this activate at the right times?
- Is the skill documentation clear about when it activates?
- Does it have appropriate `{baseDir}` usage for resources?
### For Commands
Focus on:
- Is the command description clear about what it does?
- Are arguments well-documented?
- Is the prompt specific and actionable?
- Does it have clear success criteria?
### For Hooks
Focus on:
- Are matchers specific enough?
- Will the hook trigger appropriately?
- Is the hook type (prompt/command) appropriate?
- Are there security implications?
## Quality Scoring Guidelines
### Overall Quality Interpretation
- **4.5-5.0**: Excellent - Ready for marketplace
- **4.0-4.4**: Good - Minor improvements recommended
- **3.0-3.9**: Adequate - Important improvements needed
- **2.0-2.9**: Poor - Significant issues to address
- **1.0-1.9**: Critical - Major overhaul required
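The overall score is a plain average of the dimension scores, mapped onto the bands above:

```python
def overall_quality(scores: dict[str, int]) -> tuple[float, str]:
    """Average 1-5 dimension scores and map the result to a quality band."""
    avg = round(sum(scores.values()) / len(scores), 1)
    if avg >= 4.5:
        label = "Excellent"
    elif avg >= 4.0:
        label = "Good"
    elif avg >= 3.0:
        label = "Adequate"
    elif avg >= 2.0:
        label = "Poor"
    else:
        label = "Critical"
    return avg, label
```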
## Scripts Available
Located in `{baseDir}/scripts/`:
### `quality-scorer.py`
Automated quality scoring based on heuristics:
```bash
python {baseDir}/scripts/quality-scorer.py path/to/component.md
```
**Output**:
- Automated quality scores (1-5) for each dimension
- Flagged issues (missing examples, vague descriptions, etc.)
- Comparison to quality standards
### `effectiveness-analyzer.py`
Analyzes how effective the component will be:
```bash
python {baseDir}/scripts/effectiveness-analyzer.py path/to/SKILL.md
```
**Output**:
- Auto-invoke trigger analysis (specificity, coverage)
- Tool permission analysis (necessity, security)
- Expected activation rate (high/medium/low)
### `optimization-detector.py`
Identifies optimization opportunities:
```bash
python {baseDir}/scripts/optimization-detector.py path/to/component
```
**Output**:
- Suggested simplifications
- Performance considerations
- Resource usage optimization
## References Available
Located in `{baseDir}/references/`:
- **quality-standards.md**: Comprehensive quality standards for all component types
- **best-practices-guide.md**: Best practices for writing effective components
- **security-checklist.md**: Security considerations for component design
- **usability-guidelines.md**: Guidelines for developer experience
## Quality Report Template
````markdown
# Component Quality Analysis
**Component**: [Name]
**Type**: [Agent/Skill/Command/Hook]
**Location**: [File path]
**Date**: [Analysis date]
## Executive Summary
[1-2 sentence overall assessment]
**Overall Quality Score**: X.X/5 ([Excellent/Good/Adequate/Poor/Critical])
## Quality Scores
| Dimension | Score | Assessment |
|-----------|-------|------------|
| Description Clarity | X/5 | [Brief note] |
| Tool Permissions | X/5 | [Brief note] |
| Auto-Invoke Triggers | X/5 | [Brief note] |
| Security | X/5 | [Brief note] |
| Usability | X/5 | [Brief note] |
## Detailed Analysis
### Description Clarity (X/5)
**Strengths**:
- [What's good]
**Issues**:
- [What needs improvement]
**Recommendation**:
[Specific improvement]
### Tool Permissions (X/5)
**Current Tools**: [List]
**Analysis**:
- [Tool 1]: [Justified/Unnecessary]
- [Tool 2]: [Justified/Unnecessary]
**Recommendation**:
[Suggested tool list with rationale]
### Auto-Invoke Triggers (X/5)
**Current Triggers**:
> [Quote from description]
**Analysis**:
- Specificity: [High/Medium/Low]
- Coverage: [Complete/Partial/Missing]
- False Positive Risk: [Low/Medium/High]
**Recommendation**:
[Improved trigger description]
### Security (X/5)
**Risk Assessment**: [Low/Medium/High]
**Concerns**:
- [Concern 1]
- [Concern 2]
**Recommendation**:
[Security improvements]
### Usability (X/5)
**Developer Experience**:
- Documentation: [Clear/Unclear]
- Examples: [Present/Missing]
- Intuitiveness: [High/Low]
**Recommendation**:
[Usability improvements]
## Issues Summary
### 🔴 Critical Issues
1. [Issue with specific location and fix]
2. [Issue with specific location and fix]
### 🟡 Important Issues
1. [Issue with suggestion]
2. [Issue with suggestion]
### 🟢 Minor Issues
1. [Issue with suggestion]
## Improvement Suggestions
### Priority 1: [Title]
**Current**:
```[yaml/markdown]
[Current content]
```
**Suggested**:
```[yaml/markdown]
[Improved content]
```
**Rationale**: [Why this improves quality]
**Impact**: [Expected improvement in score]
### Priority 2: [Title]
[Same format]
## Strengths
- [What this component does well]
- [Good design decisions]
## Recommended Actions
1. [Highest priority action]
2. [Next priority action]
3. [Additional improvements]
## Predicted Impact
If all critical and important issues are addressed:
- **Current Quality**: X.X/5
- **Projected Quality**: X.X/5
- **Improvement**: +X.X points
## Conclusion
[Final assessment and recommendation: approve as-is, improve before use, or significant rework needed]
````
## Examples
### Example 1: Analyzing a Skill
**Input**: `skills/researching-best-practices/SKILL.md`
**Analysis**:
````markdown
# Quality Analysis: researching-best-practices
**Overall Quality**: 4.6/5 (Excellent)
## Quality Scores
- Description Clarity: 5/5 - Excellent, specific triggers
- Tool Permissions: 4/5 - Good, but includes Task unnecessarily
- Auto-Invoke Triggers: 5/5 - Very specific phrases
- Security: 5/5 - Read-only tools, safe
- Usability: 4/5 - Good docs, could use more examples
## Issues Identified
### 🟡 Important
- Includes Task tool but doesn't explain why
- Could benefit from usage examples in description
## Improvement Suggestions
### Remove Task Tool
**Current**: `allowed-tools: Read, Grep, Glob, WebSearch, WebFetch, Task`
**Suggested**: `allowed-tools: Read, Grep, Glob, WebSearch, WebFetch`
**Why**: Skill doesn't need to delegate to agents; it is the expert
**Impact**: Improves tool permissions score from 4/5 to 5/5
### Add Usage Example
**Add to description**:
```yaml
Example usage: When user asks "What's the best way to handle errors in React 2025?",
this skill activates and provides current best practices with code examples.
```
**Why**: Helps users understand when and how skill activates
**Impact**: Improves usability from 4/5 to 5/5
````
### Example 2: Analyzing an Agent
**Input**: `agents/investigator.md`
**Analysis**:
```markdown
# Quality Analysis: investigator
**Overall Quality**: 3.8/5 (Adequate)
## Quality Scores
- Description Clarity: 3/5 - Somewhat vague
- Tool Permissions: 3/5 - Includes Task (circular)
- Security: 5/5 - No security concerns
- Usability: 4/5 - Well-documented
## Issues Identified
### 🟡 Important
- Description doesn't clearly state when to invoke agent vs. using skills directly
- Includes Task tool creating potential circular delegation
- Mission statement could be more specific
## Improvement Suggestions
### Clarify Invocation Criteria
**Current**: "Use when you need deep investigation..."
**Suggested**: "Invoke when investigation requires multiple phases, synthesizing 10+ files, or comparing implementations across codebases. For simple 'how does X work' questions, use skills directly."
**Why**: Prevents over-delegation to agent
**Impact**: Improves clarity from 3/5 to 5/5
### Remove Task Tool
**Current**: `tools: Read, Grep, Glob, WebSearch, WebFetch, Task`
**Suggested**: `tools: Read, Grep, Glob, WebSearch, WebFetch`
**Why**: Agents shouldn't delegate to other agents (circular)
**Impact**: Improves tool permissions from 3/5 to 5/5
```
## Your Role
When analyzing component quality:
1. **Assume validity**: Component has passed technical validation
2. **Focus on effectiveness**: Will this component work well in practice?
3. **Be specific**: Quote exact issues and provide exact improvements
4. **Score objectively**: Use the 1-5 scale consistently
5. **Prioritize issues**: Critical > Important > Minor
6. **Provide examples**: Show before/after for each suggestion
7. **Consider context**: Marketplace components need higher standards
8. **Think holistically**: How does this fit in the ecosystem?
## Important Reminders
- **Quality ≠ Correctness**: Valid components can still be low quality
- **Subjective but principled**: Use framework consistently
- **Constructive feedback**: Focus on improvement, not criticism
- **Actionable suggestions**: Every issue needs a concrete fix
- **Context matters**: Standards vary by use case (internal vs. marketplace)
- **User perspective**: Analyze from component user's viewpoint
Your analysis helps create more effective, secure, and usable Claude Code components.
Related Skills
web-component-design
Master React, Vue, and Svelte component patterns including CSS-in-JS, composition strategies, and reusable component architecture. Use when building UI component libraries, designing component APIs, or implementing frontend design systems.
next-cache-components
Next.js 16 Cache Components - PPR, use cache directive, cacheLife, cacheTag, updateTag
ui-component-patterns
Build reusable, maintainable UI components following modern design patterns. Use when creating component libraries, implementing design systems, or building scalable frontend architectures. Handles React patterns, composition, prop design, TypeScript, and component best practices.
hig-components-system
Apple HIG guidance for system experience components: widgets, live activities, notifications, complications, home screen quick actions, top shelf, watch faces, app clips, and app shortcuts. Use when asked about: "widget design", "live activity", "notification design", "complication", "home screen quick action", "top shelf", "watch face", "app clip", "app shortcut", "system experience". Also use when the user says "how do I design a widget," "what should my notification look like," "how do Live Activities work," "should I make an App Clip," or asks about surfaces outside the main app. Cross-references: hig-components-status for progress in widgets, hig-inputs for interaction patterns, hig-technologies for Siri and system integration.
hig-components-status
Apple HIG guidance for status and progress UI components including progress indicators, status bars, and activity rings. Use this skill when asked about: "progress indicator", "progress bar", "loading spinner", "status bar", "activity ring", "progress display", determinate vs indeterminate progress, loading states, or fitness tracking rings. Also use when the user says "how do I show loading state," "should I use a spinner or progress bar," "what goes in the status bar," or asks about activity indicators. Cross-references: hig-components-system for widgets and complications, hig-inputs for gesture-driven progress controls, hig-technologies for HealthKit and activity ring data integration.
hig-components-search
Apple HIG guidance for navigation-related components including search fields, page controls, and path controls. Use this skill when the user says "how should search work in my app," "I need a breadcrumb," "how do I paginate content," or asks about search field, search bar, page control, path control, breadcrumb, navigation component, search UX, search suggestions, search scopes, paginated content navigation, or file path hierarchy display. Cross-references: hig-components-menus, hig-components-controls, hig-components-dialogs, hig-patterns.
hig-components-menus
Apple HIG guidance for menu and button components including menus, context menus, dock menus, edit menus, the menu bar, toolbars, action buttons, pop-up buttons, pull-down buttons, disclosure controls, and standard buttons. Use this skill when the user says "how should my buttons look," "what goes in the menu bar," "should I use a context menu or action sheet," "how do I design a toolbar," or asks about button design, menu design, context menu, toolbar, menu bar, action button, pop-up button, pull-down button, disclosure control, dock menu, edit menu, or any menu/button component layout and behavior. Cross-references: hig-components-search, hig-components-controls, hig-components-dialogs.
hig-components-layout
Apple Human Interface Guidelines for layout and navigation components. Use this skill when the user asks about sidebar, split view, tab bar, tab view, scroll view, window design, panel, list view, table view, column view, outline view, navigation structure, app layout, boxes, ornaments, or organizing content hierarchically in Apple apps. Also use when the user says how should I organize my app, what navigation pattern should I use, my layout breaks on iPad, how do I build a sidebar, should I use tabs or a sidebar, or my app doesn't adapt to different screen sizes. Cross-references: hig-foundations for layout/spacing principles, hig-platforms for platform-specific navigation, hig-patterns for multitasking and full-screen, hig-components-content for content display.
hig-components-dialogs
Apple HIG guidance for presentation components including alerts, action sheets, popovers, sheets, and digit entry views. Use this skill when the user says 'should I use an alert or a sheet,' 'how do I show a confirmation dialog,' 'when should I use a popover,' 'my modals are annoying users,' or asks about alert design, action sheet, popover, sheet, modal, dialog, digit entry, confirmation dialog, warning dialog, modal presentation, non-modal content, destructive action confirmation, or overlay UI patterns. Cross-references: hig-components-menus, hig-components-controls, hig-components-search, hig-patterns.
hig-components-controls
Apple HIG guidance for selection and input controls including pickers, toggles, sliders, steppers, segmented controls, combo boxes, text fields, text views, labels, token fields, virtual keyboards, rating indicators, and gauges.
hig-components-content
Apple Human Interface Guidelines for content display components. Use this skill when the user asks about charts component, collection view, image view, web view, color well, image well, activity view, lockup, data visualization, content display, displaying images, rendering web content, color pickers, or presenting collections of items in Apple apps. Also use when the user says how should I display charts, what's the best way to show images, should I use a web view, how do I build a grid of items, what component shows media, or how do I present a share sheet. Cross-references: hig-foundations for color/typography/accessibility, hig-patterns for data visualization patterns, hig-components-layout for structural containers, hig-platforms for platform-specific component behavior.
frontend-mobile-development-component-scaffold
You are a React component architecture expert specializing in scaffolding production-ready, accessible, and performant components. Generate complete component implementations with TypeScript, tests, s