quality-reviewer
Deep code review with web research to verify against latest ecosystem. Use when user says 'double check against latest', 'verify versions', 'check security', 'review against docs', or needs deep analysis beyond automatic quality hook.
Best use case
quality-reviewer is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that want deep code review backed by web research — verifying code against current library versions, security advisories, and official documentation — rather than relying only on an automatic quality hook.
Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "quality-reviewer" skill to help with this workflow task. Context: Deep code review with web research to verify against latest ecosystem. Use when user says 'double check against latest', 'verify versions', 'check security', 'review against docs', or needs deep analysis beyond automatic quality hook.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/quality-reviewer/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
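For a scripted install, a minimal sketch — the raw GitHub URL is a placeholder to replace with the actual link from the repository page:

```bash
# Create the skill directory inside your project
mkdir -p .claude/skills/quality-reviewer

# Download SKILL.md (replace <raw-skill-url> with the real raw GitHub URL)
curl -fsSL -o .claude/skills/quality-reviewer/SKILL.md "<raw-skill-url>"
```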
How quality-reviewer Compares
| Feature / Agent | quality-reviewer | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Deep code review with web research to verify against latest ecosystem. Use when user says 'double check against latest', 'verify versions', 'check security', 'review against docs', or needs deep analysis beyond automatic quality hook.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
AI Agent for Product Research
Browse AI agent skills for product research, competitive analysis, customer discovery, and structured product decision support.
SKILL.md Source
# Quality Reviewer
Deep quality review with web research to verify code against the latest ecosystem state.
**Primary differentiator**: Web research to verify against current versions, documentation, and best practices.
**Triggers**:
- **Explicit web research request**: "double check against latest docs", "verify we're using latest version", "check for security issues"
- **Deep dive needed**: User wants analysis beyond automatic hook (performance, architecture alternatives, trade-offs)
- **No SAFEWORD.md/CLAUDE.md**: Projects without context files (automatic hook won't run, manual review needed)
- **Pre-change review**: User wants review before making changes (automatic hook only triggers after changes)
**Relationship to automatic quality hook**:
- **Automatic hook**: Fast quality check using existing knowledge + project context (guaranteed, runs on every change)
- **This skill**: Deep review with web research when verification against current ecosystem is needed (on-demand, 2-3 min)
## Review Protocol
### 1. Identify What Changed
Understand context:
- What files were just modified?
- What problem is being solved?
- What was the implementation approach?
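When the skill is invoked manually rather than by a hook, the working tree usually answers these questions — a minimal sketch, assuming a git repository:

```bash
# See which files changed and how much
git status --short
git diff --stat HEAD
```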
### 2. Read Project Standards
```bash
ls CLAUDE.md SAFEWORD.md ARCHITECTURE.md .claude/
```
Read relevant standards:
- `CLAUDE.md` or `SAFEWORD.md` - Project-specific guidelines
- `ARCHITECTURE.md` - Architectural principles
### 3. Evaluate Correctness
**Will it work?**
- Does the logic make sense?
- Are there obvious bugs?
**Edge cases:**
- Empty inputs, null/undefined, boundary conditions (0, -1, max)?
- Concurrent access, network failures?
**Error handling:**
- Are errors caught appropriately?
- Helpful error messages?
- Cleanup handled (resources, connections)?
**Logic errors:**
- Off-by-one errors, race conditions, wrong assumptions?
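A concrete way to probe the boundary conditions above, assuming a Node codebase; `parsePage` and its module path are hypothetical stand-ins for the function under review:

```bash
# Feed boundary inputs to the (hypothetical) function and observe behavior
node -e '
const { parsePage } = require("./src/parse");
for (const input of ["", null, undefined, 0, -1, Number.MAX_SAFE_INTEGER]) {
  try { console.log(input, "->", parsePage(input)); }
  catch (e) { console.log(input, "-> threw:", e.message); }
}'
```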
### 4. Evaluate Anti-Bloat
- Are all dependencies necessary? Could we use stdlib/built-ins?
- Are abstractions solving real problems or imaginary ones?
- YAGNI: Is this feature actually needed now?
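For the dependency question, third-party tooling can ground the answer in a Node project — depcheck is not part of this skill, so treat this as an optional sketch:

```bash
# Flag dependencies declared in package.json but never imported
npx depcheck

# Show why a given package is in the tree at all (name is a placeholder)
npm ls <package-name>
```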
### 5. Evaluate Elegance
- Is the code easy to understand?
- Are names clear and descriptive?
- Is the intent obvious?
- Will this be easy to change later?
### 6. Check Standards Compliance
**Project standards** (from CLAUDE.md/SAFEWORD.md/ARCHITECTURE.md):
- Does it follow established patterns?
- Does it violate any documented principles?
**Library best practices:**
- Are we using libraries correctly?
- Are we following official documentation?
### 7. Verify Latest Versions - PRIMARY VALUE
**CRITICAL**: This is your main differentiator from automatic hook. ALWAYS check versions.
Search for: "[library name] latest stable version 2025"
Search for: "[library name] security vulnerabilities"
**Flag if outdated:**
- Major versions behind → WARN (e.g., React 17 when 19 is stable)
- Minor versions behind → NOTE (e.g., React 19.0.0 when 19.1.0 is stable)
- Security vulnerabilities → CRITICAL (must upgrade)
- Using latest → Confirm
**Common libraries**: React, TypeScript, Vite, Next.js, Node.js, Vitest, Playwright, Jest, esbuild
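For Node projects, the npm registry often answers the version question faster than a web search (web research still covers release notes and ecosystem context) — a minimal sketch:

```bash
# Compare installed versions against the latest published releases
npm outdated

# Scan the dependency tree for known security advisories
npm audit

# Look up the latest stable version of one library
npm view react version
```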
### 8. Verify Latest Documentation - PRIMARY VALUE
**CRITICAL**: This is your main differentiator from automatic hook. ALWAYS verify against current docs.
Fetch and check official documentation sites for the libraries in use.
**Look for:**
- Are we using deprecated APIs?
- Are there newer, better patterns?
- Did the library's recommendations change recently?
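When the docs URL isn't at hand, package metadata usually points to it — a small sketch assuming an npm-based project; placeholders in angle brackets are illustrative:

```bash
# Find the official homepage / documentation for a library
npm view vite homepage

# Check whether a specific package version carries a deprecation notice
npm view <package>@<version> deprecated
```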
## Output Format
**Simple question** ("is it correct?"):
```text
**Correctness:** ✓ Logic is sound, edge cases handled, no obvious errors.
```
**Full review** ("double check and critique"):
```markdown
## Quality Review
**Correctness:** [✓/⚠️/❌] [Brief assessment]
**Anti-Bloat:** [✓/⚠️/❌] [Brief assessment]
**Elegance:** [✓/⚠️/❌] [Brief assessment]
**Standards:** [✓/⚠️/❌] [Brief assessment]
**Versions:** [✓/⚠️/❌] [Latest version check]
**Documentation:** [✓/⚠️/❌] [Current docs check]
**Verdict:** [APPROVE / REQUEST CHANGES / NEEDS DISCUSSION]
**Critical issues:** [List or "None"]
**Suggested improvements:** [List or "None"]
```
## Critical Reminders
1. **Primary value: Web research** - Verify against current ecosystem (versions, docs, security)
2. **Complement automatic hook** - Hook does fast check with existing knowledge, you do deep dive with web research
3. **Explicit triggers matter** - "double check against latest docs", "verify versions", "check security" = invoke web research
4. **Always check latest docs** - Verify patterns are current, not outdated
5. **Always verify versions** - Flag outdated dependencies
6. **Be thorough but concise** - Cover all areas but keep explanations brief
7. **Provide actionable feedback** - Specific line numbers, concrete suggestions
8. **Clear verdict** - Always end with APPROVE/REQUEST CHANGES/NEEDS DISCUSSION
9. **Separate critical vs nice-to-have** - User needs to know what's blocking vs optional
Related Skills
data-quality-frameworks
Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.
doc-consistency-reviewer
Documentation consistency reviewer that checks whether the code implementation matches the documentation. Use this skill when the user asks to review documentation/code consistency, check whether README/docs are outdated, or verify API documentation accuracy. Applicable for: (1) reviewing README consistency with the implementation (2) checking whether docs/ documentation is outdated (3) verifying API/configuration documentation accuracy (4) generating a documentation consistency report. Trigger phrases include: documentation review, doc review, documentation consistency, check for outdated docs, verify docs.
clean-code-reviewer
Analyze code quality based on "Clean Code" principles. Identify naming, function size, duplication, over-engineering, and magic number issues with severity ratings and refactoring suggestions. Use when the user requests code review, quality check, refactoring advice, Clean Code analysis, code smell detection, or mentions terms like code health check, code quality, or refactoring check.
examples-code-reviewer
AI-powered code review tool that analyzes code for bugs, style issues, and improvements
when-verifying-quality-use-verification-quality
Comprehensive quality verification and validation through static analysis, dynamic testing, integration validation, and certification gates
verification-quality-assurance
Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.
quick-quality-check
Lightning-fast quality check using parallel command execution. Runs theater detection, linting, security scan, and basic tests in parallel for instant feedback on code quality.
move-code-quality
Analyzes Move language packages against the official Move Book Code Quality Checklist. Use this skill when reviewing Move code, checking Move 2024 Edition compliance, or analyzing Move packages for best practices. Activates automatically when working with .move files or Move.toml manifests.
code-quality
Expert at TypeScript strict mode, linting, formatting, code review standards. Use when checking code quality, fixing type errors, or enforcing standards.
analyzing-test-quality
Automatically activated when user asks about test quality, code coverage, test reliability, test maintainability, or wants to analyze their test suite. Provides framework-agnostic test quality analysis and improvement recommendations. Does NOT provide framework-specific patterns - use jest-testing or playwright-testing for those.
analyzing-response-quality
Expert at analyzing the quality of Claude's responses and outputs. Use when evaluating response completeness, accuracy, clarity, or effectiveness. Auto-invokes during self-reflection or when quality assessment is needed.
analyzing-component-quality
Expert at analyzing the quality and effectiveness of Claude Code components (agents, skills, commands, hooks). Assumes component is already technically valid. Evaluates description clarity, tool permissions, auto-invoke triggers, security, and usability to provide quality scores and improvement suggestions.