quality-verify

Verify the final deliverable meets all quality criteria before delivery. Use as the final validation step to ensure the output meets the user's quality standards across all 6 dimensions.

242 stars

Best use case

quality-verify is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams working in multi-step or multi-agent pipelines, where a final validation gate confirms the deliverable meets the user's quality standards across all 6 dimensions before delivery.

Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "quality-verify" skill to help with this workflow task. Context: Verify the final deliverable meets all quality criteria before delivery. Use as the final validation step to ensure the output meets the user's quality standards across all 6 dimensions.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/quality-verify/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/abejitsu/quality-verify/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/quality-verify/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How quality-verify Compares

Feature                  | quality-verify | Standard Approach
Platform Support         | Not specified  | Limited / Varies
Context Awareness        | High           | Baseline
Installation Complexity  | Unknown        | N/A

Frequently Asked Questions

What does this skill do?

Verify the final deliverable meets all quality criteria before delivery. Use as the final validation step to ensure the output meets the user's quality standards across all 6 dimensions.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Quality Verify Skill

## Purpose

Final validation that the formatted deliverable meets ALL quality standards before delivery. This is the last gate - if it passes here, it's ready to go.

## Quality Dimensions

The system checks against 6 quality dimensions. Evaluate each:

### 1. **Completeness**
- Does the deliverable have all required parts?
- Nothing missing or obviously incomplete?
- All requirements from the user met?

### 2. **Correctness**
- Is the code syntactically correct? (No errors)
- Are facts/information accurate?
- Does it do what was asked?
- No logical errors?

### 3. **Consistency**
- Formatting consistent throughout?
- Naming conventions consistent?
- Style consistent?
- Patterns applied consistently?

### 4. **Performance** (when applicable)
- Is it efficient? (Code shouldn't be obviously slow)
- Does it scale? (For large inputs/data)
- Any obvious performance issues?

### 5. **Security** (when applicable)
- No obvious vulnerabilities?
- Inputs validated/sanitized?
- No hardcoded secrets?
- Following security best practices?

### 6. **Maintainability**
- Is it readable?
- Is it documented?
- Would someone else understand it?
- Easy to modify later?
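For reference, the six dimensions above can be captured in a small data structure (a sketch; the names are illustrative, not part of the skill's interface):

```javascript
// The six quality dimensions; performance and security are only
// evaluated when applicable to the deliverable.
const QUALITY_DIMENSIONS = [
  { name: "completeness",    alwaysApplicable: true  },
  { name: "correctness",     alwaysApplicable: true  },
  { name: "consistency",     alwaysApplicable: true  },
  { name: "performance",     alwaysApplicable: false },
  { name: "security",        alwaysApplicable: false },
  { name: "maintainability", alwaysApplicable: true  },
];
```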

## Scoring System

Rate each dimension:

- **✓ Excellent** (90-100): Exceeds standards, professional quality
- **✓ Good** (75-89): Meets standards, ready to deliver
- **⚠ Acceptable** (60-74): Meets minimum standards, could be better
- **✗ Needs Work** (0-59): Below standards, needs revision

## Scoring Algorithm

```
Base score = average of all applicable dimension scores

Deductions:
- 10 points per critical issue (e.g., code doesn't run, major security flaw)
- 5 points per major issue (e.g., missing section, inconsistent formatting)
- 2 points per minor issue (e.g., typo, minor inconsistency)

Final Score = Base score - deductions

85+   = Ready to Deliver ✓
70-84 = Minor fixes recommended
<70   = Major revision needed
```
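As a sketch, the scoring algorithm above could be implemented as follows (function names are illustrative; thresholds follow the skill's decision tree, and per-dimension scores are assumed to be on a 0-100 scale):

```javascript
// Illustrative implementation of the scoring algorithm.
// `dimensionScores` maps each applicable dimension to a 0-100 score;
// `issues` counts findings by severity.
function computeFinalScore(dimensionScores, issues) {
  const scores = Object.values(dimensionScores);
  const base = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const deductions =
    10 * (issues.critical || 0) +
     5 * (issues.major || 0) +
     2 * (issues.minor || 0);
  return Math.max(0, Math.round(base - deductions));
}

function verdict(score) {
  if (score >= 85) return "Ready to Deliver";
  if (score >= 70) return "Minor fixes recommended";
  return "Major revision needed";
}
```

For example, dimension scores of 90 and 80 with one critical issue average to 85, lose 10 points, and land at 75: minor fixes recommended.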

## Process

1. Review the formatted deliverable
2. Load user's standards using StandardsRepository to understand what "good" means for this type
3. Evaluate against each quality dimension
4. Score each dimension
5. Calculate overall quality score
6. Identify any issues found
7. Provide detailed feedback

## Loading Standards

Use StandardsRepository to access quality criteria:

```javascript
const standards = standardsRepository.getStandards(context.projectType)
if (standards && standards.qualityCriteria) {
  // Check against their quality criteria definitions
  const criteria = standards.qualityCriteria
  // Verify deliverable meets: completeness, correctness, consistency, etc.
  verifyAgainstCriteria(deliverable, criteria)
} else {
  // Use general quality best practices
  verifyAgainstBestPractices(deliverable)
}
```

See `.claude/lib/standards-repository.md` for interface details.

## Output Format

```json
{
  "qualityScore": 92,
  "readyToDeliver": true,
  "dimensionScores": {
    "completeness": 95,
    "correctness": 90,
    "consistency": 88,
    "performance": 85,
    "security": 90,
    "maintainability": 95
  },
  "issuesFound": [
    "list of specific issues (if any)"
  ],
  "issuesSeverity": {
    "critical": [],
    "major": [],
    "minor": ["Missing one edge case test"]
  },
  "notes": "One minor issue found - everything else excellent quality",
  "summary": "Ready to deliver. Recommend adding edge case test.",
  "recommendations": [
    "Add test for empty array edge case"
  ]
}
```
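A result in this shape can be sanity-checked before it is returned. The sketch below (a hypothetical helper, not part of the skill) verifies that `readyToDeliver` agrees with the score and the critical-issue list:

```javascript
// Hypothetical consistency check for a quality-verify result object.
// Returns a list of problems; an empty list means the report is coherent.
function validateReport(report) {
  const errors = [];
  if (typeof report.qualityScore !== "number") {
    errors.push("qualityScore must be a number");
  }
  if (!Array.isArray(report.issuesFound)) {
    errors.push("issuesFound must be an array");
  }
  const critical = (report.issuesSeverity && report.issuesSeverity.critical) || [];
  const shouldDeliver = report.qualityScore >= 85 && critical.length === 0;
  if (report.readyToDeliver !== shouldDeliver) {
    errors.push("readyToDeliver disagrees with qualityScore / critical issues");
  }
  return errors;
}
```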

## Success Criteria

### Score 85+
✓ Quality score above 85
✓ No critical issues
✓ Ready to deliver immediately

### Score 70-84
⚠ Good quality, minor issues
⚠ Should fix minor issues before delivery
⚠ Ask user: "Fix these, or deliver as-is?"

### Score <70
✗ Significant issues found
✗ Should not deliver in current state
✗ Recommend major revision

## Example Quality Checks

### Code Feature Quality Check

**Deliverable**: React dropdown component

**Checks**:
- ✓ Completeness: Has all required methods, props, event handlers
- ✓ Correctness: Code runs without errors, keyboard nav works
- ✓ Consistency: Naming consistent, formatting consistent
- ✓ Performance: No obvious inefficiencies, reasonable re-render count
- ✓ Security: Properly sanitizes user input, no XSS vulnerabilities
- ✓ Maintainability: Well-commented, clear variable names, easy to modify

**Score**: 94/100
**Issues**: None
**Recommendation**: Ready to deliver

### Documentation Quality Check

**Deliverable**: API endpoint documentation

**Checks**:
- ✓ Completeness: All endpoints documented, all parameters described
- ✓ Correctness: Information matches actual API behavior
- ✓ Consistency: Formatting consistent, examples follow same pattern
- ✓ Clarity: Easy to understand for new developers
- ⚠ Maintainability: Missing error response examples (minor)

**Score**: 82/100
**Issues**: ["Missing examples for error responses"]
**Recommendation**: Add error response examples, then deliver

## Decision Tree

```
Score 85+ → Ready to Deliver ✓
Score 70-84 → Ask about minor issues
Score <70 → Recommend major revision
```

## Notes for Implementation

- Be specific about issues found, not vague
- When recommending fixes, explain why they matter
- If user's standards are unclear, use general quality best practices
- Quality is subjective - but consistency is objective (did it follow their standards?)
- Better to be slightly harsh than let bad work through

Related Skills

data-quality-frameworks

242
from aiskillstore/marketplace

Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation rules, or establishing data contracts.

when-verifying-quality-use-verification-quality

242
from aiskillstore/marketplace

Comprehensive quality verification and validation through static analysis, dynamic testing, integration validation, and certification gates

verification-quality-assurance

242
from aiskillstore/marketplace

Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.

quick-quality-check

242
from aiskillstore/marketplace

Lightning-fast quality check using parallel command execution. Runs theater detection, linting, security scan, and basic tests in parallel for instant feedback on code quality.

move-code-quality

242
from aiskillstore/marketplace

Analyzes Move language packages against the official Move Book Code Quality Checklist. Use this skill when reviewing Move code, checking Move 2024 Edition compliance, or analyzing Move packages for best practices. Activates automatically when working with .move files or Move.toml manifests.

code-quality

242
from aiskillstore/marketplace

Expert at TypeScript strict mode, linting, formatting, code review standards. Use when checking code quality, fixing type errors, or enforcing standards.

verify-output

242
from aiskillstore/marketplace

Pattern for verifying your output matches required schema before completing. Use before writing final output to ensure validity.

verify

242
from aiskillstore/marketplace

Verifies builds, recovers from errors, and applies review fixes. Use when user mentions ビルド, build, 検証, verify, エラー復旧, error recovery, 指摘を適用, apply fixes, テスト実行, tests fail, lint errors occur, CI breaks, テスト失敗, lintエラー, 型エラー, ビルドエラー, CIが落ちた. Do NOT load for: 実装作業, レビュー, セットアップ, 新機能開発.

analyzing-test-quality

242
from aiskillstore/marketplace

Automatically activated when user asks about test quality, code coverage, test reliability, test maintainability, or wants to analyze their test suite. Provides framework-agnostic test quality analysis and improvement recommendations. Does NOT provide framework-specific patterns - use jest-testing or playwright-testing for those.

analyzing-response-quality

242
from aiskillstore/marketplace

Expert at analyzing the quality of Claude's responses and outputs. Use when evaluating response completeness, accuracy, clarity, or effectiveness. Auto-invokes during self-reflection or when quality assessment is needed.

analyzing-component-quality

242
from aiskillstore/marketplace

Expert at analyzing the quality and effectiveness of Claude Code components (agents, skills, commands, hooks). Assumes component is already technically valid. Evaluates description clarity, tool permissions, auto-invoke triggers, security, and usability to provide quality scores and improvement suggestions.

quality

242
from aiskillstore/marketplace

Code quality validation, formatting, linting, and pre-commit checks.