qe-quality-assessment

Evaluates code quality through complexity analysis, lint results, code smell detection, and test health metrics. Use when assessing deployment readiness, configuring quality gates, scoring a codebase for release, or generating quality reports with pass/fail verdicts.

Best use case

qe-quality-assessment is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using qe-quality-assessment should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/qe-quality-assessment/SKILL.md --create-dirs "https://raw.githubusercontent.com/proffesor-for-testing/agentic-qe/main/.claude/skills/qe-quality-assessment/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/qe-quality-assessment/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How qe-quality-assessment Compares

| Feature / Agent          | qe-quality-assessment | Standard Approach |
|--------------------------|------------------------|-------------------|
| Platform Support         | Not specified          | Limited / Varies  |
| Context Awareness        | High                   | Baseline          |
| Installation Complexity  | Unknown                | N/A               |

Frequently Asked Questions

What does this skill do?

Evaluates code quality through complexity analysis, lint results, code smell detection, and test health metrics. Use when assessing deployment readiness, configuring quality gates, scoring a codebase for release, or generating quality reports with pass/fail verdicts.

Where can I find the source code?

The source code is available in the proffesor-for-testing/agentic-qe repository on GitHub, under .claude/skills/qe-quality-assessment/SKILL.md.

SKILL.md Source

# QE Quality Assessment

## Purpose

Guide the use of v3's quality assessment capabilities including automated quality gates, metrics aggregation, trend analysis, and deployment readiness evaluation.

## Activation

- When evaluating code quality
- When setting up quality gates
- When assessing deployment readiness
- When tracking quality metrics
- When generating quality reports

## Quick Start

```bash
# Run quality assessment
aqe quality assess --scope src/ --gates all

# Check deployment readiness
aqe quality deploy-ready --environment production

# Generate quality report
aqe quality report --format dashboard --period 30d

# Compare quality between releases
aqe quality compare --from v1.0 --to v2.0
```

## Agent Workflow

```typescript
// Comprehensive quality assessment
Task("Assess code quality", `
  Evaluate quality for src/:
  - Code complexity (cyclomatic, cognitive)
  - Test coverage and mutation score
  - Security vulnerabilities
  - Code smells and technical debt
  - Documentation coverage
  Generate quality score and recommendations.
`, "qe-quality-analyzer")

// Deployment readiness check
Task("Check deployment readiness", `
  Evaluate if release v2.1.0 is ready for production:
  - All tests passing
  - Coverage thresholds met
  - No critical vulnerabilities
  - Performance benchmarks passed
  - Documentation updated
  Provide go/no-go recommendation.
`, "qe-deployment-advisor")
```

## Quality Dimensions

### 1. Code Quality Metrics

```typescript
await qualityAnalyzer.assessCode({
  scope: 'src/**/*.ts',
  metrics: {
    complexity: {
      cyclomatic: { max: 15, warn: 10 },
      cognitive: { max: 20, warn: 15 }
    },
    maintainability: {
      index: { min: 65 },
      duplication: { max: 3 }  // percent
    },
    documentation: {
      publicAPIs: { min: 80 },
      complexity: { min: 70 }
    }
  }
});
```

### 2. Quality Gates

```typescript
await qualityGate.evaluate({
  gates: {
    coverage: { min: 80, blocking: true },
    complexity: { max: 15, blocking: false },
    vulnerabilities: { critical: 0, high: 0, blocking: true },
    duplications: { max: 3, blocking: false },
    techDebt: { maxRatio: 5, blocking: false }
  },
  action: {
    onPass: 'proceed',
    onFail: 'block-merge',
    onWarn: 'notify'
  }
});
```
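
The gate configuration above implies how results should be folded into a single verdict: any failing gate marked `blocking` stops the merge, while non-blocking failures and warnings only trigger notification. The sketch below makes that explicit; the `GateResult` shape and `resolveGateAction` helper are illustrative assumptions, not part of the actual `qualityGate` API.

```typescript
// Hypothetical result shape -- the real evaluate() return type may differ.
interface GateResult {
  name: string;
  status: 'pass' | 'fail' | 'warn';
  blocking: boolean;
}

type GateAction = 'proceed' | 'block-merge' | 'notify';

// Any failing blocking gate blocks the merge; other failures or warnings notify.
function resolveGateAction(results: GateResult[]): GateAction {
  if (results.some(r => r.status === 'fail' && r.blocking)) return 'block-merge';
  if (results.some(r => r.status !== 'pass')) return 'notify';
  return 'proceed';
}

// Example: coverage fails its blocking gate, so the merge is blocked.
const action = resolveGateAction([
  { name: 'coverage', status: 'fail', blocking: true },
  { name: 'complexity', status: 'warn', blocking: false },
]);
console.log(action); // "block-merge"
```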

### 3. Deployment Readiness

```typescript
await deploymentAdvisor.assess({
  release: 'v2.1.0',
  criteria: {
    testing: {
      unitTests: 'all-pass',
      integrationTests: 'all-pass',
      e2eTests: 'critical-pass',
      performanceTests: 'baseline-met'
    },
    quality: {
      coverage: 80,
      noNewVulnerabilities: true,
      noRegressions: true
    },
    documentation: {
      changelog: true,
      apiDocs: true,
      releaseNotes: true
    }
  }
});
```

## Quality Score Calculation

```yaml
quality_score:
  components:
    test_coverage:
      weight: 0.25
      metrics: [statement, branch, function]

    code_quality:
      weight: 0.20
      metrics: [complexity, maintainability, duplication]

    security:
      weight: 0.25
      metrics: [vulnerabilities, dependencies]

    reliability:
      weight: 0.20
      metrics: [bug_density, flaky_tests, error_rate]

    documentation:
      weight: 0.10
      metrics: [api_coverage, readme, changelog]

  scoring:
    A: 90-100
    B: 80-89
    C: 70-79
    D: 60-69
    F: 0-59
```
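
As a worked example, the weighted score reduces to a simple weighted sum of component scores mapped onto the letter grades above. This is a minimal sketch assuming each component score is already normalized to the 0-100 range; it is not the aqe implementation.

```typescript
// Weights mirror the quality_score components defined above.
const weights: Record<string, number> = {
  test_coverage: 0.25,
  code_quality: 0.20,
  security: 0.25,
  reliability: 0.20,
  documentation: 0.10,
};

// Weighted sum of normalized (0-100) component scores.
function overallScore(components: Record<string, number>): number {
  return Object.entries(weights).reduce(
    (sum, [name, weight]) => sum + weight * (components[name] ?? 0),
    0
  );
}

// Map the 0-100 score onto the letter-grade bands from the table above.
function grade(score: number): 'A' | 'B' | 'C' | 'D' | 'F' {
  if (score >= 90) return 'A';
  if (score >= 80) return 'B';
  if (score >= 70) return 'C';
  if (score >= 60) return 'D';
  return 'F';
}

const score = overallScore({
  test_coverage: 85, code_quality: 78, security: 92, reliability: 80, documentation: 70,
});
console.log(Math.round(score), grade(score)); // 83 B
```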

## Quality Dashboard

```typescript
interface QualityDashboard {
  overallScore: number;  // 0-100
  grade: 'A' | 'B' | 'C' | 'D' | 'F';
  dimensions: {
    name: string;
    score: number;
    trend: 'improving' | 'stable' | 'declining';
    issues: Issue[];
  }[];
  gates: {
    name: string;
    status: 'pass' | 'fail' | 'warn';
    value: number;
    threshold: number;
  }[];
  trends: {
    period: string;
    scores: number[];
    alerts: Alert[];
  };
  recommendations: Recommendation[];
}
```
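
One way to use this structure is to collapse it into a short verdict for CI logs or chat notifications. The `summarize` helper below is an illustrative sketch built only on the interface fields shown above; it is not part of the skill itself.

```typescript
// Reduce a dashboard to a few human-readable lines: overall score,
// failing gates with their thresholds, and any declining dimensions.
function summarize(dashboard: QualityDashboard): string {
  const failedGates = dashboard.gates.filter(g => g.status === 'fail');
  const declining = dashboard.dimensions.filter(d => d.trend === 'declining');

  return [
    `Quality: ${dashboard.overallScore}/100 (grade ${dashboard.grade})`,
    ...failedGates.map(g => `FAIL ${g.name}: ${g.value} vs threshold ${g.threshold}`),
    ...declining.map(d => `Declining: ${d.name} (score ${d.score})`),
  ].join('\n');
}
```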

## CI/CD Integration

```yaml
# Quality gate in pipeline
quality_check:
  stage: verify
  script:
    - aqe quality assess --gates all --output report.json
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  artifacts:
    reports:
      quality: report.json
  allow_failure:
    exit_codes:
      - 1  # Warnings only
```

## Run History

After each quality assessment, append results to `run-history.json` in this skill directory:
```bash
node -e "
const fs = require('fs');
const p = '.claude/skills/qe-quality-assessment/run-history.json';
// Create the history file on the first run, then append this run's result
const h = fs.existsSync(p) ? JSON.parse(fs.readFileSync(p, 'utf8')) : { runs: [] };
h.runs.push({date: new Date().toISOString().split('T')[0], gate_result: 'PASS_OR_FAIL', failed_checks: []});
fs.writeFileSync(p, JSON.stringify(h, null, 2));
"
```
Read `run-history.json` before each run — alert if quality gate failed 3 of last 5 runs.
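
A minimal sketch of that pre-run check, assuming the run-history.json layout written by the snippet above (the exact `gate_result` values, such as 'PASS' and 'FAIL', are an assumption):

```typescript
// Warn when the quality gate failed in 3 or more of the last 5 recorded runs.
import { existsSync, readFileSync } from 'fs';

interface RunRecord { date: string; gate_result: string; failed_checks: string[]; }

const historyPath = '.claude/skills/qe-quality-assessment/run-history.json';
const runs: RunRecord[] = existsSync(historyPath)
  ? (JSON.parse(readFileSync(historyPath, 'utf8')).runs ?? [])
  : [];

const lastFive = runs.slice(-5);
const failures = lastFive.filter(r => r.gate_result === 'FAIL').length;

if (failures >= 3) {
  console.warn(`Quality gate failed ${failures} of the last ${lastFive.length} runs; investigate before re-running.`);
}
```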

## Skill Composition

- **Before assessment** → Run `/qe-coverage-analysis` and `/mutation-testing` first
- **If issues found** → Use `/test-failure-investigator` to diagnose failures
- **For PR review** → Combine with `/code-review-quality` for comprehensive review

## Gotchas

- NEVER trust agent-reported pass/fail status — 12 test failures were caught that agents claimed were passing (Nagual pattern, reward 0.92)
- Completion theater: agent hardcoded version '3.0.0' instead of reading from package.json — verify actual values in output
- Fix issues in priority waves (P0 → P1 → P2) with verification between each wave — don't fix everything in parallel
- quality-assessment domain has a 53.7% success rate — expect failures and have a fallback plan
- If HybridMemoryBackend initialization fails, run `aqe health` to diagnose, or `aqe init` to re-initialize

## Coordination

**Primary Agents**: qe-quality-analyzer, qe-deployment-advisor, qe-metrics-collector
**Coordinator**: qe-quality-coordinator
**Related Skills**: qe-coverage-analysis, security-testing

Related Skills

All of the following skills are from proffesor-for-testing/agentic-qe.

  • qe-verification-quality: Comprehensive truth scoring, code quality verification, and automatic rollback system with a 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.

  • qe-quality-metrics: Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.

  • qe-code-review-quality: Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices.

  • qe-agentic-quality-engineering: AI agents as force multipliers for quality work. Core skill for all 19 QE agents using PACT principles.

  • verification-quality: Verifies agent outputs against expected results and validates code changes pass quality checks before merge. Use when verifying agent outputs are correct, validating code changes before merge, or configuring automatic rollback for failed quality checks.

  • quality-metrics: Tracks quality metrics including defect density, test effectiveness ratio, DORA metrics, and mean time to detection. Use when establishing quality dashboards, defining KPIs, evaluating test suite effectiveness, or reporting quality trends to stakeholders.

  • code-review-quality: Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices.

  • agentic-quality-engineering: Use when orchestrating QE agents, understanding PACT principles, configuring the AQE v3 fleet, or leveraging AI agents as force multipliers for quality work.

  • qe-visual-testing-advanced: Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.

  • qe-testability-scoring: AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.

  • qe-test-reporting-analytics: Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.

  • qe-test-idea-rewriting: Transform passive 'Verify X' test descriptions into active, observable test actions. Use when test ideas lack specificity, use vague language, or fail quality validation. Converts to action-verb format for clearer, more testable descriptions.