qe-pr-review

Scope-aware GitHub PR review with user-friendly tone and trust tier validation

Best use case

qe-pr-review is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using qe-pr-review should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/qe-pr-review/SKILL.md --create-dirs "https://raw.githubusercontent.com/proffesor-for-testing/agentic-qe/main/.kiro/skills/qe-pr-review/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/qe-pr-review/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How qe-pr-review Compares

| Feature | qe-pr-review | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Scope-aware GitHub PR review with user-friendly tone and trust tier validation

Where can I find the source code?

You can find the source code on GitHub in the proffesor-for-testing/agentic-qe repository.

SKILL.md Source

# PR Review Workflow

Review pull requests with correct AQE scope boundaries, clear communication, and actionable feedback.

## Arguments

- `<pr-number>` — GitHub PR number to review. If omitted, prompt the user.

## Steps

### 1. Read the Full Diff
```bash
gh pr diff <pr-number>
gh pr view <pr-number>
```
Read the complete diff and PR description. Do not skim — read every changed file.

### 2. Scope Check
- Only analyze AQE/QE skills (NOT Claude Flow platform skills)
- Platform skills to EXCLUDE: v3-*, flow-nexus-*, agentdb-*, reasoningbank-*, swarm-*, github-*, hive-mind-advanced, hooks-automation, iterative-loop, stream-chain, skill-builder, sparc-methodology, pair-programming, release, debug-loop, aqe-v2-v3-migration
- If the PR touches skills, verify the count/scope matches expectations (~78 AQE skills)
- Flag any platform skill changes that may have leaked into an AQE-focused PR
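
The scope filter above can be sketched as a small shell helper. This is a hypothetical sketch: the `skills/` path segment and the use of `gh pr diff --name-only` as the input source are assumptions about the repository layout, and the exclusion pattern simply mirrors the prefix list above.

```shell
# Hypothetical helper: extract skill names from changed file paths and
# flag any that match the platform-skill exclusion list above.
EXCLUDE='^(v3-|flow-nexus-|agentdb-|reasoningbank-|swarm-|github-)|^(hive-mind-advanced|hooks-automation|iterative-loop|stream-chain|skill-builder|sparc-methodology|pair-programming|release|debug-loop|aqe-v2-v3-migration)$'

changed_skills() {
  # stdin: one changed file path per line, e.g. from `gh pr diff <pr-number> --name-only`
  grep -oE 'skills/[^/]+' | sed 's|^skills/||' | sort -u
}

flag_platform_skills() {
  # stdin: skill names; prints only those on the exclusion list
  grep -E "$EXCLUDE" || true
}
```

Any name printed by `flag_platform_skills` is a candidate for the "platform skill leaked into an AQE PR" flag.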

### 3. Summarize Changes
Write a user-friendly summary of what changed and why:
- Focus on outcomes, not implementation details
- Avoid overly technical jargon
- Keep it to 3-5 bullet points

### 4. Trust Tier Validation
For any skill changes, validate trust_tier assignments:
- **tier 3** = has eval infrastructure (evals/, schemas/, scripts/)
- **tier 2** = tested but no eval framework
- **tier 1** = untested
- Flag inconsistencies (e.g., a skill with evals at tier 2 should be tier 3)
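
A minimal consistency check for these rules could look like the sketch below. The directory names for tier 3 follow the convention above; treating a `tests/` directory as the tier 2 marker is an assumption for illustration.

```shell
# Hypothetical helper: infer the expected trust tier from what a skill
# directory actually contains, so it can be compared with the declared tier.
expected_tier() {
  dir="$1"
  if [ -d "$dir/evals" ] || [ -d "$dir/schemas" ] || [ -d "$dir/scripts" ]; then
    echo 3  # has eval infrastructure
  elif [ -d "$dir/tests" ]; then
    echo 2  # tested, but no eval framework
  else
    echo 1  # untested
  fi
}
```

If `expected_tier` disagrees with the tier declared in the skill's metadata, flag it in the review (e.g. a skill with `evals/` declared at tier 2 should be tier 3).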

### 5. Code Quality Review
Check for:
- Hardcoded version strings
- Production safety concerns (adapter changes, breaking changes)
- Missing test coverage for new code
- Security issues (exposed secrets, injection risks)
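
Some of these checks can be approximated with greps over the added lines of the diff. The patterns below are illustrative heuristics only, not an exhaustive scanner; the version-string and key-name patterns are assumptions about what "hardcoded" looks like in this codebase.

```shell
# Hypothetical helper: scan unified-diff text (stdin) for added lines that
# look like hardcoded version strings, API keys, or AWS access key IDs.
diff_findings() {
  grep -E '^\+' \
    | grep -E 'version *[:=] *"[0-9]+\.[0-9]+|api[_-]?key *[:=]|AKIA[0-9A-Z]{16}' \
    || true
}
```

Typical usage would be piping the diff through it, e.g. `gh pr diff <pr-number> | diff_findings`; anything it prints still needs human judgment before becoming a review comment.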

### 6. Post Review
```bash
# gh requires exactly one event flag: --approve, --request-changes, or --comment
gh pr review <pr-number> --comment --body "review comments"
```

## Communication Rules

- Keep tone constructive and actionable
- Be outcome-focused: what should the author do, not what's wrong
- Group related comments together instead of posting many small ones
- If approving with minor suggestions, use APPROVE with comments, not REQUEST_CHANGES

Related Skills

qe-sherlock-review

from proffesor-for-testing/agentic-qe

Evidence-based investigative code review using deductive reasoning to determine what actually happened versus what was claimed. Use when verifying implementation claims, investigating bugs, validating fixes, or conducting root cause analysis. Elementary approach to finding truth through systematic observation.

qe-github-code-review

from proffesor-for-testing/agentic-qe

Comprehensive GitHub code review with AI-powered swarm coordination

qe-code-review-quality

from proffesor-for-testing/agentic-qe

Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices.

qe-brutal-honesty-review

from proffesor-for-testing/agentic-qe

Unvarnished technical criticism combining Linus Torvalds' precision, Gordon Ramsay's standards, and James Bach's BS-detection. Use when code/tests need harsh reality checks, certification schemes smell fishy, or technical decisions lack rigor. No sugar-coating, just surgical truth about what's broken and why.

sherlock-review

from proffesor-for-testing/agentic-qe

Evidence-based investigative code review using deductive reasoning to determine what actually happened versus what was claimed. Use when verifying implementation claims, investigating bugs, validating fixes, or conducting root cause analysis. Elementary approach to finding truth through systematic observation.

github-code-review

from proffesor-for-testing/agentic-qe

Comprehensive GitHub code review with AI-powered swarm coordination

code-review-quality

from proffesor-for-testing/agentic-qe

Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices.

brutal-honesty-review

from proffesor-for-testing/agentic-qe

Unvarnished technical criticism combining Linus Torvalds' precision, Gordon Ramsay's standards, and James Bach's BS-detection. Use when code/tests need harsh reality checks, certification schemes smell fishy, or technical decisions lack rigor. No sugar-coating, just surgical truth about what's broken and why.

qe-visual-testing-advanced

from proffesor-for-testing/agentic-qe

Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.

qe-verification-quality

from proffesor-for-testing/agentic-qe

Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.

qe-testability-scoring

from proffesor-for-testing/agentic-qe

AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.