qe-requirements-validation
Validates acceptance criteria for testability, traces requirements to test cases, and generates BDD scenarios from user stories. Use when validating acceptance criteria, building requirements traceability matrices, managing Gherkin scenarios, or ensuring complete requirements coverage before development.
Best use case
qe-requirements-validation is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using qe-requirements-validation should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/qe-requirements-validation/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Validates acceptance criteria for testability, traces requirements to test cases, and generates BDD scenarios from user stories. Use when validating acceptance criteria, building requirements traceability matrices, managing Gherkin scenarios, or ensuring complete requirements coverage before development.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# QE Requirements Validation
## Purpose
Guide the use of v3's requirements validation capabilities including acceptance criteria parsing, requirements traceability, BDD scenario generation, and coverage gap identification.
## Activation
- When validating requirements
- When tracing requirements to tests
- When generating BDD scenarios
- When assessing requirements coverage
- When reviewing acceptance criteria
## Quick Start
```bash
# Parse acceptance criteria
aqe requirements parse --source jira --project MYAPP
# Build traceability matrix
aqe requirements trace --requirements reqs/ --tests tests/
# Generate BDD scenarios
aqe requirements bdd --story US-123 --output features/
# Check requirements coverage
aqe requirements coverage --sprint current
```
## Agent Workflow
```typescript
// Requirements validation
Task("Validate acceptance criteria", `
Review acceptance criteria for sprint stories:
- Check SMART criteria (Specific, Measurable, Achievable, Relevant, Testable)
- Identify ambiguous requirements
- Flag missing edge cases
- Suggest improvements
`, "qe-acceptance-criteria")
// Traceability matrix
Task("Build traceability", `
Create requirements traceability matrix:
- Map user stories to test cases
- Identify untested requirements
- Find orphan tests (no linked requirement)
- Calculate coverage metrics
`, "qe-traceability-builder")
```
## Requirements Operations
### 1. Acceptance Criteria Validation
```typescript
await acceptanceCriteria.validate({
source: {
type: 'jira',
project: 'MYAPP',
stories: 'sprint=current'
},
validation: {
specific: true,
measurable: true,
achievable: true,
relevant: true,
testable: true
},
output: {
score: true,
issues: true,
suggestions: true
}
});
```
### 2. Traceability Matrix
```typescript
await traceabilityBuilder.build({
requirements: {
source: 'jira',
types: ['story', 'task', 'bug']
},
artifacts: {
tests: 'tests/**/*.test.ts',
code: 'src/**/*.ts',
documentation: 'docs/**/*.md'
},
output: {
matrix: true,
coverage: true,
gaps: true,
orphans: true
}
});
```
### 3. BDD Scenario Generation
```typescript
await bddGenerator.generate({
requirements: userStory,
format: 'gherkin',
scenarios: {
happyPath: true,
edgeCases: true,
errorCases: true,
dataVariations: true
},
output: {
featureFile: true,
stepDefinitions: 'skeleton'
}
});
```
### 4. Coverage Analysis
```typescript
await requirementsCoverage.analyze({
scope: 'sprint-23',
metrics: {
requirementsCovered: true,
testCasesCoverage: true,
automationCoverage: true,
riskAssessment: true
},
report: {
summary: true,
details: true,
recommendations: true
}
});
```
## Traceability Matrix
```typescript
interface TraceabilityMatrix {
requirements: {
id: string;
title: string;
type: string;
priority: string;
status: string;
linkedTests: string[];
linkedCode: string[];
coverage: 'full' | 'partial' | 'none';
}[];
tests: {
id: string;
name: string;
type: 'unit' | 'integration' | 'e2e';
linkedRequirements: string[];
automated: boolean;
}[];
coverage: {
requirementsCovered: number;
requirementsPartial: number;
requirementsUncovered: number;
orphanTests: number;
};
gaps: {
requirement: string;
missingCoverage: string[];
risk: 'high' | 'medium' | 'low';
}[];
}
```
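The `coverage` and orphan-test fields of the matrix follow directly from the requirement and test entries. As a minimal illustration (the `summarizeCoverage` helper below is hypothetical, not part of the aqe API), the summary could be derived like this:

```typescript
// Illustrative helper: derive the coverage summary from matrix entries.
// `summarizeCoverage` is a hypothetical sketch, not an aqe function.
type Coverage = 'full' | 'partial' | 'none';

interface RequirementEntry {
  id: string;
  linkedTests: string[];
  coverage: Coverage;
}

interface TestEntry {
  id: string;
  linkedRequirements: string[];
}

function summarizeCoverage(requirements: RequirementEntry[], tests: TestEntry[]) {
  return {
    requirementsCovered: requirements.filter(r => r.coverage === 'full').length,
    requirementsPartial: requirements.filter(r => r.coverage === 'partial').length,
    requirementsUncovered: requirements.filter(r => r.coverage === 'none').length,
    // Orphan tests reference no requirement at all.
    orphanTests: tests.filter(t => t.linkedRequirements.length === 0).length,
  };
}

const summary = summarizeCoverage(
  [
    { id: 'US-1', linkedTests: ['T-1'], coverage: 'full' },
    { id: 'US-2', linkedTests: ['T-2'], coverage: 'partial' },
    { id: 'US-3', linkedTests: [], coverage: 'none' },
  ],
  [
    { id: 'T-1', linkedRequirements: ['US-1'] },
    { id: 'T-2', linkedRequirements: ['US-2'] },
    { id: 'T-3', linkedRequirements: [] },
  ]
);
console.log(summary); // { requirementsCovered: 1, requirementsPartial: 1, requirementsUncovered: 1, orphanTests: 1 }
```

Orphan tests are worth surfacing separately: they usually indicate either a missing requirement link or dead test code.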
## BDD Integration
```gherkin
# Generated feature file
Feature: User Registration
As a new user
I want to create an account
So that I can access the platform
@happy-path
Scenario: Successful registration with valid details
Given I am on the registration page
When I enter valid email "user@example.com"
And I enter valid password "SecurePass123!"
And I click the register button
Then I should see a success message
And I should receive a confirmation email
@edge-case
Scenario: Registration with existing email
Given a user exists with email "existing@example.com"
When I try to register with email "existing@example.com"
Then I should see an error "Email already registered"
```
## Requirements Quality
```yaml
quality_checks:
acceptance_criteria:
has_given_when_then: preferred
is_testable: required
is_measurable: required
no_ambiguity: required
user_story:
follows_template: "As a <role>, I want <feature>, so that <benefit>"
has_acceptance_criteria: required
estimated: preferred
completeness:
edge_cases_identified: required
error_scenarios_covered: required
non_functional_considered: preferred
```
## Sprint Integration
```typescript
await requirementsValidator.sprintReview({
sprint: 'current',
checks: {
storiesComplete: true,
criteriaValidated: true,
testsLinked: true,
coverageAdequate: true
},
gates: {
minCoverage: 80,
maxUntested: 2,
requireDemo: true
}
});
```
## Coordination
**Primary Agents**: qe-acceptance-criteria, qe-traceability-builder, qe-bdd-specialist
**Coordinator**: qe-requirements-coordinator
**Related Skills**: qe-test-generation, qe-quality-assessment
Related Skills
qe-pentest-validation
Orchestrate security finding validation through graduated exploitation. 4-phase pipeline: recon (SAST/DAST), analysis (code review), validation (exploit proof), report (No Exploit, No Report gate). Eliminates false positives by proving exploitability.
validation-pipeline
Runs multi-stage validation gates with per-step scoring, pass/fail verdicts, and aggregate quality reports. Use when validating requirements, code, or artifacts through structured gate enforcement before merge or release.
pentest-validation
Use when validating security findings from SAST/DAST scans, proving exploitability of reported vulnerabilities, eliminating false positives, or running the 4-phase pentest pipeline (recon, analysis, validation, report).
qe-visual-testing-advanced
Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.
qe-verification-quality
Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.
qe-testability-scoring
AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.
qe-test-reporting-analytics
Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.
qe-test-idea-rewriting
Transform passive 'Verify X' test descriptions into active, observable test actions. Use when test ideas lack specificity, use vague language, or fail quality validation. Converts to action-verb format for clearer, more testable descriptions.
qe-test-environment-management
Test environment provisioning, infrastructure as code for testing, Docker/Kubernetes for test environments, service virtualization, and cost optimization. Use when managing test infrastructure, ensuring environment parity, or optimizing testing costs.
qe-test-design-techniques
Systematic test design with boundary value analysis, equivalence partitioning, decision tables, state transition testing, and combinatorial testing. Use when designing comprehensive test cases, reducing redundant tests, or ensuring systematic coverage.
qe-test-data-management
Strategic test data generation, management, and privacy compliance. Use when creating test data, handling PII, ensuring GDPR/CCPA compliance, or scaling data generation for realistic testing scenarios.
qe-test-automation-strategy
Design and implement effective test automation with proper pyramid, patterns, and CI/CD integration. Use when building automation frameworks or improving test efficiency.