qe-learning-optimization
Optimizes QE agent performance through transfer learning, hyperparameter tuning, and pattern distillation across test domains. Use when improving agent accuracy, applying learned patterns to new projects, tuning quality thresholds, or implementing continuous improvement loops for AI-powered testing.
Best use case
qe-learning-optimization is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using qe-learning-optimization should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/qe-learning-optimization/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How qe-learning-optimization Compares
| Feature / Agent | qe-learning-optimization | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Optimizes QE agent performance through transfer learning, hyperparameter tuning, and pattern distillation across test domains. Use when improving agent accuracy, applying learned patterns to new projects, tuning quality thresholds, or implementing continuous improvement loops for AI-powered testing.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# QE Learning Optimization
## Purpose
Guide the use of v3's learning optimization capabilities including transfer learning between agents, hyperparameter tuning, A/B testing, and continuous performance improvement.
## Activation
- When optimizing agent performance
- When transferring knowledge between agents
- When tuning learning parameters
- When running A/B tests
- When analyzing learning metrics
## Quick Start
```bash
# Transfer knowledge between agents
aqe learn transfer --from jest-generator --to vitest-generator
# Tune hyperparameters
aqe learn tune --agent defect-predictor --metric accuracy
# Run A/B test
aqe learn ab-test --hypothesis "new-algorithm" --duration 7d
# View learning metrics
aqe learn metrics --agent test-generator --period 30d
```
## Agent Workflow
```typescript
// Transfer learning
Task("Transfer test patterns", `
Transfer learned patterns from Jest test generator to Vitest:
- Map framework-specific syntax
- Adapt assertion styles
- Preserve test structure patterns
- Validate transfer accuracy
`, "qe-transfer-specialist")
// Metrics optimization
Task("Optimize prediction accuracy", `
Tune defect-predictor agent:
- Analyze current performance metrics
- Run Bayesian hyperparameter search
- Validate improvements on holdout set
- Deploy if accuracy improves >5%
`, "qe-metrics-optimizer")
```
## Learning Operations
### 1. Transfer Learning
```typescript
await transferSpecialist.transfer({
source: {
agent: 'qe-jest-generator',
knowledge: ['patterns', 'heuristics', 'optimizations']
},
target: {
agent: 'qe-vitest-generator',
adaptations: ['framework-syntax', 'api-differences']
},
strategy: 'fine-tuning',
validation: {
testSet: 'validation-samples',
minAccuracy: 0.9
}
});
```
### 2. Hyperparameter Tuning
```typescript
await metricsOptimizer.tune({
agent: 'defect-predictor',
parameters: {
learningRate: { min: 0.001, max: 0.1, type: 'log' },
batchSize: { values: [16, 32, 64, 128] },
patternThreshold: { min: 0.5, max: 0.95 }
},
optimization: {
method: 'bayesian',
objective: 'accuracy',
trials: 50,
parallelism: 4
}
});
```
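The `tune()` call above declares log-scale, linear, and categorical parameter ranges. As a minimal sketch of how such a search space might be sampled (all names here are hypothetical, not part of the `aqe` API), log-scale parameters are drawn uniformly in log space so small learning rates are explored as often as large ones:

```typescript
// Hypothetical sketch: sampling one trial from the parameter space
// declared in the tune() call above.
type ParamRange =
  | { min: number; max: number; type?: 'log' }
  | { values: number[] };

function sample(p: ParamRange): number {
  if ('values' in p) {
    // Categorical: pick one of the listed values.
    return p.values[Math.floor(Math.random() * p.values.length)];
  }
  if (p.type === 'log') {
    // Log-scale: uniform in log space, then exponentiate.
    const logMin = Math.log(p.min);
    const logMax = Math.log(p.max);
    return Math.exp(logMin + Math.random() * (logMax - logMin));
  }
  // Linear: uniform between min and max.
  return p.min + Math.random() * (p.max - p.min);
}

function sampleTrial(space: Record<string, ParamRange>): Record<string, number> {
  const trial: Record<string, number> = {};
  for (const [name, range] of Object.entries(space)) trial[name] = sample(range);
  return trial;
}
```

A Bayesian optimizer would replace the uniform draws with a surrogate model over past trials, but the range handling is the same.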
### 3. A/B Testing
```typescript
await metricsOptimizer.abTest({
hypothesis: 'ML pattern matching improves test quality',
variants: {
control: { algorithm: 'rule-based' },
treatment: { algorithm: 'ml-enhanced' }
},
metrics: ['test-quality-score', 'generation-time'],
traffic: {
split: 50,
minSampleSize: 1000
},
duration: '7d',
significance: 0.05
});
```
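The `significance: 0.05` setting corresponds to a standard hypothesis test on the two variants. A minimal sketch of the underlying statistic (function names are illustrative, not part of the skill's API) is a two-proportion z-test on success rates:

```typescript
// Hypothetical sketch of the significance check behind abTest():
// a two-proportion z-test comparing control vs. treatment success rates.
function twoProportionZ(
  successA: number, totalA: number,
  successB: number, totalB: number
): number {
  const pA = successA / totalA;
  const pB = successB / totalB;
  const pooled = (successA + successB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// |z| > 1.96 corresponds to p < 0.05 two-tailed, matching significance: 0.05.
function isSignificant(z: number): boolean {
  return Math.abs(z) > 1.96;
}
```

This is also why `minSampleSize: 1000` matters: with too few samples the standard error dominates and even real improvements fail the threshold.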
### 4. Feedback Loop
```typescript
await metricsOptimizer.feedbackLoop({
agent: 'test-generator',
feedback: {
sources: ['user-corrections', 'test-results', 'code-reviews'],
aggregation: 'weighted',
frequency: 'real-time'
},
learning: {
strategy: 'incremental',
validationSplit: 0.2,
earlyStoppingPatience: 5
}
});
```
## Learning Metrics Dashboard
```typescript
interface LearningDashboard {
agent: string;
period: DateRange;
performance: {
current: MetricValues;
trend: 'improving' | 'stable' | 'declining';
percentile: number;
};
learning: {
samplesProcessed: number;
patternsLearned: number;
improvementRate: number;
};
experiments: {
active: Experiment[];
completed: ExperimentResult[];
};
recommendations: {
action: string;
expectedImpact: number;
confidence: number;
}[];
}
```
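The dashboard's `trend` field takes one of three literal values. One plausible way to derive it, sketched below under the assumption that higher metric values are better (the function and tolerance are illustrative, not prescribed by the interface), is to compare the mean of the most recent window against the preceding one:

```typescript
// Hypothetical sketch: derive the dashboard's trend field by comparing
// the mean of the latest metric window against the prior window,
// with a tolerance band for 'stable'.
function classifyTrend(
  values: number[],
  window = 3,
  tolerance = 0.02
): 'improving' | 'stable' | 'declining' {
  const recent = values.slice(-window);
  const prior = values.slice(-2 * window, -window);
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const delta = mean(recent) - mean(prior);
  if (delta > tolerance) return 'improving';
  if (delta < -tolerance) return 'declining';
  return 'stable';
}
```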
## Cross-Framework Transfer
```yaml
transfer_mappings:
jest_to_vitest:
syntax:
"describe": "describe"
"it": "it"
"expect": "expect"
"jest.mock": "vi.mock"
"jest.fn": "vi.fn"
patterns:
- mock-module
- async-testing
- snapshot-testing
mocha_to_jest:
syntax:
"describe": "describe"
"it": "it"
"chai.expect": "expect"
"sinon.stub": "jest.fn"
adaptations:
- assertion-style
- hook-naming
```
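The `jest_to_vitest` syntax table above can be applied mechanically. A minimal sketch, assuming a plain token rewrite is sufficient (a real transfer would work on the AST; `rewriteSyntax` is a hypothetical helper):

```typescript
// Hypothetical sketch: apply the jest_to_vitest syntax mapping above
// as a plain text rewrite over a test file's source.
// describe / it / expect are identical in both frameworks, so only
// the differing tokens are listed.
const jestToVitest: Record<string, string> = {
  'jest.mock': 'vi.mock',
  'jest.fn': 'vi.fn',
};

function rewriteSyntax(source: string, mapping: Record<string, string>): string {
  let out = source;
  for (const [from, to] of Object.entries(mapping)) {
    // Escape regex metacharacters (the '.' in 'jest.mock') before replacing.
    const escaped = from.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    out = out.replace(new RegExp(escaped, 'g'), to);
  }
  return out;
}
```

The listed pattern adaptations (mock-module, async-testing, snapshot-testing) are where a pure token rewrite falls short and the transfer validation step earns its keep.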
## Continuous Improvement
```typescript
await learningOptimizer.continuousImprovement({
agents: ['test-generator', 'coverage-analyzer', 'defect-predictor'],
schedule: {
metricCollection: 'hourly',
tuning: 'weekly',
majorUpdates: 'monthly'
},
thresholds: {
degradationAlert: 5, // percent
improvementTarget: 2, // percent per week
},
automation: {
autoTune: true,
autoRollback: true,
requireApproval: ['major-changes']
}
});
```
## Pattern Learning
```typescript
await patternLearner.learn({
sources: {
codeExamples: 'examples/**/*.ts',
testExamples: 'tests/**/*.test.ts',
userFeedback: 'feedback/*.json'
},
extraction: {
syntacticPatterns: true,
semanticPatterns: true,
contextualPatterns: true
},
storage: {
vectorDB: 'agentdb',
versioning: true
}
});
```
## Coordination
**Primary Agents**: qe-transfer-specialist, qe-metrics-optimizer, qe-pattern-learner
**Coordinator**: qe-learning-coordinator
**Related Skills**: qe-test-generation, qe-defect-intelligence
Related Skills
V3 Performance Optimization
Achieve aggressive v3 performance targets: 2.49x-7.47x Flash Attention speedup, 150x-12,500x search improvements, 50-75% memory reduction. Comprehensive benchmarking and optimization suite.
V3 MCP Optimization
MCP server optimization and transport layer enhancement for claude-flow v3. Implements connection pooling, load balancing, tool registry optimization, and performance monitoring for sub-100ms response times.
AgentDB Performance Optimization
Optimize AgentDB performance with quantization (4-32x memory reduction), HNSW indexing (150x faster search), caching, and batch operations. Use when optimizing memory usage, improving search speed, or scaling to millions of vectors.
AgentDB Learning Plugins
Create and train AI learning plugins with AgentDB's 9 reinforcement learning algorithms. Includes Decision Transformer, Q-Learning, SARSA, Actor-Critic, and more. Use when building self-learning agents, implementing RL, or optimizing agent behavior through experience.
qe-visual-testing-advanced
Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.
qe-verification-quality
Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.
qe-testability-scoring
AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.
qe-test-reporting-analytics
Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.
qe-test-idea-rewriting
Transform passive 'Verify X' test descriptions into active, observable test actions. Use when test ideas lack specificity, use vague language, or fail quality validation. Converts to action-verb format for clearer, more testable descriptions.
qe-test-environment-management
Test environment provisioning, infrastructure as code for testing, Docker/Kubernetes for test environments, service virtualization, and cost optimization. Use when managing test infrastructure, ensuring environment parity, or optimizing testing costs.
qe-test-design-techniques
Systematic test design with boundary value analysis, equivalence partitioning, decision tables, state transition testing, and combinatorial testing. Use when designing comprehensive test cases, reducing redundant tests, or ensuring systematic coverage.
qe-test-data-management
Strategic test data generation, management, and privacy compliance. Use when creating test data, handling PII, ensuring GDPR/CCPA compliance, or scaling data generation for realistic testing scenarios.