perf-theory-tester
Use when running controlled perf experiments to validate hypotheses.
Best use case
perf-theory-tester is best used when you need a repeatable AI agent workflow instead of a one-off prompt: in this case, running controlled performance experiments to validate hypotheses.
Teams using perf-theory-tester should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/perf-theory-tester/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
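The manual steps above can be sketched in Python. The destination path follows the convention quoted above; `install_skill` is a hypothetical helper name, and the download step is omitted because the repository URL is not given on this page.

```python
from pathlib import Path

def install_skill(downloaded: Path, project_root: Path) -> Path:
    """Copy a downloaded SKILL.md to where the agent auto-discovers it.

    `downloaded` is assumed to be a local copy of SKILL.md fetched from
    GitHub beforehand.
    """
    dest = project_root / ".claude" / "skills" / "perf-theory-tester" / "SKILL.md"
    dest.parent.mkdir(parents=True, exist_ok=True)  # create .claude/skills/... if missing
    dest.write_bytes(downloaded.read_bytes())
    return dest
```

After this, restarting the agent should pick the skill up from `.claude/skills/perf-theory-tester/SKILL.md`.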
How perf-theory-tester Compares
| Feature / Agent | perf-theory-tester | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
perf-theory-tester runs controlled performance experiments to validate hypotheses: one change per experiment, repeated validation passes, and a revert to baseline between experiments.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# perf-theory-tester

Test hypotheses using controlled experiments. Follow `docs/perf-requirements.md` as the canonical contract.

## Required Steps

1. Confirm baseline is clean.
2. Apply a single change tied to the hypothesis.
3. Run 2+ validation passes.
4. Revert to baseline before the next experiment.

## Output Format

```
hypothesis: <id>
change: <summary>
delta: <metrics>
verdict: accept|reject|inconclusive
evidence:
- command: <benchmark command>
- files: <changed files>
```

## Constraints

- One change per experiment.
- No parallel benchmarks.
- Record evidence for each run.
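The output contract above lends itself to a quick structural check. Below is a minimal sketch, not part of the skill itself, that validates an experiment record against the required fields and the allowed verdict values:

```python
# Required field prefixes and allowed verdicts, taken from the
# Output Format section of SKILL.md.
REQUIRED = ["hypothesis:", "change:", "delta:", "verdict:", "evidence:"]
VERDICTS = {"accept", "reject", "inconclusive"}

def validate(record: str) -> bool:
    """Return True if every required field is present and the verdict is valid."""
    lines = [line.strip() for line in record.strip().splitlines()]
    # Every required field must start some line of the record.
    if not all(any(line.startswith(key) for line in lines) for key in REQUIRED):
        return False
    verdict_line = next(line for line in lines if line.startswith("verdict:"))
    return verdict_line.split(":", 1)[1].strip() in VERDICTS
```

Such a check could run after each experiment to catch records that drop a field or use a verdict outside accept/reject/inconclusive.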
Related Skills
angular-performance
Angular performance: NgOptimizedImage, @defer, lazy loading, SSR. Trigger: When optimizing Angular app performance, images, or lazy loading.
perf-web-optimization
Optimize web performance: bundle size, images, caching, lazy loading, and overall page speed. Use when site is slow, reducing bundle size, fixing layout shifts, improving Time to Interactive, or optimizing for Lighthouse scores. Triggers on: web performance, bundle size, page speed, slow site, lazy loading. Do NOT use for Core Web Vitals-specific fixes (use core-web-vitals), running Lighthouse audits (use perf-lighthouse), or Astro-specific optimization (use perf-astro).
perf-lighthouse
Run Lighthouse audits locally via CLI or Node API, parse and interpret reports, and set performance budgets. Use when measuring site performance, understanding Lighthouse scores, setting up budgets, or integrating audits into CI. Triggers on: lighthouse, run lighthouse, lighthouse score, performance audit, performance budget. Do NOT use for fixing specific performance issues (use perf-web-optimization or core-web-vitals) or Astro-specific optimization (use perf-astro).
perf-astro
Astro-specific performance optimizations for 95+ Lighthouse scores. Covers critical CSS inlining, compression, font loading, and LCP optimization. Use when optimizing Astro site performance, improving Astro Lighthouse scores, or configuring astro-critters. Do NOT use for non-Astro sites (use perf-web-optimization or core-web-vitals) or running Lighthouse audits (use perf-lighthouse).
high-perf-browser
Optimize web performance through network protocols, resource loading, and browser rendering internals. Use when the user mentions "page load speed", "Core Web Vitals", "HTTP/2", "resource hints", "network latency", or "render blocking". Covers TCP/TLS optimization, caching strategies, WebSocket/SSE, and protocol selection. For UI visual performance, see refactoring-ui. For font loading, see web-typography.
perf-profiler
Use when profiling CPU/memory hot paths, generating flame graphs, or capturing JFR/perf evidence.
perf-investigation-logger
Use when appending structured perf investigation notes and evidence.
perf-code-paths
Use when mapping code paths, entrypoints, and likely hot files before profiling.
perf-analyzer
Use when synthesizing perf findings into evidence-backed recommendations and decisions.
perf-theory-gatherer
Use when generating performance hypotheses backed by git history and code evidence.
perf-benchmarker
Use when running performance benchmarks, establishing baselines, or validating regressions with sequential runs. Enforces 60s minimum runs (30s only for binary search) and no parallel benchmarks.
perf-baseline-manager
Use when managing perf baselines, consolidating results, or comparing versions. Ensures one baseline JSON per version.