# when-analyzing-performance-use-performance-analysis

Comprehensive performance analysis, bottleneck detection, and optimization recommendations for Claude Flow swarms.

## Best use case

when-analyzing-performance-use-performance-analysis is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams running multi-agent Claude Flow swarms that need performance analysis, bottleneck detection, and optimization recommendations.

Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
## Practical example

### Example input

Use the "when-analyzing-performance-use-performance-analysis" skill to help with this workflow task. Context: comprehensive performance analysis, bottleneck detection, and optimization recommendations for Claude Flow swarms.

### Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
## When to use this skill

- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

## When not to use this skill

- Do not use it when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
## Installation

### Claude Code / Cursor / Codex (manual installation)

- Download SKILL.md from GitHub
- Place it at `.claude/skills/when-analyzing-performance-use-performance-analysis/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
## How when-analyzing-performance-use-performance-analysis Compares
| Feature / Agent | when-analyzing-performance-use-performance-analysis | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions

### What does this skill do?

It provides comprehensive performance analysis, bottleneck detection, and optimization recommendations for Claude Flow swarms.

### Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Performance Analysis SOP
## Overview
Comprehensive performance analysis for Claude Flow swarms including bottleneck detection, profiling, benchmarking, and actionable optimization recommendations.
## Agents & Responsibilities
### performance-analyzer
**Role:** Analyze system performance and identify issues
**Responsibilities:**
- Collect performance metrics
- Analyze resource utilization
- Identify bottlenecks
- Generate analysis reports
### performance-benchmarker
**Role:** Run performance benchmarks and comparisons
**Responsibilities:**
- Execute benchmark suites
- Compare performance across configurations
- Establish performance baselines
- Validate improvements
### perf-analyzer
**Role:** Deep performance profiling and optimization
**Responsibilities:**
- Profile code execution
- Analyze memory usage
- Optimize critical paths
- Recommend improvements
## Phase 1: Establish Baseline
### Objective
Measure current performance and establish baseline metrics.
### Scripts
```bash
# Collect baseline metrics
npx claude-flow@alpha performance baseline \
--duration 300 \
--interval 5 \
--output baseline-metrics.json
# Run benchmark suite
npx claude-flow@alpha benchmark run \
--type swarm \
--iterations 10 \
--output benchmark-results.json
# Profile system resources
npx claude-flow@alpha performance profile \
--include-cpu \
--include-memory \
--include-network \
--output resource-profile.json
# Collect agent metrics
npx claude-flow@alpha agent metrics --all --format json > agent-metrics.json
# Store baseline
npx claude-flow@alpha memory store \
--key "performance/baseline" \
--file baseline-metrics.json
# Generate baseline report
npx claude-flow@alpha performance report \
--type baseline \
--metrics baseline-metrics.json \
--output baseline-report.md
```
### Key Baseline Metrics
**Swarm-Level:**
- Total throughput (tasks/min)
- Average latency (ms)
- Resource utilization (%)
- Error rate (%)
- Coordination overhead (ms)
**Agent-Level:**
- Task completion rate
- Response time (ms)
- CPU usage (%)
- Memory usage (MB)
- Idle time (%)
**System-Level:**
- Total CPU usage (%)
- Total memory usage (MB)
- Network bandwidth (MB/s)
- Disk I/O (MB/s)
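As an illustration of how two of the swarm-level figures can be derived, the sketch below computes throughput and error rate from raw task counts. The counts and window length are hypothetical stand-ins for values a real baseline run would report in `baseline-metrics.json`.

```shell
# Hypothetical raw counts for one 5-minute measurement window;
# real values would come from baseline-metrics.json.
TASKS_COMPLETED=726
TASKS_FAILED=9
WINDOW_SECONDS=300

# Throughput in tasks/min: completed tasks divided by window length in minutes.
THROUGHPUT=$(awk -v t="$TASKS_COMPLETED" -v s="$WINDOW_SECONDS" \
  'BEGIN { printf "%.1f", t / (s / 60) }')

# Error rate: failed tasks over all attempted tasks.
ERROR_RATE=$(awk -v f="$TASKS_FAILED" -v t="$TASKS_COMPLETED" \
  'BEGIN { printf "%.4f", f / (f + t) }')

echo "Throughput: $THROUGHPUT tasks/min"
echo "Error rate: $ERROR_RATE"
```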
### Memory Patterns
```bash
# Store performance baseline
npx claude-flow@alpha memory store \
--key "performance/baseline/timestamp" \
--value "$(date -Iseconds)"
npx claude-flow@alpha memory store \
--key "performance/baseline/metrics" \
--value '{
"throughput": 145.2,
"latency": 38.5,
"utilization": 0.78,
"errorRate": 0.012,
"timestamp": "'$(date -Iseconds)'"
}'
```
## Phase 2: Profile System
### Objective
Deep profiling of system components to identify performance characteristics.
### Scripts
```bash
# Profile swarm execution
npx claude-flow@alpha performance profile-swarm \
--duration 300 \
--sample-rate 100 \
--output swarm-profile.json
# Profile individual agents
mkdir -p profiles
for AGENT in $(npx claude-flow@alpha agent list --format json | jq -r '.[].id'); do
npx claude-flow@alpha performance profile-agent \
--agent-id "$AGENT" \
--duration 60 \
--output "profiles/agent-$AGENT.json"
done
# Profile memory usage
npx claude-flow@alpha memory profile \
--show-hotspots \
--show-leaks \
--output memory-profile.json
# Profile network communication
npx claude-flow@alpha performance profile-network \
--show-latency \
--show-bandwidth \
--output network-profile.json
# Generate flamegraph
npx claude-flow@alpha performance flamegraph \
--input swarm-profile.json \
--output flamegraph.svg
# Analyze CPU hotspots
npx claude-flow@alpha performance hotspots \
--type cpu \
--threshold 5 \
--output cpu-hotspots.json
```
### Profiling Analysis
```bash
# Identify slow functions
SLOW_FUNCTIONS=$(jq '[.profile[] | select(.time > 100)]' swarm-profile.json)
# Identify memory hogs
MEMORY_HOGS=$(jq '[.memory[] | select(.usage > 100)]' memory-profile.json)
# Identify network bottlenecks
NETWORK_ISSUES=$(jq '[.network[] | select(.latency > 50)]' network-profile.json)
echo "Slow Functions: $(echo "$SLOW_FUNCTIONS" | jq length)"
echo "Memory Hogs: $(echo "$MEMORY_HOGS" | jq length)"
echo "Network Issues: $(echo "$NETWORK_ISSUES" | jq length)"
```
## Phase 3: Analyze Issues
### Objective
Identify and categorize performance issues and bottlenecks.
### Scripts
```bash
# Run comprehensive analysis
npx claude-flow@alpha performance analyze \
--input swarm-profile.json \
--detect-bottlenecks \
--detect-memory-leaks \
--detect-deadlocks \
--output analysis-results.json
# Identify bottlenecks by type
npx claude-flow@alpha performance bottlenecks \
--categorize \
--priority-order \
--output bottleneck-report.json
# Analyze agent performance
npx claude-flow@alpha agent analyze-performance \
--all \
--identify-underperformers \
--output agent-analysis.json
# Analyze coordination overhead
npx claude-flow@alpha performance coordination-overhead \
--calculate \
--breakdown \
--output coordination-analysis.json
# Root cause analysis
npx claude-flow@alpha performance root-cause \
--issue "high-latency" \
--trace-back \
--output root-cause-analysis.json
```
### Issue Classification
**Critical Issues:**
- Deadlocks
- Memory leaks
- Complete performance degradation
- System instability
**High Priority:**
- Bottlenecks causing >30% slowdown
- High error rates (>5%)
- Resource exhaustion
- Coordination failures
**Medium Priority:**
- Moderate slowdowns (10-30%)
- Suboptimal resource utilization
- Inefficient algorithms
- Poor load balancing
**Low Priority:**
- Minor optimizations (<10% impact)
- Code style issues
- Documentation gaps
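The slowdown-based part of this classification can be sketched as a small helper. Note this only covers the percentage tiers; deadlocks, memory leaks, and instability are critical regardless of measured slowdown, and the function itself is illustrative, not part of claude-flow.

```shell
# classify_slowdown maps a measured slowdown percentage (integer)
# to the priority tiers above: >30% high, 10-30% medium, <10% low.
classify_slowdown() {
  local pct=$1
  if [ "$pct" -gt 30 ]; then
    echo "high"
  elif [ "$pct" -ge 10 ]; then
    echo "medium"
  else
    echo "low"
  fi
}

classify_slowdown 45   # bottleneck causing >30% slowdown
classify_slowdown 15   # moderate slowdown
classify_slowdown 4    # minor optimization opportunity
```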
### Memory Patterns
```bash
# Store analysis results
npx claude-flow@alpha memory store \
--key "performance/analysis/issues" \
--value '{
"critical": 0,
"high": 3,
"medium": 8,
"low": 12,
"timestamp": "'$(date -Iseconds)'"
}'
# Store bottleneck information
npx claude-flow@alpha memory store \
--key "performance/analysis/bottlenecks" \
--file bottleneck-report.json
```
## Phase 4: Optimize Performance
### Objective
Apply optimizations based on analysis and measure improvements.
### Scripts
```bash
# Get optimization recommendations
npx claude-flow@alpha performance recommend \
--based-on analysis-results.json \
--prioritize \
--output recommendations.json
# Apply automatic optimizations
npx claude-flow@alpha performance optimize \
--recommendations recommendations.json \
--auto-apply safe-optimizations
# Manual optimizations
# 1. Fix identified bottlenecks
# 2. Optimize hot paths
# 3. Reduce coordination overhead
# 4. Improve resource utilization
# Optimize swarm topology
npx claude-flow@alpha swarm optimize-topology \
--based-on analysis-results.json
# Optimize agent allocation
npx claude-flow@alpha agent rebalance \
--strategy performance-optimized
# Optimize memory usage
npx claude-flow@alpha memory optimize \
--reduce-footprint \
--clear-unused
# Apply neural optimizations
npx claude-flow@alpha neural train \
--pattern convergent \
--iterations 10
```
### Optimization Techniques
**Parallelization:**
```bash
# Increase parallelism for independent tasks
npx claude-flow@alpha swarm configure \
--max-parallel-tasks 8
```
**Caching:**
```bash
# Enable result caching
npx claude-flow@alpha performance cache \
--enable \
--strategy lru \
--max-size 1000
```
**Load Balancing:**
```bash
# Rebalance agent workloads
npx claude-flow@alpha swarm rebalance \
--strategy adaptive \
--target-variance 0.1
```
**Resource Allocation:**
```bash
# Optimize resource allocation
npx claude-flow@alpha agent configure --all \
--memory-limit auto \
--cpu-limit auto
```
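To check whether a rebalance actually met a variance target like the `--target-variance 0.1` used above, the per-agent utilization variance can be computed directly. The utilization numbers here are hypothetical; real values would come from the agent metrics collected in Phase 1.

```shell
# Hypothetical per-agent utilization fractions; real values would come
# from agent-metrics.json. A balanced swarm keeps this variance below
# the configured target (0.1 in the rebalance example above).
UTILIZATIONS="0.6 0.8 0.7 0.7"

VARIANCE=$(echo "$UTILIZATIONS" | awk '{
  n = NF
  for (i = 1; i <= n; i++) sum += $i
  mean = sum / n
  for (i = 1; i <= n; i++) ss += ($i - mean) ^ 2
  printf "%.4f", ss / n   # population variance
}')

echo "Utilization variance: $VARIANCE"
```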
## Phase 5: Validate Results
### Objective
Measure improvements and validate optimization effectiveness.
### Scripts
```bash
# Collect post-optimization metrics
npx claude-flow@alpha performance baseline \
--duration 300 \
--output optimized-metrics.json
# Run comparison benchmark
npx claude-flow@alpha benchmark run \
--type swarm \
--iterations 10 \
--output optimized-benchmark.json
# Compare before/after
npx claude-flow@alpha performance compare \
--baseline baseline-metrics.json \
--current optimized-metrics.json \
--output improvement-report.json
# Calculate improvements
THROUGHPUT_IMPROVEMENT=$(jq '.improvements.throughput.percentage' improvement-report.json)
LATENCY_IMPROVEMENT=$(jq '.improvements.latency.percentage' improvement-report.json)
echo "Throughput improved by: $THROUGHPUT_IMPROVEMENT%"
echo "Latency improved by: $LATENCY_IMPROVEMENT%"
# Validate improvements meet targets
npx claude-flow@alpha performance validate \
--improvements improvement-report.json \
--targets performance-targets.json
# Generate final report
npx claude-flow@alpha performance report \
--type comprehensive \
--include-baseline \
--include-analysis \
--include-optimizations \
--include-results \
--output final-performance-report.md
# Archive performance data
npx claude-flow@alpha performance archive \
--output performance-archive-$(date +%Y%m%d).tar.gz
```
### Validation Criteria
**Minimum Improvements:**
- Throughput: +15%
- Latency: -20%
- Resource utilization: More balanced (variance <10%)
- Error rate: -50% or <1%
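The two percentage figures can be derived directly from before/after measurements, mirroring the values the `jq` extraction in the Phase 5 script pulls from `improvement-report.json`. The baseline and optimized numbers below are hypothetical.

```shell
# Hypothetical before/after measurements; real values would come from
# baseline-metrics.json and optimized-metrics.json.
BASELINE_THROUGHPUT=145.2
OPTIMIZED_THROUGHPUT=171.3
BASELINE_LATENCY=38.5
OPTIMIZED_LATENCY=29.6

# Throughput improvement: percentage increase over baseline.
THROUGHPUT_IMPROVEMENT=$(awk -v b="$BASELINE_THROUGHPUT" -v o="$OPTIMIZED_THROUGHPUT" \
  'BEGIN { printf "%.1f", (o - b) / b * 100 }')

# Latency improvement: percentage reduction from baseline.
LATENCY_IMPROVEMENT=$(awk -v b="$BASELINE_LATENCY" -v o="$OPTIMIZED_LATENCY" \
  'BEGIN { printf "%.1f", (b - o) / b * 100 }')

echo "Throughput: +$THROUGHPUT_IMPROVEMENT% (target: +15%)"
echo "Latency: -$LATENCY_IMPROVEMENT% (target: -20%)"
```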
**Validation Checks:**
```bash
# Check if improvements meet targets
if (( $(echo "$THROUGHPUT_IMPROVEMENT >= 15" | bc -l) )); then
echo "✓ Throughput target met"
else
echo "✗ Throughput target not met"
fi
if (( $(echo "$LATENCY_IMPROVEMENT >= 20" | bc -l) )); then
echo "✓ Latency target met"
else
echo "✗ Latency target not met"
fi
```
## Success Criteria
- [ ] Baseline established
- [ ] System profiled
- [ ] Issues identified and categorized
- [ ] Optimizations applied
- [ ] Improvements validated
### Performance Targets
- Throughput improvement: ≥15%
- Latency reduction: ≥20%
- Resource utilization variance: <10%
- Error rate: <1%
- Optimization overhead: <5%
## Best Practices
1. **Baseline First:** Always establish baseline before optimizing
2. **Measure Everything:** Comprehensive metrics collection
3. **Identify Bottlenecks:** Focus on critical path
4. **Incremental Optimization:** Apply optimizations incrementally
5. **Validate Improvements:** Always measure after optimizing
6. **Document Changes:** Record all optimization actions
7. **Regression Testing:** Ensure optimizations don't break functionality
8. **Continuous Monitoring:** Track performance over time
## Common Issues & Solutions
### Issue: No Performance Improvement
**Symptoms:** Metrics unchanged after optimization
**Solution:** Re-analyze bottlenecks, verify optimizations applied correctly
### Issue: Performance Regression
**Symptoms:** Performance worse after optimization
**Solution:** Rollback changes, re-evaluate optimization strategy
### Issue: Inconsistent Results
**Symptoms:** Performance varies significantly between runs
**Solution:** Increase measurement duration, check for external factors
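One way to quantify "varies significantly" is the coefficient of variation across repeated runs. The latency samples below are hypothetical; a commonly used rule of thumb (an assumption here, not a claude-flow threshold) flags a CV much above a few percent as a sign the measurement window is too short or an external factor is interfering.

```shell
# Hypothetical per-run latency samples (ms); real values would come
# from repeated benchmark runs.
RUNS="40 38 41 39 42"

CV=$(echo "$RUNS" | awk '{
  n = NF
  for (i = 1; i <= n; i++) sum += $i
  mean = sum / n
  for (i = 1; i <= n; i++) ss += ($i - mean) ^ 2
  stddev = sqrt(ss / n)
  printf "%.3f", stddev / mean   # coefficient of variation
}')

echo "Run-to-run CV: $CV"
```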
## Integration Points
- **advanced-swarm:** For topology optimization
- **swarm-orchestration:** For coordination optimization
- **cascade-orchestrator:** For workflow optimization
## References
- Performance Analysis Methodologies
- Profiling Techniques
- Optimization Patterns
- Benchmarking Best Practices