# Performance Reviewer

## Overview

Performance Reviewer is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it should expect more consistent output, faster repeated execution, and less prompt rewriting.

## When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

## When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

## Installation

### Claude Code / Cursor / Codex

```shell
curl -o ~/.claude/skills/performance-reviewer/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/TerminalSkills/skills/performance-reviewer/SKILL.md"
```

### Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/performance-reviewer/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

## How Performance Reviewer Compares

| Feature / Agent | Performance Reviewer | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

## Frequently Asked Questions

### What does this skill do?

It analyzes code changes for performance regressions and optimization opportunities, catching issues such as N+1 database queries, unnecessary React re-renders, missing indexes, unoptimized loops, and bundle size increases before they reach production.

### Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

## SKILL.md Source

# Performance Reviewer

## Overview

This skill analyzes code changes for performance regressions and optimization opportunities. It catches common issues like N+1 database queries, unnecessary re-renders in React components, missing database indexes, unoptimized loops, and bundle size increases before they reach production.

## Instructions

### Analyzing a Diff or PR

1. Get the diff: `git diff main...HEAD` or `git diff <base>...<head>`
2. For each changed file, evaluate against these performance categories:

**Database & Queries:**
- Look for queries inside loops (N+1 pattern)
- Check for missing `WHERE` clauses or full table scans
- Identify missing indexes on columns used in `WHERE`, `JOIN`, or `ORDER BY`
- Flag `SELECT *` when only specific columns are needed
- Watch for unbounded queries without `LIMIT`
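The N+1 shape these checks flag, and its single-query fix, can be sketched in plain `sqlite3`; the `orders`/`order_items` schema here is hypothetical, invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 7), (2, 7);
    INSERT INTO order_items VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

def items_n_plus_one(user_id):
    # Flagged pattern: one items query per order (1 + N queries total).
    orders = conn.execute(
        "SELECT id FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()
    names = []
    for (order_id,) in orders:
        rows = conn.execute(
            "SELECT name FROM order_items WHERE order_id = ?", (order_id,)
        ).fetchall()
        names.extend(n for (n,) in rows)
    return names

def items_batched(user_id):
    # Suggested fix: a single JOIN bounded by the user filter, one query total.
    rows = conn.execute(
        """SELECT oi.name FROM order_items oi
           JOIN orders o ON o.id = oi.order_id
           WHERE o.user_id = ?""",
        (user_id,),
    ).fetchall()
    return [n for (n,) in rows]
```

Both functions return the same items; only the query count differs, which is the property the review should call out.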

**Frontend & Rendering:**
- React: Check for missing `useMemo`/`useCallback` on expensive computations passed as props
- Look for state updates that trigger unnecessary re-renders of large component trees
- Flag inline object/array creation in render (creates new reference every render)
- Check for large bundle imports (`import moment` → suggest `dayjs`)

**Algorithm & Data Structures:**
- Flag O(n²) or worse algorithms when O(n log n) alternatives exist
- Look for repeated array searches that should use a Set or Map
- Identify string concatenation in loops (suggest StringBuilder/join)
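The last two checks can be illustrated with a minimal Python sketch (function names are hypothetical, chosen for the example):

```python
def find_shared_slow(a, b):
    # Flagged: repeated list membership tests, O(n * m) overall.
    return [x for x in a if x in b]

def find_shared_fast(a, b):
    # Fix: build a set once; each membership test is O(1) on average.
    b_set = set(b)
    return [x for x in a if x in b_set]

def build_csv_slow(parts):
    # Flagged: string concatenation in a loop copies the growing string each time.
    out = ""
    for p in parts:
        out += p + ","
    return out.rstrip(",")

def build_csv_fast(parts):
    # Fix: a single join allocates the result once.
    return ",".join(parts)
```

Each pair is behavior-preserving, so the review comment can point at the fast variant as a drop-in replacement.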

**Memory & Resources:**
- Check for missing cleanup in `useEffect` (event listeners, intervals, subscriptions)
- Look for growing arrays/objects that are never trimmed
- Flag missing connection pool limits or unclosed file handles
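The "growing arrays that are never trimmed" check translates to Python like this; `EventLog` is a hypothetical class, and a `deque` with `maxlen` is one standard fix:

```python
from collections import deque

class EventLog:
    # Flagged pattern: self.events grows without bound for the process lifetime.
    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)

class BoundedEventLog:
    # Fix: deque(maxlen=...) silently discards the oldest entries,
    # keeping memory use constant regardless of how long the process runs.
    def __init__(self, maxlen=1000):
        self.events = deque(maxlen=maxlen)

    def record(self, event):
        self.events.append(event)
```

The same principle applies to caches and in-memory buffers: any structure that only ever grows should have an explicit bound or eviction policy.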

**Network & I/O:**
- Identify sequential API calls that could be parallelized (`Promise.all`)
- Check for missing pagination on list endpoints
- Flag missing caching for expensive or repeated operations
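In Python codebases the sequential-versus-parallel distinction looks like the sketch below; `fetch` is a stand-in for a real network call, with `asyncio.sleep` simulating latency:

```python
import asyncio

async def fetch(name):
    # Stand-in for an API call; assume roughly 50 ms of network latency.
    await asyncio.sleep(0.05)
    return name

async def sequential(names):
    # Flagged: each await blocks the next call, so total time is the sum.
    return [await fetch(n) for n in names]

async def parallel(names):
    # Fix: launch all calls at once; total time is roughly the slowest call.
    return await asyncio.gather(*(fetch(n) for n in names))
```

Results are identical either way, which is why the fix is safe to suggest whenever the calls are independent of one another.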

### Output Format

For each issue found, report:
- **File and line number**
- **Category** (Database, Frontend, Algorithm, Memory, Network)
- **Severity** (Critical, Warning, Info)
- **What's wrong** (specific description)
- **Suggested fix** (concrete code suggestion)

### Severity Guidelines

- **Critical**: Will cause visible degradation in production (N+1 in a loop, O(n²) on large datasets, memory leak)
- **Warning**: May cause issues at scale (missing index, unnecessary re-renders, sequential API calls)
- **Info**: Optimization opportunity, not urgent (bundle size suggestion, minor refactor)

## Examples

### Example 1: N+1 Query Detection

**Input diff:**
```python
# api/views.py
def list_orders(request):
    orders = Order.objects.filter(user=request.user)
    result = []
    for order in orders:
        items = OrderItem.objects.filter(order=order)  # ← query per order
        result.append({
            "id": order.id,
            "total": order.total,
            "items": [{"name": i.product.name, "qty": i.quantity} for i in items]
        })
    return JsonResponse(result, safe=False)
```

**Output:**
```
🔴 Critical — Database N+1 Query
  File: api/views.py:5-8
  Issue: OrderItem query executes once per order, plus each i.product.name
         triggers another query. For 50 orders with 3 items each = 201 queries.
  Fix: Use select_related and prefetch_related:
    orders = Order.objects.filter(user=request.user).prefetch_related(
        Prefetch('items', queryset=OrderItem.objects.select_related('product'))
    )
  Result: Reduces to 2 queries regardless of order count.
```

### Example 2: React Re-render Issue

**Input diff:**
```jsx
function UserList({ users, onSelect }) {
  return (
    <div>
      {users.map(user => (
        <UserCard
          key={user.id}
          user={user}
          style={{ padding: 16, margin: 8 }}
          onClick={() => onSelect(user.id)}
        />
      ))}
    </div>
  );
}
```

**Output:**
```
🟡 Warning — Unnecessary Re-renders
  File: components/UserList.jsx:6-7
  Issue: Inline object `style={{ padding: 16, margin: 8 }}` and arrow function
         `() => onSelect(user.id)` create new references every render, causing
         all UserCard components to re-render even when users haven't changed.
  Fix:
    const cardStyle = useMemo(() => ({ padding: 16, margin: 8 }), []);
    const handleClick = useCallback((id) => onSelect(id), [onSelect]);
    // Then wrap UserCard in React.memo and pass the id instead of a closure:
    //   <UserCard style={cardStyle} onClick={handleClick} userId={user.id} />
    // so every prop reference stays stable across renders.
  Impact: With 100+ users, prevents ~100 unnecessary DOM diffing operations per parent render.
```

## Guidelines

- Focus on issues introduced by the diff, not pre-existing problems
- Prioritize Critical issues — don't bury them in a list of Info suggestions
- Always provide concrete fix suggestions, not just "optimize this"
- Consider the scale: an O(n²) loop on a 5-element array is fine; on user-generated data it's not
- When suggesting caching, specify what to cache and invalidation strategy
- Don't flag micro-optimizations that harm readability for negligible gain

## Related Skills

All of the skills below are from ComeOnOliver/skillshub.

- **exa-performance-tuning** — Optimize Exa API performance with search type selection, caching, and parallelization. Use when experiencing slow responses, implementing caching strategies, or optimizing request throughput for Exa integrations. Trigger with phrases like "exa performance", "optimize exa", "exa latency", "exa caching", "exa slow", "exa fast".
- **evernote-performance-tuning** — Optimize Evernote integration performance. Use when improving response times, reducing API calls, or scaling Evernote integrations. Trigger with phrases like "evernote performance", "optimize evernote", "evernote speed", "evernote caching".
- **elevenlabs-performance-tuning** — Optimize ElevenLabs TTS latency with model selection, streaming, caching, and audio format tuning. Use when experiencing slow TTS responses, implementing real-time voice features, or optimizing audio generation throughput. Trigger with phrases like "elevenlabs performance", "optimize elevenlabs", "elevenlabs latency", "elevenlabs slow", "fast TTS", "reduce elevenlabs latency", "TTS streaming".
- **documenso-performance-tuning** — Optimize Documenso integration performance with caching, batching, and efficient patterns. Use when improving response times, reducing API calls, or optimizing bulk document operations. Trigger with phrases like "documenso performance", "optimize documenso", "documenso caching", "documenso batch operations".
- **detecting-performance-regressions** — Automatically detect performance regressions in CI/CD pipelines by comparing metrics against baselines. Use when validating builds or analyzing performance trends. Trigger with phrases like "detect performance regression", "compare performance metrics", or "analyze performance degradation".
- **detecting-performance-bottlenecks** — Detect and resolve performance bottlenecks in applications by analyzing CPU, memory, I/O, and database performance. Use when diagnosing slow application performance. Trigger with phrases like "optimize", "performance", or "speed up".
- **deepgram-performance-tuning** — Optimize Deepgram API performance for faster transcription and lower latency. Use when improving transcription speed, reducing latency, or optimizing audio processing pipelines. Trigger with phrases like "deepgram performance", "speed up deepgram", "optimize transcription", "deepgram latency", "deepgram faster", "deepgram throughput".
- **databricks-performance-tuning** — Optimize Databricks cluster and query performance. Use when jobs are running slowly, optimizing Spark configurations, or improving Delta Lake query performance. Trigger with phrases like "databricks performance", "spark tuning", "databricks slow", "optimize databricks", "cluster performance".
- **customerio-performance-tuning** — Optimize Customer.io API performance for high throughput. Use when improving response times, implementing connection pooling, batching, caching, or regional routing. Trigger with phrases like "customer.io performance", "optimize customer.io", "customer.io latency", "customer.io connection pooling".
- **cursor-performance-tuning** — Optimize Cursor IDE performance: reduce memory usage, speed up indexing, tune AI features, and manage extensions for large codebases. Trigger with phrases like "cursor performance", "cursor slow", "cursor optimization", "cursor memory", "speed up cursor", "cursor lag".
- **coreweave-performance-tuning** — Optimize CoreWeave GPU inference latency and throughput. Use when reducing inference latency, maximizing GPU utilization, or tuning batch sizes and concurrency. Trigger with phrases like "coreweave performance", "coreweave latency", "coreweave throughput", "optimize coreweave inference".
- **cohere-performance-tuning** — Optimize Cohere API performance with caching, batching, model selection, and streaming. Use when experiencing slow API responses, implementing caching strategies, or optimizing request throughput for Cohere Chat, Embed, and Rerank. Trigger with phrases like "cohere performance", "optimize cohere", "cohere latency", "cohere caching", "cohere slow", "cohere batch".