cursor-performance-tuning
Optimize Cursor IDE performance: reduce memory usage, speed up indexing, tune AI features, and manage extensions for large codebases. Triggers on "cursor performance", "cursor slow", "cursor optimization", "cursor memory", "speed up cursor", "cursor lag".
Best use case
cursor-performance-tuning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using cursor-performance-tuning can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/cursor-performance-tuning/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How cursor-performance-tuning Compares
| Feature / Agent | cursor-performance-tuning | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It optimizes Cursor IDE performance: reducing memory usage, speeding up indexing, tuning AI features, and managing extensions for large codebases.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Cursor Performance Tuning
Diagnose and fix Cursor IDE performance issues. Covers editor optimization, indexing tuning, extension auditing, AI feature configuration, and strategies for large codebases.
## Performance Diagnostic Workflow
```
Step 1: Identify bottleneck
├── Editor lag? → Step 2 (Editor settings)
├── High CPU? → Step 3 (Extension audit)
├── Slow AI? → Step 4 (AI tuning)
└── Memory? → Step 5 (Memory management)
Step 2: Editor settings
├── Disable minimap, breadcrumbs
├── Reduce file watcher scope
└── Increase memory limits
Step 3: Extension audit
├── Profile running extensions
├── Disable heavy extensions
└── Use workspace-scoped disabling
Step 4: AI feature tuning
├── Optimize .cursorignore
├── Use faster models
└── Manage chat history
Step 5: Memory management
├── Close unused workspace folders
├── Limit open editor tabs
└── Clear caches
```
## Editor Optimization
### settings.json Performance Settings
```json
{
  // Disable visual features for speed
  "editor.minimap.enabled": false,
  "editor.renderWhitespace": "none",
  "editor.guides.bracketPairs": false,
  "breadcrumbs.enabled": false,
  "editor.occurrencesHighlight": "off",
  "editor.matchBrackets": "never",
  "editor.folding": false,
  "editor.glyphMargin": false,

  // Reduce file watching scope
  "files.watcherExclude": {
    "**/node_modules/**": true,
    "**/.git/objects/**": true,
    "**/.git/subtree-cache/**": true,
    "**/dist/**": true,
    "**/build/**": true,
    "**/coverage/**": true,
    "**/.next/**": true,
    "**/target/**": true
  },

  // Exclude from search and explorer
  "files.exclude": {
    "**/node_modules": true,
    "**/.git": true,
    "**/dist": true,
    "**/build": true
  },

  // Memory limits
  "files.maxMemoryForLargeFilesMB": 4096,

  // Reduce auto-save overhead
  "files.autoSave": "onFocusChange",

  // Limit search results
  "search.maxResults": 5000
}
```
### Disable Animations
```json
{
  "workbench.list.smoothScrolling": false,
  "editor.smoothScrolling": false,
  "editor.cursorSmoothCaretAnimation": "off",
  "terminal.integrated.smoothScrolling": false
}
```
## Extension Audit
### Profile Running Extensions
`Cmd+Shift+P` > `Developer: Show Running Extensions`
This shows:
- Extension name
- Activation time (ms)
- Profile CPU time
Sort by activation time. Extensions taking > 500ms are worth investigating.
### Process Explorer
`Cmd+Shift+P` > `Developer: Open Process Explorer`
Shows per-process CPU and memory usage:
- Main window
- Extension host (all extensions combined)
- Individual extension processes
- Terminal processes
### Common High-Impact Extensions
| Extension | Impact | Mitigation |
|-----------|--------|------------|
| **GitLens** | CPU: high on large repos | Disable for repos > 50K commits or use lightweight mode |
| **Prettier** | CPU: triggers on every save | Set `"editor.formatOnSave": false`, format manually |
| **TypeScript** | Memory: large projects | Increase `"typescript.tsserver.maxTsServerMemory": 4096` |
| **ESLint** | CPU: validates on type | Set `"eslint.run": "onSave"` instead of "onType" |
| **Spell Checker** | CPU: large files | Add exclusion patterns for generated files |
| **Import Cost** | CPU: recalculates on change | Disable for projects with many imports |
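The settings-based mitigations in the table above can be combined into one settings.json fragment. A sketch (values are starting points; tune them to your project):

```json
{
  // Format manually instead of on every save
  "editor.formatOnSave": false,
  // Give the TypeScript language server more headroom on large projects
  "typescript.tsserver.maxTsServerMemory": 4096,
  // Lint on save rather than on every keystroke
  "eslint.run": "onSave"
}
```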
### Disable Per Workspace
Right-click extension > `Disable (Workspace)`. This keeps the extension available for other projects while removing it from the current slow one.
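If you prefer the command line, Cursor inherits the VS Code CLI (assuming the `cursor` command is installed on your PATH), so the same audit can be scripted. The extension ID below is only an example:

```
# List installed extensions
cursor --list-extensions

# Launch with one suspect extension disabled for this session
cursor --disable-extension eamodio.gitlens .

# Launch with all extensions disabled to confirm an extension is the culprit
cursor --disable-extensions .
```

If performance recovers with `--disable-extensions`, re-enable extensions one at a time to isolate the offender.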
## AI Feature Tuning
### Indexing Optimization
The biggest performance lever for AI features:
```gitignore
# .cursorignore -- aggressive exclusion for large projects
node_modules/
dist/
build/
.next/
out/
target/
coverage/
.turbo/
.cache/
__pycache__/
*.pyc
venv/
.venv/
# Generated code
*.min.js
*.min.css
*.bundle.js
*.d.ts.map
*.tsbuildinfo
# Data files
*.csv
*.json.gz
*.parquet
*.sqlite
*.sql
# Lock files
package-lock.json
yarn.lock
pnpm-lock.yaml
Cargo.lock
# Media
*.png
*.jpg
*.gif
*.svg
*.mp4
*.woff2
# Documentation build output
docs/dist/
docs/.vitepress/dist/
```
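To decide what else belongs in .cursorignore, it helps to see which directories dominate the working tree. A minimal sketch using standard Unix tools, run from the project root:

```shell
# Show the ten largest top-level directories -- likely .cursorignore candidates
du -sh ./*/ 2>/dev/null | sort -rh | head -10
```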
### Tab Completion Speed
Tab completion is fast by design (~100ms), but can feel slow if:
- The file is very large (> 10K lines): split the file
- Many extensions are running: audit extensions
- Network is slow: Tab requires network for model inference
### Chat/Composer Response Time
| Factor | Impact | Fix |
|--------|--------|-----|
| Model choice | Opus/o1 are slower than Sonnet/GPT-4o | Use faster models for simple tasks |
| Context size | More @-mentions = slower | Use @Files not @Codebase when possible |
| Conversation length | Long chats slow down | Start new chat frequently |
| Server load | Peak hours are slower | Use off-peak or BYOK |
### Managing Chat History
Long chat sessions consume memory and slow down responses:
```
Signs of chat-related slowdown:
- Typing lag in the chat input
- Editor becomes sluggish after extended chat session
- AI responses take progressively longer
Fix:
1. Start a new chat (Cmd+N in chat panel)
2. Close old chat tabs
3. One topic per chat session
```
## Large Codebase Strategies
### For Projects > 50K Files
```
1. Open specific packages, not the whole monorepo
cursor packages/api/ # Not: cursor .
2. Aggressive .cursorignore (see above)
3. Multi-root workspace with only active packages
File > Add Folder to Workspace (selectively)
4. Disable codebase indexing if not needed
Cursor Settings > Features > Codebase Indexing > off
(You lose @Codebase but gain performance)
5. Increase system resources
Close other Electron apps (Slack, Teams, Discord)
Increase swap space on Linux
```
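A multi-root workspace (step 3 above) is just a small JSON file. A sketch, assuming a monorepo where `packages/api` and `packages/web` are the packages currently being worked on:

```json
{
  // active.code-workspace -- open only the packages you are working on
  "folders": [
    { "path": "packages/api" },
    { "path": "packages/web" }
  ],
  "settings": {
    // Workspace-level settings apply to every folder listed above
    "files.watcherExclude": { "**/node_modules/**": true }
  }
}
```

Open it with `File > Open Workspace from File`; adding or removing a folder later is a one-line edit.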
### Linux File Watcher Limits
```bash
# Check current limit
cat /proc/sys/fs/inotify/max_user_watches
# Increase (required for large projects)
echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```
### Memory Monitoring
```bash
# macOS: Monitor Cursor memory usage
top -pid $(pgrep -x "Cursor" | head -1)
# Linux: Monitor Cursor processes
ps aux | grep -i "[c]ursor" | sort -rn -k4
# If memory exceeds 4GB consistently:
# 1. Close unused workspace folders
# 2. Limit open editor tabs to ~20
# 3. Restart Cursor daily during heavy use
```
## Cache Management
### Clear Caches
```bash
# macOS
rm -rf ~/Library/Application\ Support/Cursor/Cache/
rm -rf ~/Library/Application\ Support/Cursor/CachedData/
rm -rf ~/Library/Application\ Support/Cursor/Code\ Cache/
# Linux
rm -rf ~/.config/Cursor/Cache/
rm -rf ~/.config/Cursor/CachedData/
rm -rf ~/.config/Cursor/Code\ Cache/
```
Restart Cursor after clearing. Caches rebuild automatically.
### Database Maintenance
Cursor stores extension data in SQLite databases. If the storage directory grows large:
```bash
# Check size (macOS)
du -sh ~/Library/Application\ Support/Cursor/
# If > 2GB, clearing Cache/ and CachedData/ usually reclaims most space
```
## Enterprise Considerations
- **Baseline performance**: Establish performance baselines for standard project sizes on team hardware
- **Hardware recommendations**: 16GB RAM minimum for large projects, 32GB for monorepos
- **Network performance**: AI features require low-latency internet. VPN routing can add 200-500ms per request
- **Standardized settings**: Distribute performance-optimized `settings.json` to all team members
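One low-friction way to distribute those defaults is a workspace settings file committed to the repository, which Cursor (like VS Code) picks up automatically. A minimal sketch:

```json
// .vscode/settings.json -- committed so every teammate inherits the baseline
{
  "editor.minimap.enabled": false,
  "files.watcherExclude": {
    "**/node_modules/**": true,
    "**/dist/**": true
  },
  "search.maxResults": 5000
}
```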
## Resources
- [VS Code Performance Tips](https://code.visualstudio.com/docs/editor/editingevolved#_performance)
- [Cursor Forum - Performance](https://forum.cursor.com/c/help)
- [Codebase Indexing](https://docs.cursor.com/context/codebase-indexing)

Related Skills
validating-performance-budgets
Validate application performance against defined budgets to identify regressions early. Use when checking page load times, bundle sizes, or API response times against thresholds. Trigger with phrases like "validate performance budget", "check performance metrics", or "detect performance regression".
tuning-hyperparameters
Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.
analyzing-query-performance
This skill enables Claude to analyze and optimize database query performance. It activates when the user discusses query performance issues, provides an EXPLAIN plan, or asks for optimization recommendations. The skill leverages the query-performance-analyzer plugin to interpret EXPLAIN plans, identify performance bottlenecks (e.g., slow queries, missing indexes), and suggest specific optimization strategies. It is useful for improving database query execution speed and resource utilization.
providing-performance-optimization-advice
Provide comprehensive prioritized performance optimization recommendations for frontend, backend, and infrastructure. Use when analyzing bottlenecks or seeking improvement strategies. Trigger with phrases like "optimize performance", "improve speed", or "performance recommendations".
profiling-application-performance
This skill enables the AI assistant to profile application performance, analyzing CPU usage, memory consumption, and execution time. It is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
performance-testing
This skill enables Claude to design, execute, and analyze performance tests using the performance-test-suite plugin. It is activated when the user requests load testing, stress testing, spike testing, or endurance testing, and when discussing performance metrics such as response time, throughput, and error rates. It identifies performance bottlenecks related to CPU, memory, database, or network issues. The plugin provides comprehensive reporting, including percentiles, graphs, and recommendations.
detecting-performance-regressions
This skill enables Claude to automatically detect performance regressions in a CI/CD pipeline. It analyzes performance metrics, such as response time and throughput, and compares them against baselines or thresholds. Use this skill when the user requests to "detect performance regressions", "analyze performance metrics for regressions", or "find performance degradation" in a CI/CD environment. The skill is also triggered when the user mentions "baseline comparison", "statistical significance analysis", or "performance budget violations". It helps identify and report performance issues early in the development cycle.
performance-lighthouse-runner
Performance Lighthouse Runner - Auto-activating skill for Frontend Development. Triggers on: "performance lighthouse runner". Part of the Frontend Development skill category.
performance-baseline-creator
Performance Baseline Creator - Auto-activating skill for Performance Testing. Triggers on: "performance baseline creator". Part of the Performance Testing skill category.
optimizing-cache-performance
This skill enables the AI assistant to analyze and improve application caching strategies. It optimizes cache hit rates, TTL configurations, cache key design, and invalidation strategies. Use this skill when the user requests to "optimize cache performance". Trigger with phrases like 'optimize', 'performance', or 'speed up'.
aggregating-performance-metrics
This skill enables Claude to aggregate and centralize performance metrics from various sources. It is used when the user needs to consolidate metrics from applications, systems, databases, caches, queues, and external services into a central location for monitoring and analysis. The skill is triggered by requests to "aggregate metrics", "centralize performance metrics", or similar phrases related to metrics aggregation and monitoring. It facilitates designing a metrics taxonomy, choosing appropriate aggregation tools, and setting up dashboards and alerts.
fathom-cost-tuning
Optimize Fathom API usage and plan selection. Trigger with phrases like "fathom cost", "fathom pricing", "fathom plan".