brightdata-performance-tuning
Optimize Bright Data API performance with caching, batching, and connection pooling. Use when experiencing slow API responses, implementing caching strategies, or optimizing request throughput for Bright Data integrations. Trigger with phrases like "brightdata performance", "optimize brightdata", "brightdata latency", "brightdata caching", "brightdata slow", "brightdata batch".
Best use case
brightdata-performance-tuning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using brightdata-performance-tuning should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/brightdata-performance-tuning/SKILL.md` inside your project
- Restart your AI agent so it auto-discovers the skill
How brightdata-performance-tuning Compares
| Feature / Agent | brightdata-performance-tuning | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Optimize Bright Data API performance with caching, batching, and connection pooling. Use when experiencing slow API responses, implementing caching strategies, or optimizing request throughput for Bright Data integrations. Trigger with phrases like "brightdata performance", "optimize brightdata", "brightdata latency", "brightdata caching", "brightdata slow", "brightdata batch".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Bright Data Performance Tuning
## Overview
Optimize Bright Data scraping performance through connection pooling, response caching, concurrent request tuning, and smart product selection. Web Unlocker latency is typically 5-30s due to CAPTCHA solving; Scraping Browser sessions are 10-60s.
## Prerequisites
- Bright Data zone configured
- Understanding of async patterns
- Redis or file cache available (optional)
## Latency Benchmarks
| Product | P50 | P95 | P99 | Notes |
|---------|-----|-----|-----|-------|
| Web Unlocker (simple) | 3s | 8s | 15s | No CAPTCHA |
| Web Unlocker (CAPTCHA) | 10s | 25s | 45s | With CAPTCHA solving |
| Scraping Browser | 8s | 20s | 40s | Full browser render |
| SERP API (sync) | 2s | 5s | 10s | Search results |
| Residential Proxy | 1s | 3s | 8s | Raw proxy, no unblocking |
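A practical way to use these figures is to set per-product client timeouts at roughly 2-3x the P99 values rather than one global timeout. The numbers below are illustrative and derived from the table above:
```typescript
// Illustrative per-product timeouts (~2-3x the P99 column above)
const PRODUCT_TIMEOUT_MS: Record<string, number> = {
  serp_api: 30_000,          // P99 ~10s
  residential: 20_000,       // P99 ~8s
  web_unlocker: 90_000,      // P99 ~45s with CAPTCHA solving
  scraping_browser: 120_000, // P99 ~40s, full browser render
};

function timeoutFor(product: string): number {
  return PRODUCT_TIMEOUT_MS[product] ?? 60_000;
}
```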
## Instructions
### Step 1: Choose the Right Product
```typescript
// Product selection matrix
function selectProduct(target: { js: boolean; captcha: boolean; structured: boolean }) {
  if (target.structured) return 'serp_api'; // Pre-parsed JSON
  if (!target.js && !target.captcha) return 'residential'; // Fastest
  if (target.js) return 'scraping_browser'; // Browser rendering
  return 'web_unlocker'; // Best default
}
```
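As a quick sanity check, the matrix routes a static page with no CAPTCHA to the cheapest option and a JS-heavy page to the browser product:
```typescript
// Static HTML, no CAPTCHA: raw residential proxy is fastest per the benchmarks
selectProduct({ js: false, captcha: false, structured: false }); // 'residential'
// JS-rendered page behind bot protection: full browser
selectProduct({ js: true, captcha: true, structured: false });   // 'scraping_browser'
// Search results: pre-parsed JSON, no HTML parsing needed
selectProduct({ js: false, captcha: false, structured: true });  // 'serp_api'
```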
### Step 2: Connection Pooling with Keep-Alive
```typescript
import { Agent } from 'https';
import axios from 'axios';
// Zone credentials (env var names here are examples; load from your own config)
const proxyUser = process.env.BRIGHTDATA_PROXY_USER ?? '';
const proxyPass = process.env.BRIGHTDATA_PROXY_PASS ?? '';

// Reuse TCP connections to brd.superproxy.io
const httpsAgent = new Agent({
  keepAlive: true,
  maxSockets: 25, // Match your concurrency limit
  maxFreeSockets: 5,
  timeout: 120000,
  rejectUnauthorized: false, // some zones require this; prefer installing Bright Data's CA cert in production
});

const client = axios.create({
  proxy: { host: 'brd.superproxy.io', port: 33335, auth: { username: proxyUser, password: proxyPass } },
  httpsAgent,
  timeout: 60000,
});
```
### Step 3: Response Caching Layer
```typescript
// src/brightdata/cache.ts — avoid re-scraping identical URLs
import { createHash } from 'crypto';
import { LRUCache } from 'lru-cache';
const memoryCache = new LRUCache<string, string>({
  max: 500, // Max cached pages
  maxSize: 100_000_000, // 100MB total
  sizeCalculation: (v) => Buffer.byteLength(v),
  ttl: 3600000, // 1 hour
});

export async function cachedScrape(
  url: string,
  scraper: (url: string) => Promise<string>,
  ttlMs?: number
): Promise<string> {
  const key = createHash('sha256').update(url).digest('hex');
  const cached = memoryCache.get(key);
  if (cached) {
    console.log(`Cache HIT: ${url}`);
    return cached;
  }
  const html = await scraper(url);
  memoryCache.set(key, html, { ttl: ttlMs });
  console.log(`Cache MISS: ${url} (${Buffer.byteLength(html)} bytes)`);
  return html;
}
```
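If the cache needs to survive restarts or be shared across workers, the same pattern maps onto Redis (listed as optional in the prerequisites). This is a minimal sketch using ioredis; the key prefix, env var name, and default TTL are assumptions, not part of the skill itself:
```typescript
// src/brightdata/redis-cache.ts (hypothetical): Redis-backed variant of cachedScrape
import { createHash } from 'crypto';
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

export async function cachedScrapeRedis(
  url: string,
  scraper: (url: string) => Promise<string>,
  ttlSeconds = 3600
): Promise<string> {
  const key = 'bd:html:' + createHash('sha256').update(url).digest('hex');
  const cached = await redis.get(key);
  if (cached) return cached;
  const html = await scraper(url);
  await redis.set(key, html, 'EX', ttlSeconds); // EX = TTL in seconds
  return html;
}
```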
### Step 4: Concurrent Scraping with Backpressure
```typescript
import PQueue from 'p-queue';
// Tune concurrency based on your plan and target site
const scrapeQueue = new PQueue({
  concurrency: 10, // Concurrent proxy connections
  interval: 1000, // Per second window
  intervalCap: 15, // Max new requests per second
});

async function scrapeMany(urls: string[]): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  await Promise.allSettled(
    urls.map(url =>
      scrapeQueue.add(async () => {
        const html = await cachedScrape(url, (u) => client.get(u).then(r => r.data));
        results.set(url, html);
      })
    )
  );
  console.log(`Scraped ${results.size}/${urls.length} successfully`);
  return results;
}
```
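Keep the queue concurrency at or below the maxSockets value from Step 2 so requests queue once in p-queue rather than a second time inside the HTTPS agent. Usage is a single call (the URLs here are placeholders):
```typescript
const pages = await scrapeMany([
  'https://example.com/products/1',
  'https://example.com/products/2',
]);
console.log(pages.get('https://example.com/products/1')?.slice(0, 200));
```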
### Step 5: Use Async API for Bulk Jobs
For 100+ URLs, use the Web Scraper API instead of individual proxy requests:
```typescript
// Bulk collection — one API call, Bright Data handles parallelism
const DATASET_ID = process.env.BRIGHTDATA_DATASET_ID ?? ''; // env var name is an example

async function bulkScrape(urls: string[]) {
  const response = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}&format=json`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.BRIGHTDATA_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(urls.map(url => ({ url }))),
    }
  );
  return response.json(); // Returns snapshot_id for status polling
}
// 1000 URLs via one trigger vs 1000 individual proxy requests
```
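The trigger call returns a snapshot_id, and results are downloaded once the snapshot is ready. The polling sketch below follows the shape of Bright Data's dataset API, but the exact endpoint paths and status values are assumptions to verify against the current docs:
```typescript
// Poll snapshot progress, then download results.
// Endpoint paths and the `status` field are assumptions; check current Bright Data docs.
async function waitForSnapshot(snapshotId: string, pollMs = 10_000) {
  const headers = { Authorization: `Bearer ${process.env.BRIGHTDATA_API_TOKEN}` };
  for (;;) {
    const progress = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
      { headers }
    ).then(r => r.json());
    if (progress.status === 'ready') break;
    if (progress.status === 'failed') throw new Error(`Snapshot ${snapshotId} failed`);
    await new Promise(resolve => setTimeout(resolve, pollMs));
  }
  const result = await fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`,
    { headers }
  );
  return result.json();
}
```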
### Step 6: Performance Monitoring
```typescript
class ScrapeMetrics {
  private timings: number[] = [];
  private errors = 0;
  private cacheHits = 0;

  record(durationMs: number) { this.timings.push(durationMs); }
  recordError() { this.errors++; }
  recordCacheHit() { this.cacheHits++; }

  report() {
    const sorted = [...this.timings].sort((a, b) => a - b);
    return {
      count: sorted.length,
      errors: this.errors,
      cacheHits: this.cacheHits,
      p50: sorted[Math.floor(sorted.length * 0.5)] || 0,
      p95: sorted[Math.floor(sorted.length * 0.95)] || 0,
      p99: sorted[Math.floor(sorted.length * 0.99)] || 0,
    };
  }
}
```
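One way to wire this into the earlier steps is to time every scrape in the queue worker; the sketch below reuses the cachedScrape and client names from Steps 2-3:
```typescript
const metrics = new ScrapeMetrics();

async function timedScrape(url: string): Promise<string> {
  const start = Date.now();
  try {
    const html = await cachedScrape(url, (u) => client.get(u).then(r => r.data));
    metrics.record(Date.now() - start);
    return html;
  } catch (err) {
    metrics.recordError();
    throw err;
  }
}

// After a batch finishes: console.log(metrics.report())
```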
## Output
- Right product selection per use case
- Connection pooling reducing TCP overhead
- Response cache avoiding duplicate scrapes
- Concurrent scraping with backpressure control
- Bulk API for large-scale jobs
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Slow scrapes | CAPTCHA solving overhead | Expected for Web Unlocker; use cache |
| Connection exhausted | Too many concurrent | Reduce p-queue concurrency |
| Memory pressure | Large cached pages | Set maxSize on LRU cache |
| Timeout storms | All requests hitting slow site | Add circuit breaker (see sketch below) |
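The circuit breaker suggested in the last row can be as simple as tripping after a few consecutive failures and rejecting new work for a cooldown window; libraries such as opossum package the same idea. A minimal hand-rolled sketch:
```typescript
// Minimal circuit breaker: open after consecutive failures, allow a trial request after cooldown
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(private maxFailures = 5, private cooldownMs = 30_000) {}

  async run<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('Circuit open: request skipped');
      }
      this.failures = 0; // half-open: let one trial request through
    }
    try {
      const result = await fn();
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: const breaker = new CircuitBreaker(); await breaker.run(() => client.get(url))
```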
## Resources
- [Bright Data Products](https://brightdata.com/products)
- [Web Scraper API](https://docs.brightdata.com/scraping-automation/web-data-apis/web-scraper-api/overview)
- [p-queue](https://github.com/sindresorhus/p-queue)
## Next Steps
For cost optimization, see `brightdata-cost-tuning`.
Related Skills
validating-performance-budgets
Validate application performance against defined budgets to identify regressions early. Use when checking page load times, bundle sizes, or API response times against thresholds. Trigger with phrases like "validate performance budget", "check performance metrics", or "detect performance regression".
tuning-hyperparameters
Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.
analyzing-query-performance
This skill enables Claude to analyze and optimize database query performance. It activates when the user discusses query performance issues, provides an EXPLAIN plan, or asks for optimization recommendations. The skill leverages the query-performance-analyzer plugin to interpret EXPLAIN plans, identify performance bottlenecks (e.g., slow queries, missing indexes), and suggest specific optimization strategies. It is useful for improving database query execution speed and resource utilization.
providing-performance-optimization-advice
Provide comprehensive prioritized performance optimization recommendations for frontend, backend, and infrastructure. Use when analyzing bottlenecks or seeking improvement strategies. Trigger with phrases like "optimize performance", "improve speed", or "performance recommendations".
profiling-application-performance
This skill enables the AI assistant to profile application performance, analyzing CPU usage, memory consumption, and execution time. It is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
performance-testing
This skill enables Claude to design, execute, and analyze performance tests using the performance-test-suite plugin. It is activated when the user requests load testing, stress testing, spike testing, or endurance testing, and when discussing performance metrics such as response time, throughput, and error rates. It identifies performance bottlenecks related to CPU, memory, database, or network issues. The plugin provides comprehensive reporting, including percentiles, graphs, and recommendations.
detecting-performance-regressions
This skill enables Claude to automatically detect performance regressions in a CI/CD pipeline. It analyzes performance metrics, such as response time and throughput, and compares them against baselines or thresholds. Use this skill when the user requests to "detect performance regressions", "analyze performance metrics for regressions", or "find performance degradation" in a CI/CD environment. The skill is also triggered when the user mentions "baseline comparison", "statistical significance analysis", or "performance budget violations". It helps identify and report performance issues early in the development cycle.
performance-lighthouse-runner
Performance Lighthouse Runner - Auto-activating skill for Frontend Development. Triggers on: "performance lighthouse runner". Part of the Frontend Development skill category.
performance-baseline-creator
Performance Baseline Creator - Auto-activating skill for Performance Testing. Triggers on: "performance baseline creator". Part of the Performance Testing skill category.
optimizing-cache-performance
This skill enables the AI assistant to analyze and improve application caching strategies. It optimizes cache hit rates, TTL configurations, cache key design, and invalidation strategies. Use this skill when the user requests to "optimize cache performance"... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
aggregating-performance-metrics
This skill enables Claude to aggregate and centralize performance metrics from various sources. It is used when the user needs to consolidate metrics from applications, systems, databases, caches, queues, and external services into a central location for monitoring and analysis. The skill is triggered by requests to "aggregate metrics", "centralize performance metrics", or similar phrases related to metrics aggregation and monitoring. It facilitates designing a metrics taxonomy, choosing appropriate aggregation tools, and setting up dashboards and alerts.
fathom-cost-tuning
Optimize Fathom API usage and plan selection. Trigger with phrases like "fathom cost", "fathom pricing", "fathom plan".