clay-performance-tuning
Optimize Clay table enrichment throughput, reduce processing time, and improve hit rates. Use when experiencing slow enrichment, poor email find rates, or needing to process large tables efficiently. Trigger with phrases like "clay performance", "optimize clay", "clay slow", "clay throughput", "clay fast enrichment", "clay batch optimization".
Best use case
clay-performance-tuning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using clay-performance-tuning should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/clay-performance-tuning/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
How clay-performance-tuning Compares
| Feature / Agent | clay-performance-tuning | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It optimizes Clay table enrichment throughput, reduces processing time, and improves hit rates. Use it when enrichment is slow, email find rates are poor, or you need to process large tables efficiently.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Clay Performance Tuning
## Overview
Optimize Clay table processing speed, enrichment hit rates, and credit efficiency. Clay processes enrichment columns sequentially per row, and each enrichment column makes external API calls. Performance tuning focuses on reducing wasted enrichments, ordering columns optimally, and managing table auto-run behavior.
## Prerequisites
- Clay table with enrichment columns configured
- Understanding of which providers are in your waterfall
- Access to Clay table settings and column configuration
## Instructions
### Step 1: Order Enrichment Columns by Speed
Clay runs enrichment columns left-to-right. Place fast columns first:
| Column Type | Typical Speed | Position |
|-------------|---------------|----------|
| Company lookup (Clearbit) | ~100ms | First (fastest) |
| Email finder (single provider) | ~200ms | Second |
| Email waterfall (multi-provider) | 1-10s | Middle |
| Claygent AI research | 5-30s | Later |
| AI text generation | 2-5s | After Claygent |
| HTTP API (outbound call) | Variable | Last |
**Why order matters:** Fast columns populate data that slow columns may need as input (e.g., company name feeds into Claygent research prompt).
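If you keep a programmatic description of your column layout (for review or documentation), a latency-based sort makes the ordering rule explicit. This is a minimal sketch with assumed column names and latency figures; it is not a Clay API.

```typescript
// sort-columns.ts — illustrative only; column names and latencies are assumptions
interface ColumnPlan {
  name: string;
  typicalLatencyMs: number; // rough estimate, see the table above
  dependsOn?: string[];     // columns whose output this column consumes
}

function orderColumns(columns: ColumnPlan[]): ColumnPlan[] {
  // Fast columns first so their output is available to slower downstream columns.
  // Assumes dependencies already point from slow columns back to faster ones.
  return [...columns].sort((a, b) => a.typicalLatencyMs - b.typicalLatencyMs);
}

const plan = orderColumns([
  { name: 'claygent_research', typicalLatencyMs: 15000, dependsOn: ['company_lookup'] },
  { name: 'company_lookup', typicalLatencyMs: 100 },
  { name: 'email_waterfall', typicalLatencyMs: 4000, dependsOn: ['company_lookup'] },
]);
console.log(plan.map((c) => c.name)); // ['company_lookup', 'email_waterfall', 'claygent_research']
```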
### Step 2: Add Conditional Run Rules
Prevent enrichments from running on rows that won't yield results:
```
# In Clay column settings > "Only run if" condition:
# Email waterfall: only run if we have enough input data
ISNOTEMPTY(domain) AND ISNOTEMPTY(first_name) AND ISNOTEMPTY(last_name)
# Claygent: only run for high-value prospects
ICP Score >= 60 AND ISNOTEMPTY(Company Name)
# CRM push: only run for enriched, qualified leads
ICP Score >= 70 AND ISNOTEMPTY(Work Email)
```
This prevents:
- Waterfall enrichment on rows with missing domains (wasted credits)
- Claygent research on low-value prospects (expensive AI credits)
- CRM pushes for incomplete records
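You can also mirror these gates in code before rows are pushed to Clay, so rows that would fail the "Only run if" checks never occupy a row slot. The sketch below uses assumed field names and is not Clay's formula engine, only a client-side preview of the same logic.

```typescript
// run-conditions.ts — client-side mirror of the "Only run if" gates above (field names assumed)
interface LeadRow {
  domain?: string;
  first_name?: string;
  last_name?: string;
  work_email?: string;
  icp_score?: number;
  company_name?: string;
}

const notEmpty = (v?: string) => Boolean(v && v.trim().length > 0);

const runConditions = {
  emailWaterfall: (r: LeadRow) =>
    notEmpty(r.domain) && notEmpty(r.first_name) && notEmpty(r.last_name),
  claygentResearch: (r: LeadRow) => (r.icp_score ?? 0) >= 60 && notEmpty(r.company_name),
  crmPush: (r: LeadRow) => (r.icp_score ?? 0) >= 70 && notEmpty(r.work_email),
};

// Preview how many rows would actually trigger each enrichment
function previewRunCounts(rows: LeadRow[]) {
  return {
    emailWaterfall: rows.filter(runConditions.emailWaterfall).length,
    claygentResearch: rows.filter(runConditions.claygentResearch).length,
    crmPush: rows.filter(runConditions.crmPush).length,
  };
}
```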
### Step 3: Optimize Input Data Before Import
```typescript
// src/clay/pre-process.ts — clean data before sending to Clay
interface RawLead {
domain?: string;
email?: string;
first_name?: string;
last_name?: string;
}
function preProcessForClay(rows: RawLead[]): {
ready: RawLead[];
filtered: { row: RawLead; reason: string }[];
stats: { total: number; ready: number; filtered: number; deduped: number };
} {
const personalDomains = new Set([
'gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com',
'icloud.com', 'aol.com', 'protonmail.com', 'mail.com',
]);
const seen = new Set<string>();
const ready: RawLead[] = [];
const filtered: { row: RawLead; reason: string }[] = [];
let deduped = 0;
for (const row of rows) {
// Normalize domain
const domain = row.domain?.toLowerCase().trim().replace(/^(https?:\/\/)?(www\.)?/, '').replace(/\/.*$/, '');
// Filter invalid
if (!domain || !domain.includes('.')) {
filtered.push({ row, reason: 'invalid domain' });
continue;
}
if (personalDomains.has(domain)) {
filtered.push({ row, reason: 'personal email domain' });
continue;
}
if (!row.first_name?.trim() || !row.last_name?.trim()) {
filtered.push({ row, reason: 'missing name' });
continue;
}
// Deduplicate
const key = `${domain}:${row.first_name?.toLowerCase()}:${row.last_name?.toLowerCase()}`;
if (seen.has(key)) {
deduped++;
continue;
}
seen.add(key);
ready.push({ ...row, domain });
}
return {
ready,
filtered,
stats: {
total: rows.length,
ready: ready.length,
filtered: filtered.length,
deduped,
},
};
}
// Usage
const { ready, stats } = preProcessForClay(rawLeads);
console.log(`Pre-processing: ${stats.total} total -> ${stats.ready} ready (${stats.filtered} filtered, ${stats.deduped} deduped)`);
// Typical result: 30-50% of rows filtered, saving that many credits
```
### Step 4: Limit Waterfall Depth
Each additional waterfall provider adds 1-5 seconds per row and burns credits if the previous providers already found data:
```yaml
# Before: 5-provider waterfall (slow, expensive)
# Each provider: ~2 credits, ~2s
# Worst case: 10 credits, 10s per row
waterfall_deep:
providers: [apollo, hunter, prospeo, dropcontact, findymail]
max_time_per_row: "~10s"
max_credits_per_row: 10
# After: 2-provider waterfall (fast, cheap)
# Covers 80%+ of findable emails with 2 providers
waterfall_optimized:
providers: [apollo, hunter]
max_time_per_row: "~4s"
max_credits_per_row: 4
coverage_loss: "~5-10%"
```
**Rule of thumb:** Apollo + one backup provider covers 80-85% of findable work emails. Adding more providers gives diminishing returns.
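The diminishing returns are easy to quantify if you assume a rough incremental hit rate per provider. The hit rates below are illustrative assumptions, not measured Clay provider statistics.

```typescript
// waterfall-depth.ts — expected coverage and credits vs. waterfall depth (hit rates are assumed)
interface Provider {
  name: string;
  hitRate: number;           // share of remaining rows this provider finds (assumption)
  creditsPerAttempt: number;
}

function waterfallEstimate(providers: Provider[]) {
  let unfound = 1;           // fraction of rows still without an email
  let expectedCredits = 0;
  for (const p of providers) {
    expectedCredits += unfound * p.creditsPerAttempt; // only unfound rows reach this provider
    unfound *= 1 - p.hitRate;
  }
  return { coverage: 1 - unfound, expectedCreditsPerRow: expectedCredits };
}

const providers: Provider[] = [
  { name: 'apollo', hitRate: 0.65, creditsPerAttempt: 2 },
  { name: 'hunter', hitRate: 0.4, creditsPerAttempt: 2 },
  { name: 'prospeo', hitRate: 0.25, creditsPerAttempt: 2 },
  { name: 'dropcontact', hitRate: 0.2, creditsPerAttempt: 2 },
  { name: 'findymail', hitRate: 0.15, creditsPerAttempt: 2 },
];

console.log(waterfallEstimate(providers.slice(0, 2))); // 2 providers: ~79% coverage, ~2.7 credits/row
console.log(waterfallEstimate(providers));              // 5 providers: ~89% coverage, ~3.7 credits/row
```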
### Step 5: Use Table-Level Auto-Update Controls
```yaml
# Table Settings in Clay UI:
table_auto_update: ON # Parent switch: if OFF, nothing auto-runs
column_settings:
company_lookup:
auto_run: ON # Runs on every new row
email_waterfall:
auto_run: ON # Runs on every new row (if condition met)
condition: "ISNOTEMPTY(domain)"
claygent_research:
auto_run: OFF # Manual trigger only (expensive)
crm_push:
auto_run: ON # Auto-push qualified leads
condition: "ICP Score >= 70"
```
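The interaction between the parent switch, per-column auto-run, and run conditions reduces to a simple boolean check. Here is a minimal sketch of that logic, assuming this simplified model of Clay's settings rather than its actual internals.

```typescript
// auto-run-logic.ts — simplified model of when a column runs on a new row (assumption, not Clay internals)
interface ColumnSetting {
  autoRun: boolean;
  condition?: (row: Record<string, unknown>) => boolean;
}

function willAutoRun(
  tableAutoUpdate: boolean,
  column: ColumnSetting,
  row: Record<string, unknown>,
): boolean {
  if (!tableAutoUpdate) return false; // parent switch off: nothing auto-runs
  if (!column.autoRun) return false;  // column set to manual trigger only
  return column.condition ? column.condition(row) : true; // apply the run condition, if any
}
```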
### Step 6: Schedule Large Imports for Off-Peak
Clay's enrichment providers respond faster during off-peak hours (US nighttime):
```typescript
// src/clay/scheduler.ts
function shouldProcessNow(rowCount: number): { proceed: boolean; reason: string } {
const hour = new Date().getUTCHours();
const isOffPeak = hour >= 2 && hour <= 8; // 2am-8am UTC
if (rowCount < 100) {
return { proceed: true, reason: 'Small batch — process anytime' };
}
if (rowCount >= 1000 && !isOffPeak) {
return {
proceed: false,
reason: `Large batch (${rowCount} rows). Schedule for 02:00-08:00 UTC for faster provider responses.`,
};
}
return { proceed: true, reason: isOffPeak ? 'Off-peak — optimal time' : 'Medium batch — acceptable' };
}
```
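Usage might look like the following, gating a bulk import before rows are added to the table. The `importToClay` call is a hypothetical placeholder for however you load rows.

```typescript
// Example usage of shouldProcessNow before a bulk import (importToClay is a hypothetical loader)
const decision = shouldProcessNow(rawLeads.length);
if (decision.proceed) {
  await importToClay(rawLeads); // push rows into the Clay table
} else {
  console.warn(decision.reason); // e.g. re-schedule the job for 02:00-08:00 UTC
}
```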
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Table stuck processing | Provider rate limit hit | Wait for reset or reduce concurrency |
| Slow enrichment (>10s/row) | Deep waterfall (5+ providers) | Reduce to 2-3 providers |
| Low hit rate (<40%) | Bad input data | Pre-validate and filter before import |
| Credits burning with no results | No conditional run rules | Add "Only run if" conditions to columns |
| Enrichment re-runs on edit | Table auto-update triggered | Turn off auto-update during bulk edits |
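For the rate-limit case specifically, if you feed Clay through an HTTP or webhook intake rather than the UI, a small client-side backoff keeps imports from piling onto an already throttled provider. This is a generic retry sketch; the endpoint and error shape are assumptions, not a documented Clay API.

```typescript
// retry-with-backoff.ts — generic backoff for 429-style responses (endpoint and payload are assumptions)
async function postWithBackoff(url: string, body: unknown, maxRetries = 4): Promise<Response> {
  let delayMs = 1000;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    });
    if (res.status !== 429 || attempt === maxRetries) return res;
    await new Promise((r) => setTimeout(r, delayMs)); // wait before retrying
    delayMs *= 2; // exponential backoff
  }
  throw new Error('unreachable');
}
```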
## Output
- Optimized table with conditional enrichment rules
- Pre-processed input data (30-50% credit savings typical)
- Column order optimized for speed
- Waterfall depth reduced to 2-3 providers
## Resources
- [Clay University -- Table Management Settings](https://university.clay.com/docs/table-management-settings)
- [Clay University -- Actions & Data Credits](https://university.clay.com/docs/actions-data-credits)
## Next Steps
For cost optimization, see `clay-cost-tuning`.
Related Skills
validating-performance-budgets
Validate application performance against defined budgets to identify regressions early. Use when checking page load times, bundle sizes, or API response times against thresholds. Trigger with phrases like "validate performance budget", "check performance metrics", or "detect performance regression".
tuning-hyperparameters
Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.
analyzing-query-performance
This skill enables Claude to analyze and optimize database query performance. It activates when the user discusses query performance issues, provides an EXPLAIN plan, or asks for optimization recommendations. The skill leverages the query-performance-analyzer plugin to interpret EXPLAIN plans, identify performance bottlenecks (e.g., slow queries, missing indexes), and suggest specific optimization strategies. It is useful for improving database query execution speed and resource utilization.
providing-performance-optimization-advice
Provide comprehensive prioritized performance optimization recommendations for frontend, backend, and infrastructure. Use when analyzing bottlenecks or seeking improvement strategies. Trigger with phrases like "optimize performance", "improve speed", or "performance recommendations".
profiling-application-performance
This skill enables the AI assistant to profile application performance, analyzing CPU usage, memory consumption, and execution time. It is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
performance-testing
This skill enables Claude to design, execute, and analyze performance tests using the performance-test-suite plugin. It is activated when the user requests load testing, stress testing, spike testing, or endurance testing, and when discussing performance metrics such as response time, throughput, and error rates. It identifies performance bottlenecks related to CPU, memory, database, or network issues. The plugin provides comprehensive reporting, including percentiles, graphs, and recommendations.
detecting-performance-regressions
This skill enables Claude to automatically detect performance regressions in a CI/CD pipeline. It analyzes performance metrics, such as response time and throughput, and compares them against baselines or thresholds. Use this skill when the user requests to "detect performance regressions", "analyze performance metrics for regressions", or "find performance degradation" in a CI/CD environment. The skill is also triggered when the user mentions "baseline comparison", "statistical significance analysis", or "performance budget violations". It helps identify and report performance issues early in the development cycle.
performance-lighthouse-runner
Performance Lighthouse Runner - Auto-activating skill for Frontend Development. Triggers on: performance lighthouse runner. Part of the Frontend Development skill category.
performance-baseline-creator
Performance Baseline Creator - Auto-activating skill for Performance Testing. Triggers on: performance baseline creator. Part of the Performance Testing skill category.
optimizing-cache-performance
This skill enables the AI assistant to analyze and improve application caching strategies. It optimizes cache hit rates, TTL configurations, cache key design, and invalidation strategies. Use this skill when the user requests to "optimize cache performance"... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
aggregating-performance-metrics
This skill enables Claude to aggregate and centralize performance metrics from various sources. It is used when the user needs to consolidate metrics from applications, systems, databases, caches, queues, and external services into a central location for monitoring and analysis. The skill is triggered by requests to "aggregate metrics", "centralize performance metrics", or similar phrases related to metrics aggregation and monitoring. It facilitates designing a metrics taxonomy, choosing appropriate aggregation tools, and setting up dashboards and alerts.
fathom-cost-tuning
Optimize Fathom API usage and plan selection. Trigger with phrases like "fathom cost", "fathom pricing", "fathom plan".