clay-data-handling
Implement GDPR/CCPA-compliant data handling for Clay enrichment pipelines. Use when handling PII from enrichments, implementing data retention policies, or ensuring regulatory compliance for Clay-enriched lead data. Trigger with phrases like "clay data", "clay PII", "clay GDPR", "clay data retention", "clay privacy", "clay CCPA", "clay compliance".
Best use case
clay-data-handling is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using clay-data-handling should expect more consistent output, faster repeat runs, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/clay-data-handling/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It implements GDPR/CCPA-compliant data handling for Clay enrichment pipelines: classifying enriched PII by sensitivity, validating and deduplicating input, attaching retention metadata, producing compliant exports, and handling data subject rights requests.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Clay Data Handling
## Overview
Manage lead data through Clay enrichment pipelines in compliance with GDPR, CCPA, and data privacy best practices. Clay enriches records with PII (emails, phone numbers, LinkedIn profiles, job titles), requiring careful handling of consent, retention, and export controls.
## Prerequisites
- Clay account with enriched tables
- Understanding of GDPR/CCPA requirements for B2B data
- Data retention policy defined by your legal team
- CRM or database for enriched data storage
## Instructions
### Step 1: Classify Enriched Data by Sensitivity
```typescript
// src/clay/data-classification.ts
enum DataSensitivity {
  PUBLIC = 'public',         // Company name, industry, employee count
  BUSINESS = 'business',     // Work email, job title, LinkedIn URL
  PERSONAL = 'personal',     // Phone number, personal email
  RESTRICTED = 'restricted', // Home address, personal phone
}

const FIELD_CLASSIFICATION: Record<string, DataSensitivity> = {
  company_name: DataSensitivity.PUBLIC,
  industry: DataSensitivity.PUBLIC,
  employee_count: DataSensitivity.PUBLIC,
  domain: DataSensitivity.PUBLIC,
  work_email: DataSensitivity.BUSINESS,
  job_title: DataSensitivity.BUSINESS,
  linkedin_url: DataSensitivity.BUSINESS,
  first_name: DataSensitivity.BUSINESS,
  last_name: DataSensitivity.BUSINESS,
  phone_number: DataSensitivity.PERSONAL,
  personal_email: DataSensitivity.RESTRICTED,
  home_address: DataSensitivity.RESTRICTED,
};

function classifyRow(row: Record<string, unknown>): Record<DataSensitivity, string[]> {
  const classified: Record<DataSensitivity, string[]> = {
    public: [], business: [], personal: [], restricted: [],
  };
  for (const [field, value] of Object.entries(row)) {
    if (value == null) continue;
    const sensitivity = FIELD_CLASSIFICATION[field] || DataSensitivity.BUSINESS;
    classified[sensitivity].push(field);
  }
  return classified;
}
```
### Step 2: Validate Input Data Before Enrichment
```typescript
// src/clay/data-validation.ts
import { z } from 'zod';
const ClayInputSchema = z.object({
  domain: z.string().min(3).refine(d => d.includes('.'), 'Invalid domain'),
  first_name: z.string().min(1).max(100),
  last_name: z.string().min(1).max(100),
  email: z.string().email().optional(),
  source: z.string().optional(),
  consent_basis: z.enum(['legitimate_interest', 'consent', 'contract']).optional(),
});

function validateForEnrichment(rows: unknown[]): {
  valid: z.infer<typeof ClayInputSchema>[];
  invalid: { row: unknown; errors: string[] }[];
} {
  const valid: z.infer<typeof ClayInputSchema>[] = [];
  const invalid: { row: unknown; errors: string[] }[] = [];
  for (const row of rows) {
    const result = ClayInputSchema.safeParse(row);
    if (result.success) {
      valid.push(result.data);
    } else {
      invalid.push({
        row,
        errors: result.error.issues.map(i => `${i.path.join('.')}: ${i.message}`),
      });
    }
  }
  return { valid, invalid };
}
```
### Step 3: Deduplicate Before Enrichment
```typescript
// src/clay/dedup.ts — prevent credit waste on duplicates
function deduplicateLeads(
  rows: Record<string, unknown>[],
  keyFields: string[] = ['domain', 'first_name', 'last_name'],
): { unique: Record<string, unknown>[]; duplicates: number } {
  const seen = new Set<string>();
  const unique: Record<string, unknown>[] = [];
  let duplicates = 0;
  for (const row of rows) {
    const key = keyFields
      .map(f => String(row[f] || '').toLowerCase().trim())
      .join(':');
    if (seen.has(key)) {
      duplicates++;
      continue;
    }
    seen.add(key);
    unique.push(row);
  }
  return { unique, duplicates };
}
```
### Step 4: Add Retention Metadata to Enriched Data
```typescript
// src/clay/retention.ts
interface EnrichedRecordWithRetention {
  // Original enriched data
  [key: string]: unknown;
  // Retention metadata
  _enriched_at: string;         // ISO timestamp
  _retention_expires: string;   // ISO timestamp
  _enrichment_source: string;   // 'clay'
  _consent_basis: string;       // Legal basis for processing
  _data_subject_rights: string; // How to handle deletion requests
}

function addRetentionMetadata(
  enrichedRow: Record<string, unknown>,
  retentionDays: number = 365,
  consentBasis: string = 'legitimate_interest',
): EnrichedRecordWithRetention {
  const now = new Date();
  const expires = new Date(now.getTime() + retentionDays * 24 * 60 * 60 * 1000);
  return {
    ...enrichedRow,
    _enriched_at: now.toISOString(),
    _retention_expires: expires.toISOString(),
    _enrichment_source: 'clay',
    _consent_basis: consentBasis,
    _data_subject_rights: 'Contact privacy@yourcompany.com for deletion/access requests',
  };
}
```
### Step 5: GDPR-Compliant Export
```typescript
// src/clay/export.ts
import crypto from 'node:crypto';

/** Strip PII for analytics/reporting exports */
function anonymizeForAnalytics(row: Record<string, unknown>): Record<string, unknown> {
  const anonymized = { ...row };
  // Hash identifiers instead of including plaintext
  if (anonymized.work_email) {
    anonymized.email_hash = crypto.createHash('sha256')
      .update(String(anonymized.work_email).toLowerCase())
      .digest('hex');
    delete anonymized.work_email;
  }
  // Remove all personal identifiers
  delete anonymized.first_name;
  delete anonymized.last_name;
  delete anonymized.phone_number;
  delete anonymized.linkedin_url;
  delete anonymized.personal_email;
  return anonymized;
}

/** Full export for CRM push (with consent tracking) */
function exportForCRM(row: Record<string, unknown>): Record<string, unknown> {
  return {
    ...row,
    processing_consent: row._consent_basis || 'legitimate_interest',
    enrichment_date: row._enriched_at,
    data_source: 'clay_enrichment',
  };
}
```
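One subtlety with the hash above: inputs should be normalized before hashing, or the same address can yield different digests across exports. A minimal sketch of the idea (the trim-and-lowercase rule is an assumption of this guide, not something Clay mandates):

```typescript
import { createHash } from 'node:crypto';

// Normalize first so ' A@B.com' and 'a@b.com' produce the same digest.
// Normalization rule is illustrative; align it with your data pipeline.
function emailHash(email: string): string {
  return createHash('sha256')
    .update(email.trim().toLowerCase())
    .digest('hex');
}
```

Applying the same normalization everywhere keeps analytics joins on `email_hash` stable across export runs.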
### Step 6: Data Subject Rights Implementation
```typescript
// src/clay/data-rights.ts — handle GDPR deletion/access requests
import crypto from 'node:crypto';

// `db` is assumed to be your application's database client with
// parameterized queries; swap in your own data access layer.
declare const db: { query: (sql: string, params: unknown[]) => Promise<any[]> };

async function handleDeletionRequest(email: string): Promise<{
  tablesAffected: string[];
  recordsDeleted: number;
}> {
  // In Clay: manually delete rows containing this email
  // In your database: automated deletion
  console.log(`Processing deletion request for ${email}`);
  // 1. Find all records
  const records = await db.query('SELECT * FROM enriched_leads WHERE email = ?', [email]);
  // 2. Delete from database
  await db.query('DELETE FROM enriched_leads WHERE email = ?', [email]);
  // 3. Log for compliance audit (hash the email so the log itself holds no PII)
  await db.query(
    'INSERT INTO deletion_log (email_hash, deleted_at, record_count) VALUES (?, ?, ?)',
    [
      crypto.createHash('sha256').update(email).digest('hex'),
      new Date().toISOString(),
      records.length,
    ],
  );
  // 4. Note: Clay table rows must be deleted manually in the Clay UI
  return {
    tablesAffected: ['enriched_leads'],
    recordsDeleted: records.length,
  };
}
```
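Access requests (GDPR Art. 15) can reuse the same plumbing. A hypothetical sketch of the reporting half, assuming the records have already been fetched and carry the Step 4 retention metadata (`buildAccessReport` and `AccessReport` are illustrative names, not part of this skill):

```typescript
// Sketch: assemble a data subject access report from fetched records,
// exposing the enriched data but stripping internal "_"-prefixed fields.
interface AccessReport {
  fields: Record<string, unknown>;
  enrichedAt?: string;
  consentBasis?: string;
}

function buildAccessReport(records: Record<string, unknown>[]): AccessReport[] {
  return records.map(record => {
    const fields: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(record)) {
      if (!key.startsWith('_')) fields[key] = value; // keep only the data itself
    }
    return {
      fields,
      enrichedAt: record._enriched_at as string | undefined,
      consentBasis: record._consent_basis as string | undefined,
    };
  });
}
```

The enrichment date and consent basis are surfaced separately because an access response must state when and on what legal basis the data was processed.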
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| High duplicate rate | Same list imported twice | Run dedup before sending to Clay |
| Invalid rows sent to Clay | Bad source data | Validate with Zod before import |
| Expired data in CRM | No retention cleanup | Schedule weekly expiration check |
| Missing consent basis | No legal basis tracked | Add consent_basis to all records |
| GDPR deletion incomplete | Data in multiple systems | Track all systems in data map |
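The "schedule weekly expiration check" fix can be sketched as a pure partition step; the cron wiring and the actual delete call are left out, and `_retention_expires` matches the metadata added in Step 4:

```typescript
// Sketch: split records into keep vs purge based on retention metadata.
// Records with no metadata are conservatively routed to purge for review.
function partitionByRetention(
  records: { _retention_expires?: string }[],
  now: Date = new Date(),
): { keep: typeof records; purge: typeof records } {
  const keep: typeof records = [];
  const purge: typeof records = [];
  for (const record of records) {
    const expires = record._retention_expires
      ? new Date(record._retention_expires)
      : null;
    if (!expires || expires.getTime() <= now.getTime()) purge.push(record);
    else keep.push(record);
  }
  return { keep, purge };
}
```

Run this weekly over your enriched store and delete (or re-consent) everything in `purge`; the conservative default for missing metadata is a policy choice, not a requirement.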
## Resources
- [GDPR Official Text](https://gdpr.eu/what-is-gdpr/)
- [CCPA Requirements](https://oag.ca.gov/privacy/ccpa)
- [Clay Community](https://community.clay.com)
## Next Steps
For access control, see `clay-enterprise-rbac`.
Related Skills
College Football Data (CFB)
Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.
College Basketball Data (CBB)
Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.
validating-database-integrity
Use when you need to ensure database integrity through comprehensive data validation. This skill validates data types, ranges, formats, referential integrity, and business rules. Trigger with phrases like "validate database data", "implement data validation rules", "enforce data integrity constraints", or "validate data formats".
forecasting-time-series-data
This skill enables Claude to forecast future values based on historical time series data. It analyzes time-dependent data to identify trends, seasonality, and other patterns. Use this skill when the user asks to predict future values of a time series, analyze trends in data over time, or requires insights into time-dependent data. Trigger terms include "forecast," "predict," "time series analysis," "future values," and requests involving temporal data.
generating-test-data
This skill enables Claude to generate realistic test data for software development. It uses the test-data-generator plugin to create users, products, orders, and custom schemas for comprehensive testing. Use this skill when you need to populate databases, simulate user behavior, or create fixtures for automated tests. Trigger phrases include "generate test data", "create fake users", "populate database", "generate product data", "create test orders", or "generate data based on schema". This skill is especially useful for populating testing environments or creating sample data for demonstrations.
test-data-builder
Test Data Builder - Auto-activating skill for test automation. Triggers on: "test data builder". Part of the Test Automation skill category.
splitting-datasets
Split datasets into training, validation, and testing sets for ML model development. Use when requesting "split dataset", "train-test split", or "data partitioning".
scanning-database-security
Use when you need to work with security and compliance. This skill provides security scanning and vulnerability detection with comprehensive guidance and automation. Trigger with phrases like "scan for vulnerabilities", "implement security controls", or "audit security".
preprocessing-data-with-automated-pipelines
Automate data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation".
optimizing-database-connection-pooling
Use when you need to work with connection management. This skill provides connection pooling and management with comprehensive guidance and automation. Trigger with phrases like "manage connections", "configure pooling", or "optimize connection usage".
modeling-nosql-data
This skill enables Claude to design NoSQL data models. It activates when the user requests assistance with NoSQL database design, including schema creation, data modeling for MongoDB or DynamoDB, or defining document structures. Use this skill when the user mentions "NoSQL data model", "design MongoDB schema", "create DynamoDB table", or similar phrases related to NoSQL database architecture. It assists in understanding NoSQL modeling principles like embedding vs. referencing, access pattern optimization, and sharding key selection.
monitoring-database-transactions
Use when you need to work with monitoring and observability. This skill provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".