bamboohr-performance-tuning
Optimize BambooHR API performance with caching, batch reports, incremental sync, and connection pooling. Use when experiencing slow API responses, implementing caching, or optimizing sync throughput. Trigger with phrases like "bamboohr performance", "optimize bamboohr", "bamboohr latency", "bamboohr caching", "bamboohr slow", "bamboohr batch".
Best use case
bamboohr-performance-tuning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using bamboohr-performance-tuning should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/bamboohr-performance-tuning/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How bamboohr-performance-tuning Compares
| Feature / Agent | bamboohr-performance-tuning | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It optimizes BambooHR API performance through response caching, batch custom reports, incremental sync via the changed-since endpoint, and connection pooling. Use it when you see slow API responses, want to add caching, or need higher sync throughput.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# BambooHR Performance Tuning
## Overview
Optimize BambooHR API performance through request reduction, caching, incremental sync, and connection pooling. The biggest wins come from eliminating N+1 query patterns using custom reports and the changed-since endpoint.
## Prerequisites
- BambooHR API client configured
- Redis or in-memory cache available (optional)
- Performance monitoring in place
## Instructions
### Step 1: Eliminate N+1 Queries with Custom Reports
The single biggest performance improvement: use `POST /reports/custom` instead of individual employee GETs.
```typescript
// BAD: 501 API calls for 500 employees
const dir = await client.getDirectory(); // 1 call
for (const emp of dir.employees) {
  await client.getEmployee(emp.id, ['salary', 'hireDate']); // 500 calls
}

// GOOD: 1 API call for all employees with all needed fields
const report = await client.customReport([
  'firstName', 'lastName', 'department', 'jobTitle',
  'hireDate', 'workEmail', 'status', 'location',
  'supervisor', 'employeeNumber',
]);
```
**Performance impact:** 500x reduction in API calls. Custom reports return all active employees in one request.
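If your client wrapper doesn't expose a `customReport` helper, the underlying call is a single `POST /reports/custom`. Below is a minimal sketch with native `fetch`, assuming the standard BambooHR gateway URL; the company domain and API key come from placeholder environment variables:
```typescript
// Minimal sketch of the raw custom-report request.
// BAMBOOHR_COMPANY_DOMAIN and BAMBOOHR_API_KEY are assumed env vars.
const company = process.env.BAMBOOHR_COMPANY_DOMAIN!;
const apiKey = process.env.BAMBOOHR_API_KEY!;

const res = await fetch(
  `https://api.bamboohr.com/api/gateway.php/${company}/v1/reports/custom?format=JSON`,
  {
    method: 'POST',
    headers: {
      // BambooHR uses HTTP Basic auth: the API key as username, any password
      Authorization: 'Basic ' + Buffer.from(`${apiKey}:x`).toString('base64'),
      'Content-Type': 'application/json',
      Accept: 'application/json',
    },
    body: JSON.stringify({
      title: 'Bulk employee sync',
      fields: ['firstName', 'lastName', 'department', 'jobTitle', 'hireDate'],
    }),
  },
);
const report = await res.json(); // { title, fields, employees: [...] }
```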
### Step 2: Incremental Sync with Changed-Since
```typescript
import { readFileSync, writeFileSync } from 'fs';
const LAST_SYNC_FILE = '.bamboohr-last-sync';
async function incrementalSync(client: BambooHRClient): Promise<string[]> {
  // Read last sync timestamp
  let lastSync: string;
  try {
    lastSync = readFileSync(LAST_SYNC_FILE, 'utf-8').trim();
  } catch {
    lastSync = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString(); // Default: 24h ago
  }

  // GET /employees/changed/?since=... — returns only changed employee IDs
  const changed = await client.request<{
    employees: Record<string, { id: string; lastChanged: string }>;
  }>('GET', `/employees/changed/?since=${lastSync}`);

  const changedIds = Object.keys(changed.employees || {});
  console.log(`${changedIds.length} employees changed since ${lastSync}`);
  if (changedIds.length === 0) return [];

  // Fetch only changed employees' details.
  // For large sets, pull one custom report; for small sets, individual GETs are fine.
  if (changedIds.length > 20) {
    // Bulk: one custom report (returns all employees, then filter client-side)
    const report = await client.customReport([
      'firstName', 'lastName', 'department', 'status',
    ]);
    const changedData = report.employees.filter(e =>
      changedIds.includes(e.id?.toString()),
    );
    // Process changedData...
  } else {
    // Small set: individual GETs
    for (const id of changedIds) {
      const emp = await client.getEmployee(id, ['firstName', 'lastName', 'department', 'status']);
      // Process emp...
    }
  }

  // Save sync timestamp
  writeFileSync(LAST_SYNC_FILE, new Date().toISOString());
  return changedIds;
}
```
**Also available for table data:**
```typescript
// GET /employees/changed/tables/{tableName}?since=...
// (lastSync is the stored timestamp from the sync above)
const changedJobs = await client.request<any>(
  'GET', `/employees/changed/tables/jobInfo?since=${lastSync}`,
);
// Returns { employees: { "123": { lastChanged: "..." }, ... } }
```
### Step 3: Response Caching
```typescript
import { LRUCache } from 'lru-cache';
// BambooHR directory data changes infrequently — cache aggressively
const cache = new LRUCache<string, any>({
  max: 500,
  ttl: 5 * 60 * 1000, // 5 minutes for directory data
});

async function cachedRequest<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlMs?: number,
): Promise<T> {
  const cached = cache.get(key) as T | undefined;
  if (cached !== undefined) {
    console.log(`Cache hit: ${key}`);
    return cached;
  }
  const result = await fetcher();
  cache.set(key, result, { ttl: ttlMs });
  return result;
}

// Usage
const directory = await cachedRequest(
  'directory',
  () => client.getDirectory(),
  5 * 60 * 1000, // Cache for 5 min
);

// Single employee — shorter cache
const employee = await cachedRequest(
  `employee:${id}`,
  () => client.getEmployee(id, fields),
  60 * 1000, // Cache for 1 min
);
```
**Redis caching for multi-instance deployments:**
```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

async function redisCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSec = 300,
): Promise<T> {
  const cached = await redis.get(`bamboohr:${key}`);
  if (cached) return JSON.parse(cached);
  const result = await fetcher();
  await redis.setex(`bamboohr:${key}`, ttlSec, JSON.stringify(result));
  return result;
}

// Invalidate on webhook
async function invalidateCache(employeeId: string) {
  await redis.del(`bamboohr:employee:${employeeId}`);
  await redis.del('bamboohr:directory'); // Directory includes this employee
}
```
### Step 4: Connection Pooling
```typescript
import { Agent } from 'https';
// Reuse TCP connections for BambooHR API calls
const keepAliveAgent = new Agent({
  keepAlive: true,
  maxSockets: 5, // Max 5 parallel connections
  maxFreeSockets: 2,
  timeout: 30_000,
  keepAliveMsecs: 10_000,
});
// Pass to fetch via undici or node-fetch
// For native fetch in Node 20+, connection pooling is automatic
```
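If you use `undici` (the engine behind Node's native fetch), the equivalent is an `Agent` passed as a per-request dispatcher. A minimal sketch; the pool sizes mirror the `https.Agent` values above, and the company domain in the URL is a placeholder:
```typescript
import { Agent, fetch } from 'undici';

// Keep-alive connection pool for the BambooHR host
const dispatcher = new Agent({
  connections: 5,           // max parallel connections (like maxSockets)
  keepAliveTimeout: 10_000, // how long idle sockets stay open
  keepAliveMaxTimeout: 30_000,
});

const res = await fetch(
  'https://api.bamboohr.com/api/gateway.php/yourcompany/v1/employees/directory',
  { dispatcher, headers: { Accept: 'application/json' } },
);
```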
### Step 5: Request Batching with DataLoader
```typescript
import DataLoader from 'dataloader';

// Batch individual employee GETs into a custom report
const employeeLoader = new DataLoader<string, Record<string, string>>(
  async (ids) => {
    // One custom report instead of N individual GETs
    const report = await client.customReport([
      'id', 'firstName', 'lastName', 'department', 'jobTitle',
    ]);
    const byId = new Map(report.employees.map(e => [e.id, e]));
    return ids.map(id => byId.get(id) || new Error(`Employee ${id} not found`));
  },
  {
    maxBatchSize: 100,
    batchScheduleFn: cb => setTimeout(cb, 50), // Batch window: 50ms
    cache: true,
  },
);

// Usage — automatically batched into one API call
const [emp1, emp2, emp3] = await Promise.all([
  employeeLoader.load('1'),
  employeeLoader.load('2'),
  employeeLoader.load('3'),
]);
```
### Step 6: Performance Monitoring
```typescript
class BambooHRMetrics {
  private requests: { duration: number; status: number; endpoint: string }[] = [];

  record(endpoint: string, status: number, durationMs: number) {
    this.requests.push({ duration: durationMs, status, endpoint });
    // Keep last 1000 requests
    if (this.requests.length > 1000) this.requests.shift();
  }

  summary() {
    const durations = this.requests.map(r => r.duration).sort((a, b) => a - b);
    const errors = this.requests.filter(r => r.status >= 400);
    return {
      totalRequests: this.requests.length,
      errorRate: (errors.length / Math.max(this.requests.length, 1) * 100).toFixed(1) + '%',
      p50: durations[Math.floor(durations.length * 0.5)] || 0,
      p95: durations[Math.floor(durations.length * 0.95)] || 0,
      p99: durations[Math.floor(durations.length * 0.99)] || 0,
      topEndpoints: this.topEndpoints(),
    };
  }

  private topEndpoints() {
    const counts = new Map<string, number>();
    for (const r of this.requests) {
      counts.set(r.endpoint, (counts.get(r.endpoint) || 0) + 1);
    }
    return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 5);
  }
}
```
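A minimal sketch of wiring the class into a request path; `instrumentedRequest` and the `client` shape are illustrative, not part of any BambooHR SDK:
```typescript
const metrics = new BambooHRMetrics();

async function instrumentedRequest<T>(
  client: { request<R>(method: string, path: string): Promise<R> },
  method: string,
  path: string,
): Promise<T> {
  const start = Date.now();
  let status = 200;
  try {
    return await client.request<T>(method, path);
  } catch (err: any) {
    status = err?.status ?? 500; // assumes errors carry an HTTP status
    throw err;
  } finally {
    metrics.record(path, status, Date.now() - start);
  }
}

// Log p50/p95/p99 and error rate once a minute
setInterval(() => console.log(metrics.summary()), 60_000);
```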
## Output
- N+1 queries eliminated via custom reports (500x reduction)
- Incremental sync using changed-since endpoint
- Multi-tier caching (LRU in-memory + Redis)
- Connection pooling with keep-alive
- DataLoader-based request batching
- Performance metrics with p50/p95/p99
## Performance Reference
| Optimization | Before | After | Improvement |
|-------------|--------|-------|-------------|
| Custom reports vs N+1 | 501 calls | 1 call | 500x |
| Incremental sync | Full pull | Delta only | 10-100x |
| Directory caching (5 min) | Every request | 1/5 min | 50x |
| Connection pooling | New conn/request | Reused | 2-3x lower latency |
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Cache stampede | All caches expire simultaneously | Stagger TTLs with jitter (see sketch after this table) |
| Stale data | Cache TTL too long | Invalidate on webhook events |
| DataLoader timeout | Custom report too slow | Reduce batch size |
| Memory pressure | LRU cache too large | Set `max` entries limit |
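For the stampede case, one common mitigation is randomizing each entry's TTL. A minimal sketch using the LRU cache from Step 3; `jitteredTtl` is an illustrative helper, not a library function:
```typescript
import { LRUCache } from 'lru-cache';

// Spread expirations: add up to 20% random jitter to a base TTL
// so cached entries don't all expire at the same moment.
function jitteredTtl(baseMs: number, jitterRatio = 0.2): number {
  return Math.round(baseMs + baseMs * jitterRatio * Math.random());
}

const cache = new LRUCache<string, unknown>({ max: 500 });

// Each directory entry gets a slightly different TTL around 5 minutes
cache.set('directory', { fetchedAt: Date.now() }, { ttl: jitteredTtl(5 * 60 * 1000) });
```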
## Resources
- [BambooHR API Technical Overview](https://documentation.bamboohr.com/docs/api-details)
- [DataLoader Documentation](https://github.com/graphql/dataloader)
- [LRU Cache Documentation](https://github.com/isaacs/node-lru-cache)
## Next Steps
For cost optimization, see `bamboohr-cost-tuning`.
Related Skills
validating-performance-budgets
Validate application performance against defined budgets to identify regressions early. Use when checking page load times, bundle sizes, or API response times against thresholds. Trigger with phrases like "validate performance budget", "check performance metrics", or "detect performance regression".
tuning-hyperparameters
Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.
analyzing-query-performance
This skill enables Claude to analyze and optimize database query performance. It activates when the user discusses query performance issues, provides an EXPLAIN plan, or asks for optimization recommendations. The skill leverages the query-performance-analyzer plugin to interpret EXPLAIN plans, identify performance bottlenecks (e.g., slow queries, missing indexes), and suggest specific optimization strategies. It is useful for improving database query execution speed and resource utilization.
providing-performance-optimization-advice
Provide comprehensive prioritized performance optimization recommendations for frontend, backend, and infrastructure. Use when analyzing bottlenecks or seeking improvement strategies. Trigger with phrases like "optimize performance", "improve speed", or "performance recommendations".
profiling-application-performance
This skill enables the AI assistant to profile application performance, analyzing CPU usage, memory consumption, and execution time. It is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
performance-testing
This skill enables Claude to design, execute, and analyze performance tests using the performance-test-suite plugin. It is activated when the user requests load testing, stress testing, spike testing, or endurance testing, and when discussing performance metrics such as response time, throughput, and error rates. It identifies performance bottlenecks related to CPU, memory, database, or network issues. The plugin provides comprehensive reporting, including percentiles, graphs, and recommendations.
detecting-performance-regressions
This skill enables Claude to automatically detect performance regressions in a CI/CD pipeline. It analyzes performance metrics, such as response time and throughput, and compares them against baselines or thresholds. Use this skill when the user requests to "detect performance regressions", "analyze performance metrics for regressions", or "find performance degradation" in a CI/CD environment. The skill is also triggered when the user mentions "baseline comparison", "statistical significance analysis", or "performance budget violations". It helps identify and report performance issues early in the development cycle.
performance-lighthouse-runner
Performance Lighthouse Runner - Auto-activating skill for Frontend Development. Triggers on: "performance lighthouse runner". Part of the Frontend Development skill category.
performance-baseline-creator
Performance Baseline Creator - Auto-activating skill for Performance Testing. Triggers on: "performance baseline creator". Part of the Performance Testing skill category.
optimizing-cache-performance
This skill enables the AI assistant to analyze and improve application caching strategies. It optimizes cache hit rates, TTL configurations, cache key design, and invalidation strategies. Use this skill when the user requests to "optimize cache performance"... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
aggregating-performance-metrics
This skill enables Claude to aggregate and centralize performance metrics from various sources. It is used when the user needs to consolidate metrics from applications, systems, databases, caches, queues, and external services into a central location for monitoring and analysis. The skill is triggered by requests to "aggregate metrics", "centralize performance metrics", or similar phrases related to metrics aggregation and monitoring. It facilitates designing a metrics taxonomy, choosing appropriate aggregation tools, and setting up dashboards and alerts.
fathom-cost-tuning
Optimize Fathom API usage and plan selection. Trigger with phrases like "fathom cost", "fathom pricing", "fathom plan".