## Best use case
Cache Strategy is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using Cache Strategy can expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Claude Code / Cursor / Codex

Manual installation:

- Download SKILL.md from GitHub
- Place it at `.claude/skills/cache-strategy/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
## Frequently Asked Questions

### What does this skill do?
It helps you design and implement multi-layer caching strategies for high-traffic APIs: choosing the right caching pattern for your data access profile, configuring TTLs, preventing cache stampedes, and setting up cache invalidation that works in production.

### Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Cache Strategy
## Overview
This skill helps you design and implement multi-layer caching strategies for high-traffic APIs. It covers choosing the right caching pattern for your data access profile, configuring TTLs, preventing cache stampedes, and setting up cache invalidation that actually works in production.
## Instructions
### 1. Analyze the caching opportunity
Before adding caching, identify what to cache by examining query patterns:
```typescript
// Instrument your API routes to log response times and call frequency
// Look for: high frequency + low change rate = best cache candidates
// Example analysis output:
// GET /api/products → 12,000 req/min, changes every 30min → CACHE (TTL: 5min)
// GET /api/products/:id → 8,000 req/min, changes on update → CACHE (invalidate on write)
// POST /api/orders → 200 req/min, always unique → DO NOT CACHE
// GET /api/user/profile → 3,000 req/min, changes rarely → CACHE (TTL: 15min)
```
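The analysis above can be folded into a small triage helper. A minimal sketch, where the function name and the 500 req/min cut-off are illustrative choices, not part of the skill itself:

```typescript
// Decide whether an endpoint is worth caching, given its observed request
// rate and how its data changes. Thresholds are illustrative; tune them
// against your own traffic profile.
type CacheDecision = "cache" | "cache-invalidate-on-write" | "skip";

function classifyEndpoint(
  reqPerMin: number,
  changes: number | "on-write" | "always" // minutes between changes, or mode
): CacheDecision {
  if (changes === "always") return "skip"; // every response is unique
  if (reqPerMin < 500) return "skip"; // too cold for caching to matter
  if (changes === "on-write") return "cache-invalidate-on-write";
  return "cache"; // hot and slowly changing: best candidate
}

// Mirrors the example analysis above:
classifyEndpoint(12_000, 30); // → "cache"
classifyEndpoint(8_000, "on-write"); // → "cache-invalidate-on-write"
classifyEndpoint(200, "always"); // → "skip"
```

A helper like this keeps the cache/no-cache decision explicit and reviewable instead of scattered across route handlers.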
### 2. Implement cache-aside pattern (most common)
The application checks cache first, falls back to database, then populates cache:
```typescript
import Redis from "ioredis";

const redis = new Redis({ host: "localhost", port: 6379, maxRetriesPerRequest: 3 });

// Check the cache first, fall back to the fetcher, then populate the cache.
async function getCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number = 300
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;
  const data = await fetcher();
  await redis.set(key, JSON.stringify(data), "EX", ttlSeconds);
  return data;
}

// Usage in a route handler (assumes an Express `app` and a `db` client)
app.get("/api/products/:id", async (req, res) => {
  const product = await getCached(
    `product:${req.params.id}`,
    () => db.products.findById(req.params.id),
    600 // 10 minutes
  );
  res.json(product);
});
```
### 3. Prevent cache stampedes
When a popular key expires, hundreds of requests hit the database simultaneously:
```typescript
// Only one process refreshes an expired key; the others wait briefly and retry.
async function getCachedWithLock<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number = 300
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;
  const lockKey = `lock:${key}`;
  // NX: only set if the lock does not already exist; EX 10: auto-expire after 10s
  const acquired = await redis.set(lockKey, "1", "EX", 10, "NX");
  if (acquired) {
    try {
      const data = await fetcher();
      await redis.set(key, JSON.stringify(data), "EX", ttlSeconds);
      return data;
    } finally {
      await redis.del(lockKey);
    }
  }
  // Another process is refreshing — wait and retry
  await new Promise((r) => setTimeout(r, 100));
  return getCachedWithLock(key, fetcher, ttlSeconds);
}
```
### 4. Cache invalidation strategies
Pattern-based invalidation for related data:
```typescript
// SCAN (not KEYS) iterates the keyspace without blocking Redis.
async function invalidatePattern(pattern: string): Promise<void> {
  let cursor = "0";
  do {
    const [nextCursor, keys] = await redis.scan(cursor, "MATCH", pattern, "COUNT", 100);
    if (keys.length > 0) await redis.del(...keys);
    cursor = nextCursor;
  } while (cursor !== "0");
}

// On product update: invalidate the product and any list containing it
async function onProductUpdate(productId: string): Promise<void> {
  await redis.del(`product:${productId}`);
  await invalidatePattern("products:list:*");
}
```
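Pattern-based invalidation is easier to get right when key construction is centralized. As a hedged sketch of the key-namespacing guideline later in this skill (`products:v2:{id}`), with illustrative names:

```typescript
// Centralize key construction so every cache key carries a namespace and a
// schema version. Bumping the version "invalidates" the whole namespace
// without touching Redis, because old keys simply stop being read.
const CACHE_VERSION = "v2"; // bump when the cached shape changes

function cacheKey(namespace: string, ...parts: (string | number)[]): string {
  return [namespace, CACHE_VERSION, ...parts].join(":");
}

cacheKey("product", 42); // → "product:v2:42"
cacheKey("products", "list", 1, 20); // → "products:v2:list:1:20"
```

Pairing this with `invalidatePattern(`products:${CACHE_VERSION}:list:*`)` keeps writes and invalidations pointed at the same key space.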
## Examples
### Example 1: API response caching
**Prompt:** "Add Redis caching to my Express API. The /api/catalog endpoint is called 15,000 times per minute but the data only changes when an admin publishes updates. I want sub-10ms response times."
**Agent output:**
- Adds `src/cache/redis-client.ts` with connection pooling and retry config
- Wraps `/api/catalog` with cache-aside pattern, 5-minute TTL
- Adds stampede prevention with distributed locking
- Creates `src/cache/invalidation.ts` — called from the admin publish endpoint to bust catalog cache
- Adds cache hit/miss metrics via response headers (`X-Cache: HIT` / `X-Cache: MISS`)
### Example 2: Multi-layer caching
**Prompt:** "Our product API serves 50,000 RPM. Add in-memory cache for the hottest 1,000 items and Redis for everything else. Products change when inventory updates."
**Agent output:**
- Adds `src/cache/memory-lru.ts` using an LRU cache with 1,000 max entries and 60-second TTL
- Adds `src/cache/tiered-cache.ts` that checks memory → Redis → database in sequence
- Creates `src/events/inventory-handler.ts` that invalidates both cache layers on inventory change
- Adds `/admin/cache/stats` endpoint showing hit rates for each layer
## Guidelines
- **Cache-aside is the default** — use write-through only when you need guaranteed cache freshness on writes.
- **Never cache without a TTL** — even "permanent" data should have a long TTL (1 hour+) as a safety net.
- **Use key namespacing** — prefix keys like `products:v2:{id}` so you can version your cache schema.
- **Monitor hit rate** — below 80% means your TTL is too short or your data changes too fast for caching.
- **Serialize carefully** — JSON.parse/stringify is fine for most cases but consider MessagePack for large payloads.
- **Plan for Redis downtime** — your app should degrade gracefully to direct database queries, not crash.
- **Avoid caching user-specific data in shared caches** without proper key isolation — data leaks are a security incident.

## Related Skills
versioning-strategy-helper
Versioning Strategy Helper - Auto-activating skill for API Development. Triggers on: versioning strategy helper, versioning strategy helper Part of the API Development skill category.
redis-cache-manager
Redis Cache Manager - Auto-activating skill for Backend Development. Triggers on: redis cache manager, redis cache manager Part of the Backend Development skill category.
optimizing-cache-performance
Enables the AI assistant to analyze and improve application caching strategies: it optimizes cache hit rates, TTL configurations, cache key design, and invalidation strategies. Use this skill when the user requests to "optimize cache performance"... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
memcached-config-helper
Memcached Config Helper - Auto-activating skill for Backend Development. Triggers on: memcached config helper, memcached config helper Part of the Backend Development skill category.
managing-api-cache
Implement intelligent API response caching with Redis, Memcached, and CDN integration. Use when optimizing API performance with caching. Trigger with phrases like "add caching", "optimize API performance", or "implement cache layer".
elasticache-config
Elasticache Config - Auto-activating skill for AWS Skills. Triggers on: elasticache config, elasticache config Part of the AWS Skills skill category.
brand-strategy
A 7-part brand strategy framework for building comprehensive brand foundations. Trigger with phrases like "create brand strategy", "build brand brief", "define brand positioning", "brand messaging", "audience architecture", "brand truth", or "go-to-market brand plan". Use when working with brand strategy.
api-response-cacher
Api Response Cacher - Auto-activating skill for API Integration. Triggers on: api response cacher, api response cacher Part of the API Integration skill category.
api-caching-strategy
Api Caching Strategy - Auto-activating skill for API Development. Triggers on: api caching strategy, api caching strategy Part of the API Development skill category.
gtm-positioning-strategy
Find and own a defensible market position. Use when messaging sounds like competitors, conversion is weak despite awareness, repositioning a product, or testing positioning claims. Includes Crawl-Walk-Run rollout methodology and the word change that improved enterprise deal progression.
next-cache-components
Next.js 16 Cache Components - PPR, use cache directive, cacheLife, cacheTag, updateTag
mock-strategy-guide
Guides users on creating mock implementations for testing with traits, providing test doubles, and avoiding tight coupling to test infrastructure. Activates when users need to test code with external dependencies.